Re: [R] Factors.
[last week,] [EMAIL PROTECTED] wrote:

Hello, I'm new with R. I need some help; I have a matrix of data to which I want to apply the function dudi.acm to perform multiple correspondence analysis. However, to use it all variables must be factors, so how can I turn each column of the matrix into a factor? I've tried as.factor. It works for each column in isolation, but when I form the matrix of all factors it doesn't work. Please help me!

[Looks like there was no answer until now.]

a) Please tell us which package the functions you are using are in (I know, it is ade4).
b) Please follow the posting guide, which tells you some more things (which are not *that* important here).
c) Please read the help page! ?dudi.acm tells you:

   df, df1, df2: data frames containing only factors
                 ^^^^^^^^^^^

You cannot have a matrix of factors, because the required attributes cannot be set for each column separately. Instead, use a data.frame.

Uwe Ligges

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
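Following Uwe's advice, a minimal sketch of the conversion (the matrix m and its contents are invented for illustration):

```r
# A small character matrix standing in for the data.
m <- matrix(c("a", "b", "a", "c", "b", "b"), nrow = 3,
            dimnames = list(NULL, c("V1", "V2")))

# Convert to a data.frame, then make each column a factor,
# which is what dudi.acm-style functions expect.
df <- as.data.frame(m)
df[] <- lapply(df, factor)
```

After this every column of df is a factor, so df (rather than m) should be suitable as the argument of dudi.acm().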
[R] Daily time series
Hi, I'm dealing with time series with 1 observation per day (data sampled daily). I will create a ts object from that time series using the function ts(). The ts() help says:

The value of argument 'frequency' is used when the series is sampled an integral number of times in each unit time interval. For example, one could use a value of '7' for 'frequency' when the data are sampled daily, and the natural time period is a week, or '12' when the data are sampled monthly and the natural time period is a year. Values of '4' and '12' are assumed in (e.g.) 'print' methods to imply a quarterly and monthly series respectively.

But what value should 'start' assume in the ts function? Here is a time series:

1/1 10
2/1 20
3/1 30
4/1 40
5/1 50
6/1 60

x <- c(10,20,30,40,50,60)                  ## observations
serie <- ts(x, start=c(1,1), frequency=7)  ## creating ts object
serie                                      ## printing ts

output:

Time Series:
Start = c(1, 1)
End = c(1, 6)
Frequency = 7
[1] 10 20 30 40 50 60

Could someone help me? Thanks in advance. Sincerely.

Vito Ricci

=
Becoming builders of solutions
Visit the portal http://www.modugno.it/ and in particular the section on Palese http://www.modugno.it/archivio/cat_palese.shtml
[R] boxplot a list of objects
Hi list,

#Imagine we have vectors of different length (in practice 100 vectors):
a <- c(1:10)
b <- c(1:20)
c <- c(1:30)
#then we get a list of the names of those objects:
list <- c("a", "b", "c")
#I don't find how to boxplot them using a less stupid way than:
boxplot(get(list[1]), get(list[2]), get(list[3]))

Thanks for any advice!

--
Tristan LEFEBURE
Laboratoire d'écologie des hydrosystèmes fluviaux (UMR 5023)
Université Lyon I - Campus de la Doua
6 rue Dubois
69622 Villeurbanne - France
Phone: (33) (0)4 72 43 29 45
Fax: (33) (0)4 72 43 15 23
[R] Howto debug R on Windows XP?
Hello, I have started working with R and I have tried to debug R on a Windows XP system. Unfortunately I am not able to set a breakpoint in the package SJava, which I am interested in. So far I have succeeded in compiling R with the DEBUG=T option, and followed the hints given in the manual/FAQs about debugging. After starting gdb, I also succeed with

break WinMain
run

so that the program stops there. The debugger is also able to find and list the function R_ReadConsole, but the command

break R_ReadConsole

is answered with:

Cannot access memory at address 0x1e6a0

Can anybody give a hint how to continue? My motivation is to use the SJava package and to continue the work Jens Oehlschlaegel and Ingo von Otte started. Thanks for your help!

Herbert Brückner
[R] AW: Howto debug R on Windows XP?
-----Original Message-----
From: Brueckner-Keutmann-GbR [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 07, 2004 9:48 AM
To: R_Help Mailing List
Subject: Howto debug R on Windows XP?

Hello, I have started working with R and I have tried to debug R on a Windows XP system. Unfortunately I am not able to set a breakpoint in the package SJava, which I am interested in. So far I have succeeded in compiling R with the DEBUG=T option, and followed the hints given in the manual/FAQs about debugging. After starting gdb, I also succeed with

break WinMain
run

so that the program stops there. The debugger is also able to find and list the function R_ReadConsole, but the command

break R_ReadConsole

is answered with:

Cannot access memory at address 0x1e6a0

Can anybody give a hint how to continue? My motivation is to use the SJava package and to continue the work Jens Oehlschlaegel and Ingo von Otte started. Thanks for your help!

Herbert Brückner
Re: [R] Random intercept model with time-dependent covariates, results different from SAS
Hello,

I have been struggling with a similar problem, i.e. fitting an LME model to unbalanced repeated measures data. I found Linear Mixed Models by John Fox (http://socserv2.socsci.mcmaster.ca/jfox/Books/Companion/appendix-mixed-models.pdf) quite helpful. Fox gives examples which are unbalanced, so I guess that balance is not a requirement (assuming Fox is correct). However, the sample sizes are large compared to yours (and mine), which may make a difference.

Dan Bebber

Dr. Daniel P. Bebber
Department of Plant Sciences
University of Oxford
South Parks Road
Oxford OX1 3RB
Tel. 01865 275060
Web. http://www.forestecology.co.uk/

'Data, data, data!' he cried impatiently. 'I can't make bricks without clay.' - Sherlock Holmes, The Adventure of the Copper Beeches, 1892

Message: 24
Date: Sun, 4 Jul 2004 19:21:32 +1000
From: Keith Wong [EMAIL PROTECTED]
Subject: Re: [R] Random intercept model with time-dependent covariates, results different from SAS
To: Prof Brian Ripley [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; charset=ISO-8859-1

Thank you for the very prompt response. I only included a small part of the output to make the message brief. I'm sorry it did not provide enough detail to answer my question. I have appended the summary() and anova() outputs of the two models I fitted in R.

Quoting Prof Brian Ripley [EMAIL PROTECTED]:

Looking at the significance of a main effect (group) in the presence of an interaction (time:group) is hard to interpret, and in your case is I think not even interesting. (The `main effect' probably represents the difference in intercept for the time effect, that is the group difference at the last time. But see the next para.) Note that the two systems are returning different denominator dfs.

I take your point that the main effect is probably not interesting in the presence of an interaction. I was checking the results for consistency to see if I was doing the right thing.
I was not 100% sure that the SAS code was in itself correct.

At this point you have not told us enough. My guess is that you have complete balance with the same number of subjects in each group. In that case the `group' effect is in the between-subjects stratum (as defined for the use of Error in aov, which you could also do), and thus R's 11 df would be right (rather than 44, without W and Z). Without balance, Type III tests get much harder to interpret; the `group' effect would appear in two strata and there is no simple F test in the classical theory. So, further guessing, SAS may have failed to detect balance and so used the wrong test.

I had not appreciated the need for balance: in actual fact, one group has 5 subjects and the other 7. Will this be a problem? Would the R analysis still be valid in that case?

The time-dependent covariates muddy the issue more, and I looked mainly at the analyses without them. Again, a crucial fact is not here: do the covariates depend on the subjects as well?

Yes, the covariates are measures of blood pressure and pulse, and they depend on the subjects as well.

The good news is that the results _are_ similar. You do have different time behaviour in the two groups. So stop worrying about tests of uninteresting hypotheses and concentrate on summarizing that difference.

--
Brian D. Ripley, [EMAIL PROTECTED]
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford
1 South Parks Road, Oxford OX1 3TG, UK
Tel: +44 1865 272861 (self), +44 1865 272866 (PA)
Fax: +44 1865 272595

Thank you. I was concerned that one or both methods were incorrect given the results were inconsistent. Perhaps reassuringly, the parameter estimates for the fixed effects in both SAS and R were the same. Is the model specification OK for the model with just time, group and their interaction? Is the model specification with the 2 time-dependent covariates appropriate?
Once again, I'm very grateful for the time you've taken to answer my questions.

Keith

[Output from the 2 models fitted in R follows]

g1 = lme(Y ~ time + group + time:group, random = ~ 1 | id, data = datamod)
anova(g1)
            numDF denDF   F-value p-value
(Intercept)     1    44  3.387117  0.0725
time            4    44 10.620547  <.0001
group           1    11  0.508092  0.4908
time:group      4    44  3.961726  0.0079

summary(g1)
Linear mixed-effects model fit by REML
 Data: datamod
      AIC      BIC    logLik
 372.4328 396.5208 -174.2164

Random effects:
 Formula: ~1 | id
        (Intercept) Residual
StdDev:    11.05975 3.228684

Fixed effects: Y ~ time + group + time:group
             Value Std.Error DF  t-value p-value
(Intercept)  8.250  4.073428 44 2.025321  0.0489
time1       -0.250  1.614342 44
Re: [R] boxplot a list of objects
Thanks a lot! Another simple solution, proposed by Stefano Guazzetti, is:

boxplot(list(a, b, c))

(OK, I will never again use a function name for an object name.)

On Wednesday 07 July 2004 10:48, Unternährer Thomas, uth wrote:

One possibility is

boxplot(sapply(ListOfNames, get, env = .GlobalEnv))

Hope that this helps
Thomas

-----Original Message-----
From: Lefebure Tristan [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 7 July 2004 10:34
To: [EMAIL PROTECTED]
Subject: [R] boxplot a list of objects

Hi list,

#Imagine we have vectors of different length (in practice 100 vectors):
a <- c(1:10)
b <- c(1:20)
c <- c(1:30)
#then we get a list of the names of those objects:
list <- c("a", "b", "c")
#I don't find how to boxplot them using a less stupid way than:
boxplot(get(list[1]), get(list[2]), get(list[3]))

Thanks for any advice!

--
Tristan LEFEBURE
Laboratoire d'écologie des hydrosystèmes fluviaux (UMR 5023)
Université Lyon I - Campus de la Doua
6 rue Dubois
69622 Villeurbanne - France
Phone: (33) (0)4 72 43 29 45
Fax: (33) (0)4 72 43 15 23
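A related variant, assuming the object names are stored as strings as in the original post: mget() fetches all the objects in one call and returns the named list that boxplot() accepts (the name nms is my own, to avoid shadowing list()).

```r
a <- c(1:10)
b <- c(1:20)
c <- c(1:30)
nms <- c("a", "b", "c")

# mget() looks up each name and returns a named list of the vectors.
vals <- mget(nms, envir = environment())
boxplot(vals)  # one box per vector, labeled by name
```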
Re: [R] AW: Howto debug R on Windows XP?
On Wed, 7 Jul 2004 10:50:18 +0200, Brueckner-Keutmann-GbR [EMAIL PROTECTED] wrote:

Hello, I have started working with R and I have tried to debug R on a Windows XP system. Unfortunately I am not able to set a breakpoint in the package SJava, which I am interested in. So far I have succeeded in compiling R with the DEBUG=T option, and followed the hints given in the manual/FAQs about debugging. After starting gdb, I also succeed with "break WinMain" and "run", so that the program stops there. The debugger is also able to find and list the function R_ReadConsole, but the command "break R_ReadConsole" is answered with: Cannot access memory at address 0x1e6a0. Can anybody give a hint how to continue? My motivation is to use the SJava package and to continue the work Jens Oehlschlaegel and Ingo von Otte started.

I've written some web pages about debugging in R that are mostly oriented towards Windows users. Go to

http://www.stats.uwo.ca/faculty/murdoch/software/debuggingR

I'm not sure what's going wrong for you, but you're not doing things the way I would. These web pages are quite new; please let me know what is missing or unclear.

Duncan Murdoch
[R] Rmetrics Documentation Update
I would like to announce that some of the Rmetrics documents have been updated to version R 191.10057.

Rmetrics Flyer: http://www.itp.phys.ethz.ch/econophysics/R/pdf/DocRmetrics.pdf
Rmetrics Fact Sheet: http://www.itp.phys.ethz.ch/econophysics/R/pdf/DocFactsheet.pdf
Rmetrics Reference Card: http://www.itp.phys.ethz.ch/econophysics/R/pdf/DocRefcard.pdf

Unfortunately, the User Guides are still behind, at version no. 1.8.1. They will be updated in the near future.

Best Regards
Diethelm
[R] Using permax with Data Frame Containing Missing Values
I'm new to this site so I hope this isn't too naive a problem. I'm trying to use the permax function with a data frame containing gene expression measurements taken from 79 microarray experiments with 3000 genes per array. The data contain missing values, and every time I use permax with the data frame I get the error:

NA/NaN/Inf in foreign function call (arg 1)

Could anyone suggest how I might get round this problem?

Regards,
Brian Lane
Dept of Haematology
University of Liverpool
[R] vectorizing sapply() code (Modified by Aaron J. Mackey)
[ Not sure why, but the first time I sent this it never seemed to go through; apologies if you're seeing this twice ... ]

I have some fully functional code that I'm guessing can be done better/quicker with some savvy R vector tricks; any help to make this run a bit faster would be greatly appreciated. I'm particularly stuck on how to calculate using row-wise vectors without iterating explicitly over the dataframe or table ...

library(stats4)

d <- data.frame(
  ix = c(0, 1, 2, 3, 4, 5, 6, 7),
  ct = c(253987, 9596, 18680, 2630, 8224, 3590, 5534, 18937),
  A  = c(0, 1, 0, 1, 0, 1, 0, 1),
  B  = c(0, 0, 1, 1, 0, 0, 1, 1),
  C  = c(0, 0, 0, 0, 1, 1, 1, 1)
)

ct <- round(logb(length(d$ix), 2))

ll <- function(th = 0.5, a1 = log(0.5), a2 = log(0.5), a3 = log(0.5),
               b1 = log(0.5), b2 = log(0.5), b3 = log(0.5)) {
  a <- exp(sapply(1:ct, function(x) { get(paste("a", x, sep = "")) }))
  b <- exp(sapply(1:ct, function(x) { get(paste("b", x, sep = "")) }))
  -sum(d$ct * log(
    sapply(d$ix, function(ix, th, a, b) {
      x <- d[ix + 1, 3:(ct + 2)]
      (th * prod((b ^ (1 - x)) * ((1 - b) ^ x))) +
        ((1 - th) * prod((a ^ x) * ((1 - a) ^ (1 - x))))
    }, th, a, b)
  ))
}

ml <- mle(ll,
          lower = c(0 + 1e-5, rep(log(0 + 1e-8), 2 * ct)),
          upper = c(1 - 1e-5, rep(log(1 - 1e-8), 2 * ct)),
          method = "L-BFGS-B")

For those interested in the math, this is the MLE procedure to estimate the false positive/false negative rates (a and b) of three diagnostic tests (A, B and C) that have the observed performance recapitulated in dataframe d, but no gold standard (sometimes called latent class analysis, or LCA).

Thanks for any help,
-Aaron
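On the vectorization question itself: the per-row prod() calls can be turned into matrix products, because a row-wise product is exp() of a row sum of logs. A sketch of that idea, with the rate parameters passed as plain probability vectors instead of via get() (the function and variable names here are my own, not Aaron's):

```r
d <- data.frame(
  ix = 0:7,
  ct = c(253987, 9596, 18680, 2630, 8224, 3590, 5534, 18937),
  A  = c(0, 1, 0, 1, 0, 1, 0, 1),
  B  = c(0, 0, 1, 1, 0, 0, 1, 1),
  C  = c(0, 0, 0, 0, 1, 1, 1, 1)
)
X <- as.matrix(d[, c("A", "B", "C")])

# Negative log-likelihood for all 8 design rows at once.
# prod(b^(1-x) * (1-b)^x) over a row x equals
# exp((1-x).log(b) + x.log(1-b)), so the inner sapply() becomes %*%.
nll <- function(th, a, b) {
  pos <- th       * exp((1 - X) %*% log(b) + X %*% log(1 - b))
  neg <- (1 - th) * exp(X %*% log(a) + (1 - X) %*% log(1 - a))
  -sum(d$ct * log(pos + neg))
}
```

nll() agrees with the row-by-row version but does a fixed number of matrix operations regardless of the number of design rows.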
Re: [R] Using permax with Data Frame Containing Missing Values
There is currently no handling of NAs in permax. Your only simple option is to drop those rows with NAs in them, or to perform some sort of imputation. I will mention it to the package's author.

Robert

On Wed, Jul 07, 2004 at 12:47:05PM +0100, Brian Lane wrote:

I'm new to this site so I hope this isn't too naive a problem. I'm trying to use the permax function with a data frame containing gene expression measurements taken from 79 microarray experiments with 3000 genes per array. The data contain missing values, and every time I use permax with the data frame I get the error:

NA/NaN/Inf in foreign function call (arg 1)

Could anyone suggest how I might get round this problem?

Regards,
Brian Lane
Dept of Haematology
University of Liverpool

--
Robert Gentleman                     phone:  (617) 632-5250
Associate Professor                  fax:    (617) 632-2444
Department of Biostatistics          office: M1B20
Harvard School of Public Health      email:  [EMAIL PROTECTED]
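Robert's first suggestion (dropping the incomplete rows before calling permax) can be sketched as follows; expr is a made-up stand-in for the expression matrix:

```r
# Toy genes-by-arrays matrix with missing values.
expr <- matrix(c(1.2, 2.3,  NA, 0.7,
                 0.5,  NA, 1.1, 0.9,
                 2.0, 1.8, 1.4, 1.6),
               nrow = 3, byrow = TRUE)

# Keep only the genes (rows) measured in every experiment.
expr.cc <- expr[complete.cases(expr), , drop = FALSE]
```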
[R] Mapinfo mid and mif files
Hi,

Has anyone any experience of converting MapInfo mid and mif files into a format that can be used with the R spatial packages? Thanks.

Yours sincerely,
Andrew McCulloch
Department of Urban Studies
University of Glasgow
G12 8RS
Re: [R] Creating Binary Outcomes from a continuous variable
On Wed, 7 Jul 2004, Doran, Harold wrote:

Dear List:

?cut

I have searched the archives and my R books and cannot find a method to transform a continuous variable into a binary variable. For example, I have test score data along a continuous scale. I want to create a new variable in my dataset that is 1 = above a cutpoint (or passed the test) and 0 = otherwise. My instinct tells me that this will require a combination of the transform command along with a conditional selection. Any help is much appreciated.

Thanks,
Harold

--
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of Economics and Business Administration, Breiviksveien 40, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]
Re: [R] Mapinfo mid and mif files
Not sure if it will work directly with .mid or .mif files, but it works for me on .tab files, which were supposed to be read with the Excel map tool package and which I believe conform to the MapInfo format. I asked Roger Bivand at the recent useR!2004 meeting and he made several suggestions. Following the leads he gave me, I found a small program (ogr2ogr) which translates my .tab files into .shp files, which I can then read into R. The MapInfo format is explicitly supported. I use Debian Linux (sarge), and ogr2ogr is part of the package gdal-bin. A shell loop like

for i in *.tab ; do ogr2ogr ${i%.*}.shp $i ; done

converted a bunch of files in a snap. Hope it may be of help to you.

ft.

--
Fernando TUSELL                              e-mail:
Departamento de Econometría y Estadística    [EMAIL PROTECTED]
Facultad de CC.EE. y Empresariales           Tel:  (+34)94.601.3733
Avenida Lendakari Aguirre, 83                Fax:  (+34)94.601.3754
E-48015 BILBAO (Spain)                       Secr: (+34)94.601.3740
Re: [R] Creating Binary Outcomes from a continuous variable
On Wed, 2004-07-07 at 07:57, Doran, Harold wrote:

Dear List: I have searched the archives and my R books and cannot find a method to transform a continuous variable into a binary variable. For example, I have test score data along a continuous scale. I want to create a new variable in my dataset that is 1 = above a cutpoint (or passed the test) and 0 = otherwise. My instinct tells me that this will require a combination of the transform command along with a conditional selection. Any help is much appreciated.

Example:

a <- rnorm(20)
b <- ifelse(a < 0, 0, 1)
a
 [1] -1.0735800 -0.6788456  1.9979801 -0.4026760  0.1781791 -1.1540434
 [7] -1.0842728  1.6042602 -0.7950492 -0.1194323  0.4450296  1.9269333
[13] -0.4456181 -0.8374677 -1.1898772  1.7353067  1.8619422 -0.1679996
[19] -0.2656138 -1.5529884
b
 [1] 0 0 1 0 1 0 0 1 0 0 1 1 0 0 0 1 1 0 0 0

HTH,
Marc Schwartz
R: [R] Creating Binary Outcomes from a continuous variable
Consider a cutpoint of 20:

x <- runif(100, min=1, max=50)
as.integer(x > 20)

Stefano

-----Original Message-----
From: Doran, Harold [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 7 July 2004 14.57
To: [EMAIL PROTECTED]
Subject: [R] Creating Binary Outcomes from a continuous variable

Dear List: I have searched the archives and my R books and cannot find a method to transform a continuous variable into a binary variable. For example, I have test score data along a continuous scale. I want to create a new variable in my dataset that is 1 = above a cutpoint (or passed the test) and 0 = otherwise. My instinct tells me that this will require a combination of the transform command along with a conditional selection. Any help is much appreciated.

Thanks,
Harold
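Roger Bivand's ?cut pointer earlier in the thread covers the same task; a sketch with invented scores and a cutpoint of 20 (note that with cut()'s default right-closed intervals, a score of exactly 20 falls in the lower class):

```r
x <- c(12, 25, 19.9, 20.1, 47)   # made-up test scores

# Two classes: (-Inf, 20] -> "0" (fail), (20, Inf] -> "1" (pass).
passed <- cut(x, breaks = c(-Inf, 20, Inf), labels = c(0, 1))

# As a 0/1 integer vector rather than a factor:
as.integer(as.character(passed))
```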
[R] Creating Binary Outcomes from a continuous variable
Dear List:

I have searched the archives and my R books and cannot find a method to transform a continuous variable into a binary variable. For example, I have test score data along a continuous scale. I want to create a new variable in my dataset that is 1 = above a cutpoint (or passed the test) and 0 = otherwise. My instinct tells me that this will require a combination of the transform command along with a conditional selection. Any help is much appreciated.

Thanks,
Harold
[R] lme with poly(x,2) terms
Hi there.

Mac OS X 3.3.4, R 1.9.1

I am analysing a data set with the following model:

m4 <- lme(fixed = sr ~ time*poly(energy,2)*poly(dist,2), random = ~time|pot, data = deh)

where time is one of six months, and pot is a jar in which the repeated measures of species number (sr) were made. energy and dist (disturbance) are fixed experimental treatments. We are trying to test the hypothesis that there is an interaction between energy and disturbance that varies through time, with the expectation that sr varies quadratically with energy and with disturbance. Our difficulty is interpreting the various outputs from the model, assuming it is specified correctly - sorry if this is more a stats question than an R mechanics question. summary(m1) and anova(m1) produce the tables below.

Q1) Am I correct to assume that the anova table is sequential?

Q2) How does one interpret the fixed effects/coefficients table? Do the insignificant terms for poly(dist)2 all the way down (up) to its main effect suggest that a quadratic function in dist is not significant?

Q3) If we remove the quadratic term in dist and compare it to the model with poly(dist,2), the anova says the polynomial is significant:

anova(update(m2, ~., method="ML"), update(m4, ~., method="ML"))
                              Model df      AIC      BIC    logLik   Test L.Ratio p-value
update(m2, ~., method = "ML")     1 16 2781.683 2858.271 -1374.841
update(m4, ~., method = "ML")     2 22 2771.380 2876.688 -1363.690 1 vs 2  22.303  0.0011

despite only the main effect of poly(dist,2) being significant in the terms. Is the best approach to use the anova test or the coefficients? How does one justify the insignificance of every term with poly(dist)2 in it?
Many thanks in advance,
andrew

-----

summary(m1)
Linear mixed-effects model fit by REML
 Data: deh
      AIC      BIC    logLik
 2687.974 2792.830 -1321.987

Random effects:
 Formula: ~time | pot
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev    Corr
(Intercept) 1.5503393 (Intr)
time        0.1858609 -0.862
Residual    0.9234853

Fixed effects: sr ~ time * poly(energy, 2) * poly(dist, 2)
                                         Value Std.Error  DF   t-value p-value
(Intercept)                             8.2424   0.14576 721  56.54737  0.0000
time                                   -1.1447   0.02376 721 -48.16926  0.0000
poly(energy, 2)1                       18.2052   4.34118 721   4.19361  0.0000
poly(energy, 2)2                      -43.8133   4.34213 721 -10.09028  0.0000
poly(dist, 2)1                         -9.9600   4.34169 721  -2.29403  0.0221
poly(dist, 2)2                        -10.6639   4.34198 721  -2.45599  0.0143
time:poly(energy, 2)1                   1.7320   0.70705 721   2.44961  0.0145
time:poly(energy, 2)2                   5.6245   0.70695 721   7.95608  0.0000
time:poly(dist, 2)1                    -0.6569   0.70701 721  -0.92908  0.3532
time:poly(dist, 2)2                     0.0400   0.70697 721   0.05657  0.9549
poly(energy, 2)1:poly(dist, 2)1       356.6786 128.77967 721   2.76968  0.0058
poly(energy, 2)2:poly(dist, 2)1       -99.7288 128.60505 721  -0.77547  0.4383
poly(energy, 2)1:poly(dist, 2)2       -11.4295 129.65263 721  -0.08816  0.9298
poly(energy, 2)2:poly(dist, 2)2       149.5420 129.80979 721   1.15201  0.2497
time:poly(energy, 2)1:poly(dist, 2)1  -79.3803  20.96606 721  -3.78613  0.0002
time:poly(energy, 2)2:poly(dist, 2)1   59.4570  20.93577 721   2.83997  0.0046
time:poly(energy, 2)1:poly(dist, 2)2  -20.6131  21.10723 721  -0.97659  0.3291
time:poly(energy, 2)2:poly(dist, 2)2  -22.3304  21.13159 721  -1.05673  0.2910

anova(m4)
                                   numDF denDF   F-value p-value
(Intercept)                            1   721  888.6686  <.0001
time                                   1   721 2321.2473  <.0001
poly(energy, 2)                        2   721   77.1328  <.0001
poly(dist, 2)                          2   721   22.9940  <.0001
time:poly(energy, 2)                   2   721   34.6873  <.0001
time:poly(dist, 2)                     2   721    0.4551  0.6345
poly(energy, 2):poly(dist, 2)          4   721    2.5824  0.0361
time:poly(energy, 2):poly(dist, 2)     4   721    6.1290  0.0001
[R] KalmanSmooth problem
Hello,

In R I am trying to use Kalman filtering to find a solution for a hydrological problem. With Kalman filtering I want to estimate the discharge coming from three storage basins. I have programmed a function in R which can run KalmanSmooth. When I call the function with my values, R reports the following error:

Error in as.vector(data) : Argument "S1" is missing, with no default

I have tried to find a solution for this error in the R help files and in different manuals, but I can't find it. Please help me find a solution. Question: what does R mean by "S1", and what am I doing wrong? Here is the way I have programmed the hydrological problem in R:

discharge <- read.table(file="C:/Program Files/R/rw1090/discharge.txt", header=T)
deb <- discharge[,1]
deb
 [1] 11.545313  8.045465  5.670868  4.044584  2.919311  2.306668  2.940956
 [8]  4.238159  5.017374  3.818236  2.928805  2.262183  1.757765  1.633945
[15]  2.295130  3.454054  4.035224  3.193967  2.533181  2.012406  1.600836
[22]  1.652155  2.428678  3.642827  4.019545  3.209473  2.563617  2.048347
[29]  1.637041  1.828952  2.757842  4.050821  4.147013  3.316503  2.652490
[36]  2.121535  1.696934  2.027763  3.107366  4.429670  4.160178  3.327950
[43]  2.662237  2.129710  1.703717  2.158095  3.337039  4.582359  3.905901
[50]  3.124690  2.499732  1.999772  1.599810  2.130893  3.302622  4.336081
[57]  3.468857  2.775081  2.220062  1.776048  1.560859  2.169537  3.348081
[64]  4.170552  3.336440  2.669151  2.135320  1.708256  1.648859  2.374217
[71]  3.624091  4.248563  3.398850  2.719080  2.175264  1.740211  1.826122
[78]  2.704749  4.056438  4.437309  3.549847  2.839878  2.271902  1.817522
[85]  2.053994  3.107875  4.548436  4.600601  3.680481  2.944385  2.355508
[92]  1.884406  2.273248  3.490148  4.949898  4.584409  3.667527  2.934022
[99]  2.347217  1.84

Kalm <- function(x, O1, O2, O3, T1, T2, T3, T4, T5, t, ga) {
  t <- array(c(1+ga*O1+t/O1*(-(1/T2)-(1/T3)-(1/T1)), t/O1*(1/T2), t/O1*(1/T3),
               t/O2*(1/T2), 1+ga*O2+t/O2*(-(1/T2)-(1/T4)), t/O2*(1/T4),
               t/O3*(1/T3), t/O3*(1/T4), 1+ga*O3+t/O3*(-(1/T3)-(1/T4)-(1/T5))),
             dim=c(3,3))
  h <- 0.5
  r <- array(c(1,0,0,0,1,0,0,0,1), dim=c(3,3))
  q <- 1
  v <- r * q * t(r)
  a <- 10.14286
  z <- array(c((1/T1),0,0,0,0,0,0,0,(1/T5)), dim=c(3,3))
  p <- array(c(1,0,0,0,1,0,0,0,1), dim=c(3,3))
  pn <- array(c(1,0,0,0,1,0,0,0,1), dim=c(3,3))
  kal <- KalmanSmooth(x, list(T=t, Z=z, h=h, V=v, a=a, P=p, Pn=pn), nit=0)
  kal
}

Kalm(deb, 4, .5, 5, .7, .1, 2, 3, 4, 1, 0.65)
Error in as.vector(data) : Argument "S1" is missing, with no default

First I thought I had to make a time series of deb, but this doesn't change the problem. Lots of thanks for trying to help me.

Best regards,
Pieter Hazenberg
Student Hydrology and Watermanagement
Wageningen University
The Netherlands
Re: [R] Daily time series
Hi Vito,

Short answer: argument 'start' can take any value you want. For monthly observations (the case 'ts()' handles most nicely, along with quarterly observations), 'start' will be used to specify the year and month (or quarter) of the first observation. From what I can gather from the help page, something like, say,

ts(x, freq=7, start=c(35, 1))

would mean that the first observation is on the first day of week 35. Bear in mind that, to the best of my knowledge, the value of 'start' has absolutely no impact on calculations. It is merely there for labeling purposes.

Hope this helps!

On Wednesday 07 July 2004 03:30, Vito Ricci wrote:

Hi, I'm dealing with time series with 1 observation per day (data sampled daily). I will create a ts object from that time series using the function ts(). The ts() help says:

The value of argument 'frequency' is used when the series is sampled an integral number of times in each unit time interval. For example, one could use a value of '7' for 'frequency' when the data are sampled daily, and the natural time period is a week, or '12' when the data are sampled monthly and the natural time period is a year. Values of '4' and '12' are assumed in (e.g.) 'print' methods to imply a quarterly and monthly series respectively.

But what value should 'start' assume in the ts function? Here is a time series:

1/1 10
2/1 20
3/1 30
4/1 40
5/1 50
6/1 60

x <- c(10,20,30,40,50,60)                  ## observations
serie <- ts(x, start=c(1,1), frequency=7)  ## creating ts object
serie                                      ## printing ts

Time Series:
Start = c(1, 1)
End = c(1, 6)
Frequency = 7
[1] 10 20 30 40 50 60

Could someone help me? Thanks in advance. Sincerely.

Vito Ricci

--
Vincent Goulet, Associate Professor
École d'actuariat
Université Laval, Québec
[EMAIL PROTECTED]  http://vgoulet.act.ulaval.ca
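Vincent's point that 'start' is only a label can be checked directly with Vito's data (the week number 35 below is arbitrary, as in his example):

```r
x <- c(10, 20, 30, 40, 50, 60)

# Daily data with a natural period of one week: frequency = 7.
# start = c(35, 1) says the first observation is day 1 of week 35.
serie <- ts(x, frequency = 7, start = c(35, 1))

frequency(serie)      # 7
start(serie)          # 35 1
as.numeric(serie)     # the data themselves are unchanged by 'start'
```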
Re: [R] fast NA elimination ?
I find complete.cases() to be very useful for this kind of stuff (and very fast). As in,

d <- data.frame(x = c(1,2,3,NA,5), y = c(1,NA,3,4,5))
d
   x  y
1  1  1
2  2 NA
3  3  3
4 NA  4
5  5  5
complete.cases(d)
[1]  TRUE FALSE  TRUE FALSE  TRUE
use <- complete.cases(d)
d[use, ]
  x y
1 1 1
3 3 3
5 5 5

-roger

ivo welch wrote:

dear R wizards: an operation I execute often is the deletion of all observations (in a matrix or data set) that have at least one NA. (I now need this operation for kde2d, because its internal quantile call complains; could this be considered a buglet?) Usually, my data sets are small enough for speed not to matter, and there I do not care whether my method is pretty inefficient (OK, I admit it: I use the sum() function and test whether the result is NA) --- but now I have some bigger data sets. Is there a recommended method of doing NA elimination most efficiently?

sincerely, /iaw

---
ivo welch
professor of finance and economics
brown / nber / yale

--
Roger D. Peng
http://www.biostat.jhsph.edu/~rpeng/
Re: [R] fast NA elimination ?
On Wed, 2004-07-07 at 09:35, ivo welch wrote: dear R wizards: an operation I execute often is the deletion of all observations (in a matrix or data set) that have at least one NA. (I now need this operation for kde2d, because its internal quantile call complains; could this be considered a buglet?) usually, my data sets are small enough for speed not to matter, and there I do not care whether my method is pretty inefficient (ok, I admit it: I use the sum() function and test whether the result is NA)---but now I have some bigger data sets. Is there a recommended method of doing NA elimination most efficiently? sincerely, /iaw --- ivo welch professor of finance and economics brown / nber / yale Take a look at ?complete.cases HTH, Marc Schwartz __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Code density functions
Thanks to Andy Liaw, James Holtman and Uwe Ligges; I downloaded and looked at the source and found what I need!! Francisco

From: Uwe Ligges [EMAIL PROTECTED] To: F Z [EMAIL PROTECTED] CC: [EMAIL PROTECTED], [EMAIL PROTECTED] Subject: Re: [R] Code density functions Date: Wed, 07 Jul 2004 07:51:35 +0200

F Z wrote: Dear Andy, Thanks for your reply. I don't seem to find the file that you suggested. I tried: file.show('C:/Program Files/R/rw1091/src/nmath/dnorm.c.')

You have installed a binary distribution. You need to get the source tarball (directly accessible via the main CRAN page) that includes the file. Uwe Ligges

NULL Warning message: file.show(): file C:/Program Files/R/rw1091/src/nmath/dnorm.c. does not exist Then I looked at the directory and tried a file with a similar name: file.show('C:/Program Files/R/rw1091/src/include/Rmath.h') But this file does not show the actual code used to calculate the densities, only the declarations of the procedures. What am I doing wrong? Thanks again! Francisco Zagmutt :)

From: Liaw, Andy [EMAIL PROTECTED] To: 'F Z' [EMAIL PROTECTED],[EMAIL PROTECTED] Subject: RE: [R] Code density functions Date: Tue, 6 Jul 2004 18:35:24 -0400 Dear insert your name here: You need to look at the C-level source code, in R-1.9.1/src/nmath/d*.c. Andy

From: F Z Hello, I would like to see the algorithm that R uses to generate density functions for several distributions (e.g. Normal, Weibull, etc.). I tried:

dnorm
function (x, mean = 0, sd = 1, log = FALSE)
.Internal(dnorm(x, mean, sd, log))
environment: namespace:stats

How can I see the code used for densities? Thanks!
Re: [R] fast NA elimination ?
Hi Ivo, Try ?na.omit Example:

d <- data.frame(x = c(1:5,NA), y = c(NA,3:7))
d
   x  y
1  1 NA
2  2  3
3  3  4
4  4  5
5  5  6
6 NA  7
do <- na.omit(d)
do
  x y
2 2 3
3 3 4
4 4 5
5 5 6

I usually pass na.omit within the data argument of a function, i.e. m <- lm(x ~ y, data = na.omit(d)). In this way you don't have to store 2 datasets. I hope that this helps. Francisco

From: Marc Schwartz [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] To: ivo welch [EMAIL PROTECTED] CC: R-Help [EMAIL PROTECTED] Subject: Re: [R] fast NA elimination ? Date: Wed, 07 Jul 2004 09:41:39 -0500 On Wed, 2004-07-07 at 09:35, ivo welch wrote: dear R wizards: an operation I execute often is the deletion of all observations (in a matrix or data set) that have at least one NA. (I now need this operation for kde2d, because its internal quantile call complains; could this be considered a buglet?) usually, my data sets are small enough for speed not to matter, and there I do not care whether my method is pretty inefficient (ok, I admit it: I use the sum() function and test whether the result is NA)---but now I have some bigger data sets. Is there a recommended method of doing NA elimination most efficiently? sincerely, /iaw --- ivo welch professor of finance and economics brown / nber / yale Take a look at ?complete.cases HTH, Marc Schwartz
[R] Win32 C code
Hi, I'm trying to get C code working with R. This is my first time writing C on Windows and I'm making a mess of it. Help! I'm following the example in Roger Peng's An Introduction to the .C interface to R. The C code is:

#include <R.h>

void hello(int *n){
    int i;
    for(i = 0; i < *n; i++) {
        Rprintf("Hello, world!\n");
    }
}

I seem to be unable to make Windows pay attention to additions to the PATH variable, so I stuck the code (test.c) into the $R_HOME\bin directory. I copied into the same directory mingw32-make.exe and renamed it make.exe (as the perl script SHLIB seems to want a make.exe). When I type Rcmd SHLIB test.c at a command prompt I get the following:

C:\Program Files\R\rw1091\bin> Rcmd SHLIB test.c
C:/PROGRA~1/R/rw1091/src/gnuwin32/MkRules:110: warning: overriding commands for target `.c.d'
C:/PROGRA~1/R/rw1091/src/gnuwin32/MkRules:98: warning: ignoring old commands for target `.c.d'
C:/PROGRA~1/R/rw1091/src/gnuwin32/MkRules:126: warning: overriding commands for target `.c.o'
C:/PROGRA~1/R/rw1091/src/gnuwin32/MkRules:114: warning: ignoring old commands for target `.c.o'
MkRules:110: warning: overriding commands for target `.c.d'
MkRules:98: warning: ignoring old commands for target `.c.d'
MkRules:126: warning: overriding commands for target `.c.o'
MkRules:114: warning: ignoring old commands for target `.c.o'
MkRules:110: warning: overriding commands for target `.c.d'
MkRules:98: warning: ignoring old commands for target `.c.d'
MkRules:126: warning: overriding commands for target `.c.o'
MkRules:114: warning: ignoring old commands for target `.c.o'
make: *** No rule to make target `'test.c'', needed by `makeMakedeps'.  Stop.

I'm obviously an idiot but any help offered would be much appreciated. -- SC
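Once the build succeeds, calling the routine from R looks roughly like this (a sketch; it assumes Rcmd SHLIB produced test.dll in the current working directory):

```r
dyn.load("test.dll")             # load the compiled DLL
.C("hello", n = as.integer(3))   # should print "Hello, world!" three times
dyn.unload("test.dll")           # unload when done
```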
Re: [R] Enumeration in R
On Tue, 7 Jul 2004, Peter Dalgaard wrote: Paul Roebuck [EMAIL PROTECTED] writes: I want the equivalent of this 'C' declaration. enum StoplightColor { green = 3, yellow = 5, red = 7 }; I think you *dis*abled it by specifying an initializer which doesn't check the validity: Thanks. It all seems obvious once it's pointed out to you. I have a couple of related questions: 1) When I was reading Chambers' book (pg 288), my initial impression was that there was a way I could have done this class without specifying the representation in 'setClass' - I just couldn't figure out how to assign values that way. Could this class have been defined 'slotless'? If so, how? 2) How do you define a class and instantiate some, yet prevent more from being created after that. Possibly better stated, from the package's API view, I would like these to be instance variables of opaque types. So I would like to create my 'global' constants in an initialization routine and then prevent use of 'new' to create any more. I tried the following with no success. Possible? setMethod("new", "stoplightColor", function(Class, ...) stop("can't make any more")) 3) This seems kind of painful for trivial stuff. My idea was to move some of the validation error checking out of my project by converting certain function arguments into classes that could be validated upon creation, improving the clarity of project routines. What is the canonical style used in R package authoring?
- stoplightColor.R

setClass("stoplightColor",
         representation(value = "integer"),
         prototype = integer(1))

stoplightColor <- function(value) {
    new("stoplightColor", value)
}

valid.stoplightColor <- function(object) {
    valid <- switch(as(object@value, "character"),
                    "3" = TRUE,
                    "5" = TRUE,
                    "7" = TRUE,
                    FALSE)
    if (valid == FALSE)
        return('Invalid value - must be [3|5|7]');
    return(TRUE);
}
setValidity("stoplightColor", valid.stoplightColor)

initialize.stoplightColor <- function(.Object, value) {
    if (missing(value) || is.na(value))
        stop('Argument value is missing or NA')
    .Object@value <- as.integer(value)
    validObject(.Object)
    .Object
}
setMethod("initialize", signature(.Object = "stoplightColor"),
          initialize.stoplightColor)

stoplightColor.as.integer <- function(from) {
    return(from@value)
}
setAs("stoplightColor", "integer", stoplightColor.as.integer)

green  <- stoplightColor(3)
yellow <- stoplightColor(5)
red    <- stoplightColor(7)

-- SIGSIG -- signature too long (core dumped)
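On question 3), a slotless alternative for trivial cases (a sketch, not from the thread) is to validate plain integers against a named lookup vector, which avoids S4 entirely:

```r
## A lightweight "enum": a named integer vector plus a checker.
stoplight <- c(green = 3L, yellow = 5L, red = 7L)

as.stoplight <- function(value) {
    if (!value %in% stoplight)
        stop("Invalid value - must be one of 3, 5, 7")
    as.integer(value)
}

as.stoplight(5)        # ok, returns 5L
## as.stoplight(4)     # would stop with an error
```

This trades the opaque-type protection of the S4 version for far less ceremony; which is appropriate depends on how much the package API needs to hide.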
[R] Histograms, density, and relative frequencies
R-users, I have been using R for about 1 year, and I have run across a couple of graphics problems that I am not quite sure how to address. I have read up on the email threads regarding the differences between density and relative frequencies (count/sum(count)) on the R list, and I am hoping that someone could provide me with some advice/comments concerning my approach. I will admit that some of the underlying mathematics of the density discussion are beyond my current understanding, but I am looking into it. I have a data set (600,000 obs) used to parameterize a probabilistic causal model where each obs is a population response for one of 2 classes (either regs1 or regs2). I have been attempting to create 1 marginal probability plot with 2 lines (one for each class). Using my rather rough code, I created a plot that seems to adhere to the commonly used (although from what I can understand wrong) relative frequency histogram approach. My rough code looks like this:

bk <- c(0, .05, .1, .15, .2, .25, .3, .35, 1)
par(mfrow=c(1, 1))
fawn1 <- hist(MFAWNRESID[regs1], plot=FALSE, breaks=bk)
fawn2 <- hist(MFAWNRESID[regs2], plot=FALSE, breaks=bk)
count1 <- fawn1$counts/sum(fawn1$counts)
count2 <- fawn2$counts/sum(fawn2$counts)
b <- c(0, .05, .1, .15, .2, .25, .3, .35)
plot(count1 ~ b, xaxt="n", xlim=c(0, .5), ylim=c(0, .40), pch=".", bty="l")
lines(spline(count1 ~ b), lty=1, lwd=2, col="black")
lines(spline(count2 ~ b), lty=2, lwd=2, col="black")
axis(side=1, at=c(0, .05, .1, .15, .2, .25, .3, .35))

Using the above, I get frequency values for regs1 that look like this (which is the same as output for my probabilistic model):

count1
[1] 1.213378e-01 3.454324e-01 3.365343e-01 1.580839e-01 3.342101e-02
[6] 4.698426e-03 4.488942e-04 4.322685e-05

First, count1 is the frequency of occurrence within the range 0-0.05, but when plotted it is the value at b=0 and does not really represent the range? Are there any suggestions on a technique to approach this?
Next: Using the above code, the x-axis values end at 0.35, but the axis continues (because bk ends at 1). While there is a chance of occurrence out past .35, it is low, and I want to extend the lines to about .35 and clip the x-axis. But I have been unable to figure out how to clip. Could someone point me in the correct direction? TIA, Bret A. Collier Arkansas Cooperative Fish and Wildlife Research Unit Department of Biological Sciences University of Arkansas
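On the first question, one common fix (a sketch on made-up data, since MFAWNRESID is not available) is to plot each relative frequency at the midpoint of its bin, which hist() already returns in the $mids component, and to clip the axis with xlim:

```r
set.seed(1)
x <- rbeta(1000, 2, 10)   # made-up data on [0, 1]
bk <- c(0, .05, .1, .15, .2, .25, .3, .35, 1)

h <- hist(x, breaks = bk, plot = FALSE)
relfreq <- h$counts / sum(h$counts)

## h$mids holds the bin midpoints, so each point represents its bin;
## xlim clips the display even though the last break extends to 1
plot(h$mids, relfreq, type = "b", xlim = c(0, 0.35),
     xlab = "value", ylab = "relative frequency")
```

Note the bins above have unequal widths, which is exactly why relative frequency and density disagree here: density divides each count by the bin width as well.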
Re: [R] Win32 C code
On Wed, 7 Jul 2004, Simon Cullen wrote: I'm trying to get C code working with R. This is my first time writing C on Windows and I'm making a mess of it. Help! ... I seem to be unable to make Windows pay attention to additions to the PATH variable so I stuck the code (test.c) into the $R_HOME\bin directory. I copied into the same directory mingw32-make.exe and renamed it make.exe (as the perl script SHLIB seems to want a make.exe). ... I wrote a batch file to get mine working. Change directory paths to match your setup. WINBUILD.CMD - @cls @SETLOCAL @set PROJ=rwt @set RBINDIR=C:\R\rw1091\bin @set TOOLSBINDIR=C:\Rtools\bin @set MINGWBINDIR=C:\MinGW\bin @set PERLBINDIR=C:\Perl\bin @set TEXBINDIR=C:\PROGRA~1\TeXLive\bin\win32 @set HCCBINDIR=C:\PROGRA~1\HTMLHE~1 Next line split for readability @set PATH=%TOOLSBINDIR%;%RBINDIR%;%MINGWBINDIR%;%PERLBINDIR%; %TEXBINDIR%;%HCCBINDIR%;%WINDIR%\system32;%WINDIR% @echo PATH=%PATH% Rcmd build --binary %PROJ% Rcmd check %PROJ% @ENDLOCAL -- SIGSIG -- signature too long (core dumped) __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Win32 C code
On Wed, 7 Jul 2004 12:32:41 -0500 (CDT), Paul Roebuck [EMAIL PROTECTED] wrote: I wrote a batch file to get mine working. Change directory paths to match your setup. That batch file was exactly what I needed! Thanks. -- SC __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Observational error in ARIMA
Hi, Does anyone know how to include observation errors in the arima of R, which is implemented with the Kalman filter. I want to estimate observational error variance for noisy data in the context of ARMA model using arima of R. I read the manual and tried the example codes, but did not find the solution. From the outputs of the components model, it seems to me that the default setting of the arima does not include the observational error in the fitting. The elements of matrix H are zeros when executing the example codes. Am I right on this one? Thanks in advance. Sincerely, Guiming Wang Natural Resource Ecology Lab Colorado State University Fort Collins, CO 80523 [[alternative HTML version deleted]] __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Importing an Excel file
Hello, R users, I am a very beginner of R and tried read.csv to import an excel file after saving an excel file as csv. But it added alternating rows of fictitious NA values after row number 16. When I applied read.delim, there were trailing several commas at the end of each row after row number 16 instead of NA values. Appreciate your help. Kyong [[alternative HTML version deleted]] __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] Sorry forgot to mention about OS system in an previous email
Sorry for not mentioning about OS for importing an excel file in an previous email. I'm using R 1.9.1 with Windows 2000. Kyong [[alternative HTML version deleted]] __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] lost messages
I've posted a message twice to this list, and never seen it appear yet ... perhaps this one will go through ... ? -- Aaron J. Mackey, Ph.D. Dept. of Biology, Goddard 212 University of Pennsylvania email: [EMAIL PROTECTED] 415 S. University Avenue office: 215-898-1205 Philadelphia, PA 19104-6017 fax:215-746-6697 __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Generate a matrix Q satisfying t(Q)%*%Q=Z and XQ=W
thanks, I want to create matrices for simulation purpose (in order to evaluate the efficiency of different methods on this simulated data set). So, I want to create a data matrix Q (m individuals and r variables). I want to specify the variance-covariance structure for this matrix (t(Q)%*%Q=Z ) but I want also to create another constraint due to another matrix of data. I want that the covariance of Q and X are equal to those given in W (XQ=W). The example I gave is just to illustrate my problem and perhaps it has no solution (I cannot see it because I have no idea how to construct Q such as Q=Q1=Q2) At 00:00 07/07/2004, Spencer Graves wrote: Is a solution even possible for the matrices in your example? I've tried a few things that have suggested that a solution may not be possible. What can you tell us of the problem that you've translated into this? I see a minimization problem subject to constraints, but I'm not certain which are the constraints and what is the objective function. For example, are you trying to find Q to minimize sum((Z-X'X)^2) subject to XQ=W or do you want to minimize sum((XQ-W)^2) subject to Q'Q=Z or something else? If it were my problem, I think I would work for a while with the singular value decompositions of X, W and Z, and see if that would lead me to more information about Q, including conditions under which a solution existed, expressions for Q when multiple solutions existed, and a solution minimizing your chosen objective function when solutions do not exist. (A google search produced many hits for singular value decomposition, implemented as svd in R.) hope this helps. spencer graves Stephane DRAY wrote: Hello, I have a question that is not directly related to R ... 
but I try to do it in R ;-) : I would like to generate a matrix Q satisfying (for a given Z, X and W) the two following conditions: t(Q)%*%Q=Z (1) XQ=W (2) where: Q is m rows and r columns, X is p rows and m columns, D is p rows and r columns, C is r rows and r columns, with m > p, r e.g.: m=6, p=2, r=3

Z=matrix(c(1,.2,.5,.2,1,.45,.5,.45,1),3,3)
X=matrix(c(.1,.3,.5,.6,.2,.1,.8,1,.4,.2,.2,.9),2,6)
W=matrix(c(0,.8,.4,.6,.2,0),2,3)
# Creating a matrix satisfying (1) is easy:
A=matrix(runif(18),6,3)
Q1=svd(A)$u%*%chol(Z)
# For the second condition (2), a solution is given by
Q2=A%*%ginv(X%*%A)%*%W

I do not know how to create a matrix Q that satisfies the two conditions. I have tried to construct an iterative procedure without success (no convergence):

eps=10
i=0
while(eps > .5) {
  Q1=svd(Q2)$u%*%chol(Z)
  Q2=Q1%*%ginv(X%*%Q1)%*%W
  eps=sum(abs(Q1-Q2))
  cat(i, ":", eps, "\n")
  i=i+1
}

Perhaps someone could have any idea to solve the problem, or a reference on this kind of question, or the email of another list where I should ask this question. Thanks in advance, Sincerely. Stéphane DRAY -- Département des Sciences Biologiques Université de Montréal, C.P. 6128, succursale centre-ville Montréal, Québec H3C 3J7, Canada Tel : 514 343 6111 poste 1233 E-mail : [EMAIL PROTECTED] -- Web http://www.steph280.freesurf.fr/

Stéphane DRAY -- Département des Sciences Biologiques Université de Montréal, C.P. 6128, succursale centre-ville Montréal, Québec H3C 3J7, Canada Tel : 514 343 6111 poste 1233 E-mail : [EMAIL PROTECTED] -- Web http://www.steph280.freesurf.fr/
[R] vectorizing sapply() code (Modified by Aaron J. Mackey)
[ Not sure why, but the first two times I sent this it never seemed to go through; apologies if you're seeing this thrice ... ] I have some fully functional code that I'm guessing can be done better/quicker with some savvy R vector tricks; any help to make this run a bit faster would be greatly appreciated; I'm particularly stuck on how to calculate using row-wise vectors without iterating explicitly over the dataframe or table ...

library(stats4);
d <- data.frame(
  ix=c(0,1,2,3,4,5,6,7),
  ct=c(253987, 9596, 18680, 2630, 8224, 3590, 5534, 18937),
  A=c( 0, 1, 0, 1, 0, 1, 0, 1),
  B=c( 0, 0, 1, 1, 0, 0, 1, 1),
  C=c( 0, 0, 0, 0, 1, 1, 1, 1)
);
ct <- round(logb(length(d$ix), 2))

ll <- function( th=0.5, a1=log(0.5), a2=log(0.5), a3=log(0.5),
                b1=log(0.5), b2=log(0.5), b3=log(0.5) ) {
  a <- exp(sapply(1:ct, function (x) { get(paste("a", x, sep="")) }));
  b <- exp(sapply(1:ct, function (x) { get(paste("b", x, sep="")) }));
  -sum( d$ct * log( sapply( d$ix, function (ix, th, a, b) {
      x <- d[ix+1, 3:(ct+2)]
      (th * prod((b ^ (1-x)) * ((1-b) ^ x))) +
        ((1-th) * prod((a ^ x) * ((1-a) ^ (1-x))))
    }, th, a, b ) ) );
}

ml <- mle(ll, lower=c(0+1e-5, rep(log(0+1e-8), 2*ct)),
          upper=c(1-1e-5, rep(log(1-1e-8), 2*ct)),
          method="L-BFGS-B" );

For those interested in the math, this is the MLE procedure to estimate the false positive/false negative rates (a and b) of three diagnostic tests (A, B and C) that have the observed performance recapitulated in dataframe d, but no gold standard (sometimes called latent class analysis, or LCA). Thanks for any help, -Aaron
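The inner sapply() over rows can be replaced by matrix arithmetic: the row-wise products become matrix products of logs. A sketch (not checked against the original fit; here a, b and th are on the probability scale, and d and ct are as defined above):

```r
## Indicator columns as an 8 x 3 matrix (one row per response pattern)
X <- as.matrix(d[, 3:(ct + 2)])

ll.vec <- function(th, a, b) {
    ## prod(b^(1-x) * (1-b)^x) over tests = exp of a matrix product
    log.neg <- (1 - X) %*% log(b) + X %*% log(1 - b)
    log.pos <- X %*% log(a) + (1 - X) %*% log(1 - a)
    -sum(d$ct * log(th * exp(log.neg) + (1 - th) * exp(log.pos)))
}
```

The same trick would apply inside ll() after the get()-based unpacking of the parameters, so only one call per likelihood evaluation touches the data.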
Re: [R] Importing an Excel file
On Wed, 2004-07-07 at 13:21, Park, Kyong H Mr. RDECOM wrote: Hello, R users, I am a very beginner of R and tried read.csv to import an excel file after saving an excel file as csv. But it added alternating rows of fictitious NA values after row number 16. When I applied read.delim, there were trailing several commas at the end of each row after row number 16 instead of NA values. Appreciate your help. Kyong Yep. This is one of the behaviors that I had seen with Excel when I was running Windows XP. Seemingly empty cells outside the data range would get exported in the CSV file causing a data integrity problem. It is one of the reasons that I installed OpenOffice under Windows and used Calc to open the Excel files and then do the CSV exports before I switched to Linux :-) Depending upon the version of Excel you are using, you might try to highlight and copy only the rectangular range of cells in the sheet that actually have data to a new sheet and then export the new sheet to a CSV file. Do not just click on the upper left hand corner of the sheet to highlight the entire sheet to copy it. Only highlight the range of cells you actually need for copying. Another option is to use the read.xls() function in the 'gregmisc' package on CRAN or install OpenOffice. HTH, Marc Schwartz __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] KalmanSmooth problem
Hazenberg21, Pieter wrote: Hello, In R I am trying to use Kalman filtering to find a solution for a hydrological problem. With Kalman filtering I want to estimate the discharge coming from three storage basins. I have programmed a function in R which can run KalmanSmooth. When I call the function with my values, R reports the following error: Error in as.vector(data) : Argument S1 is missing, with no default. I have tried to find a solution for this error in the R help files, and in different manuals, but I can't find it. Please help me find a solution. Question: What does R mean with S1 and what am I doing wrong? Here is the way I have programmed the hydrological problem in R.

discharge=read.table(file="C:/Program Files/R/rw1090/discharge.txt", header=T)
deb=discharge[,1]
deb
[1] 11.545313 8.045465 5.670868 4.044584 2.919311 2.306668 2.940956 [8] 4.238159 5.017374 3.818236 2.928805 2.262183 1.757765 1.633945 [15] 2.295130 3.454054 4.035224 3.193967 2.533181 2.012406 1.600836 [22] 1.652155 2.428678 3.642827 4.019545 3.209473 2.563617 2.048347 [29] 1.637041 1.828952 2.757842 4.050821 4.147013 3.316503 2.652490 [36] 2.121535 1.696934 2.027763 3.107366 4.429670 4.160178 3.327950 [43] 2.662237 2.129710 1.703717 2.158095 3.337039 4.582359 3.905901 [50] 3.124690 2.499732 1.999772 1.599810 2.130893 3.302622 4.336081 [57] 3.468857 2.775081 2.220062 1.776048 1.560859 2.169537 3.348081 [64] 4.170552 3.336440 2.669151 2.135320 1.708256 1.648859 2.374217 [71] 3.624091 4.248563 3.398850 2.719080 2.175264 1.740211 1.826122 [78] 2.704749 4.056438 4.437309 3.549847 2.839878 2.271902 1.817522 [85] 2.053994 3.107875 4.548436 4.600601 3.680481 2.944385 2.355508 [92] 1.884406 2.273248 3.490148 4.949898 4.584409 3.667527 2.934022 [99] 2.347217 1.84

Kalm = function(x,O1,O2,O3,T1,T2,T3,T4,T5,t,ga){ + t=array(c(1+ga*O1+t/O1*(-(1/T2)-(1/T3)-(1/T1)),t/O1*(1/T2),t/O1*(1/T3), + t/O2*(1/T2),1+ga*O2+t/O2*(-(1/T2)-(1/T4)),t/O2*(1/T4), +
t/O3*(1/T3),t/O3*(1/T4),1+ga*O3+t/O3*(-(1/T3)-(1/T4)-(1/T5))),dim=c(3,3)); + h=0.5; + r=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3)); + q=1; + v=r*q*t(r); + a=10.14286; + z=array(c((1/T1),0,0,0,0,0,0,0,(1/T5)), dim=c(3,3)); + p=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3)); + pn=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3)); + kal=KalmanSmooth(x, list(T=t,Z=z,h=h,V=v,a=a,P=p,Pn=pn), nit=0) + kal}

Kalm(deb,4,.5,5,.7,.1,2,3,4,1,0.65)
Error in as.vector(data) : Argument S1 is missing, with no default

First I thought I had to make a time series out of deb. But this doesn't change the problem. Lots of thanks for trying to help me.

a) Please tell us R version and OS (OK, implicitly done that we are talking about R-1.9.0 on Windows). b) Please tell us which packages you are using (I don't know KalmanSmooth(), for example). c) Please try to specify reproducible examples. Conclusion for a-c): Please read the posting guide. My hint is to try to debug yourself, at least *try*, starting with calling traceback() right after the error appears, in order to get a guess where the error really happens. Then look at the data that is passed to the function - and I'm pretty sure you will get at least an idea of what goes wrong. Uwe Ligges

Best regards, Pieter Hazenberg Student Hydrology and Watermanagement Wageningen University The Netherlands
Re: [R] Importing an Excel file
On Wed, 2004-07-07 at 13:44, Marc Schwartz wrote: On Wed, 2004-07-07 at 13:21, Park, Kyong H Mr. RDECOM wrote: Hello, R users, I am a very beginner of R and tried read.csv to import an Excel file after saving the Excel file as CSV. But it added alternating rows of fictitious NA values after row number 16. When I applied read.delim, there were several trailing commas at the end of each row after row number 16 instead of NA values. Appreciate your help. Kyong

One other thing: The default delimiting characters in read.csv() and read.delim() are NOT the same. The former uses a comma and the latter a TAB character. If you did not change the defaults in Excel when you created your CSV file, that would account for the different behaviors upon import. Be sure that the delimiting character in the R function you use properly corresponds to the actual delimiting character in your file. Marc
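A quick way to diagnose such files before committing to read.csv() (a sketch; the filename is illustrative) is to look at the raw lines and field counts:

```r
## Peek at the raw lines to see the actual delimiter and any
## trailing commas Excel may have added past the real data range.
readLines("mydata.csv", n = 20)

## count.fields() shows how many fields each row parses into;
## a sudden change around row 16 would confirm the problem.
count.fields("mydata.csv", sep = ",")

## Then import, and drop rows that are entirely NA afterwards:
d <- read.csv("mydata.csv")
d <- d[rowSums(!is.na(d)) > 0, ]
```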
Re: [R] Generate a matrix Q satisfying t(Q)%*%Q=Z and XQ=W
How about generating matrices (X1|X2), dim(X1) = c(n, k1), dim(X2) = c(n, k2), with mean 0 and covariance matrix as follows: (S11 | S12) (S21 | S22), with S12 = W, S22 = Z and S11 = whatever you want? With X = t(X1), and Q = X2, we have E(XQ) = W and E(Q'Q) = Z. This can be done using rmvnorm in package mvtnorm. hope this helps. spencer graves Stephane DRAY wrote: thanks, I want to create matrices for simulation purpose (in order to evaluate the efficiency of different methods on this simulated data set). So, I want to create a data matrix Q (m individuals and r variables). I want to specify the variance-covariance structure for this matrix (t(Q)%*%Q=Z ) but I want also to create another constraint due to another matrix of data. I want that the covariance of Q and X are equal to those given in W (XQ=W). The example I gave is just to illustrate my problem and perhaps it has no solution (I cannot see it because I have no idea how to construct Q such as Q=Q1=Q2) At 00:00 07/07/2004, Spencer Graves wrote: Is a solution even possible for the matrices in your example? I've tried a few things that have suggested that a solution may not be possible. What can you tell us of the problem that you've translated into this? I see a minimization problem subject to constraints, but I'm not certain which are the constraints and what is the objective function. For example, are you trying to find Q to minimize sum((Z-X'X)^2) subject to XQ=W or do you want to minimize sum((XQ-W)^2) subject to Q'Q=Z or something else? If it were my problem, I think I would work for a while with the singular value decompositions of X, W and Z, and see if that would lead me to more information about Q, including conditions under which a solution existed, expressions for Q when multiple solutions existed, and a solution minimizing your chosen objective function when solutions do not exist. (A google search produced many hits for singular value decomposition, implemented as svd in R.) hope this helps. 
spencer graves Stephane DRAY wrote: Hello, I have a question that is not directly related to R ... but I try to do it in R ;-) : I would like to generate a matrix Q satisfying (for a given Z, X and W) the two following conditions: t(Q)%*%Q=Z (1) XQ=W (2) where: Q is m rows and r columns, X is p rows and m columns, D is p rows and r columns, C is r rows and r columns, with m > p, r e.g.: m=6, p=2, r=3

Z=matrix(c(1,.2,.5,.2,1,.45,.5,.45,1),3,3)
X=matrix(c(.1,.3,.5,.6,.2,.1,.8,1,.4,.2,.2,.9),2,6)
W=matrix(c(0,.8,.4,.6,.2,0),2,3)
# Creating a matrix satisfying (1) is easy:
A=matrix(runif(18),6,3)
Q1=svd(A)$u%*%chol(Z)
# For the second condition (2), a solution is given by
Q2=A%*%ginv(X%*%A)%*%W

I do not know how to create a matrix Q that satisfies the two conditions. I have tried to construct an iterative procedure without success (no convergence):

eps=10
i=0
while(eps > .5) {
  Q1=svd(Q2)$u%*%chol(Z)
  Q2=Q1%*%ginv(X%*%Q1)%*%W
  eps=sum(abs(Q1-Q2))
  cat(i, ":", eps, "\n")
  i=i+1
}

Perhaps someone could have any idea to solve the problem, or a reference on this kind of question, or the email of another list where I should ask this question. Thanks in advance, Sincerely. Stéphane DRAY -- Département des Sciences Biologiques Université de Montréal, C.P. 6128, succursale centre-ville Montréal, Québec H3C 3J7, Canada Tel : 514 343 6111 poste 1233 E-mail : [EMAIL PROTECTED] -- Web http://www.steph280.freesurf.fr/

Stéphane DRAY -- Département des Sciences Biologiques Université de Montréal, C.P. 6128, succursale centre-ville Montréal, Québec H3C 3J7, Canada Tel : 514 343 6111 poste 1233 E-mail : [EMAIL PROTECTED] -- Web http://www.steph280.freesurf.fr/
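Spencer's construction can be sketched with rmvnorm from package mvtnorm. Note that S11 ("whatever you want") must be chosen large enough for the block covariance to be positive definite; for this particular Z and W, 5*diag(2) works while diag(2) does not:

```r
library(mvtnorm)   # provides rmvnorm

Z <- matrix(c(1, .2, .5, .2, 1, .45, .5, .45, 1), 3, 3)
W <- matrix(c(0, .8, .4, .6, .2, 0), 2, 3)
S11 <- 5 * diag(2)

## Covariance of (X1 | X2): off-diagonal block S12 = W, S22 = Z
Sigma <- rbind(cbind(S11, W),
               cbind(t(W), Z))

XQ <- rmvnorm(6, mean = rep(0, 5), sigma = Sigma)
X  <- t(XQ[, 1:2])   # p x m, plays the role of X
Q  <- XQ[, 3:5]      # m x r, plays the role of Q
## Only in expectation (and up to a factor of the sample size) do
## X %*% Q and t(Q) %*% Q match W and Z; any single draw will differ,
## which is why this suits simulation rather than exact construction.
```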
[R] lme: extract variance estimate
Spencer Graves wrote: Have you considered VarCorr? I've used it with lme, and the documentation in package lme4 suggests it should work with GLMM, which might also do what you want from glmmPQL. Thanks for the pointer (I was not aware that nlme and lme4 had different versions of lme), but I'm still stuck at the same place using lme4: VarCorr(fit) Groups Name Variance Std.Dev. yeart (Intercept) 0.040896 0.20223 Residual 0.091125 0.30187 The number I need to extract and store is the .20223, but all the components I can find in VarCorr(fit) are something else. u=VarCorr(fit); slotNames(u) [1] "scale" "reSumry" "useScale" u@scale [1] 0.3018693 u@reSumry $yeart An object of class corrmatrix (Intercept) (Intercept) 1 Slot stdDev: (Intercept) 0.6699156 u@useScale [1] TRUE In glmmML the estimate is returned as the $sigma component of the model, but I also need the same info from 'family=gaussian' models. Stephen P. Ellner ([EMAIL PROTECTED]) Department of Ecology and Evolutionary Biology Corson Hall, Cornell University, Ithaca NY 14853-2701 Phone (607) 254-4221 FAX (607) 255-8088 __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] Generate a matrix Q satisfying t(Q)%*%Q=Z and XQ=W
Thanks, but it does not solve my problem, because I would like to generate Q (X2) for a given X (X1). X is fixed. But it seems hard to do, and perhaps it would be easier to change my approach. Thanks again for your help. At 14:55 07/07/2004, Spencer Graves wrote: How about generating matrices (X1|X2), dim(X1) = c(n, k1), dim(X2) = c(n, k2), with mean 0 and covariance matrix as follows: (S11 | S12) (S21 | S22), with S12 = W, S22 = Z and S11 = whatever you want? With X = t(X1), and Q = X2, we have E(XQ) = W and E(Q'Q) = Z. This can be done using rmvnorm in package mvtnorm. hope this helps. spencer graves Stephane DRAY wrote: Thanks, I want to create matrices for simulation purposes (in order to evaluate the efficiency of different methods on this simulated data set). So, I want to create a data matrix Q (m individuals and r variables). I want to specify the variance-covariance structure for this matrix (t(Q)%*%Q=Z), but I also want to impose another constraint coming from a second data matrix: I want the covariances of Q and X to be equal to those given in W (XQ=W). The example I gave is just to illustrate my problem, and perhaps it has no solution (I cannot see one, because I have no idea how to construct Q such that Q=Q1=Q2). At 00:00 07/07/2004, Spencer Graves wrote: Is a solution even possible for the matrices in your example? I've tried a few things that have suggested that a solution may not be possible. What can you tell us of the problem that you've translated into this? I see a minimization problem subject to constraints, but I'm not certain which are the constraints and what is the objective function. For example, are you trying to find Q to minimize sum((Z-Q'Q)^2) subject to XQ=W, or do you want to minimize sum((XQ-W)^2) subject to Q'Q=Z, or something else?
If it were my problem, I think I would work for a while with the singular value decompositions of X, W and Z, and see if that would lead me to more information about Q, including conditions under which a solution exists, expressions for Q when multiple solutions exist, and a solution minimizing your chosen objective function when none exists. (A Google search produced many hits for singular value decomposition, implemented as svd in R.) hope this helps. spencer graves Stephane DRAY wrote: Hello, I have a question that is not directly related to R ... but I try to do it in R ;-) : I would like to generate a matrix Q satisfying (for given Z, X and W) the two following conditions: t(Q)%*%Q=Z (1) XQ=W (2) where Q is m rows and r columns, X is p rows and m columns, W is p rows and r columns, Z is r rows and r columns, with m > p, r. E.g.: m=6, p=2, r=3
Z=matrix(c(1,.2,.5,.2,1,.45,.5,.45,1),3,3)
X=matrix(c(.1,.3,.5,.6,.2,.1,.8,1,.4,.2,.2,.9),2,6)
W=matrix(c(0,.8,.4,.6,.2,0),2,3)
# Creating a matrix satisfying (1) is easy:
A=matrix(runif(18),6,3)
Q1=svd(A)$u%*%chol(Z)
# For the second condition (2), a solution is given by:
Q2=A%*%ginv(X%*%A)%*%W
I do not know how to create a matrix Q that satisfies both conditions. I have tried to construct an iterative procedure, without success (no convergence):
eps=10
i=0
while(eps > .5) {
Q1=svd(Q2)$u%*%chol(Z)
Q2=Q1%*%ginv(X%*%Q1)%*%W
eps=sum(abs(Q1-Q2))
cat(i, ":", eps, "\n")
i=i+1
}
Perhaps someone has an idea how to solve the problem, a reference on this kind of question, or the address of another list where I should ask it. Thanks in advance, Sincerely. Stéphane DRAY -- Département des Sciences Biologiques Université de Montréal, C.P. 6128, succursale centre-ville Montréal, Québec H3C 3J7, Canada Tel : 514 343 6111 poste 1233 E-mail : [EMAIL PROTECTED] -- Web http://www.steph280.freesurf.fr/ __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html Stéphane DRAY -- Département des Sciences Biologiques Université de Montréal, C.P. 6128, succursale centre-ville Montréal, Québec H3C 3J7, Canada Tel : 514 343 6111 poste 1233 E-mail : [EMAIL PROTECTED] -- Web http://www.steph280.freesurf.fr/
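Spencer's multivariate-normal suggestion can be sketched concretely. This is only an illustrative sketch, not the poster's method: rows are drawn from a multivariate normal whose block covariance carries W and Z, so the constraints hold in expectation rather than exactly. The W and Z below are small hypothetical matrices chosen so that the block covariance is positive definite (for the poster's own W and Z this need not hold, which may be why no exact solution was found):

```r
library(mvtnorm)  # provides rmvnorm()

p <- 2; r <- 3
W <- matrix(0.1, p, r)   # hypothetical p x r cross-covariance target
Z <- diag(r)             # hypothetical r x r covariance target

# Block covariance (S11 | S12; S21 | S22) with S11 = I, S12 = W, S22 = Z
S <- rbind(cbind(diag(p), W),
           cbind(t(W),    Z))

n  <- 100000
XQ <- rmvnorm(n, mean = rep(0, p + r), sigma = S)
X  <- t(XQ[, 1:p])            # p x n
Q  <- XQ[, (p + 1):(p + r)]   # n x r

# These approximate W and Z; equality holds only in expectation:
(X %*% Q) / n
crossprod(Q) / n
```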
[R] a small bug in spatstat::rmh
From time to time, rmh.default fails to simulate a lookup-type process on this statement: if(all.equal(diff(r),rep(deltar,nlook-1))) { equisp <- 1; par <- c(beta,nlook,equisp,deltar,rmax,h) } else { equisp <- 0; par <- c(beta,nlook,equisp,deltar,rmax,h,r) } According to the manual, all.equal should not be used directly in an if() statement. This works instead: identical(all.equal(diff(r), rep(deltar, nlook - 1)), TRUE) Evgueni
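The reason for the failure, as ?all.equal explains: when the objects differ, all.equal() returns a character description of the difference rather than FALSE, and if() cannot interpret a character string as a logical. A small sketch of the safe idioms:

```r
x <- c(1, 2, 3)
y <- c(1, 2, 3.1)

all.equal(x, y)
# Returns TRUE when the objects are (nearly) equal, but a character
# string describing the mismatch when they are not -- so
# if (all.equal(x, y)) throws an error in the latter case.

# Safe patterns:
identical(all.equal(x, y), TRUE)   # FALSE here, never a string
isTRUE(all.equal(x, y))            # equivalent idiom in recent R versions
```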
Re: [R] Importing an Excel file
On Wednesday, 7 July 2004 20:21, Park, Kyong H Mr. RDECOM wrote: Hello, R users, I am very much a beginner with R and tried read.csv to import an Excel file after saving it as csv. But it added alternating rows of fictitious NA values after row number 16. When I applied read.delim, there were several trailing commas at the end of each row after row number 16 instead of NA values. Appreciate your help. I import my OpenOffice Calc files as follows (OOo or Excel won't make any difference, the csv format is the same): inp <- scan(file, sep=";", dec=",", list(0,0), skip = 13, nlines = 58) x <- inp[[1]]; y <- inp[[2]] sep=";": column separator; dec=",": decimal separator; list(0,0): first two columns; skip: no. of lines to skip (these lines contain comments etc.); nlines=58: 58 lines of values to read. hth, Richard -- Richard Müller - Am Spring 9 - D-58802 Balve-Eisborn [EMAIL PROTECTED] - www.oeko-sorpe.de
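The fictitious NA rows Park describes typically come from Excel writing trailing separators for empty cells. One possible cleanup after read.csv, dropping rows and columns that are entirely NA (the file name here is hypothetical), is:

```r
dat <- read.csv("data.csv", header = TRUE)

# Drop rows that are entirely NA (often produced by trailing commas
# that Excel writes for empty cells)
dat <- dat[!apply(is.na(dat), 1, all), , drop = FALSE]

# Likewise drop columns that are entirely NA (spurious empty columns)
dat <- dat[, !apply(is.na(dat), 2, all), drop = FALSE]
```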
[R] NAMESPACE and tests for unexported functions
How do you get around the problem of having tests for functions that are not exported in NAMESPACE? It seems rather self-defeating to have to export everything so that 'R CMD CHECK pkg' won't crash when it encounters a test case for an internal function. I don't want someone using the package to call the function, but the package itself should be able to see its own contents. -- SIGSIG -- signature too long (core dumped) __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] lme: extract variance estimate
I just tried it in both lme4 and nlme. I got it in nlme but not lme4. I'm sure lme4 is better in many ways, but I could not figure out how to get what you want in lme4. Specifically, I tried the following: DF <- data.frame(x=rep(letters[1:2], 2), y=rep(1:2, 2)+0.01*(1:4)) fit <- lme(y~1, random=~1|x, data=DF) VC <- VarCorr(fit) VC VC[1,2] When I did this in library(nlme), I got the following: VC x = pdLogChol(1) Variance StdDev (Intercept) 0.5101563004 0.71425227 Residual 0.0001999596 0.01414071 VC[1,2] [1] 0.71425227 However, when I did it in library(lme4), I got something different: VC Groups Name Variance Std.Dev. x (Intercept) 0.50995 0.71411 Residual 2e-04 0.014142 VC[1,2] Error in VC[1, 2] : incorrect number of dimensions I was concerned by the differences in the estimates in this case, so I ported this problem to S-Plus 6.2. There I got the lme4 answers from both lme and varcomp. hope this helps. spencer graves Stephen Ellner wrote: Spencer Graves wrote: Have you considered VarCorr? I've used it with lme, and the documentation in package lme4 suggests it should work with GLMM, which might also do what you want from glmmPQL. Thanks for the pointer (I was not aware that nlme and lme4 had different versions of lme), but I'm still stuck at the same place using lme4: VarCorr(fit) Groups Name Variance Std.Dev. yeart (Intercept) 0.040896 0.20223 Residual 0.091125 0.30187 The number I need to extract and store is the .20223, but all the components I can find in VarCorr(fit) are something else. u=VarCorr(fit); slotNames(u) [1] "scale" "reSumry" "useScale" u@scale [1] 0.3018693 u@reSumry $yeart An object of class corrmatrix (Intercept) (Intercept) 1 Slot stdDev: (Intercept) 0.6699156 u@useScale [1] TRUE In glmmML the estimate is returned as the $sigma component of the model, but I also need the same info from 'family=gaussian' models. Stephen P. Ellner ([EMAIL PROTECTED]) Department of Ecology and Evolutionary Biology Corson Hall, Cornell University, Ithaca NY 14853-2701 Phone (607) 254-4221 FAX (607) 255-8088
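For the nlme case that worked, the extraction can be wrapped up as follows. This is a sketch based on the thread: VarCorr() in nlme prints its entries as characters, so the value needs as.numeric() before it can be stored:

```r
library(nlme)

# Toy data, as in Spencer's example in this thread
DF  <- data.frame(x = rep(letters[1:2], 2), y = rep(1:2, 2) + 0.01 * (1:4))
fit <- lme(y ~ 1, random = ~ 1 | x, data = DF)

VC <- VarCorr(fit)
# Row 1 is the random intercept; column 2 holds the standard deviations.
# The entries are stored as character strings, hence the as.numeric():
sd.intercept <- as.numeric(VC[1, 2])
sd.residual  <- as.numeric(VC["Residual", "StdDev"])
```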
RE: [R] NAMESPACE and tests for unexported functions
Not sure if this is recommended for testing purposes, but you can try using `:::'; e.g., mypkg:::invisibleFunction(...). Andy From: Paul Roebuck How do you get around the problem of having tests for functions that are not exported in NAMESPACE? It seems rather self-defeating to have to export everything so that 'R CMD CHECK pkg' won't crash when it encounters a test case for an internal function. I don't want someone using the package to call the function, but the package itself should be able to see its own contents. -- SIGSIG -- signature too long (core dumped) __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
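A sketch of Andy's suggestion in a package test file; the package and function names here are hypothetical:

```r
# tests/test-internal.R of a hypothetical package "mypkg" whose
# NAMESPACE does not export internalHelper()

library(mypkg)

# `:::` accesses an unexported object inside a package's namespace,
# so R CMD check can exercise it without exporting it to users
res <- mypkg:::internalHelper(1:5)
stopifnot(is.numeric(res))
```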
Re: [R] KalmanSmooth problem
Spencer Graves wrote: Hi, Uwe: KalmanSmooth is in the stats package in R 1.9.1. R is moving so fast that it is impossible to keep current with all parts of it. Best Wishes, Spencer Graves Thanks, Spencer. My apologies to Pieter. Indeed, it looks like I forgot to upgrade my R version on the machine at home. I only did so in my office and on my laptop. Uwe Uwe Ligges wrote: Hazenberg21, Pieter wrote: Hello, In R I am trying to use Kalman filtering to find a solution for a hydrological problem. With Kalman filtering I want to estimate the discharge coming from three storage basins. I have programmed a function in R which can run KalmanSmooth. When I call the function with my values, R reports the following error: Error in as.vector(data) : Argument S1 is missing, with no default. I have tried to find a solution for this error in the R help files and in different manuals, but I can't find one. Please help me find a solution. Question: What does R mean by S1, and what am I doing wrong? Here is the way I have programmed the hydrological problem in R. 
discharge=read.table(file="C:/Program Files/R/rw1090/discharge.txt", header=T)
deb=discharge[,1]
deb
[1] 11.545313 8.045465 5.670868 4.044584 2.919311 2.306668 2.940956
[8] 4.238159 5.017374 3.818236 2.928805 2.262183 1.757765 1.633945
[15] 2.295130 3.454054 4.035224 3.193967 2.533181 2.012406 1.600836
[22] 1.652155 2.428678 3.642827 4.019545 3.209473 2.563617 2.048347
[29] 1.637041 1.828952 2.757842 4.050821 4.147013 3.316503 2.652490
[36] 2.121535 1.696934 2.027763 3.107366 4.429670 4.160178 3.327950
[43] 2.662237 2.129710 1.703717 2.158095 3.337039 4.582359 3.905901
[50] 3.124690 2.499732 1.999772 1.599810 2.130893 3.302622 4.336081
[57] 3.468857 2.775081 2.220062 1.776048 1.560859 2.169537 3.348081
[64] 4.170552 3.336440 2.669151 2.135320 1.708256 1.648859 2.374217
[71] 3.624091 4.248563 3.398850 2.719080 2.175264 1.740211 1.826122
[78] 2.704749 4.056438 4.437309 3.549847 2.839878 2.271902 1.817522
[85] 2.053994 3.107875 4.548436 4.600601 3.680481 2.944385 2.355508
[92] 1.884406 2.273248 3.490148 4.949898 4.584409 3.667527 2.934022
[99] 2.347217 1.84
Kalm = function(x,O1,O2,O3,T1,T2,T3,T4,T5,t,ga){
+ t=array(c(1+ga*O1+t/O1*(-(1/T2)-(1/T3)-(1/T1)),t/O1*(1/T2),t/O1*(1/T3),
+ t/O2*(1/T2),1+ga*O2+t/O2*(-(1/T2)-(1/T4)),t/O2*(1/T4),
+ t/O3*(1/T3),t/O3*(1/T4),1+ga*O3+t/O3*(-(1/T3)-(1/T4)-(1/T5))),dim=c(3,3));
+ h=0.5;
+ r=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3));
+ q=1;
+ v=r*q*t(r);
+ a=10.14286;
+ z=array(c((1/T1),0,0,0,0,0,0,0,(1/T5)), dim=c(3,3));
+ p=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3));
+ pn=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3));
+ kal=KalmanSmooth(x, list(T=t,Z=z,h=h,V=v,a=a,P=p,Pn=pn), nit=0)
+ kal}
Kalm(deb,4,.5,5,.7,.1,2,3,4,1,0.65)
Error in as.vector(data) : Argument S1 is missing, with no default
First I thought I had to make a time series of deb, but this doesn't change the problem. Lots of thanks for trying to help me. a) Please tell us R Version and OS (OK, implicitly done that we are talking about R-1.9.0 on Windows). 
b) Please tell us which packages you are using (I don't know KalmanSmooth(), for example). c) Please try to specify reproducible examples. Conclusion for a)-c): Please read the posting guide. My hint is to try to debug yourself, at least *try*, starting with calling traceback() right after the error appears, in order to get a guess where the error really happens. Then look at the data that is passed to the function - and I'm pretty sure you will get at least an idea of what goes wrong. Uwe Ligges Best regards, Pieter Hazenberg Student Hydrology and Watermanagement Wageningen University The Netherlands
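Uwe's traceback() hint in miniature. The functions below are made up purely to show the mechanics: after an error, traceback() prints the call stack, innermost call first, which usually points at where the bad argument entered:

```r
# Hypothetical functions that reproduce a "wrong argument" failure
f <- function(x) g(x)
g <- function(x) x$component   # fails when x is an atomic vector

f(1:3)        # triggers an error ($ is invalid for atomic vectors)
traceback()   # lists the calls g(x) and f(1:3), innermost first
```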
Re: [R] a small bug in spatstat::rmh
Evgueni Parilov wrote: From time to time, rmh.default fails to simulate a lookup-type process on this statement: if(all.equal(diff(r),rep(deltar,nlook-1))) { equisp <- 1; par <- c(beta,nlook,equisp,deltar,rmax,h) } else { equisp <- 0; par <- c(beta,nlook,equisp,deltar,rmax,h,r) } According to the manual, all.equal should not be used directly in an if() statement. This works instead: identical(all.equal(diff(r), rep(deltar, nlook - 1)), TRUE) Evgueni According to library(help = spatstat), Adrian Baddeley [EMAIL PROTECTED] (in CC) is the maintainer of the spatstat package - and therefore the right addressee of this message. Uwe Ligges
RE: [R] KalmanSmooth problem
Dear Uwe and Spencer, Thanks for replying so soon. The R version I use is indeed R 1.9.1, and the package is stats. Sorry I didn't write it in my first email. I have tried to use the comments you gave and simplified my function. It is changed to:
Kalm = function(x){
+ t=array(c(-1.4248, 2.5, .1250, 20, -19.3183, .6667, .1, .0667, 1.7945), dim=c(3,3))
+ h=0.5;
+ r=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3));
+ q=1;
+ v=r*q*t(r);
+ a=10.14286;
+ z=array(c(.3574, 0, 0, 0, 0, 0, 0, 0, .2), dim=c(3,3))
+ p=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3));
+ pn=array(c(1,0,0,0,1,0,0,0,1),dim=c(3,3));
+ kal=KalmanSmooth(x,list(T=t,Z=z,h=h,V=v,a=a,P=p,Pn=pn),nit=0)
+ kal}
Kalm(deb)
Wonderfully, it is running quite well now. Thank you for your spare time. Best regards, Pieter Hazenberg -----Original Message----- From: Uwe Ligges [mailto:[EMAIL PROTECTED]] Sent: Wed 7-7-2004 22:09 To: Spencer Graves; Hazenberg21, Pieter CC: R Help Mailing List Subject: Re: [R] KalmanSmooth problem Spencer Graves wrote: Hi, Uwe: KalmanSmooth is in the stats package in R 1.9.1. R is moving so fast that it is impossible to keep current with all parts of it. Best Wishes, Spencer Graves Thanks, Spencer. My apologies to Pieter. Indeed, it looks like I forgot to upgrade my R version on the machine at home. I only did so in my office and on my laptop. Uwe Uwe Ligges wrote: Hazenberg21, Pieter wrote: Hello, In R I am trying to use Kalman filtering to find a solution for a hydrological problem. With Kalman filtering I want to estimate the discharge coming from three storage basins. I have programmed a function in R which can run KalmanSmooth. When I call the function with my values, R reports the following error: Error in as.vector(data) : Argument S1 is missing, with no default. I have tried to find a solution for this error in the R help files and in different manuals, but I can't find one. Please help me find a solution. Question: What does R mean by S1, and what am I doing wrong? 
[R] question about seq.dates from chron vs. as.POSIXct
Dear R People: Here is an interesting question: library(chron) xt <- seq.dates(from="01/01/2004", by="days", length=5) xt [1] 01/01/04 01/02/04 01/03/04 01/04/04 01/05/04 # Fine so far as.POSIXct(xt) [1] 2003-12-31 18:00:00 Central Standard Time [2] 2004-01-01 18:00:00 Central Standard Time [3] 2004-01-02 18:00:00 Central Standard Time [4] 2004-01-03 18:00:00 Central Standard Time [5] 2004-01-04 18:00:00 Central Standard Time Why do the dates change, please? Presumably as.POSIXct is taking xt as midnight GMT and converting to Central Standard Time. Is the best solution: as.POSIXlt(xt, "CST") [1] 2004-01-01 CST 2004-01-02 CST 2004-01-03 CST 2004-01-04 CST [5] 2004-01-05 CST Thanks in advance! Sincerely, Laura Holt, who is corrupted by dates and times mailto: [EMAIL PROTECTED]
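One way to avoid the shift, as a hedged sketch: chron dates carry no time zone, and as.POSIXct() treats them as midnight GMT before printing in the local zone, which pushes the date back by the local offset. Converting through the character representation with an explicit tz keeps the calendar dates intact:

```r
library(chron)

xt <- seq.dates(from = "01/01/2004", by = "days", length = 5)

# Go through the character form and pin the time zone to GMT,
# so midnight stays midnight and the dates do not move:
as.POSIXct(as.character(xt), format = "%m/%d/%y", tz = "GMT")
```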
Re: [R] lost messages
The second two resends did go through (I checked on the web archive), but the first did not; additionally, the second two resends both came through with the exact same timestamp, even though I resent them over the space of 4 hours or so (and only after I had sent the lost messages message, which did come through to me just fine). So there was something stuck, but seems to be unstuck now. Thanks to all, -Aaron On Jul 7, 2004, at 4:05 PM, (Ted Harding) wrote: On 07-Jul-04 Aaron J. Mackey wrote: I've posted a message twice to this list, and never seen it appear yet ... perhaps this one will go through ... ? Hi Aaron, Yes, it did get through. If you have not received it yourself, then possibly your subscription to R-help has got set to nomail or equivalent. It would be worth checking. Best wishes, Ted. __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] text editor for R
Hi, What is the best text editor for programming in R? I am using JEdit as the text editor; however, it does not have anything specific for R. It would be nice to have a development environment where the keywords are highlighted, plus some other debugging functions. Yi-Xiong
Re: [R] text editor for R
Yi-Xiong Sean Zhou wrote: Hi, What is the best text editor for programming in R? I am using JEdit as the text editor, however, it does not have anything specific for R. It will be nice to have a developing environment where the keywords are highlighted, plus some other debugging functions. Yi-Xiong best is subjective, but (X)Emacs with ESS seems to be the most popular. See the following for more: http://cran.us.r-project.org/other-software.html --sundar __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] text editor for R
On Wed, 2004-07-07 at 17:47, Yi-Xiong Sean Zhou wrote: Hi, What is the best text editor for programming in R? I am using JEdit as the text editor, however, it does not have anything specific for R. It will be nice to have a developing environment where the keywords are highlighted, plus some other debugging functions. Yi-Xiong More information is available at: http://www.sciviews.org/_rgui/ Your e-mail headers suggest that you are using Windows. Thus, perhaps the two best choices (subject to challenge by others) would be: 1. R-WinEdt (Under IDE/Script Editors) 2. ESS for Windows The above two tools provide for a wide variety of functionality beyond syntax highlighting. There is a syntax highlighting file listed at the above site for jEdit. HTH, Marc Schwartz __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] text editor for R
I tried R-WinEdt a few years ago, but as I remember it interfered with my usual use of WinEdt which is as a front end to MiKTeX. Is there a way to use WinEdt both ways? Murray Jorgensen Marc Schwartz wrote: On Wed, 2004-07-07 at 17:47, Yi-Xiong Sean Zhou wrote: Hi, What is the best text editor for programming in R? I am using JEdit as the text editor, however, it does not have anything specific for R. It will be nice to have a developing environment where the keywords are highlighted, plus some other debugging functions. Yi-Xiong More information is available at: http://www.sciviews.org/_rgui/ Your e-mail headers suggest that you are using Windows. Thus, perhaps the two best choices (subject to challenge by others) would be: 1. R-WinEdt (Under IDE/Script Editors) 2. ESS for Windows The above two tools provide for a wide variety of functionality beyond syntax highlighting. There is a syntax highlighting file listed at the above site for jEdit. HTH, Marc Schwartz __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html -- Dr Murray Jorgensen http://www.stats.waikato.ac.nz/Staff/maj.html Department of Statistics, University of Waikato, Hamilton, New Zealand Email: [EMAIL PROTECTED]Fax 7 838 4155 Phone +64 7 838 4773 wk+64 7 849 6486 homeMobile 021 1395 862 __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
RE: [R] text editor for R
Uwe would be the authority on this 8-), but my impression is that if you keep two separate shortcuts, you should be fine. The one for R-WinEdt has flags that set it up for R, which should not be used in the one for MiKTeX. Andy From: Murray Jorgensen I tried R-WinEdt a few years ago, but as I remember it interfered with my usual use of WinEdt, which is as a front end to MiKTeX. Is there a way to use WinEdt both ways? Murray Jorgensen Marc Schwartz wrote: On Wed, 2004-07-07 at 17:47, Yi-Xiong Sean Zhou wrote: Hi, What is the best text editor for programming in R? I am using JEdit as the text editor, however, it does not have anything specific for R. It will be nice to have a developing environment where the keywords are highlighted, plus some other debugging functions. Yi-Xiong More information is available at: http://www.sciviews.org/_rgui/ Your e-mail headers suggest that you are using Windows. Thus, perhaps the two best choices (subject to challenge by others) would be: 1. R-WinEdt (Under IDE/Script Editors) 2. ESS for Windows The above two tools provide for a wide variety of functionality beyond syntax highlighting. There is a syntax highlighting file listed at the above site for jEdit. HTH, Marc Schwartz -- Dr Murray Jorgensen http://www.stats.waikato.ac.nz/Staff/maj.html Department of Statistics, University of Waikato, Hamilton, New Zealand Email: [EMAIL PROTECTED] Fax 7 838 4155 Phone +64 7 838 4773 wk +64 7 849 6486 home Mobile 021 1395 862
[R] omit complete cases
thanks everyone. all solutions were better than what I had, and simple. R is an interesting experience. Extremely powerful, and awe-inspiring for its elegance; things work more elegantly than I would ever have believed. The intelligence in the subsetting alone is superbly clever. And then it turns around: figuring out how to do simple things can take a long time---until I realize that it is somewhere, somehow, built in already. So R (and the answers from helpful souls on this list) often makes me feel quite stupid. Yes, I first search; yes, I always look---but either I did not know or I had forgotten. I used to use perl for much of my work, and although there is much to like about it, R seems to be even better for most tasks---except that there is one perl resource that R cannot beat: the Perl Cookbook. if I only had an R cookbook... regards, /ivo --- ivo welch professor of finance and economics brown / nber / yale
[R] command line interface
How can plots (histograms) be implemented with the command line interface to R? Lana Schaffer __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
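If the question is about producing plots in a session without a GUI (e.g. R run from a terminal or in batch mode), the usual approach is to open a file-based graphics device, plot, and close the device. The file name below is arbitrary:

```r
# Write a histogram to a PNG file; pdf() or postscript() work the same way
png("hist.png", width = 480, height = 480)
hist(rnorm(100), main = "Histogram of 100 normal draws")
dev.off()   # closes the device and flushes the file to disk
```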
RE: [R] text editor for R
Hi, Uwe would be the authority on this 8-), but my impression is that if you keep two separate shortcuts, you should be fine. The one for R-WinEdt has flags that sets it up for R, which should not be used in the one for MikTeX. I think Andy is correct. A few years ago (back in the dark ages -- before I discovered Emacs/ESS), I had two short cuts, one calls R-WinEdt (i.e. with flags...etc) and the other with just a normal WinEdt icon. However, I *think* now you can interact R-WinEdt within R directly (I tried the new version about 2 ~ 3 months ago just for fun, and that seemed to be the case, but I can't quite remember *_*). Cheers, Kevin Ko-Kang Kevin Wang PhD Student Centre for Mathematics and its Applications Building 27, Room 1004 Mathematical Sciences Institute (MSI) Australian National University Canberra, ACT 0200 Australia Homepage: http://wwwmaths.anu.edu.au/~wangk/ Ph (W): +61-2-6125-2431 Ph (H): +61-2-6125-7411 Ph (M): +61-40-451-8301 __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Re: [R] text editor for R
On Thu, 8 Jul 2004, Murray Jorgensen wrote: I tried R-WinEdt a few years ago, but as I remember it interfered with my usual use of WinEdt which is as a front end to MiKTeX. Is there a way to use WinEdt both ways? Murray Jorgensen This problem annoyed me for a while too. My solution (which is not perhaps ideal) is this. You want two different incarnations of WinEdt, one for TeX, the other for R. On the desktop I have a shortcut to WinEdt which is the one for TeX stuff. I open the other one with R syntax highlighting etc by starting R and using library(RWinEdt). To do this you have to install the RWinEdt package and SWinRegistry. This is all well explained in the ReadMe.txt for RWinEdt. I think with the right additions to the Target field in a shortcut to WinEdt you can call up the incarnation of WinEdt that is suitable for R. I haven't done that. You would then have two shortcuts to WinEdt, one for your TeX stuff, one for R. Uwe Ligges is the guru for this though. David Scott David Scott Department of Statistics, Tamaki Campus The University of Auckland, PB 92019 AucklandNEW ZEALAND Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000 Email: [EMAIL PROTECTED] Graduate Officer, Department of Statistics __ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
[R] S data library
I am currently working my way through Statistical Models in S (Chambers & Hastie, 1992), and it would be helpful to me if I had access to the data library they refer to, which contains the authors' sample datasets. Are these datasets available as an R package or any other type of download? Thanks, Mark W. Kimpel MD Department of Psychiatry Indiana University School of Medicine
Re: [R] text editor for R
I use WinEdt for both. I simply installed it twice and set up one version for R and one for LaTeX. I have separate icons on the desktop, with different names, and it works fine. HTH, Peter

Peter L. Flom, PhD, Assistant Director, Statistics and Data Analysis Core, Center for Drug Use and HIV Research, National Development and Research Institutes, 71 W. 23rd St, New York, NY 10010. www.peterflom.com, (212) 845-4485 (voice), (917) 438-0894 (fax)

[EMAIL PROTECTED] 07/07/04 7:37 PM: I tried R-WinEdt a few years ago, but as I remember it interfered with my usual use of WinEdt, which is as a front end to MiKTeX. Is there a way to use WinEdt both ways? Murray Jorgensen

Marc Schwartz wrote: On Wed, 2004-07-07 at 17:47, Yi-Xiong Sean Zhou wrote: Hi, what is the best text editor for programming in R? I am using jEdit as the text editor; however, it does not have anything specific for R. It would be nice to have a development environment where the keywords are highlighted, plus some other debugging functions. Yi-Xiong

More information is available at: http://www.sciviews.org/_rgui/ Your e-mail headers suggest that you are using Windows. Thus, perhaps the two best choices (subject to challenge by others) would be: 1. R-WinEdt (under IDE/Script Editors) 2. ESS for Windows. The above two tools provide a wide variety of functionality beyond syntax highlighting. There is a syntax highlighting file listed at the above site for jEdit. HTH, Marc Schwartz

-- Dr Murray Jorgensen http://www.stats.waikato.ac.nz/Staff/maj.html Department of Statistics, University of Waikato, Hamilton, New Zealand. Email: [EMAIL PROTECTED], Fax 7 838 4155, Phone +64 7 838 4773 (wk), +64 7 849 6486 (home), Mobile 021 1395 862
Re: [R] text editor for R
Oh, yes. I think I did do something like this, but for some reason the two incarnations bothered me. Murray Jorgensen

David Scott wrote: On Thu, 8 Jul 2004, Murray Jorgensen wrote: I tried R-WinEdt a few years ago, but as I remember it interfered with my usual use of WinEdt, which is as a front end to MiKTeX. Is there a way to use WinEdt both ways? Murray Jorgensen

This problem annoyed me for a while too. My solution (which is perhaps not ideal) is this. You want two different incarnations of WinEdt, one for TeX, the other for R. On the desktop I have a shortcut to WinEdt which is the one for TeX stuff. I open the other one, with R syntax highlighting etc., by starting R and using library(RWinEdt). To do this you have to install the RWinEdt package and SWinRegistry. This is all well explained in the ReadMe.txt for RWinEdt. I think that with the right additions to the Target field in a shortcut to WinEdt you can call up the incarnation of WinEdt that is suitable for R; I haven't done that. You would then have two shortcuts to WinEdt, one for your TeX stuff, one for R. Uwe Ligges is the guru for this, though. David Scott

David Scott, Department of Statistics, Tamaki Campus, The University of Auckland, PB 92019, Auckland, NEW ZEALAND. Phone: +64 9 373 7599 ext 86830, Fax: +64 9 373 7000, Email: [EMAIL PROTECTED]. Graduate Officer, Department of Statistics

-- Dr Murray Jorgensen http://www.stats.waikato.ac.nz/Staff/maj.html Department of Statistics, University of Waikato, Hamilton, New Zealand. Email: [EMAIL PROTECTED], Fax 7 838 4155, Phone +64 7 838 4773 (wk), +64 7 849 6486 (home), Mobile 021 1395 862
[R] read.frame
Hello group, I am learning R and I am new to many concepts. I get the following errors when I try to execute the following. I have 4 text files with protein accession numbers. I want to represent them in a Venn diagram, and for that I am using the intersect and setdiff functions. My data looks like this:

file1.txt (c): NP_05 NP_20 NP_30 NP_53
file2.txt (e): NP_05 NP_20 NP_30 NP_31 NP_53 NP_55 NP_87
file3.txt (h): NP_05 NP_20 NP_30 NP_53 NP_55 NP_57 NP_87
file4.txt (s): NP_05 NP_20 NP_30 NP_33 NP_53 NP_55 NP_87 NP_000168

Now I did the following the FIRST time:

c = read.table("file1.txt")
e = read.table("file2.txt")
s = read.table("file4.txt")
h = read.table("file3.txt")
class(c)
[1] "data.frame"
class(s)
[1] "data.frame"
CiS = intersect(c, s)
CiS
NULL data frame with 0 rows
# Why am I getting a NULL data frame? I know there are common elements between c and s.

CiS <- intersect(read.matrix(c, s))
Error in unique(y[match(x, y, 0)]) : Argument "y" is missing, with no default
CiS <- intersect(read.frame(c, s))
Error in unique(y[match(x, y, 0)]) : Argument "y" is missing, with no default
# Why am I getting this error?

The second thing I did: I loaded the data as a data.frame instead of with read.table(). Again I never get the intersection of C, E and S, H. Can anyone please help me. Thank you, SP
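[A note on what goes wrong above: intersect() works on vectors, but a data frame is a list of columns, so intersecting two one-column data frames compares whole columns rather than their elements. A minimal sketch using the accession numbers from the example; with read.table, the equivalent vectors would be something like as.character(c_df$V1). The variable names here are illustrative (and `c` is avoided because it masks the base function):]

```r
# Sketch: do the set operations on character vectors, not data frames.
c_ids <- c("NP_05", "NP_20", "NP_30", "NP_53")
s_ids <- c("NP_05", "NP_20", "NP_30", "NP_33", "NP_53",
           "NP_55", "NP_87", "NP_000168")

# intersect() on vectors returns the common accession numbers.
CiS <- intersect(c_ids, s_ids)
print(CiS)  # "NP_05" "NP_20" "NP_30" "NP_53"
```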
R cookbook (Re: [R] omit complete cases)
Hi Ivo: You might check out Paul Johnson's following page: http://www.ukans.edu/~pauljohn/R/Rtips.html HTH, Arin

On Thu, 08 Jul 2004, [EMAIL PROTECTED] wrote: ...I used to use perl for much work, and although there is much to like about it, R seems to be even better for most tasks---except that there is one perl resource that R cannot beat: the Perl Cookbook. If I only had an R cookbook... regards, /ivo

--- Arindam Basu MD MPH, DBI Assistant Director, Fogarty International Program on Environmental Health in India, IPGMER, 244 AJC Bose Road, Kolkata 700027, India
Re: [R] read.frame
Dear Alec, thank you for your response; it worked. Further, I have another mathematical problem. I apologize ahead, as this question is not apt for this list. I am a biologist working at Johns Hopkins School of Medicine. As I mentioned in my previous e-mail (attached below), I have 4 protein sets. I am now trying to calculate the combinatorics of these sets. My ultimate aim is to draw a Venn diagram and find out the proteins that are unique to sets C, S, E and H. I drew a Venn diagram graph and I am banging my head to deduce the combinations. It is easy for me to deduce the intersections, that is, the protein entries that are present in common. I could calculate the following:

NP_*** present in both C and E (C intersection E)
NP_*** present in both C and H (C intersection H)
NP_*** present in both C and S (C ^ S)
NP_*** present in both E and H (E ^ H)
NP_*** present in both E and S (E ^ S)
NP_*** present in both H and S (H ^ S)
NP_*** present in C, E and H (C ^ E ^ H)
NP_*** present in C, H and S (C ^ H ^ S)
NP_*** present in E, H and S (E ^ H ^ S)
NP_*** present in E, S and C (E ^ S ^ C)

However, it proved very difficult to deduce the following:

NP_ entries specific to E
NP_ entries specific to H
NP_ entries specific to S

I wasted many pages but could not derive a solution to get the unique elements of sets S, E, H and C. Can anyone help me by suggesting some way to get these? Thank you, and I apologise again for posting the wrong question. SP

--- S Peri [EMAIL PROTECTED] wrote: Hello group, I am learning R and I am new to many concepts. I get the following errors when I try to execute the following. I have 4 text files with protein accession numbers. I want to represent them in a Venn diagram, and for that I am using the intersect and setdiff functions. My data looks like this:

file1.txt (c): NP_05 NP_20 NP_30 NP_53
file2.txt (e): NP_05 NP_20 NP_30 NP_31 NP_53 NP_55 NP_87
file3.txt (h): NP_05 NP_20 NP_30 NP_53 NP_55 NP_57 NP_87
file4.txt (s): NP_05 NP_20 NP_30 NP_33 NP_53 NP_55 NP_87 NP_000168

Now I did the following the FIRST time:

c = read.table("file1.txt")
e = read.table("file2.txt")
s = read.table("file4.txt")
h = read.table("file3.txt")
class(c)
[1] "data.frame"
class(s)
[1] "data.frame"
CiS = intersect(c, s)
CiS
NULL data frame with 0 rows
# Why am I getting a NULL data frame? I know there are common elements between c and s.

CiS <- intersect(read.matrix(c, s))
Error in unique(y[match(x, y, 0)]) : Argument "y" is missing, with no default
CiS <- intersect(read.frame(c, s))
Error in unique(y[match(x, y, 0)]) : Argument "y" is missing, with no default
# Why am I getting this error?

The second thing I did: I loaded the data as a data.frame instead of with read.table(). Again I never get the intersection of C, E and S, H. Can anyone please help me. Thank you, SP
RE: [R] question about seq.dates from chron vs. as.POSIXct
Yes, it's assuming GMT. I find it best to use an intermediate conversion to character to avoid these sorts of problems. If, as in your example, you just have dates and no times, then the following would do it (and it has the advantage that you don't have to specify your time zone explicitly, so it will still work if someone in another timezone tries your code):

as.POSIXct(format(as.Date(xt)))

The reason to convert to Date first, before formatting, is so that format will use Date's default format (which is accepted by as.POSIXct) rather than chron's default format.

--- Date: Wed, 07 Jul 2004 15:30:47 -0500, From: Laura Holt [EMAIL PROTECTED], To: [EMAIL PROTECTED], Subject: [R] question about seq.dates from chron vs. as.POSIXct

Dear R People: Here is an interesting question:

library(chron)
xt <- seq.dates(from = "01/01/2004", by = "days", length = 5)
xt
[1] 01/01/04 01/02/04 01/03/04 01/04/04 01/05/04
# Fine so far
as.POSIXct(xt)
[1] "2003-12-31 18:00:00 Central Standard Time"
[2] "2004-01-01 18:00:00 Central Standard Time"
[3] "2004-01-02 18:00:00 Central Standard Time"
[4] "2004-01-03 18:00:00 Central Standard Time"
[5] "2004-01-04 18:00:00 Central Standard Time"

Why do the dates change, please? Presumably as.POSIXct is taking xt as midnight GMT and converting to Central Standard Time. Is the best solution to:

as.POSIXlt(xt, "CST")
[1] "2004-01-01 CST" "2004-01-02 CST" "2004-01-03 CST" "2004-01-04 CST"
[5] "2004-01-05 CST"

Thanks in advance! Sincerely, Laura Holt, who is corrupted by dates and times. mailto: [EMAIL PROTECTED]
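[The suggested chain can be sketched with base Date objects standing in for chron's seq.dates output; the format() step is the key, and no chron is required for the illustration. The dates mirror Laura's example:]

```r
# Sketch: go through character so as.POSIXct gets a date-only string
# and interprets it as midnight in the LOCAL timezone (no GMT shift).
xt <- as.Date("2004-01-01") + 0:4    # five consecutive days
ct <- as.POSIXct(format(xt))         # midnight local time on each day
format(ct, "%Y-%m-%d")               # the original dates are preserved
```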
Re: [R] read.frame
In a message dated 7/7/2004 9:42:03 PM Pacific Daylight Time, [EMAIL PROTECTED] writes: Dear Alec, thank you for your response; it worked. Further, I have another mathematical problem. I apologize ahead, as this question is not apt for this list. I am a biologist working at Johns Hopkins School of Medicine. As I mentioned in my previous e-mail (attached below), I have 4 protein sets. I am now trying to calculate the combinatorics of these sets. My ultimate aim is to draw a Venn diagram and find out the proteins that are unique to sets C, S, E and H. I drew a Venn diagram graph and I am banging my head to deduce the combinations. It is easy for me to deduce the intersections, that is, the protein entries that are present in common. I could calculate the following:

NP_*** present in both C and E (C intersection E)
NP_*** present in both C and H (C intersection H)
NP_*** present in both C and S (C ^ S)
NP_*** present in both E and H (E ^ H)
NP_*** present in both E and S (E ^ S)
NP_*** present in both H and S (H ^ S)
NP_*** present in C, E and H (C ^ E ^ H)
NP_*** present in C, H and S (C ^ H ^ S)
NP_*** present in E, H and S (E ^ H ^ S)
NP_*** present in E, S and C (E ^ S ^ C)

However, it proved very difficult to deduce the following:

NP_ entries specific to E
NP_ entries specific to H
NP_ entries specific to S

I am not sure about your notation, but in R terms, if you want the items specific to E, H, or S, how about something like:

specific to E -- setdiff(e, union(c, union(h, s)))
specific to H -- setdiff(h, union(c, union(e, s)))
specific to S -- setdiff(s, union(c, union(e, h)))

Dan Nordlund
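[Dan's suggestion, run on the accession vectors from the original message, as a sketch. The variable names are illustrative; `c` is renamed to avoid masking the base function of the same name:]

```r
# The four sets from the example files (c renamed c_ids so base::c
# is not masked).
c_ids <- c("NP_05", "NP_20", "NP_30", "NP_53")
e_ids <- c("NP_05", "NP_20", "NP_30", "NP_31", "NP_53", "NP_55", "NP_87")
h_ids <- c("NP_05", "NP_20", "NP_30", "NP_53", "NP_55", "NP_57", "NP_87")
s_ids <- c("NP_05", "NP_20", "NP_30", "NP_33", "NP_53",
           "NP_55", "NP_87", "NP_000168")

# Entries specific to one set: remove everything found in the other three.
only_e <- setdiff(e_ids, union(c_ids, union(h_ids, s_ids)))
only_h <- setdiff(h_ids, union(c_ids, union(e_ids, s_ids)))
only_s <- setdiff(s_ids, union(c_ids, union(e_ids, h_ids)))

print(only_e)  # "NP_31"
print(only_h)  # "NP_57"
print(only_s)  # "NP_33" "NP_000168"
```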