[R] THANK YOU ALL
Dear Friends,

I have been scarce here because I have been busy implementing what I learned from you. You seem to have answered all my queries, and I have not had any new ones; I will contact you as soon as I encounter challenges in my analysis. I hope you are well.

I have two recent publications resulting from your assistance (https://doi.org/10.3847/1538-4357/abfe60 and https://rdcu.be/cpmyg). I tried to share the complete PDF of the first work, but the journal did not allow that. If you can find time to look at the acknowledgment section, you will note that your efforts were duly acknowledged. I will remain forever indebted to the group; I consider it plagiarism to publish any work without indicating my source of analysis tools.

I hope you continue to devote your time to assisting the world and humanity in general. I am sure that many people value your help. Many, many thanks.

With the warmest regards,
Ogbos

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Re: [R] Thank you 4 Davide
On 3/22/21 8:24 AM, francesca brun via R-help wrote:

> Hello, the problem was that version 4.0.4 did not support the package, so I tried several old versions until 3.6.2 installed both climtrend and Rcmdr with its graphical interface!! Solved, and thanks again Davide!! Francesca (from Italy)

I'm glad you had success. However, for anyone viewing this I must note that the proper name of the package is 'climtrends'. I'm not sure that you ever spelled it correctly in your two postings to r-help. R users will get little help from the system for misspelled package names. There is some hinting when using RStudio, but that occurs only for packages that are already installed on one's system.

-- David.
[R] Thank you 4 Davide
Hello, the problem was that version 4.0.4 did not support the package, so I tried several old versions until 3.6.2 installed both climtrend and Rcmdr with its graphical interface!! Solved, and thanks again Davide!!

Francesca (from Italy)
[R] Thank you! RE: Ask for help: find corresponding elements between matrix
Dear Berend, Mark, Jose, Arun,

Great! Thank you so much for all your replies with the different codings. They all work well except one, because the NAs in matrix A needed to be taken care of.

Best, Zhengyu

Subject: Re: [R] Ask for help: find corresponding elements between matrix
From: b...@xs4all.nl
Date: Thu, 21 Feb 2013 16:35:08 +0100
CC: r-help@r-project.org
To: zhyjiang2...@hotmail.com

On 21-02-2013, at 15:39, JiangZhengyu zhyjiang2...@hotmail.com wrote:

> Dear R experts, I have two matrices (seq and mat). I want to retrieve into a new matrix all the numbers from mat whose corresponding (same row/column) position in seq equals 1, or all the numbers in mat that equal -1 in seq, replacing all the numbers with NA where seq is not 1/-1. There are some NAs in seq.
>
> seq = matrix(c(1,-1,0,1,1,-1,0,0,-1,1,1,NA), 3, 4)
> mat = matrix(rnorm(12), 3)
>
> I made this code but I don't know where the problem is.

Something like this:

set.seed(15)
seq1 <- matrix(c(1,-1,0,1,1,-1,0,0,-1,1,1,NA), 3, 4)
mat <- matrix(rnorm(12), 3)
ind <- which(seq1 == 1 | seq1 == -1, arr.ind=TRUE)
m <- matrix(NA, nrow=nrow(mat), ncol=ncol(mat))
m[ind] <- mat[ind]
m

Output is:

          [,1]       [,2]       [,3]       [,4]
[1,] 0.2588229  0.8971982         NA -1.0750013
[2,] 1.8311207  0.4880163         NA  0.8550108
[3,]        NA -1.2553858 -0.1321224         NA

Berend
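Berend's which(..., arr.ind=TRUE) approach is worth spelling out for the NA case Zhengyu mentions: a comparison against NA yields NA, and which() silently drops those positions, so NA cells in seq end up NA in the result without any extra handling. A minimal sketch with deterministic toy values (the ifelse() one-liner is my addition, not from the thread):

```r
seq1 <- matrix(c(1, -1, 0, 1, 1, -1, 0, 0, -1, 1, 1, NA), 3, 4)
mat  <- matrix(1:12, 3)   # deterministic stand-in for rnorm(12)

# which() ignores NA comparisons, so NA cells of seq1 are simply not selected
ind <- which(seq1 == 1 | seq1 == -1, arr.ind = TRUE)
m <- matrix(NA_integer_, nrow(mat), ncol(mat))
m[ind] <- mat[ind]

# Equivalent one-liner: ifelse() propagates NA where seq1 is NA,
# which is also the desired result here
m2 <- ifelse(seq1 == 1 | seq1 == -1, mat, NA)
```

Both variants leave m[3,4] (where seq1 is NA) and the cells where seq1 is 0 as NA.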
Re: [R] Thank you your help.
Hi,

temp3 <- read.table(text="
ID CTIME WEIGHT
HM001 1223 24.0
HM001 1224 25.2
HM001 1225 23.1
HM001 1226 NA
HM001 1227 32.1
HM001 1228 32.4
HM001 1229 1323.2
HM001 1230 27.4
HM001 1231 22.4236   # changed here to test the previous solution
", sep="", header=TRUE, stringsAsFactors=FALSE)

tempnew <- na.omit(temp3)

grep("\\d{4}", temp3$WEIGHT)
#[1] 7 9   # not correct (it also matches the four decimals of 22.4236)

temp3[,3][grep("\\d{4}\\..*", temp3$WEIGHT)] <- NA  # match 4-digit numbers before the decimal
tail(temp3)
#     ID CTIME  WEIGHT
#4 HM001  1226      NA
#5 HM001  1227 32.1000
#6 HM001  1228 32.4000
#7 HM001  1229      NA
#8 HM001  1230 27.4000
#9 HM001  1231 22.4236

#Based on the variance, you could set up some limit, for example 50, and use:
tempnew$WEIGHT <- ifelse(tempnew$WEIGHT > 50, NA, tempnew$WEIGHT)

A.K.

From: 남윤주 jamansymp...@naver.com
To: arun smartpink...@yahoo.com
Sent: Monday, January 28, 2013 2:20 AM
Subject: Re: Thank you your help.

Thank you for your reply again. Your understanding is exactly right. I attached a picture that shows the dataset. 'weight' is the dependent variable, and CTIME means hour/minute. This data will have accumulated for years. Speaking of the accepted variance range, it would be from 10 to 50. Actually, I am a Java programmer, so this R language is strange to me. Can you give me some example of using the grep function?

-Original Message-
From: arun smartpink...@yahoo.com
To: jamansymp...@naver.com
Sent: 2013-01-28 (Mon) 15:27:12
Subject: Re: Thank you your help.

Hi,

Your original post said "...it was evaluated from 20kg-40kg. But by some errors, it is evaluated 2000 kg." So my understanding was that you occasionally get readings of 2000 or 2000-4000 in place of 20-40 due to some misreading. If your dataset contains observed values, strange values and NA, and you want to replace the strange values with NA, could you mention the range of the strange values? If the strange values range anywhere from 1000 upward, they should get replaced by the ?grep() solution; but if it depends upon something else, you need to specify.

Also, regarding the variance, what is your accepted range of variance?

A.K.

----- Original Message -----
From: jamansymp...@naver.com jamansymptom@naver.com
To: smartpink...@yahoo.com
Sent: Monday, January 28, 2013 1:15 AM
Subject: Thank you your help.

Thank you for answering my question. It is not exactly what I want; I should have described the situation in detail. There is a sensor that gets data every minute, and that data accumulates to become part of the dataset. The dataset contains observed values, strange values and NA. Namely, I am not sure where a strange value will occur, and I can't predict when a strange value will occur. I need a procedure performing like below:

1. Using some method, set the range of variance.
2. Using a for(i) statement, check whether variance(weight) is in the range.
3. When the variance is out of range, impute weight[i] as NA.

Thank you.
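The three-step procedure the poster asks for (set a limit, loop over the readings, replace out-of-range values with NA) can be sketched as below. The 50 kg limit is the one suggested earlier in the thread, and the toy vector is illustrative, not the poster's data:

```r
# Toy sensor readings: valid values, one misread (1221.0), and a missing value
weight <- c(24.9, 25.2, 25.5, NA, 25.7, 27.1, 1221.0, 27.4, 28.4)
limit <- 50                                # step 1: set the accepted range

for (i in seq_along(weight)) {             # step 2: check each reading
  if (!is.na(weight[i]) && weight[i] > limit) {
    weight[i] <- NA                        # step 3: impute out-of-range as NA
  }
}

# Vectorised equivalent, NA-safe because which() drops NA comparisons:
# weight[which(weight > limit)] <- NA
```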
Re: [R] Thank you your help and one more question.
Hi,

I don't have the Amelia package installed. If you want to get the mean value, you could use either ?aggregate() or ?ddply() from library(plyr):

library(plyr)
imputNew <- do.call(rbind, imput1_2_3)
res1 <- ddply(imputNew, .(ID, CTIME), function(x) mean(x$WEIGHT))
names(res1)[3] <- "WEIGHT"
head(res1)
#     ID CTIME   WEIGHT
#1 HM001  1223 24.9
#2 HM001  1224 25.2
#3 HM001  1225 25.5
#4 HM001  1226 25.41933
#5 HM001  1227 25.7
#6 HM001  1228 27.1

#or
res2 <- aggregate(. ~ ID + CTIME, data=imputNew, mean)

#or
res3 <- do.call(rbind, lapply(split(imputNew, imputNew$CTIME),
                function(x) {x$WEIGHT <- mean(x[,3]); head(x, 1)}))
row.names(res3) <- 1:nrow(res3)

identical(res1, res2)
#[1] TRUE
identical(res1, res3)
#[1] TRUE

A.K.

From: 남윤주 jamansymp...@naver.com
To: arun smartpink...@yahoo.com
Sent: Monday, January 28, 2013 9:47 PM
Subject: Re: Thank you your help and one more question.

Thank you for replying to my question. What I want is the matrix below. I have 3 data sets named weightimp1, 2, 3, and to get the matrix below, I have to combine those 3 data sets. I don't know how to combine them. It could be the mean of the 3 data sets, or there might be a value (temp2$imputations$...) in the Amelia package. I prefer to use an Amelia package method, but if one doesn't exist, can you recommend how to get the mean value?

#      ID CTIME WEIGHT   (it represents the 3 data sets weightimp1, 2, 3)
#1  HM001  1223 24.9
#2  HM001  1224 25.2
#3  HM001  1225 25.5
#4  HM001  1226 25.24132
#5  HM001  1227 25.7
#6  HM001  1228 27.1
#7  HM001  1229 27.3
#8  HM001  1230 27.4
#9  HM001  1231 28.4
#10 HM001  1232 29.2
#11 HM001  1233 30.13770
#12 HM001  1234 31.17251
#13 HM001  1235 32.4
#14 HM001  1236 33.7
#15 HM001  1237 34.3

-Original Message-
From: arun smartpink...@yahoo.com
To: 남윤주 jamansymp...@naver.com
Cc: R help r-help@r-project.org
Sent: 2013-01-29 (Tue) 11:25:38
Subject: Re: Thank you your help and one more question.

Hi,

How do you want to combine the results? It looks like the 5 datasets are list elements. If I take the first three list elements:

imput1_2_3 <- list(
  imp1 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.24132, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 30.1377, 31.17251, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)),
  imp2 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.54828, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 29.8977, 31.35045, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)),
  imp3 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.46838, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 30.88185, 31.57952, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)))

#It could be combined by:
do.call(rbind, imput1_2_3)
# But if you do this, the total number of rows will be the sum of the number of rows of each dataset. I guess you want something like this:

res <- Reduce(function(...) merge(..., by=c("ID","CTIME")), imput1_2_3)
names(res)[3:5] <- paste("WEIGHT", "IMP", 1:3, sep="")
res
#      ID CTIME WEIGHTIMP1 WEIGHTIMP2 WEIGHTIMP3
#1  HM001  1223 24.9       24.9       24.9
#2  HM001  1224 25.2       25.2       25.2
#3  HM001  1225 25.5       25.5       25.5
#4  HM001  1226 25.24132   25.54828   25.46838
#5  HM001  1227 25.7       25.7       25.7
#6  HM001  1228 27.1       27.1       27.1
#7  HM001  1229 27.3       27.3       27.3
#8  HM001  1230 27.4       27.4       27.4
#9  HM001  1231 28.4       28.4       28.4
#10 HM001  1232 29.2       29.2       29.2
#11 HM001  1233 30.13770   29.89770   30.88185
#12 HM001  1234 31.17251   31.35045   31.57952
#13 HM001  1235 32.4       32.4       32.4
#14 HM001  1236 33.7       33.7       33.7
#15 HM001  1237 34.3       34.3       34.3

A.K.

From: 남윤주 jamansymptom@naver.com
To: arun smartpink111@yahoo.com
Sent: Monday, January 28, 2013 7:35 PM
Subject: Thank you your help and one more question.

I deeply appreciate your help. Answering your question, I am a software engineer, and I am developing a system that accumulates data to draw charts and tables. For higher performance, I have to deal with missing value
Re: [R] Thank you your help and one more question.
Hi,

I think I understand your mistake.

imput1_2_3 <- list(
  imp1 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.24132, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 30.1377, 31.17251, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)),
  imp2 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.54828, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 29.8977, 31.35045, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)),
  imp3 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.46838, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 30.88185, 31.57952, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)))

imput <- list(imput1_2_3[1], imput1_2_3[2], imput1_2_3[3])  # what you tried; you should use [[ ]] instead of [ ]. Here, it is not necessary.
aggregate(. ~ ID + CTIME, data=imput, mean)
#Error in eval(expr, envir, enclos) : object 'ID' not found

#You don't need the above step.
class(imput1_2_3)  # already a list
#[1] "list"
imput <- do.call(rbind, imput1_2_3)
aggregate(. ~ ID + CTIME, data=imput, mean)
#      ID CTIME   WEIGHT
#1  HM001  1223 24.9
#2  HM001  1224 25.2
#3  HM001  1225 25.5
#4  HM001  1226 25.41933
#5  HM001  1227 25.7
#6  HM001  1228 27.1
#7  HM001  1229 27.3
#8  HM001  1230 27.4
#9  HM001  1231 28.4
#10 HM001  1232 29.2
#11 HM001  1233 30.30575
#12 HM001  1234 31.36749
#13 HM001  1235 32.4
#14 HM001  1236 33.7
#15 HM001  1237 34.3

A.K.

From: 남윤주 jamansymp...@naver.com
To: arun smartpink...@yahoo.com
Sent: Tuesday, January 29, 2013 12:04 AM
Subject: Re: Thank you your help and one more question.

I decided to follow aggregate(), so I installed library(plyr). But while executing the statement 'res <- aggregate(. ~ ID + CIME, data=input, mean)', an error occurred. What should I do next time?

library(plyr)
a.out2$imputations
$imp1
      ID       CTIME ACTIVE_KWH
1  HM001 2.01212e+11   24.2
2  HM001 2.01212e+11   25.5
3  HM001 2.01212e+11   25.6
4  HM001 2.01212e+11   25.90065
5  HM001 2.01212e+11   26.6
6  HM001 2.01212e+11   26.7
7  HM001 2.01212e+11   27.1
8  HM001 2.01212e+11   27.4
9  HM001 2.01212e+11   27.5
10 HM001 2.01212e+11   27.8
11 HM001 2.01212e+11   28.2
12 HM001 2.01212e+11   28.44605
13 HM001 2.01212e+11   28.7
14 HM001 2.01212e+11   28.9
15 HM001 2.01212e+11   29.1

$imp2
      ID       CTIME ACTIVE_KWH
1  HM001 2.01212e+11   24.2
2  HM001 2.01212e+11   25.5
3  HM001 2.01212e+11   25.6
4  HM001 2.01212e+11   25.87163
5  HM001 2.01212e+11   26.6
6  HM001 2.01212e+11   26.7
7  HM001 2.01212e+11   27.1
8  HM001 2.01212e+11   27.4
9  HM001 2.01212e+11   27.5
10 HM001 2.01212e+11   27.8
11 HM001 2.01212e+11   28.2
12 HM001 2.01212e+11   28.68048
13 HM001 2.01212e+11   28.7
14 HM001 2.01212e+11   28.9
15 HM001 2.01212e+11   29.1

imput <- list(a.out2$imputations[1], a.out2$imputations[2])
do.call(rbind, imput)
[[1]]
[[1]]$imp1
      ID       CTIME ACTIVE_KWH
1  HM001 2.01212e+11   24.2
2  HM001 2.01212e+11   25.5
3  HM001 2.01212e+11   25.6
4  HM001 2.01212e+11   25.90065
5  HM001 2.01212e+11   26.6
6  HM001 2.01212e+11   26.7
7  HM001 2.01212e+11   27.1
8  HM001 2.01212e+11   27.4
9  HM001 2.01212e+11   27.5
10 HM001 2.01212e+11   27.8
11 HM001 2.01212e+11   28.2
12 HM001 2.01212e+11   28.44605
13 HM001 2.01212e+11   28.7
14 HM001 2.01212e+11   28.9
15 HM001 2.01212e+11   29.1

[[2]]
[[2]]$imp2
      ID       CTIME ACTIVE_KWH
1  HM001 2.01212e+11   24.2
2  HM001 2.01212e+11   25.5
3  HM001 2.01212e+11   25.6
4  HM001 2.01212e+11   25.87163
5  HM001 2.01212e+11   26.6
6  HM001 2.01212e+11   26.7
7  HM001 2.01212e+11   27.1
8  HM001 2.01212e+11   27.4
9  HM001 2.01212e+11   27.5
10 HM001 2.01212e+11   27.8
11 HM001 2.01212e+11   28.2
12 HM001 2.01212e+11   28.68048
13 HM001 2.01212e+11   28.7
14 HM001 2.01212e+11   28.9
15 HM001 2.01212e+11   29.1

res <- aggregate(. ~ ID + CTIME, data=imput, mean)
The following error: eval(expr, envir, enclos) : no element 'ID'
# I translated this line into English because it was written in my mother language.

-Original Message-
From: arun smartpink...@yahoo.com
To: 남윤주 jamansymp...@naver.com
Cc: R help r-help@r-project.org
Sent: 2013-01-29 (Tue) 12:20:10
Subject: Re: Thank you your
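The fix arun describes (rbind the list of imputed data frames, then average per ID/CTIME key with aggregate()) can be shown end-to-end on toy data. The frames below stand in for Amelia's $imputations list and are not the poster's data:

```r
# Three toy "imputations" of the same series; only the third row was imputed
# differently in each, so only its mean differs from the raw values
imp <- list(
  data.frame(ID = "HM001", CTIME = 1223:1225, WEIGHT = c(24.9, 25.2, 25.24)),
  data.frame(ID = "HM001", CTIME = 1223:1225, WEIGHT = c(24.9, 25.2, 25.55)),
  data.frame(ID = "HM001", CTIME = 1223:1225, WEIGHT = c(24.9, 25.2, 25.47))
)

combined <- do.call(rbind, imp)    # stack all imputations into one frame
res <- aggregate(WEIGHT ~ ID + CTIME, data = combined, FUN = mean)
res$WEIGHT                         # cell-wise means across the imputations
```

Passing the list itself to aggregate() reproduces the poster's "no element 'ID'" failure; the do.call(rbind, ...) step is what turns the list into a single data frame the formula can see.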
Re: [R] Thank you your help and one more question.
Hi,

How do you want to combine the results? It looks like the 5 datasets are list elements. If I take the first three list elements:

imput1_2_3 <- list(
  imp1 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.24132, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 30.1377, 31.17251, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)),
  imp2 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.54828, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 29.8977, 31.35045, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)),
  imp3 = structure(list(ID = rep("HM001", 15), CTIME = 1223:1237,
      WEIGHT = c(24.9, 25.2, 25.5, 25.46838, 25.7, 27.1, 27.3, 27.4,
                 28.4, 29.2, 30.88185, 31.57952, 32.4, 33.7, 34.3)),
    .Names = c("ID", "CTIME", "WEIGHT"), class = "data.frame",
    row.names = as.character(1:15)))

#It could be combined by:
do.call(rbind, imput1_2_3)
# But if you do this, the total number of rows will be the sum of the number of rows of each dataset. I guess you want something like this:

res <- Reduce(function(...) merge(..., by=c("ID","CTIME")), imput1_2_3)
names(res)[3:5] <- paste("WEIGHT", "IMP", 1:3, sep="")
res
#      ID CTIME WEIGHTIMP1 WEIGHTIMP2 WEIGHTIMP3
#1  HM001  1223 24.9       24.9       24.9
#2  HM001  1224 25.2       25.2       25.2
#3  HM001  1225 25.5       25.5       25.5
#4  HM001  1226 25.24132   25.54828   25.46838
#5  HM001  1227 25.7       25.7       25.7
#6  HM001  1228 27.1       27.1       27.1
#7  HM001  1229 27.3       27.3       27.3
#8  HM001  1230 27.4       27.4       27.4
#9  HM001  1231 28.4       28.4       28.4
#10 HM001  1232 29.2       29.2       29.2
#11 HM001  1233 30.13770   29.89770   30.88185
#12 HM001  1234 31.17251   31.35045   31.57952
#13 HM001  1235 32.4       32.4       32.4
#14 HM001  1236 33.7       33.7       33.7
#15 HM001  1237 34.3       34.3       34.3

A.K.

From: 남윤주 jamansymp...@naver.com
To: arun smartpink...@yahoo.com
Sent: Monday, January 28, 2013 7:35 PM
Subject: Thank you your help and one more question.

I deeply appreciate your help. Answering your question, I am a software engineer, and I am developing a system that accumulates data to draw charts and tables. For higher performance, I have to deal with missing-value treatment, so I use the Amelia package. Below is the result following your answer.

temp2  # original data
      ID CTIME WEIGHT
1  HM001  1223   24.9
2  HM001  1224   25.2
3  HM001  1225   25.5
4  HM001  1226     NA
5  HM001  1227   25.7
6  HM001  1228   27.1
7  HM001  1229   27.3
8  HM001  1230   27.4
9  HM001  1231   28.4
10 HM001  1232   29.2
11 HM001  1233 1221.0
12 HM001  1234     NA
13 HM001  1235   32.4
14 HM001  1236   33.7
15 HM001  1237   34.3

temp2$WEIGHT <- ifelse(temp2$WEIGHT > 50, NA, temp2$WEIGHT)
temp2  # after eliminating the strange value
      ID CTIME WEIGHT
1  HM001  1223   24.9
2  HM001  1224   25.2
3  HM001  1225   25.5
4  HM001  1226     NA
5  HM001  1227   25.7
6  HM001  1228   27.1
7  HM001  1229   27.3
8  HM001  1230   27.4
9  HM001  1231   28.4
10 HM001  1232   29.2
11 HM001  1233     NA
12 HM001  1234     NA
13 HM001  1235   32.4
14 HM001  1236   33.7
15 HM001  1237   34.3

I have one more question. Below are the codes and results.

a.out2 <- amelia(temp2, m=5, ts="CTIME", cs="ID", polytime=1)
-- Imputation 1 --
1 2 3 4
-- Imputation 2 --
1 2 3
-- Imputation 3 --
1 2 3 4
-- Imputation 4 --
1 2 3
-- Imputation 5 --
1 2 3

a.out2$imputations
$imp1
      ID CTIME   WEIGHT
1  HM001  1223 24.9
2  HM001  1224 25.2
3  HM001  1225 25.5
4  HM001  1226 25.24132
5  HM001  1227 25.7
6  HM001  1228 27.1
7  HM001  1229 27.3
8  HM001  1230 27.4
9  HM001  1231 28.4
10 HM001  1232 29.2
11 HM001  1233 30.13770
12 HM001  1234 31.17251
13 HM001  1235 32.4
14 HM001  1236 33.7
15 HM001  1237 34.3

$imp2
      ID CTIME   WEIGHT
1  HM001  1223 24.9
2  HM001  1224 25.2
3  HM001  1225 25.5
4  HM001  1226 25.54828
5  HM001  1227 25.7
6  HM001  1228 27.1
7  HM001  1229 27.3
8  HM001  1230 27.4
9  HM001  1231 28.4
10 HM001  1232 29.2
11 HM001  1233 29.89770
12 HM001  1234 31.35045
13 HM001  1235 32.4
14 HM001
[R] Thank you
... and while I am at it, as this is the U.S. Thanksgiving...

My sincere thanks to the many R developers and documenters who contribute large amounts of their personal time and effort to developing, improving, and enhancing the accessibility of R for data analysis and science. I believe it is fair to say that R has had as much or more impact than Gosset's Student's t, and I fear that the academics who do much of this work do not receive the professional recognition they deserve. I continue to be amazed and humbled by their high quality and consummate professionalism -- I could not live without R.

Kind regards and best wishes to all,

-- Bert

Bert Gunter
Genentech Nonclinical Biostatistics
Internal Contact Info: Phone: 467-7374
Website: http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm
Re: [R] Thank you
Well said. +1

Dennis

On Thu, Nov 24, 2011 at 6:43 AM, Bert Gunter gunter.ber...@gene.com wrote:

> ... and while I am at it, as this is the U.S. Thanksgiving...
>
> My sincere thanks to the many R developers and documenters who contribute large amounts of their personal time and effort to developing, improving, and enhancing the accessibility of R for data analysis and science. I believe it is fair to say that R has had as much or more impact than Gosset's Student's t, and I fear that the academics who do much of this work do not receive the professional recognition they deserve. I continue to be amazed and humbled by their high quality and consummate professionalism -- I could not live without R.
>
> Kind regards and best wishes to all,
>
> -- Bert
>
> Bert Gunter
> Genentech Nonclinical Biostatistics
> Internal Contact Info: Phone: 467-7374
> Website: http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm
Re: [R] Thank you
Bert, you said it better than I ever could. What R creators, developers, and documenters do for us every day, and how they affect our work as statisticians, is something I would not know how to measure. THANK YOU!

Frank

Bert Gunter wrote:

> ... and while I am at it, as this is the U.S. Thanksgiving...
>
> My sincere thanks to the many R developers and documenters who contribute large amounts of their personal time and effort to developing, improving, and enhancing the accessibility of R for data analysis and science. I believe it is fair to say that R has had as much or more impact than Gosset's Student's t, and I fear that the academics who do much of this work do not receive the professional recognition they deserve. I continue to be amazed and humbled by their high quality and consummate professionalism -- I could not live without R.
>
> Kind regards and best wishes to all,
>
> -- Bert
>
> Bert Gunter
> Genentech Nonclinical Biostatistics
> Internal Contact Info: Phone: 467-7374
> Website: http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm

-
Frank Harrell
Department of Biostatistics, Vanderbilt University
Re: [R] Thank you
And from the side of an ordinary user who, two years ago, opened the page that read "Chapter 1: What is R?" -- to all of you on this list: since reading that first page, things have changed so that I would not get through a normal working day without the software you create and the advice you give. Thank you to all of you.

Christiaan

On 24 November 2011 18:00, Frank Harrell f.harr...@vanderbilt.edu wrote:

> Bert, you said it better than I ever could. What R creators, developers, and documenters do for us every day, and how they affect our work as statisticians, is something I would not know how to measure. THANK YOU!
>
> Frank
>
> Bert Gunter wrote:
>
>> ... and while I am at it, as this is the U.S. Thanksgiving...
>>
>> My sincere thanks to the many R developers and documenters who contribute large amounts of their personal time and effort to developing, improving, and enhancing the accessibility of R for data analysis and science. I believe it is fair to say that R has had as much or more impact than Gosset's Student's t, and I fear that the academics who do much of this work do not receive the professional recognition they deserve. I continue to be amazed and humbled by their high quality and consummate professionalism -- I could not live without R.
>>
>> Kind regards and best wishes to all,
>>
>> -- Bert
>>
>> Bert Gunter
>> Genentech Nonclinical Biostatistics
Re: [R] Thank you
On 24/11/11 14:43, Bert Gunter wrote:

> ... and while I am at it, as this is the U.S. Thanksgiving...
>
> My sincere thanks to the many R developers and documenters who contribute large amounts of their personal time and effort to developing, improving, and enhancing the accessibility of R for data analysis and science. I believe it is fair to say that R has had as much or more impact than Gosset's Student's t, and I fear that the academics who do much of this work do not receive the professional recognition they deserve. I continue to be amazed and humbled by their high quality and consummate professionalism -- I could not live without R.
>
> Kind regards and best wishes to all,
>
> -- Bert

And even though I (and many others) am on the 'wrong' side of the pond, I'd like to add my +1 to these thanks. Thanks!

Paul.
Re: [R] Thank you! logarithmically scaled y-axis in vioplot
Dear Uwe,

Thank you very much for your reply.

Best, Alex

-------- Original message --------
Date: Tue, 22 Nov 2011 15:41:17 +0100
From: Uwe Ligges lig...@statistik.tu-dortmund.de
To: french-connect...@gmx.net
CC: r-help@r-project.org
Subject: Re: [R] logarithmically scaled y-axis in vioplot

On 22.11.2011 12:37, french-connect...@gmx.net wrote:

> Dear all, I am trying to make a graphic with the vioplot package. I use the following code:
>
> library(vioplot)
> x1 <- GSMrxDL
> x2 <- WIFI
> x3 <- UMT
> vioplot(x1, x2, x3, ylim=c(0, 10), names=c("GSMrxDL", "WIFI", "UMT"), col="gold")
> title("NIS Strahlung", xlab="Sender", ylab="V/m")
>
> Now I want to scale the y-axis logarithmically, i.e. 0.01; 0.1; 1; 10. How can I do this?

[resending this since my mail tool forgot to add the address of the OP]

I think you will have to tweak the underlying code. A feature request to the authors may help; a patch that adds the requested feature may help even more.

Best, Uwe Ligges

> Thank you very much, Alex
[R] Thank you
Dear R community,

Today I completed my PhD. I would like to take this opportunity to thank the R community for helping me with my R coding. I used R for data manipulation during my PhD and benefited a lot from the discussions in the R forum. I will continue using R and will help others too. Thank you so much.

Regards,
Roslina
UniSA
[R] thank you
Hi Dennis,

I was able to solve my problem. Thank you for your encouragement and time.

n <- 7
newvars <- c(paste('m', rep(1:n, each = 4), rep(c('a', 'b')), rep(c('p1', 'p2'), each = 2), sep = ''))
newvars
 [1] m1ap1 m1bp1 m1ap2 m1bp2 m2ap1 m2bp1 m2ap2 m2bp2 m3ap1
[10] m3bp1 m3ap2 m3bp2 m4ap1 m4bp1 m4ap2 m4bp2 m5ap1 m5bp1
[19] m5ap2 m5bp2 m6ap1 m6bp1 m6ap2 m6bp2 m7ap1 m7bp1 m7ap2
[28] m7bp2

Umesh R
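Umesh's paste() call relies on argument recycling: the 'a'/'b' vector recycles fastest, then the 'p1'/'p2' pairs, then the m1..m7 prefixes. The same 28 names can be generated more explicitly with expand.grid(), whose first argument varies fastest, matching the order above (this alternative is my addition, not from the thread):

```r
# expand.grid varies its first argument fastest: a/b, then p1/p2, then m1..m7
g <- expand.grid(ab = c("a", "b"), p = c("p1", "p2"), m = 1:7)
newvars2 <- paste0("m", g$m, g$ab, g$p)
head(newvars2, 4)   # "m1ap1" "m1bp1" "m1ap2" "m1bp2"
```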
[R] thank you so much
Hi thereI appreciate for your reply.I am running into another problem now, the following is my date (example)2000-1-4 -0.0383447182000-1-5 0.001952000-1-6 0.0009557022000-1-7 0.0270903842000-1-10 0.0111899662000-1-11-0.0130625692000-1-12 -0.0043863312000-1-13 0.0121696632000-1-140.0106713212000-1-18-0.0068320652000-1-19 0.0005222872000-1-20-0.0070952682000-1-21 -0.002912346Let's focus on the dates, by using weekdays(as.Date('2010-11-30')) == Monday, im supposed to find the monday return (2nd column)i create an array which includes the dates, let's call it dates. then when i plug in the dates into the following codesi=1while(i14){if(weekdays(as.Date(dates[i])) == Monday){pirnt (yes)}else {print(no)}i=1+i}the programe does not run properly...cuz im expecting to see yes been returned (there is not any no returned, all of them are yes..). i guess dates[i] returns 2010-1-1, but the as.Date requires '2010-1-1! '.. please help me out. I appreciate.Bill Subject: Welcome to the R-help mailing list From: r-help-requ...@r-project.org To: gy631...@hotmail.com Date: Wed, 1 Dec 2010 03:34:01 +0100 Welcome to the R-help@r-project.org mailing list! To post to this list, send your email to: r-help@r-project.org General information about the mailing list is at: https://stat.ethz.ch/mailman/listinfo/r-help If you ever want to unsubscribe or change your options (eg, switch to or from digest mode, change your password, etc.), visit your subscription page at: https://stat.ethz.ch/mailman/options/r-help/gy631223%40hotmail.com You can also make such adjustments via email by sending a message to: r-help-requ...@r-project.org with the word `help' in the subject or body (don't include the quotes), and you will get back a message with instructions. You must know your password to change your options (including changing the password, itself) or to unsubscribe. 
Re: [R] a problem about integrate function in R .thank you .
I don't seem to get a problem with this. Have you tried a Monte Carlo approach to verify that you are getting incorrect answers? For me, when the upper limit is 1 I get

integrate(e2, lower = 0, upper = 1)
-0.2820948 with absolute error < 5e-05
sum(e2(runif(1e6)))/1e6
[1] -0.2825667

which seems to be consistent, while for upper = 0.75 I get

0.75*sum(e2(runif(1e6, min = 0, max = 0.75)))/1e6
[1] -0.2333506
integrate(e2, lower = 0, upper = 0.75)
-0.2341178 with absolute error < 7.8e-05

On Oct 23, 2:52 pm, fuzuo xie xiefu...@gmail.com wrote:

e2 <- function(x) {
  out <- 0*x
  for (i in 1:length(x))
    out[i] <- integrate(function(y) qnorm(y), lower = 0, upper = x[i])$value
  out
}
integrate(e2, lower = 0, upper = a)$value

Above is my code. When a is small, say a < 0.45, the result is right; however, when a > 0.5 the result is incorrect. Why? Thank you.
[R] a problem about integrate function in R .thank you .
e2 <- function(x) {
  out <- 0*x
  for (i in 1:length(x))
    out[i] <- integrate(function(y) qnorm(y), lower = 0, upper = x[i])$value
  out
}
integrate(e2, lower = 0, upper = a)$value

Above is my code. When a is small, say a < 0.45, the result is right; however, when a > 0.5 the result is incorrect. Why? Thank you.
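For cross-checking the numbers in this thread: the inner integral has a closed form, since the derivative of -dnorm(qnorm(x)) is qnorm(x), so the integral of qnorm(y) from 0 to x equals -dnorm(qnorm(x)). A sketch (not from the thread) using that identity:

```r
# Sketch: closed form for the inner integral, used to check integrate().
# integral from 0 to x of qnorm(y) dy  ==  -dnorm(qnorm(x))
e2_closed <- function(x) -dnorm(qnorm(x))
integrate(e2_closed, lower = 0, upper = 1)$value    # about -0.2820948, i.e. -1/(2*sqrt(pi))
integrate(e2_closed, lower = 0, upper = 0.75)$value # about -0.2341, matching the value quoted above
```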
[R] Thank you David : Re : Odp: Re : Odp: Re : table function
Thank you very, very much, David.

De : David Winsemius dwinsem...@comcast.net
À : David Winsemius dwinsem...@comcast.net
Envoyé le : Mardi, 25 Août 2009, 23h32mn 53s
Objet : Re: [R] Re : Odp: Re : Odp: Re : table function

On Aug 25, 2009, at 9:23 AM, David Winsemius wrote: On Aug 25, 2009, at 6:25 AM, Inchallah Yarab wrote: [[elided Yahoo spam]] ?table ?xtabs I read this incorrectly. Try instead:

by(z, z.c, sum)
z.c: 0 - 1000
[1] 1010
z.c: 1000 - 3000
[1] 6400
z.c: > 3000
[1] 14200

De : Petr PIKAL petr.pi...@precheza.cz
Cc : r-help@r-project.org
Envoyé le : Mardi, 25 Août 2009, 11h53mn 21s
Objet : Odp: [R] Re : Odp: Re : table function

Hi

r-help-boun...@r-project.org napsal dne 25.08.2009 11:28:31:

Thank you Petr, in my vector z I have missing values (NA) and I want to count their number in the vector. I had already done all this; I know the difference between a numeric and a factor.

OK. If you know the difference between factor and numeric, you have probably seen ?factor, where there is a note about how to make NA an extra level.

# Here is z
z <- c(10, 100, 1000, 1200, 2000, 2200, 3000, 3200, 5000, 6000)
# let's put some NA values into it
z[c(2, 5)] <- NA
# let's make a cut
z.c <- cut(z, breaks = c(-Inf, 1000, 3000, Inf), labels = c("0 - 1000", "1000 - 3000", "> 3000"))
# as you see there are NA values but they are not an extra level
z.c
 [1] 0 - 1000    <NA>        0 - 1000    1000 - 3000 <NA>
 [6] 1000 - 3000 1000 - 3000 > 3000      > 3000      > 3000
Levels: 0 - 1000 1000 - 3000 > 3000
is.na(z.c)
 [1] FALSE TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
# so let's try what the help page says
factor(z.c, exclude = NULL)
 [1] 0 - 1000    <NA>        0 - 1000    1000 - 3000 <NA>
 [6] 1000 - 3000 1000 - 3000 > 3000      > 3000      > 3000
Levels: 0 - 1000 1000 - 3000 > 3000 <NA>
# wow, we have NA as an extra level, let's do table
table(factor(z.c, exclude = NULL))
   0 - 1000 1000 - 3000      > 3000        <NA>
          2           3           3           2

Regards Petr

The goal of this exercise is to count the number of missing values and the number of values between 0-1000,
1000-3000, and > 3000. Thank you again for your help.

De : Petr PIKAL petr.pi...@precheza.cz
Cc : r-help@r-project.org
Envoyé le : Mardi, 25 Août 2009, 11h15mn 23s
Objet : Odp: [R] Re : table function

Hi

r-help-boun...@r-project.org napsal dne 25.08.2009 10:08:36:

Hi Mark, Thank you for your answer !! It works, but if I have NA in the vector z, what should I do to count their number in z?

You do not have NA in z; you managed to convert it somehow to a factor. Please try to read about data types and their behaviour. Start with chapter 2.8 "Other types of objects" in the R intro manual, which I suppose you have in the doc folder of the R program directory. You can possibly convert it back to numeric by e.g. DF$z <- as.numeric(as.character(DF$z)), but I presume you need to check your original data, maybe by str(your.data), to see what mode they are and why they are factor if you expect them to be numeric.

Regards Petr

x  y  z
1  0  100
5  1  1500
6  1  NA
2  2  500
1  1  NA
5  2  2000
8  5  4500

I did the same but it gives me this error message:

  [0 - 1000] [1000 - 3000]      > 3000
           0             0           0
Warning message:
In inherits(x, "factor") : NAs introduced by coercion

Thank you

De : Marc Schwartz marc_schwa...@me.com
Cc : r-help@r-project.org
Envoyé le : Lundi, 24 Août 2009, 18h33mn 52s
Objet : Re: [R] table function

On Aug 24, 2009, at 10:59 AM, Inchallah Yarab wrote: hi, I want to use the function table to build a table not of frequencies (the number of times the variable is repeated in a list or a data frame!!) but as a function of classes. [[elided Yahoo spam]] Example:

x  y  z
1  0  100
5  1  1500
6  1  1200
2  2  500
1  1  3500
5  2  2000
8  5  4500

I want a table summarizing the number of rows where z is in [0-1000], [1000-3000], [> 3000]. Thank you very much for your help.

See ?cut, which bins a continuous variable.

DF
  x y    z
1 1 0  100
2 5 1 1500
3 6 1 1200
4 2 2  500
5 1 1 3500
6 5 2 2000
7 8 5 4500
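Putting the pieces of this thread together, here is a self-contained sketch (the "> 3000" bin label is my own) that bins z, counts each bin including the NAs, and sums z within each bin:

```r
# Sketch: bin z with cut(), count per bin including NA, and sum per bin.
z <- c(10, 100, 1000, 1200, 2000, 2200, 3000, 3200, 5000, 6000)
z[c(2, 5)] <- NA  # introduce missing values
z.c <- cut(z, breaks = c(-Inf, 1000, 3000, Inf),
           labels = c("0 - 1000", "1000 - 3000", "> 3000"))
table(factor(z.c, exclude = NULL))  # counts: 2, 3, 3, and 2 NAs
tapply(z, z.c, sum)                 # sums per bin: 1010, 6400, 14200
```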
[R] Thank You All for the help
Hi All, I wish to thank all you guys out there. It is because of your help that I was able to learn how to use R in a short time. Thank you again for the help and the quick responses. Regards, Rajat -- Rajat, PhD student, Industrial Engineering, Texas Tech University, Lubbock, TX, USA. -- WE, THE PEOPLE OF INDIA, having solemnly resolved to constitute India into a SOVEREIGN, SOCIALIST, SECULAR, DEMOCRATIC, REPUBLIC and to secure to all its citizens: -JUSTICE, social, economic and political; LIBERTY of thought, expression, belief, faith and worship; EQUALITY of status and of opportunity; and to promote among them all; FRATERNITY assuring the dignity of the individual and the unity and integrity of the nation; IN OUR CONSTITUENT ASSEMBLY this twenty-sixth day of November 1949, do HEREBY ADOPT, ENACT AND GIVE TO OURSELVES THIS CONSTITUTION. -- I use Debian Lenny.
[R] Thank you very much for all your possible solutions!
I also managed to get the right result but within a for loop ;) So I really appreciate your solutions! Thanks a lot!
Re: [R] Thank you
Yes! Again, thank you ALL very, very much. Even simply lurking on the list generates many gems worth collecting. DaveT. From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] I totally agree with both of you. This is a super place to mature one's R. I learn a lot from this R heaven! Chunhao Quoting Esmail Bonakdarian [EMAIL PROTECTED]: Hear hear! .. I've benefited greatly from reading the postings here, and some of the members have been very generous with their knowledge too! Esmail Tubin wrote: In the past few weeks I have had to give myself a crash course in R in order to accomplish some necessary tasks for my job. During that time, I've found this forum to be helpful time and time again - usually I find the answer to my problem by searching the archives; once or twice I've posted questions and been given near-immediate solutions. I just wanted to thank the forum participants for all of their contributions over the years!
[R] Thank you
In the past few weeks I have had to give myself a crash course in R, in order to accomplish some necessary tasks for my job. During that time, I've found this forum to be helpful time and time again - usually I find the answer to my problem by searching the archives; once or twice I've posted questions and been given near-immediate solutions. I just wanted to thank the forum participants for all of their contributions over the years!
Re: [R] Thank you
Tubin wrote: In the past few weeks I have had to give myself a crash course in R, in order to accomplish some necessary tasks for my job. During that time, I've found this forum to be helpful time and time again - usually I find the answer to my problem by searching the archives; once or twice I've posted questions and been given near-immediate solutions. I just wanted to thank the forum participants for all of their contributions over the years! Hear hear! .. I've benefited greatly from reading the postings here and some of the members have been very generous with their knowledge too! Esmail
Re: [R] Thank you
I totally agree with both of you. This is a super place to mature one's R. I learn a lot from this R heaven! Chunhao Quoting Esmail Bonakdarian [EMAIL PROTECTED]: Tubin wrote: In the past few weeks I have had to give myself a crash course in R, in order to accomplish some necessary tasks for my job. During that time, I've found this forum to be helpful time and time again - usually I find the answer to my problem by searching the archives; once or twice I've posted questions and been given near-immediate solutions. I just wanted to thank the forum participants for all of their contributions over the years! Hear hear! .. I've benefited greatly from reading the postings here and some of the members have been very generous with their knowledge too! Esmail
Re: [R] convert an S plus file to R? ----- Thank you!
hi all! Thank you for replying to my message. I will try all your suggestions. Thank you again! Filame

filame uyaco [EMAIL PROTECTED] wrote: hi! I am sending my question again because there was a problem earlier: someone could not see my attached file. If you really can't download it, this is the attached file. --- file --- Please help me convert this S-PLUS file to R. Is there a quick method to do it? I don't have an S-PLUS installer here. Thanks for the help. Filame
[R] Thank you, and your suggestion works
Hi Jim, What you told me works well. I had tried using eval(parse(text = "1:20 = x")) as well as try(parse(text = "1:20 = x")) before, but not the eval and try functions together. I just added the try function following your suggestion, i.e. try(eval(parse(text = "1:20 = x"))), and then it worked well. Thank you very much for the nice and instant help. Howard

On Wed Oct 10 10:02:39 EDT 2007, jim holtman [EMAIL PROTECTED] wrote:

You need to follow the posting guide and provide commented, minimal, self-contained, reproducible code. I can only guess at what your code looks like, but the following catches the error:

z <- try(eval(parse(text = "1:20 <- x")))
Error in eval(expr, envir, enclos) : object "x" not found
str(z)
Class 'try-error'  chr "Error in eval(expr, envir, enclos) : object \"x\" not found\n"
z <- try(eval(parse(text = "x <- 1:20")))
str(z)
 int [1:20] 1 2 3 4 5 6 7 8 9 10 ...

On 10/10/07, HU, ZHENGJUN [EMAIL PROTECTED] wrote: Hi All, I entered an R statement, e.g. 1:20 = x or log(a), on an HTML form and passed it to an R-CGI script. Obviously, neither of these is a correct R statement or expression. However, my R-CGI script could not return and report the error message to the web site, even though it received the statement. I tried some R built-in functions: try, tryCatch, eval, expression, as.expression, parse, deparse, etc. None of them worked. Note: when a statement is correct, i.e. x = 1:20 or log(2), the same script returned and reported the R output to the web site. If any of you has experience with this matter, could you please tell me how to catch an R error in this case? Thank you very much for the help in advance. Howard

-- Jim Holtman, Cincinnati, OH, +1 513 646 9390. What is the problem you are trying to solve?
-- HU, ZHENGJUN
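As a footnote to this thread (a sketch, not code from it; the run_stmt name is my own): tryCatch gives finer control than try(eval(parse(...))), letting a CGI script turn any R error into a plain string it can send back to the web page:

```r
# Sketch: evaluate a user-supplied statement and capture any error as a string.
run_stmt <- function(txt) {
  tryCatch(
    eval(parse(text = txt)),
    error = function(e) paste("R error:", conditionMessage(e))
  )
}
run_stmt("x <- 1:20; sum(x)")  # returns 210
run_stmt("log(a)")             # the "object not found" error comes back as a string
```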