[R] Box plot without original data

2014-04-04 Thread Mª Teresa Martinez Soriano
Hi to everyone!

I need to plot a box plot but I don't have the original data, is it possible to 
make it just with the median, mean, sd and range of values?

Any idea is welcome.


Thanks in advance.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Simulating data

2014-04-04 Thread Mª Teresa Martinez Soriano
Hi to everyone


To simulate data, the only command I know is:

rnorm(n, mean, sd)

Is there another way to specify the median and range as well?

The point is that I have  this information about the variable:


Median: 4.3
Mean: 4.2
SD: 1.8
Range: 0-8

and I need a boxplot, but I don't have the original data, so simulating data 
seems like a good idea.

Thanks in advance.


Re: [R] Box plot without original data

2014-04-04 Thread Jim Lemon


Hi Teresa,
You can get the return value from the boxplot function and then 
substitute the values you want into the $stats component. Then pass that 
to the bxp function. See the help pages for boxplot and bxp.


Another option is to use the box.heresy function in the plotrix package, 
as that function does not assume that the box will be defined by the 
median and (approximate) quartiles.


Jim
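As a rough sketch of Jim's first suggestion (the quartile values here are invented, since the original post only gives the median, mean, sd and range):

```r
## Hypothetical five-number summary; q1 and q3 are assumed, not from the post
lo <- 0; q1 <- 3.0; med <- 4.3; q3 <- 5.5; hi <- 8

b <- boxplot(rnorm(30), plot = FALSE)  # dummy data, just to get the structure
b$stats <- matrix(c(lo, q1, med, q3, hi), ncol = 1)
b$out <- numeric(0)                    # drop outliers left over from the dummy data
b$group <- numeric(0)
bxp(b)                                 # draw the box from the substituted values
```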



Re: [R] Equation of a curve

2014-04-04 Thread Keith Jewell

On 03/04/2014 16:26, Frances Cheesman wrote:

Hi all,

I have a number of bacterial growth curves I would like to find the
equations for these and then integrate them to find the area under the
curves for me to do stats on later.

Is there any way I can do this in R?

Thanks,

Frances


Responding to the curve fitting question and passing over the 
integration issue...


It is quite common to use nls to fit equations to log(count) v time 
data. You'll have to choose an appropriate model, ideally as a 
self-starting nls model. Of those included in the stats package you might 
consider SSfpl, SSgompertz, SSlogis and SSweibull.


But the choice of a model is really a microbiological issue, and all those 
models might be considered a little passé. Fitting this kind of 
sigmoidal model can be difficult unless the data are good.




[R] [R-pkgs] rDVR package

2014-04-04 Thread John Harrison
I would like to announce the release of version 0.1.1 of the rDVR package
on CRAN.  rDVR is an R package that allows cross-platform (Linux/Windows/OS X)
video recording from R. It is lightweight and allows up to 10 minutes of
recording by default.

rDVR has a project page at

http://johndharrison.github.io/rDVR/
http://cran.r-project.org/web/packages/rDVR/index.html

The package comes with a vignette which contains overviews of basic
operation and a few examples linking to projects such as Shiny and
RSelenium. Uses for rDVR include bug testing, project documentation and web
testing.

Comments and suggestions are greatly appreciated.

John Harrison


___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages



Re: [R] Equation of a curve

2014-04-04 Thread Frances Cheesman
Thanks everyone for all your help; I don't think it's necessarily as easy as
I first thought.

I'm going to have a think about it and try some things out. And I'll be
back if I get stuck!

Thanks very much,
Frances




Re: [R] Simulating data

2014-04-04 Thread Frede Aakmann Tøgersen
Hi Teresa

Try this:

Median <- 4.3
Mean <- 4.2
SD <- 1.8
RangeLower <- 0
RangeUpper <- 8

par(mfcol = c(1,2))

(bpstats <- boxplot(rnorm(100, mean = Mean, sd = SD), outl = FALSE))

## assuming normality of the original data since we don't know the distribution
## get the 25% and 75% quantiles
q25 <- qnorm(0.25, mean = Mean, sd = SD)
q75 <- qnorm(0.75, mean = Mean, sd = SD)

yourstats <- matrix(c(RangeLower, q25, Median, q75, RangeUpper), ncol = 1)

bpstats$stats <- yourstats

bxp(bpstats, outl = FALSE)



Yours sincerely / Med venlig hilsen


Frede Aakmann Tøgersen
Specialist, M.Sc., Ph.D.
Plant Performance & Modeling

Technology & Service Solutions
T +45 9730 5135
M +45 2547 6050
fr...@vestas.com
http://www.vestas.com

Company reg. name: Vestas Wind Systems A/S
This e-mail is subject to our e-mail disclaimer statement.
Please refer to www.vestas.com/legal/notice
If you have received this e-mail in error please contact the sender. 




Re: [R] MANOVA post hoc testing

2014-04-04 Thread John Fox
Dear Nicholas,

On Fri, 4 Apr 2014 04:59:09 +
 nicholas.suraw...@csiro.au wrote:
 Greetings,
 
 I'm interested in performing some post hoc tests after conducting a 
 multivariate analysis of covariance (MANCOVA) which I performed using the 
 Anova function in the car package. The covariate did not end up being 
 statistically significant, but the single factor's effect on the multivariate 
 response was statistically significant.  From here, I would like to use the 
 linearHypothesis function in the car package to test for where significant 
 differences are occurring.
 
 Codewise, to fit the MANCOVA I've used:
 Model_1 <- Anova(lm((Y)~X+covariate))
 I've then tried to perform multiple comparisons using:
 linearHypothesis(Model_1, matchCoefs(Model_1, "X"), white.adjust=TRUE)

The argument white.adjust=TRUE isn't available for a multivariate linear model; 
see ?linearHypothesis. Also, the command (without the white.adjust argument) 
will give you the same test as Anova() did.
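For illustration, a sketch of the pattern John describes: keep the lm object itself and pass that (not the Anova() result) to linearHypothesis(). The data here are made up, and the car package is assumed to be installed.

```r
library(car)  # for Anova, linearHypothesis, matchCoefs

## Hypothetical multivariate response, one factor, one covariate
set.seed(1)
X <- factor(rep(letters[1:3], each = 10))
covariate <- rnorm(30)
Y <- cbind(y1 = rnorm(30), y2 = rnorm(30))

mod <- lm(Y ~ X + covariate)   # keep the multivariate lm itself
Anova(mod)                     # MANCOVA-style tests

## pass the lm (not the Anova result) to linearHypothesis:
linearHypothesis(mod, matchCoefs(mod, "X"))
```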

Best,
 John


John Fox, Professor
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox/

 
 After I run the 2nd bit of code I get the error message: Error in 
 vcov.default(model) : there is no vcov() method for models of class 
 "Anova.mlm".
 
 Presumably, this means that I've stuffed up in trying to estimate the 
 variance-covariance matrix for the Anova function.
 
 Any suggestions/help to resolve this problem would be greatly appreciated.
 
 Cheers,
 
 Nic Surawski
 
 
 


[R] simulation data

2014-04-04 Thread thanoon younis
dear sir
I want to simulate a multivariate ordinal data matrix with categories (1,4)
and n=1000 and p=10.
Thanks a lot

thanoon



[R] How to plot data using ggplot

2014-04-04 Thread drunkenphd
Hi,
I have a list of cities and their coordinates, and also for each city I have
a variable varA which I want to represent on a map using ggplot.
For example:

CityA  lat 22.93977  lon 46.70663  varA 545

CityB  lat 23.93977  lon 46.70663  varA 122

VarA values range from 0 to 3000.
I want the color scale to represent this range appropriately.
Can you help?
Regards





[R] Lognormal AR(0,1) model

2014-04-04 Thread Chris89
Hi everyone!

I am trying to make two log-normal AR(0,1) models using R with a given
correlation between them, \rho, of the form:

X_t = \alpha X_{t-1} + a_t
Y_t = \beta Y_{t-1} + b_t

At the moment I have been making n values of correlated log-normal data,
called a_t and b_t, and generated a starting value X[1] and Y[1] using the
rnorm() function. The rest of the n-1 values are calculated in a for() loop.
The data do get a lognormal look, but it is obviously not a lognormal
distribution. 

As I am a novice to time series, my question is simply: Is there any way to
make correlated log-normally distributed AR(0,1) models, and is there any
package in R that will help me?

sincerely
Chris
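One common construction (an assumption on my part, not the only possible definition): generate two Gaussian AR(1) series driven by correlated normal innovations, then exponentiate, so each marginal is lognormal. MASS ships with R and provides mvrnorm; the parameter values below are arbitrary.

```r
library(MASS)  # for mvrnorm
set.seed(42)
n <- 1000; alpha <- 0.6; beta <- 0.6; rho <- 0.8

## correlated Gaussian innovations
innov <- mvrnorm(n, mu = c(0, 0),
                 Sigma = matrix(c(1, rho, rho, 1), nrow = 2))

## Gaussian AR(1) recursions via stats::filter
x <- as.numeric(filter(innov[, 1], alpha, method = "recursive"))
y <- as.numeric(filter(innov[, 2], beta,  method = "recursive"))

X <- exp(x)   # lognormal marginals
Y <- exp(y)
cor(x, y)     # correlation on the log scale
```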

 







[R] MICE, POST-PROCESSING imputations with two conditions

2014-04-04 Thread Leah Schinasi
Dear R listserve,

I am writing with the hope of receiving assistance using the MICE package. My 
troubles relate to post-processing imputations. I am attempting to impute 
variables for which I want to specify two conditions using post-processing 
imputation: 1) if the variable representing the pesticide application task 
(PESTICIDE) is coded 0, meaning the participant did not conduct the task, then 
I want the duration of work in the task (DURR) and the start year for the task 
(START_DATE) to be coded as NA; 2) if PESTICIDE is coded 1 (did conduct the 
task), then I want the duration and start year to be imputed within certain 
boundaries (restricting the values of the imputed date and duration variables).

The two statements seem to work independently (post-processing statements are 
recognized); however, I can't figure out how to combine them in one post 
statement . Thus, I can't get my imputations to take the two conditions into 
account; I can either restrict the variables to be set to missing if PESTICIDE 
is 0, or I can get my imputations to be restricted to my specified boundaries, 
but I can't get R to take both conditions into account simultaneously.

The following is the code that I am using; the two post-processing statements 
I am having trouble with are the post[...] assignments near the end:

ini <- mice(agrican.subb, print = FALSE, max = 0)
pred <- ini$pred

pred["START_DATE", ] <- 1
pred["END_DATE", ] <- 0
pred[, "END_DATE"] <- 0
pred[, "START_DATE"] <- 0
pred[, "YEAR_BIRTH"] <- 0
pred[, "year_ten"] <- 0

pred[, "DIFF"] <- 0
pred[, "ID"] <- 0
pred[, "CORN"] <- 0

method <- c("", "", "", "", "", "norm", "", "pmm", "", "", "", "", "", "", "norm")

post <- ini$post

post["START_DATE"] <- "imp[[j]][, i] <- squeeze(imp[[j]][, i], c(agrican.subb$YEAR_TEN, agrican.subb$ENROLLMENT_YEAR))"
post["START_DATE"] <- "imp[[j]][p$data$PESTICIDE[!r[, j]] < 1, i] <- NA"

post["DIFF"] <- "imp[[j]][, i] <- squeeze(imp[[j]][, i], c(0, 0))"
#post["DIFF"] <- "imp[[j]][p$data$PESTICIDE[!r[, j]] < 1, i] <- NA"

imp <- mice(subb, post = post, pred = pred, seed = 2332, method = method, maxit = 3)
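One pattern that may help here, untested against these data: a post string can contain several statements separated by semicolons, so both conditions can go into a single entry. The variable names are taken from the post; the bounds (0, 60) and the placeholder post vector are invented for illustration.

```r
## Placeholder post vector (in the real code this would be ini$post)
post <- character(2)
names(post) <- c("DURR", "START_DATE")

## Assumption: several statements in one post string, separated by ";" --
## squeeze the imputed values first, then blank them out where PESTICIDE is 0
post["DURR"] <- paste(
  "imp[[j]][, i] <- squeeze(imp[[j]][, i], c(0, 60));",
  "imp[[j]][p$data$PESTICIDE[!r[, j]] == 0, i] <- NA")
post["DURR"]
```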

I will appreciate any advice that you can offer.

Thanks in advance!
Leah

Postdoctoral fellow
Section of Environment and Radiation
International Agency for Research on Cancer
150, cours Albert Thomas
69372 Lyon Cedex 08
France
schina...@fellows.iarc.fr
Tel: [+33] (0)472 73 84 85




[R] Mistakes in date conversion for future date/time (POSIXct)

2014-04-04 Thread Winkler, Matthias
Dear R-users,

I'm working on datasets which contain data from the years 1960 to 2100 with a 
timestep of one hour. Every year has 365 days, leap years are ignored.
After reading the dataset with R I convert the column which contains date/time 
to POSIXct:

as.POSIXct(strptime(MyData[,1], format="%d.%m.%Y : %H"))

After that, I divide the data with split() into parts of one year each. Then I 
noticed that the years for some rows are converted wrongly: they 
show years larger than 2100 (see example below).
I've checked my original dataset, and the dates are correct there.

I also produced a date/time sequence in R, which showed the same mistakes (see 
example below). The mistakes occur at the same dates as in my datasets; it's 
always at the end of March.

 datetimesequenz <- seq.POSIXt(from=as.POSIXct("1960-01-01 00:00"), 
 to=as.POSIXct("2100-01-01 00:00"), by="1 hour")
 levels(as.factor(strftime(datetimesequenz, format="%Y")))
  [1] 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 
1970 1971 1972 1973 1974 1975 1976 1977
[19] 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 
1988 1989 1990 1991 1992 1993 1994 1995
[37] 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 
2006 2007 2008 2009 2010 2011 2012 2013
[55] 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 
2024 2025 2026 2027 2028 2029 2030 2031
[73] 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 
2042 2043 2044 2045 2046 2047 2048 2049
[91] 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 
2060 2061 2062 2063 2064 2065 2066 2067
[109] 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 
2078 2079 2080 2081 2082 2083 2084 2085
[127] 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 
2096 2097 2098 2099 2100 2101 2102 2103
[145] 2105 2107 2109 2110 2111 2112 2113 2114 2115 2117 
2118 2120 2121 2122 2124 2125 2126 2128
[163] 2129 2130 2131 2132 2133 2135 2137 2138 2139 2140 
2141 2142 2143 2145 2146 2148 2149 2150
[181] 2152 2153 2154 2156 2157 2158 2159 2160 2161 2166

Has anybody experienced the same problem and knows a workaround?

I'm using R 3.0.1 under Windows 7 64bit. I also tried this with R 3.0.3, it 
showed the same problem.
Thank you for your help!

Kind regards,
Matthias






Re: [R] simulation data

2014-04-04 Thread Charles Determan Jr
Hi Thanoon,

How about this?
# replicate p=10 times random sampling n=1000 from a vector containing your
# ordinal categories (1,2,3,4)
R <- replicate(10, sample(as.vector(seq(4)), 1000, replace = TRUE))

Cheers,
Charles






-- 
Charles Determan
Integrated Biosciences PhD Candidate
University of Minnesota



Re: [R] Mistakes in date conversion for future date/time (POSIXct)

2014-04-04 Thread Rmh

There is a high probability you are hitting a daylight saving time problem. 
See the archives for repair strategies; probably it will be enforcing standard 
time on all measurements.

Rich
Sent from my iPhone
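A sketch of that repair: generate (or parse) the timestamps in a fixed-offset zone such as UTC, which has no DST transitions, so no hours are skipped or repeated.

```r
## Building the sequence in UTC avoids the daylight-saving jumps
s <- seq.POSIXt(from = as.POSIXct("1960-01-01 00:00", tz = "UTC"),
                to   = as.POSIXct("2100-01-01 00:00", tz = "UTC"),
                by   = "1 hour")

yrs <- as.integer(format(s, "%Y", tz = "UTC"))
range(yrs)   # 1960 to 2100, with no spurious later years
```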



Re: [R] Mistakes in date conversion for future date/time (POSIXct)

2014-04-04 Thread Duncan Murdoch


I don't see this in 3.1.0 beta.  Do you?

Duncan Murdoch



[R] Base within reverses column order

2014-04-04 Thread Dan Murphy
I just noticed this annoyance, but I'm not the first one, apparently
-- see 
http://lists.r-forge.r-project.org/pipermail/datatable-help/2012-May/001176.html

The thread never answered the OP's question "Is this a bug?" so I
assume the answer, unfortunately, is "No".

If not a bug, do users of within have a workaround to produce a result
with columns as ordered within 'within'? I can think of a way using
names and subset-with-select, but that seems unduly kludgy.

Thanks,
Dan



Re: [R] Base within reverses column order

2014-04-04 Thread Duncan Murdoch


I wouldn't be surprised if it is not consistent about that.  It uses 
as.list to convert an environment to a list, and that's where the 
reversal occurs:  but since environments are unordered collections of 
objects, you just happen to be seeing an undocumented and unpromised 
property of the internal implementation.


If the order matters to you, then create your initial dataframe with the 
new variables (set to NA, for example), or reorder it afterwards.  But 
generally speaking even in a dataframe (which is an ordered collection 
of objects), it's better to program in a way that doesn't make 
assumptions about the order.  Columns have names, and you should use those.


Duncan Murdoch
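A small sketch of the name-based workaround Duncan suggests (toy data frame, illustrative only):

```r
df <- data.frame(a = 1:3)
df <- within(df, {
  b <- a * 2
  c <- a * 3
})                              # new columns may come back in reverse order (c, b)

df <- df[, c("a", "b", "c")]    # reorder by name afterwards
names(df)
```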



[R] par(mfrow)

2014-04-04 Thread Emanuele Belli
Hi,
I have some problems using the par function: I want to split the screen into 2 
rows and 4 columns, so I type par(mfrow=c(2, 4)), but when I do that, instead 
of just setting a graphical parameter, it opens a blank Quartz window.
I'm currently using the base R version for Mac OS, 3.0.3.
Could you give me some help?
Thank you very much, 

Emanuele


[R] average of rows of each column

2014-04-04 Thread eliza botto

Dear useRs,
I have a matrix of 120 rows and 1000 columns. What I want is the average of 
each set of 10 rows, from 1 to 120, for each column. Precisely, for column 1 
the average of rows 1:10, rows 11:20, ..., rows 111:120; similarly for columns 
2, 3, 4, ..., 1000. So in the end I should have a matrix with 12 rows and 1000 
columns.
Thank you very much in advance.

Eliza
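One base-R sketch of that computation, on random data standing in for Eliza's matrix:

```r
set.seed(1)
mat <- matrix(rnorm(120 * 1000), nrow = 120, ncol = 1000)

grp <- rep(1:12, each = 10)      # 12 groups of 10 consecutive rows
res <- rowsum(mat, grp) / 10     # per-group column sums, divided by group size
dim(res)                         # 12 x 1000
```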
  


Re: [R] How to plot data using ggplot

2014-04-04 Thread Dennis Murphy
1. Look into the ggmap package if you want to overlay your data onto a map.
2. Re your color scale representation, define 'appropriately'. Do you
mean a continuous range expressible in a colorbar or a discrete range,
and if the latter, what intervals did you have in mind?

Dennis
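As a rough sketch (ggplot2 assumed installed; the data frame shape is my guess at the poster's layout), a continuous colour scale pinned to the full 0-3000 range:

```r
library(ggplot2)

## Hypothetical data in the shape described in the post
cities <- data.frame(city = c("CityA", "CityB"),
                     lon  = c(46.70663, 46.70663),
                     lat  = c(22.93977, 23.93977),
                     varA = c(545, 122))

p <- ggplot(cities, aes(lon, lat, colour = varA)) +
  geom_point(size = 4) +
  scale_colour_gradient(limits = c(0, 3000), low = "yellow", high = "red")
p
```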



[R] Revolutions blog: March roundup

2014-04-04 Thread David Smith
Revolution Analytics staff write about R every weekday at the Revolutions blog:
 http://blog.revolutionanalytics.com
and every month I post a summary of articles from the previous month
of particular interest to readers of r-help.

In case you missed them, here are some articles related to R from the
month of March:

Francis Smart offers five excellent reasons to use R, and notes that R
is the top Google Search for statistical software:
http://bit.ly/1dYHrGv

Revolution Analytics is offering R training for SAS users in Singapore
and online: http://bit.ly/1dYHphT

The number of R user groups worldwide continues to grow, and there
have already been over 135 meetings in 2014: http://bit.ly/1dYHphV

Color palettes for R charts based on the production design of Wes
Anderson movies: http://bit.ly/1dYHrGu

A history of ensemble methods, by Mike Bowles: http://bit.ly/1dYHrGt

An eBook on Big Data and Data Science by the publishers of the Big
Data Journal includes articles based on R: http://bit.ly/1dYHphU

An in-depth tutorial by Gaston Sanchez on handling character data with
R: http://bit.ly/1dYHpi3

Joseph Rickert suggests several large, open data sets you can analyze
with R: http://bit.ly/1dYHrGz

Rodrigo Zamith updates his web-based application to compare NCAA
basketball team performance: http://bit.ly/1dYHpyg

Many R projects are under consideration for the 2014 Google Summer of
Code: http://bit.ly/1dYHpyh

Bob Muenchen shares his secrets of teaching with R: http://bit.ly/1dYHrGA

The Atlanta Big Data Analytics Team Challenge sponsored R users to
help CARE International: http://bit.ly/1dYHrGB

The Human Rights Data Analysis Group uses R and ensemble models to
quantify the impact of the war in Syria: http://bit.ly/1dYHrGD

An index of contributed R documentation, assembled into an R meta
book: http://bit.ly/1dYHrGF

The deadline for submitting tutorials to the useR! 2014 conference in
LA has been extended to April 10: http://bit.ly/1dYHpyk

Derek Norton describes how to do ridge regression using the rxCovCor
function of the RevoScaleR package: http://bit.ly/1dYHrGG

In an op-ed at RSS StatsLife, I review the role of statisticians in
data privacy: http://bit.ly/1dYHpyo

A brief summary of the improvements in R 3.0.3: http://bit.ly/1dYHpyr

Hidden Markov models in R, with application to detection
regime-switching events in financial markets: http://bit.ly/1dYHpys

Why automating data science is dangerous without human supervision and
statistical expertise: http://bit.ly/1dYHpyt

A history of Emacs and ESS-mode for R, by Rodney Sparapani:
http://bit.ly/1dYHpyv

Some news articles about R and Revolution Analytics in Wired,
ComputerWorld, Inside BigData and Datanami: http://bit.ly/1dYHpyu

Some non-R stories in the past month included: a real photo that looks
like Sim City (http://bit.ly/1dYHrWY), a video of Europe's
constantly-changing borders (http://bit.ly/1dYHpyw), the new
FiveThirtyEight data journalism site (http://bit.ly/1dYHrWZ),
bad-mannered cats (http://bit.ly/1dYHpOQ), and a surprising
demonstration of change blindness (http://bit.ly/1dYHpOS).

Meeting times for local R user groups (http://bit.ly/eC5YQe) can be
found on the updated R Community Calendar at: http://bit.ly/bb3naW

If you're looking for more articles about R, you can find summaries
from previous months at http://blog.revolutionanalytics.com/roundups/.
You can receive daily blog posts via email using services like
blogtrottr.com, or join the Revolution Analytics mailing list at
http://revolutionanalytics.com/newsletter to be alerted to new
articles on a monthly basis.

As always, thanks for the comments and please keep sending suggestions
to me at da...@revolutionanalytics.com . Don't forget you can also
follow the blog using an RSS reader, via email using blogtrottr.com,
or by following me on Twitter (I'm @revodavid).

Cheers,
# David

-- 
David M Smith da...@revolutionanalytics.com
Chief Community Officer, Revolution Analytics
http://blog.revolutionanalytics.com
Tel: +1 (650) 646-9523 (Seattle WA, USA)
Twitter: @revodavid

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] par(mfrow)

2014-04-04 Thread Franklin Bretschneider
Hi,


re:


 I have some problems using the par function: I want to split the screen into 
 2 rows and 4 columns, and I type  par(mfrow=c(2, 4)) , but when I do that, 
 instead of setting a graphical parameter, it creates a blank Quartz window.
 I'm currently using the R base version for Mac OS, 3.0.3.
 Could you give me some help?
 Thank you very much, 




But that's correct if you don't plot anything afterwards. Try plotting 8 x-y 
graphs; then you'll see that 8 small plots appear (unless the margins prove 
to be too large).

Good luck,


Franklin Bretschneider
Dept of Biology
Utrecht University
brets...@xs4all.nl

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] average of rows of each column

2014-04-04 Thread David Carlson
Something like (only 20 columns here):

x <- matrix(rnorm(120*20), 120, 20)
xagg <- aggregate(x, list(rep(1:12, each=10)), mean)

-
David L Carlson
Department of Anthropology
Texas AM University
College Station, TX 77840-4352

-----Original Message-----
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of eliza botto
Sent: Friday, April 4, 2014 1:08 PM
To: r-help@r-project.org
Subject: [R] average of rows of each column


Dear useRs,
I have a matrix of 120 rows and 1000 columns. What I want is to
get the average of each set of 10 rows, from 1 to 120, for
each column. Precisely, for column 1 the average of rows 1:10,
11:20, ..., 111:120; similarly for columns 2, 3, 4, ..., 1000.
So in the end I should have a matrix with 12 rows and 1000
columns.
Thank you very much in advance.

Eliza
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] solicit help to read in 384 plate color image

2014-04-04 Thread Waverley @ Palo Alto
Hi,

I am doing an experiment which results in different colors of varying
intensities in a 384-well microtiter plate.

I took a picture of the plate by scanning the image in as a JPEG file, and
now I want to

1. read in the image file
2. grid the content
3. extract the intensity and color of each well.

Is there an R package I can use for that? There is a package called gitter
which almost satisfies my needs; however, it can only read in grey
values, not colors.

If someone has the code, please share.
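One possible starting point (an assumption on my part, not something the gitter package provides): the CRAN package jpeg can read a JPEG into a height x width x 3 RGB array, from which per-well means can be taken once the grid geometry is known. A rough sketch, using a small generated image so it is self-contained:

```r
library(jpeg)  # assumes the CRAN package 'jpeg' is installed

# Create a small stand-in image so the sketch is self-contained
img <- array(runif(24 * 16 * 3), dim = c(24, 16, 3))
f <- tempfile(fileext = ".jpg")
writeJPEG(img, f)

plate <- readJPEG(f)   # height x width x 3 array (R, G, B in [0, 1])
rows <- 2; cols <- 4   # a real 384-well plate would use 16 x 24

# Assign each pixel row/column to a grid band, then average per cell
rband <- cut(seq_len(dim(plate)[1]), rows, labels = FALSE)
cband <- cut(seq_len(dim(plate)[2]), cols, labels = FALSE)
well_means <- sapply(1:3, function(ch) {
  m <- plate[, , ch]
  as.vector(tapply(m, list(rband[row(m)], cband[col(m)]), mean))
})
dim(well_means)  # one row per well, one column per color channel
```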

-- 

Thanks.

waverley


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] average of rows of each column

2014-04-04 Thread Rui Barradas

Hello,

Try the following.

m <- 120
n <- 10  # in your case this is 1000
mat <- matrix(rnorm(n*m), nrow = m)

fun <- function(x, na.rm = TRUE){
tapply(x, rep(1:12, each = 10), mean, na.rm = na.rm)
}

apply(mat, 2, fun)
apply(mat, 2, fun, na.rm = FALSE) # alternative



Hope this helps,

Rui Barradas

Em 04-04-2014 19:08, eliza botto escreveu:


Dear useRs,
I have a matrix of 120 rows and 1000 columns. What I want is to get the average of 
each set of 10 rows, from 1 to 120, for each column. Precisely, for 
column 1 the average of rows 1:10, 11:20, ..., 111:120; similarly for columns 
2, 3, 4, ..., 1000. So in the end I should have a matrix with 12 rows and 1000 
columns.
Thank you very much in advance.

Eliza




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to change annotations in contour function from rsm package

2014-04-04 Thread John Smith
Dear All,

I am using R 3.0.3 and rsm 2.04. Since I want to do a log transformation on my
data, I want to change the "Slice at ..." annotation. For example, I want to
change "Slice at W = 1.25, L = 2" in the first graph into "Slice at W =
log(1.25), L = log(2)". Can anyone tell me how I can change it?

Thanks

library(rsm)
heli.rsm <- rsm(ave ~ block + SO(x1, x2, x3, x4), data = heli)
windows(width=10.8/2*3, height=10.8)
par(mfrow = c(2, 3))
contour(heli.rsm, ~ x1 + x2 + x3 + x4, image = TRUE)


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Mistakes in date conversion for future date/time (POSIXct)

2014-04-04 Thread David Winsemius

On Apr 4, 2014, at 9:54 AM, Duncan Murdoch wrote:

 On 04/04/2014 10:55 AM, Winkler, Matthias wrote:
 Dear R-users,
 
 I'm working on datasets which contain data from the years 1960 to 2100 with 
 a timestep of one hour. Every year has 365 days, leap years are ignored.
 After reading the dataset with R I convert the column which contains 
 date/time to POSIXct:
 
  as.POSIXct(strptime(MyData[,1], format="%d.%m.%Y : %H"))
 
 After that, I divide the data with split() into parts of one year each. Then 
 I recognized, that the years for some rows are obviously converted wrong: 
 They show years larger than 2100 (see example below).
 I've controlled my original dataset, but the dates are correct there.
 
 I also produced a date/time-sequence in R, which showed the same mistakes 
 (see example below). The mistakes occur at the same dates like in my 
 datasets. It's always at the end of march.
 
   datetimesequenz <- seq.POSIXt(from=as.POSIXct("1960-01-01 00:00"), 
   to=as.POSIXct("2100-01-01 00:00"), by="1 hour")
   levels(as.factor(strftime(datetimesequenz, format="%Y")))
   [1] 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 
 1970 1971 1972 1973 1974 1975 1976 1977
 [19] 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 
 1988 1989 1990 1991 1992 1993 1994 1995
 [37] 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 
 2006 2007 2008 2009 2010 2011 2012 2013
 [55] 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 
 2024 2025 2026 2027 2028 2029 2030 2031
 [73] 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 
 2042 2043 2044 2045 2046 2047 2048 2049
 [91] 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 
 2060 2061 2062 2063 2064 2065 2066 2067
 [109] 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 
 2078 2079 2080 2081 2082 2083 2084 2085
 [127] 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 
 2096 2097 2098 2099 2100 2101 2102 2103
 [145] 2105 2107 2109 2110 2111 2112 2113 2114 2115 2117 
 2118 2120 2121 2122 2124 2125 2126 2128
 [163] 2129 2130 2131 2132 2133 2135 2137 2138 2139 2140 
 2141 2142 2143 2145 2146 2148 2149 2150
 [181] 2152 2153 2154 2156 2157 2158 2159 2160 2161 2166
 
 Has anybody experienced the same problem and knows a workaround?
 
 I'm using R 3.0.1 under Windows 7 64bit. I also tried this with R 3.0.3, it 
 showed the same problem.
 Thank you for your help!
 
 I don't see this in 3.1.0 beta.  Do you?

I'm not seeing it on a Mac in 3.0.2 either.

 max(datetimesequenz)
[1] 2100-01-01 PST
 length(datetimesequenz)
[1] 1227241

 
 Duncan Murdoch

David Winsemius
Alameda, CA, USA
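The anomalies clustering at the end of March suggest a daylight-saving-time issue in the local timezone, rather than a problem with the dates themselves. A hedged suggestion, not something confirmed in this thread: pinning both the construction and the formatting to UTC avoids DST transitions entirely:

```r
# Build an hourly sequence in UTC, where no DST jumps occur
seqUTC <- seq.POSIXt(from = as.POSIXct("1960-01-01 00:00", tz = "UTC"),
                     to   = as.POSIXct("1960-02-01 00:00", tz = "UTC"),
                     by   = "1 hour")
# Format in the same timezone the times were created in
years <- strftime(seqUTC, format = "%Y", tz = "UTC")
unique(years)  # "1960" only: no spurious future years
```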

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] average of rows of each column

2014-04-04 Thread eliza botto
Dear David and Rui,
Thankyou very much. :D
Eliza

 From: dcarl...@tamu.edu
 To: eliza_bo...@hotmail.com; r-help@r-project.org
 Subject: RE: [R] average of rows of each column
 Date: Fri, 4 Apr 2014 15:14:32 -0500
 
 Something like (only 20 columns here):
 
  x <- matrix(rnorm(120*20), 120, 20)
  xagg <- aggregate(x, list(rep(1:12, each=10)), mean)
 
 -
 David L Carlson
 Department of Anthropology
 Texas A&M University
 College Station, TX 77840-4352
 
 -----Original Message-----
 From: r-help-boun...@r-project.org
 [mailto:r-help-boun...@r-project.org] On Behalf Of eliza botto
 Sent: Friday, April 4, 2014 1:08 PM
 To: r-help@r-project.org
 Subject: [R] average of rows of each column
 
 
  Dear useRs,
  I have a matrix of 120 rows and 1000 columns. What I want is to
  get the average of each set of 10 rows, from 1 to 120, for
  each column. Precisely, for column 1 the average of rows 1:10,
  11:20, ..., 111:120; similarly for columns 2, 3, 4, ..., 1000.
  So in the end I should have a matrix with 12 rows and 1000
  columns.
  Thank you very much in advance.
 
 Eliza
 
 
  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sweave files into LaTex

2014-04-04 Thread Axel Urbiz
Hi,

I'm writing a thesis in Latex (say master.tex). I'd like to include R
code/results from an .Rwd file. I've naively tried:

1) Add ONLY the code below in Rcode.Rnw file:

\section{Exploratory data analysis}
<<eval=TRUE, echo=FALSE>>=
library(ggplot2)
data(diamonds)
head(diamonds)
@

2) Then, in the master.tex file add the following line:

\include{Rcode.Rnw}

But of course, that didn't work. Any help would be much appreciated.

Best,
Axel.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sweave files into LaTex

2014-04-04 Thread Duncan Murdoch

On 04/04/2014, 7:10 PM, Axel Urbiz wrote:

Hi,

I'm writing a thesis in Latex (say master.tex). I'd like to include R
code/results from an .Rwd file. I've naively tried:

1) Add ONLY the code below in Rcode.Rnw file:

\section{Exploratory data analysis}
<<eval=TRUE, echo=FALSE>>=
library(ggplot2)
data(diamonds)
head(diamonds)
@

2) Then, in the master.tex file add the following line:

\include{Rcode.Rnw}

But of course, that didn't work. Any help would be much appreciated.


You want

\include{Rcode.tex}

But you have to run Sweave to produce Rcode.tex from Rcode.Rnw.  The 
patchDVI package has functions to make this easy if you're using 
TeXWorks or some other LaTeX editors -- see the vignette.
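From the command line, the Sweave-then-LaTeX cycle described above looks roughly like this (a sketch assuming R and a LaTeX distribution are on the PATH; the file names are the ones from this thread):

```shell
# Weave the .Rnw into the .tex file that master.tex \include's
R CMD Sweave Rcode.Rnw    # writes Rcode.tex in the current directory

# Then compile the master document
pdflatex master.tex
```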


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] rehh package for iHS and Rsb on NGS data

2014-04-04 Thread James Abugri
Hello expert community.
Is there anybody who has used the rehh package for iHS and Rsb? I have gone through 
the tutorial a dozen times; I need expert help to apply it to my 
P. falciparum SNP data from the Illumina platform.

Thanks
James Abugri
-- 
* The information contained in this email and any attachments may be 
legally privileged and confidential. If you are not an intended recipient, 
you are hereby notified that any dissemination, distribution, or copying of 
this e-mail is strictly prohibited. If you have received this e-mail in 
error, please notify the sender and permanently delete the e-mail and any 
attachments immediately. You should not retain, copy or use this e-mail or 
any attachments for any purpose, nor disclose all or any part of the 
contents to any other person.*


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to plot data using ggplot

2014-04-04 Thread drunkenphd
djmuseR wrote
 Do you mean a continuous range expressible in a colorbar or a discrete
 range,
 and if the latter, what intervals did you have in mind? 

I mean discrete values like: 54508   25  101  420   95  928   24 
1656  108   18  213   70.
The problem is that the range is 0-4000, while only one value is near 4000
and all the others are below 1000.
Please advise.
Regards
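One common way to handle such a skewed range with discrete colors (an illustration, not from the thread; the values below are made up in the spirit of the poster's) is to bin the values with cut() using uneven breaks, then map the resulting factor to a discrete fill scale such as ggplot2's scale_fill_brewer. The binning itself is base R:

```r
vals <- c(25, 101, 420, 95, 928, 24, 1656, 108, 18, 213, 70, 3950)

# Uneven breaks: fine resolution below 1000, one wide bin for the outliers
bins <- cut(vals,
            breaks = c(0, 100, 250, 500, 1000, 4000),
            include.lowest = TRUE)
table(bins)
# In ggplot2 you would then map fill = bins and use a discrete palette,
# e.g. scale_fill_brewer(palette = "YlOrRd")
```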





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Subsetting data by ID with different constraints

2014-04-04 Thread Lib Gray
Hello,

I have a data set with many individuals, each with multiple timed
observations, and I would like to subset the data to exclude the later
observations.
However, I would like to exclude a different amount of data for each
individual. These individuals have two types of data: DV and dose. What I
would like to do is exclude the later rows once one of the two types of data
no longer appears.

The data is structured with an (approximate) 28 day cycle. Each individual
has a baseline DV, and on day 1, they receive their first dose. Around day
28, they will have their first DV observed. This means that an individual
should have one less dose data item than they have DV data items.

What I would like is to take the following:

ID  TIME  DV  DOSE  TYPE
1   0     0   NA    2
1   1     NA  100   1
1   27    0   NA    2
1   29    NA  100   1
1   54    2   NA    2
1   84    3   NA    2
1   100   3   NA    2
1   127   3   NA    2

2   0     0   NA    2
2   1     NA  120   1
2   28    4   NA    2
2   29    NA  120   1
2   56    8   NA    2
2   57    NA  100   1

3   0     2   NA    2
3   1     NA  80    1
3   28    5   NA    2
3   56    2   NA    1
3   84    1   NA    2

4   0     0   NA    2
4   1     NA  100   1
4   29    NA  100   1
4   57    NA  100   1
4   85    NA  100   1
...


And turn it into:

ID  TIME  DV  DOSE  TYPE
1   0     0   NA    2
1   1     NA  100   1
1   27    0   NA    2
1   29    NA  100   1
1   54    2   NA    2

2   0     0   NA    2
2   1     NA  120   1
2   28    4   NA    2
2   29    NA  120   1
2   56    8   NA    2

3   0     2   NA    2
3   1     NA  80    1
3   28    5   NA    2

4   0     0   NA    2
...


My thought for how to do this was to:

(1) Subset the data by the maximum time at which an individual had an
observed DV (TYPE = 2). However, this will be a different time for every
patient, and I was unsure how to do this type of subsetting.

(2) After I had done that, I would take my new subsetted data and determine
the maximum time at which an individual had a dose. Then I would determine
the total number of rows of data a patient had up to their last dose time,
and subset the data by taking the first n+1 observations for each
individual, where n is that row count. This step I would hope to work out
from knowing how to do step (1), if I can use table and max interchangeably.

Any help would be appreciated!
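A sketch of step (1) only, on a cut-down version of the data above; grouping with ave() avoids an explicit loop. This is an illustration of one possible approach, not a vetted solution (step (2) would build on the same grouping idea):

```r
dat <- read.table(header = TRUE, text = "
ID TIME DV DOSE TYPE
1  0    0  NA   2
1  1    NA 100  1
1  27   0  NA   2
1  29   NA 100  1
1  54   2  NA   2
1  84   3  NA   2
2  0    0  NA   2
2  1    NA 120  1
2  28   4  NA   2
2  57   NA 100  1
")

# Step (1): per ID, the latest time with an observed DV (TYPE == 2)
last.dv <- with(dat, ave(ifelse(TYPE == 2 & !is.na(DV), TIME, -Inf),
                         ID, FUN = max))
sub1 <- dat[dat$TIME <= last.dv, ]  # drops ID 2's trailing dose at TIME 57
```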


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rehh package for iHS and Rsb on NGS data

2014-04-04 Thread David Winsemius

On Apr 4, 2014, at 11:46 PM, James Abugri wrote:

 Hello expert community.
 Is there anybody who has used the rehh package for iHS and Rsb? I have gone through 
 the tutorial a dozen times; I need expert help to apply it to my 
 P. falciparum SNP data from the Illumina platform.
 

This sounds like it would be more appropriate on the BioConductor forum.

 Thanks
 James Abugri
 -- 
 * The information contained in this email and any attachments may be 
 legally privileged and confidential. If you are not an intended recipient, 
 you are hereby notified that any dissemination, distribution, or copying of 
 this e-mail is strictly prohibited. If you have received this e-mail in 
 error, please notify the sender and permanently delete the e-mail and any 
 attachments immediately. You should not retain, copy or use this e-mail or 
 any attachments for any purpose, nor disclose all or any part of the 
 contents to any other person.*
 

David Winsemius
Alameda, CA, USA

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Removing White spaces with NA

2014-04-04 Thread arun
Hi,
Check if this works:
n <- 1e7
dat <- data.frame(Col1=c("A", "", "B", "C", "", "", "D", rep("", n)), 
stringsAsFactors=FALSE) 

dat2 <- dat

dat[dat==''] <- NA 

which(dat$Col1=='')
#integer(0) 


#or 
dat2$Col1[dat2$Col1==''] <- NA 
 which(dat2$Col1=='') 
#integer(0) 

A.K.


On Wednesday, April 2, 2014 2:20 PM, arun smartpink...@yahoo.com wrote:
Hi,
May be this helps:
dat <- data.frame(Col1=c("A", "", "B", "C", "", "", "D"), stringsAsFactors=FALSE)
is.na(dat) <- dat==''
dat$Col1
#[1] "A" NA  "B" "C" NA  NA  "D"
A.K.



Hi All, I have a table and a column with values as below:

Col1
A

B
C


D

I need to replace the empty cells with the value NA as below:

Col1
A
NA
B
C
NA
NA
D

I tried a code, which was not working:

Table.name$column.name <- gsub("", "NA", table.name$column.name)

Can anyone help me with this?
Thanks and regards,
Praveen

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Aggregate time series from daily to monthly by date and site

2014-04-04 Thread Zilefac Elvis
Hi,
I have daily data arranged by date and site. Keeping the number of columns as 
they are, I would like to aggregate (FUN=mean) from daily to monthly the 
following data (only part is shown here), which starts in 1971 and ends in 1980.

    Year Month Day Site Sim001 Sim002 Sim003 Sim004
1   1971     1   1 GGG1   8.58 -12.67   4.45  -1.31
2   1971     1   1 GGG2  11.82  -9.94  -3.37   4.94
3   1971     1   1 GGG3   7.72 -11.94  -1.17   4.70
4   1971     1   1 GGG4   8.93 -10.81   4.66   2.88
5   1971     1   1 GGG5   9.82  -6.78  -4.19  -0.01
6   1971     1   1 GGG6  13.93  -3.39  -3.84   1.83
7   1971     1   1 GGG7  10.94  -7.58   1.74  -7.51
8   1971     1   1 GGG8   5.07 -16.09   1.26   1.12
9   1971     1   1 GGG9  11.13  -9.96  -7.06   5.25
10  1971     1   1 GG10   7.66  -8.68  -2.65   5.25
11  1971     1   1 GG11   1.06   6.14  -4.88   3.78
12  1971     1   1 GG12  14.93 -12.43  -4.06   4.94
13  1971     1   1 GG13   7.56 -10.81  -2.32   2.32
14  1971     1   1 GG14   6.18  -7.58  -1.64   9.83
15  1971     1   1 GG15  10.96  -0.62   0.56  -1.59
16  1971     1   1 GG16   4.94   1.52   0.31   6.45
17  1971     1   1 GG17   0.79   0.83  -0.35   4.26
18  1971     1   1 GG18   4.91  -3.29  -5.69   3.10
19  1971     1   1 GG19   0.68  -0.50   3.35   5.50
20  1971     1   1 GG20   4.50   1.14   4.84   6.94
21  1971     1   1 GG21   3.13   3.35   3.62   2.76
22  1971     1   1 GG22   2.91   1.10   0.77   5.10
23  1971     1   1 GG23  -2.27  -5.25  -3.05   1.95
24  1971     1   1 GG24   8.18   2.00  -0.42  15.13
25  1971     1   1 GG25   3.87  -4.09  -2.55  -9.18
26  1971     1   1 GG26   5.10   2.28   1.34   2.88
27  1971     1   1 GG27   7.23   2.46   2.89   4.28
28  1971     1   1 GG28   8.55   5.64   3.09  -5.01
29  1971     1   1 GG29   1.39   4.64   9.79  -0.27
30  1971     1   1 GG30   6.85 -12.11   4.98   1.91
31  1971     1   1 GG31   4.25  -2.21   9.59  -1.46
32  1971     1   1 GG32  -0.24 -16.54   4.99  -0.60
33  1971     1   1 GG33   9.86  -7.38  11.77  -8.99
34  1971     1   1 GG34   9.92 -16.33  13.07  -8.79
35  1971     1   1 GG35   5.11  -7.63   0.41  -3.09
36  1971     1   1 GG36   6.14 -11.61  10.38  -7.09
37  1971     1   1 GG37   8.14 -12.78  11.01  -5.20
38  1971     1   1 GG38   7.52 -12.86   3.43  -7.55
39  1971     1   1 GG39   4.19  -9.99   6.08  -4.04
40  1971     1   1 GG40   1.02   4.84   0.55  -3.80
41  1971     1   1 GG41  -2.43 -13.75   6.49 -10.66
42  1971     1   1 GG42   6.85 -12.33   2.85  -6.34
43  1971     1   1 GG43   4.94 -13.43  11.17  -3.62
44  1971     1   1 GG44   8.11 -21.13  11.32  -8.49
45  1971     1   1 GG45   7.34 -12.63  -0.89  -2.29
46  1971     1   1 GG46  10.56  -3.16  -0.48   0.38
47  1971     1   1 GG47  -6.52   1.61  10.80   5.25
48  1971     1   1 GG48   2.66  -2.36   1.86   8.60
49  1971     1   1 GG49  -4.89   5.54   6.63   5.83
50  1971     1   1 GG50   0.11   3.59   5.14   8.94
51  1971     1   1 GG51   3.90   1.23   4.13   9.86
52  1971     1   1 GG52   3.87  -0.25   8.72   4.62
53  1971     1   1 GG53   2.55  -1.49  15.01   4.33
54  1971     1   1 GG54  -0.20  -1.65   4.78  10.15
55  1971     1   1 GG55   5.09   0.90   5.56   7.87
56  1971     1   1 GG56  -2.40  -2.29   5.69   9.07
57  1971     1   1 GG57   1.32  -2.35  10.39   0.04
58  1971     1   1 GG58   3.49  -2.01   8.99   2.85
59  1971     1   1 GG59   4.93  -2.07   6.95   6.00
60  1971     1   1 GG60  -9.58   1.37  10.59   4.54
61  1971     1   1 GG61   9.08  -0.64   3.92  13.50
62  1971     1   1 GG62   2.85   4.75   3.40  12.39
63  1971     1   1 GG63   7.71   3.02   3.95  11.79
64  1971     1   1 GG64   4.50   5.44   0.87   6.29
65  1971     1   1 GG65   0.99   3.76   2.28  15.45
66  1971     1   1 GG66   8.72   5.16   1.11  15.82
67  1971     1   1 GG67  12.45   2.54   4.36  19.79
68  1971     1   1 GG68   8.83   4.11   6.21  13.12
69  1971     1   1 GG69   8.94   5.03   1.73   6.50
70  1971     1   1 GG70   5.05  -1.12   2.50  -4.63
71  1971     1   1 GG71   9.82   4.53   4.19   1.79
72  1971     1   1 GG72  11.72  -0.15   1.85  -0.80
73  1971     1   1 GG73   1.21  -4.98   8.65   1.29
74  1971     1   1 GG74   7.92   0.85   6.24   8.88
75  1971     1   1 GG75   3.45  -3.04   7.82   1.28
76  1971     1   1 GG76   1.34  -0.06   7.43   6.55
77  1971     1   1 GG77   8.25  -3.01   5.19   5.78
78  1971     1   1 GG78   2.92  -1.10  -1.71   5.46
79  1971     1   1 GG79   2.10   4.02  -3.16   2.83
80  1971     1   1 GG80  -3.19   1.77  -2.66   8.00
81  1971     1   1 GG81   4.75  -3.36  -7.00   6.25
82  1971     1   1 GG82  -0.30   1.56  -2.08   4.94
83  1971     1   1 GG83   1.69  -1.63   0.36   5.01
84  1971     1   1 GG84   3.31   1.12   8.61   5.32
85  1971     1   1 GG85   5.18  -2.39   3.22   2.95
86  1971     1   1 GG86   2.43  -2.05   7.99   7.46
87  1971     1   1 GG87   3.02   4.51  -1.19   5.71
88  1971     1   1 GG88  -5.31   1.52  11.38  -3.51
89  1971     1   1 GG89  -6.70  -0.61  10.20   3.51
90  1971     1   1 GG90  -5.90   2.54   8.87   9.46
91  1971     1   1 GG91  

Re: [R] Aggregate time series from daily to monthly by date and site

2014-04-04 Thread Jeff Newmiller
You have been around long enough that we should not have to tell you how to 
provide data in a reproducible manner... read ?dput.
---
Jeff NewmillerThe .   .  Go Live...
DCN:jdnew...@dcn.davis.ca.usBasics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
--- 
Sent from my phone. Please excuse my brevity.

On April 4, 2014 9:02:03 PM PDT, Zilefac Elvis zilefacel...@yahoo.com wrote:
Hi,
I have daily data arranged by date and site. Keeping the number of
columns as there are, I will like to aggregate (FUN=mean) from daily to
monthly the following data (only part is shown here) which starts in
1971 and ends in 1980.

    Year Month Day Site Sim001 Sim002 Sim003 Sim004
1   1971     1   1 GGG1   8.58 -12.67   4.45  -1.31
2   1971     1   1 GGG2  11.82  -9.94  -3.37   4.94
3   1971     1   1 GGG3   7.72 -11.94  -1.17   4.70
4   1971     1   1 GGG4   8.93 -10.81   4.66   2.88
5   1971     1   1 GGG5   9.82  -6.78  -4.19  -0.01
6   1971     1   1 GGG6  13.93  -3.39  -3.84   1.83
7   1971     1   1 GGG7  10.94  -7.58   1.74  -7.51
8   1971     1   1 GGG8   5.07 -16.09   1.26   1.12
9   1971     1   1 GGG9  11.13  -9.96  -7.06   5.25
10  1971     1   1 GG10   7.66  -8.68  -2.65   5.25
11  1971     1   1 GG11   1.06   6.14  -4.88   3.78
12  1971     1   1 GG12  14.93 -12.43  -4.06   4.94
13  1971     1   1 GG13   7.56 -10.81  -2.32   2.32
14  1971     1   1 GG14   6.18  -7.58  -1.64   9.83
15  1971     1   1 GG15  10.96  -0.62   0.56  -1.59
16  1971     1   1 GG16   4.94   1.52   0.31   6.45
17  1971     1   1 GG17   0.79   0.83  -0.35   4.26
18  1971     1   1 GG18   4.91  -3.29  -5.69   3.10
19  1971     1   1 GG19   0.68  -0.50   3.35   5.50
20  1971     1   1 GG20   4.50   1.14   4.84   6.94
21  1971     1   1 GG21   3.13   3.35   3.62   2.76
22  1971     1   1 GG22   2.91   1.10   0.77   5.10
23  1971     1   1 GG23  -2.27  -5.25  -3.05   1.95
24  1971     1   1 GG24   8.18   2.00  -0.42  15.13
25  1971     1   1 GG25   3.87  -4.09  -2.55  -9.18
26  1971     1   1 GG26   5.10   2.28   1.34   2.88
27  1971     1   1 GG27   7.23   2.46   2.89   4.28
28  1971     1   1 GG28   8.55   5.64   3.09  -5.01
29  1971     1   1 GG29   1.39   4.64   9.79  -0.27
30  1971     1   1 GG30   6.85 -12.11   4.98   1.91
31  1971     1   1 GG31   4.25  -2.21   9.59  -1.46
32  1971     1   1 GG32  -0.24 -16.54   4.99  -0.60
33  1971     1   1 GG33   9.86  -7.38  11.77  -8.99
34  1971     1   1 GG34   9.92 -16.33  13.07  -8.79
35  1971     1   1 GG35   5.11  -7.63   0.41  -3.09
36  1971     1   1 GG36   6.14 -11.61  10.38  -7.09
37  1971     1   1 GG37   8.14 -12.78  11.01  -5.20
38  1971     1   1 GG38   7.52 -12.86   3.43  -7.55
39  1971     1   1 GG39   4.19  -9.99   6.08  -4.04
40  1971     1   1 GG40   1.02   4.84   0.55  -3.80
41  1971     1   1 GG41  -2.43 -13.75   6.49 -10.66
42  1971     1   1 GG42   6.85 -12.33   2.85  -6.34
43  1971     1   1 GG43   4.94 -13.43  11.17  -3.62
44  1971     1   1 GG44   8.11 -21.13  11.32  -8.49
45  1971     1   1 GG45   7.34 -12.63  -0.89  -2.29
46  1971     1   1 GG46  10.56  -3.16  -0.48   0.38
47  1971     1   1 GG47  -6.52   1.61  10.80   5.25
48  1971     1   1 GG48   2.66  -2.36   1.86   8.60
49  1971     1   1 GG49  -4.89   5.54   6.63   5.83
50  1971     1   1 GG50   0.11   3.59   5.14   8.94
51  1971     1   1 GG51   3.90   1.23   4.13   9.86
52  1971     1   1 GG52   3.87  -0.25   8.72   4.62
53  1971     1   1 GG53   2.55  -1.49  15.01   4.33
54  1971     1   1 GG54  -0.20  -1.65   4.78  10.15
55  1971     1   1 GG55   5.09   0.90   5.56   7.87
56  1971     1   1 GG56  -2.40  -2.29   5.69   9.07
57  1971     1   1 GG57   1.32  -2.35  10.39   0.04
58  1971     1   1 GG58   3.49  -2.01   8.99   2.85
59  1971     1   1 GG59   4.93  -2.07   6.95   6.00
60  1971     1   1 GG60  -9.58   1.37  10.59   4.54
61  1971     1   1 GG61   9.08  -0.64   3.92  13.50
62  1971     1   1 GG62   2.85   4.75   3.40  12.39
63  1971     1   1 GG63   7.71   3.02   3.95  11.79
64  1971     1   1 GG64   4.50   5.44   0.87   6.29
65  1971     1   1 GG65   0.99   3.76   2.28  15.45
66  1971     1   1 GG66   8.72   5.16   1.11  15.82
67  1971     1   1 GG67  12.45   2.54   4.36  19.79
68  1971     1   1 GG68   8.83   4.11   6.21  13.12
69  1971     1   1 GG69   8.94   5.03   1.73   6.50
70  1971     1   1 GG70   5.05  -1.12   2.50  -4.63
71  1971     1   1 GG71   9.82   4.53   4.19   1.79
72  1971     1   1 GG72  11.72  -0.15   1.85  -0.80
73  1971     1   1 GG73   1.21  -4.98   8.65   1.29
74  1971     1   1 GG74   7.92   0.85   6.24   8.88
75  1971     1   1 GG75   3.45  -3.04   7.82   1.28
76  1971     1   1 GG76   1.34  

Re: [R] Aggregate time series from daily to monthly by date and site

2014-04-04 Thread Zilefac Elvis
Hi,

I have daily data arranged by date and site. Keeping the number of columns as 
they are, I would like to aggregate (FUN=mean) from daily to monthly the 
following data (only part is shown here), which starts in 1971 and ends in 1980.


structure(list(Year = c(1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 1971, 
1971, 1971, 1971, 1971, 1971), Month = c(1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1), Day = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1), Site = c("GGG1", "GGG2", "GGG3", "GGG4", 
"GGG5", "GGG6", "GGG7", "GGG8", "GGG9", "GG10", "GG11", "GG12", 
"GG13", "GG14", "GG15", "GG16", "GG17", "GG18", "GG19", "GG20", 
"GG21", "GG22", "GG23", "GG24", "GG25", "GG26", "GG27", "GG28", 
"GG29", "GG30", "GG31", "GG32", "GG33", "GG34", "GG35", "GG36", 
"GG37", "GG38", "GG39", "GG40", "GG41", "GG42", "GG43", "GG44", 
"GG45", "GG46", "GG47", "GG48", "GG49", "GG50", "GG51", "GG52", 
"GG53", "GG54", "GG55", "GG56", "GG57", "GG58", "GG59", "GG60", 
"GG61", "GG62", "GG63", "GG64", "GG65", "GG66", "GG67", "GG68", 
"GG69", "GG70", "GG71", "GG72", "GG73", "GG74", "GG75", "GG76", 
"GG77", "GG78", "GG79", "GG80", "GG81", "GG82", "GG83", "GG84", 
"GG85", "GG86", "GG87", "GG88", "GG89", "GG90", "GG91", "GG92", 
"GG93", "GG94", "GG95", "GG96", "GG97", "GG98", "GG99", "G100"
), Sim001 = c(8.58, 11.82, 7.72, 8.93, 9.82, 13.93, 10.94, 5.07, 
11.13, 7.66, 1.06, 14.93, 7.56, 6.18, 10.96, 4.94, 0.79, 4.91, 
0.68, 4.5, 3.13, 2.91, -2.27, 8.18, 3.87, 5.1, 7.23, 8.55, 1.39, 
6.85, 4.25, -0.24, 9.86, 9.92, 5.11, 6.14, 8.14, 7.52, 4.19, 
1.02, -2.43, 6.85, 4.94, 8.11, 7.34, 10.56, -6.52, 2.66, -4.89, 
0.11, 3.9, 3.87, 2.55, -0.2, 5.09, -2.4, 1.32, 3.49, 4.93, -9.58, 
9.08, 2.85, 7.71, 4.5, 0.99, 8.72, 12.45, 8.83, 8.94, 5.05, 9.82, 
11.72, 1.21, 7.92, 3.45, 1.34, 8.25, 2.92, 2.1, -3.19, 4.75, 
-0.3, 1.69, 3.31, 5.18, 2.43, 3.02, -5.31, -6.7, -5.9, -4.73, 
-8.13, -7.67, 3.73, -2.4, 1.46, -4.71, 0.33, -3.11, 2.45), Sim002 = c(-12.67, 
-9.94, -11.94, -10.81, -6.78, -3.39, -7.58, -16.09, -9.96, -8.68, 
6.14, -12.43, -10.81, -7.58, -0.62, 1.52, 0.83, -3.29, -0.5, 
1.14, 3.35, 1.1, -5.25, 2, -4.09, 2.28, 2.46, 5.64, 4.64, -12.11, 
-2.21, -16.54, -7.38, -16.33, -7.63, -11.61, -12.78, -12.86, 
-9.99, 4.84, -13.75, -12.33, -13.43, -21.13, -12.63, -3.16, 1.61, 
-2.36, 5.54, 3.59, 1.23, -0.25, -1.49, -1.65, 0.9, -2.29, -2.35, 
-2.01, -2.07, 1.37, -0.64, 4.75, 3.02, 5.44, 3.76, 5.16, 2.54, 
4.11, 5.03, -1.12, 4.53, -0.15, -4.98, 0.85, -3.04, -0.06, -3.01, 
-1.1, 4.02, 1.77, -3.36, 1.56, -1.63, 1.12, -2.39, -2.05, 4.51, 
1.52, -0.61, 2.54, 2.88, 6.79, 5.5, -2.36, 4.18, -0.13, 5.68, 
1.82, 3.21, 0.21), Sim003 = c(4.45, -3.37, -1.17, 4.66, -4.19, 
-3.84, 1.74, 1.26, -7.06, -2.65, -4.88, -4.06, -2.32, -1.64, 
0.56, 0.31, -0.35, -5.69, 3.35, 4.84, 3.62, 0.77, -3.05, -0.42, 
-2.55, 1.34, 2.89, 3.09, 9.79, 4.98, 9.59, 4.99, 11.77, 13.07, 
0.41, 10.38, 11.01, 3.43, 6.08, 0.55, 6.49, 2.85, 11.17, 11.32, 
-0.89, -0.48, 10.8, 1.86, 6.63, 5.14, 4.13, 8.72, 15.01, 4.78, 
5.56, 5.69, 10.39, 8.99, 6.95, 10.59, 3.92, 3.4, 3.95, 0.87, 
2.28, 1.11, 4.36, 6.21, 1.73, 2.5, 4.19, 1.85, 8.65, 6.24, 7.82, 
7.43, 5.19, -1.71, -3.16, -2.66, -7, -2.08, 0.36, 8.61, 3.22, 
7.99, -1.19, 11.38, 10.2, 8.87, 7.23, 8.07, 2.77, 9.61, -1.1, 
-2.05, 6.39, 6.6, -2.89, -6.41), Sim004 = c(-1.31, 4.94, 4.7, 
2.88, -0.01, 1.83, -7.51, 1.12, 5.25, 5.25, 3.78, 4.94, 2.32, 
9.83, -1.59, 6.45, 4.26, 3.1, 5.5, 6.94, 2.76, 5.1, 1.95, 15.13, 
-9.18, 2.88, 4.28, -5.01, -0.27, 1.91, -1.46, -0.6, -8.99, -8.79, 
-3.09, -7.09, -5.2, -7.55, -4.04, -3.8, -10.66, -6.34, -3.62, 
-8.49, -2.29, 0.38, 5.25, 8.6, 5.83, 8.94, 9.86, 4.62, 4.33, 
10.15, 7.87, 9.07, 0.04, 2.85, 6, 4.54, 13.5, 12.39, 11.79, 6.29, 
15.45, 15.82, 19.79, 13.12, 6.5, -4.63, 1.79, -0.8, 1.29, 8.88, 
1.28, 6.55, 5.78, 5.46, 2.83, 8, 6.25, 4.94, 5.01, 5.32, 2.95, 
7.46, 5.71, -3.51, 3.51, 9.46, 8.55, 6.71, 7.36, 10.96, 3.47, 
-1.99, 5.75, 1.56, -4.38, 0.67)), .Names = c("Year", "Month", 
"Day", "Site", "Sim001", "Sim002", "Sim003", "Sim004"), row.names = c(NA, 
100L), class = "data.frame")
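One way to do this kind of aggregation is with the formula interface of
aggregate(), which computes the mean of each Sim* column within every
Year/Month/Site group (the Day column drops out of a monthly summary). This
is a minimal sketch with a small made-up data frame of the same shape as the
one above, not the poster's actual data:

```r
# Toy data in the same shape as the posted data frame; the column
# names match the post, but the values are invented for illustration.
df <- data.frame(
  Year   = rep(1971, 6),
  Month  = rep(1, 6),
  Day    = rep(1:3, times = 2),
  Site   = rep(c("GGG1", "GGG2"), each = 3),
  Sim001 = c(1, 2, 3, 4, 5, 6),
  Sim002 = c(2, 4, 6, 8, 10, 12)
)

# Monthly mean of each Sim* column by Year, Month and Site.
# na.rm = TRUE is passed through to mean() in case of missing values.
monthly <- aggregate(cbind(Sim001, Sim002) ~ Year + Month + Site,
                     data = df, FUN = mean, na.rm = TRUE)
monthly
#   Year Month Site Sim001 Sim002
# 1 1971     1 GGG1      2      4
# 2 1971     1 GGG2      5     10
```

With the real data you would list all four Sim columns in cbind(); the result
keeps one row per site per month, as requested.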

Thanks for your useful solution.
Atem. 

__