Hi David,
Thank you so much for your help, and thanks to the others as well. Here is the code.
balok <- read.csv("G:/A_backup 11 mei 2015/DATA (D)/1 Universiti Malaysia
Pahang/ISM-3 2016 UM/Data/Hourly Rainfall/balok2.csv",header=TRUE)
head(balok, 10); tail(balok, 10)
str(balok)
## Introduce NAs for
balok$Rain.mm2
Thanks. I found the reason: Rtools does not run under the new version
of R. I had to go back as far as R 3.0.2 (September 2013) to make
Rtools work.
Any idea for a workaround? Thanks.
On 7/19/2016 4:38 PM, John McKown wrote:
> On Tue, Jul 19, 2016 at 3:15 PM, Steven Yen
Hi,
This can be done with the lattice suite.
From https://stat.ethz.ch/pipermail/r-help/2007-October/142116.html and
https://stat.ethz.ch/pipermail/r-help/2007-October/142124.html
This is a direct cut-and-paste from the last URL, to give you an idea of
Deepayan Sarkar's script:
library(lattice)
Frederic,
you need to be more clear about what you want to do. If you are
sampling 100 numbers from 1:100 with replacement, the sum will always be
greater than 50 (as has already been pointed out).
In addition, you mention trying to eliminate zeros. If you are sampling
from 1:100 there
On Tue, Jul 19, 2016 at 3:15 PM, Steven Yen wrote:
> I recently updated my R and RStudio to the latest version and now the
> binary option in the "build" command in devtools stops working.
>
> I worked around it with the binary=F option, which worked, but I get the
> .tar.gz file
Thanks so much
Best Rgds
John
On Tue, Jul 19, 2016 at 9:56 PM, Greg Snow <538...@gmail.com> wrote:
> with 72 lines to label it will be crowded whatever you do.
>
> Here is one option (though I am showing with fewer lines):
>
> x <- sapply(1:15, function(i) cumsum(rnorm(100)))
>
I recently updated my R and RStudio to the latest version and now the
binary option in the "build" command in devtools stops working.
I worked around it with the binary=F option, which worked, but I get the
.tar.gz file instead of the .zip file, which I prefer.
Does anyone understand the following?
with 72 lines to label it will be crowded whatever you do.
Here is one option (though I am showing with fewer lines):
x <- sapply(1:15, function(i) cumsum(rnorm(100)))
par(mar=c(5,4,4,3)+0.1)
matplot(1:100, x, type='l', lty=1)
mtext(LETTERS[1:15], side=4, at=x[100,], las=1, line=1)
One way to
Thanks! It worked for me.
matplot(for_jhon$ID, for_jhon[,2:73], type='l')
Any idea how I can label a multiple-line plot based on column names?
Thanks for your help
John
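Building on Greg's mtext() trick elsewhere in this thread, the column names themselves could serve as labels. A sketch with made-up data, since the poster's for_jhon did not come through:

```r
# Sketch: label each line of a matplot with its column name in the right margin.
# 'dat' is made-up stand-in data; the poster's for_jhon is not available here.
set.seed(42)
dat <- data.frame(ID = 1:100, sapply(1:15, function(i) cumsum(rnorm(100))))
par(mar = c(5, 4, 4, 5) + 0.1)                  # widen the right margin for labels
matplot(dat$ID, dat[, -1], type = "l", lty = 1)
mtext(colnames(dat)[-1], side = 4, las = 1, line = 1, cex = 0.7,
      at = unlist(dat[nrow(dat), -1]))          # place labels at each line's endpoint
```

With 72 lines the labels will still crowd; a smaller cex or thinning the labels would help.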
On Tue, Jul 19, 2016 at 8:46 PM, Greg Snow <538...@gmail.com> wrote:
> Most attachments get stripped off, so your data
The problem is that there are some zeros, while the numbers should range
from 1 to 100.
With replace=TRUE, the smallest candidate, 1, may be drawn repeatedly,
because if I can draw 1 fifty times, that already equals 50.
Remember that it is sampling with replacement.
I am still wondering how I can
Most attachments get stripped off, so your data did not make it through.
But try:
matplot(for_jhon$ID, for_jhon[,2:73], type='l')
On Tue, Jul 19, 2016 at 12:24 PM, John Wasige wrote:
> Dear all,
>
> This is to kindly request your help. I would like to plot my data.
>
Dear all,
This is to kindly request your help. I would like to plot my data.
The R script below gives a plot that is not clear. How can I get a clear
multiple-line plot? The data is attached herewith.
##R Script
for_jhon = read.csv("C:/LVM_share/for_ jhon.csv", header=TRUE, sep=";")
If you want square plots on a rectangular plotting region, then where
do you want the extra space to go?
One option would be to add outer margins to use up the extra space.
The calculations to figure out exactly how much space to put in the
outer margins will probably not be trivial.
Another
N <- 100
C <- 50
x <- numeric(N)
for (i in 1:N){
  x[i] <- sample(C - sum(x), 1)
}
x
sum(x)
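For what it's worth, the loop above errors out once C - sum(x) reaches zero, and as Greg points out, 100 values drawn from 1:100 must sum to at least 100. If zeros are acceptable, one way to get exactly 100 nonnegative integers summing to 50 is a multinomial draw; this is a sketch of that alternative, not code from the thread:

```r
# Sketch: 100 nonnegative integers that sum exactly to 50.
# Zeros are unavoidable: 100 values that are all >= 1 would sum to >= 100.
set.seed(1)
x <- as.vector(rmultinom(1, size = 50, prob = rep(1, 100)))
length(x)   # 100
sum(x)      # 50
```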
From: Frederic Ntirenganya (sent by "R-help"), 07/19/2016 01:41 PM
To: "r-help@r-project.org"
Subject: [R] Sample numbers
Hi
I think that you need to reconsider your conditions.
The smallest number in your candidate set is 1, so if you sample 100
1's they will add to 100 which is greater than 50. So to have a set
of numbers that sums to 50 you will need to either include negative
numbers, 0's, or sample fewer than 50
The default shape for this correlation scatterplot is a rectangle. I changed
it to square, but then the x-axis spacing between squares is off. Is
there an easy way to change the x-axis spacing between squares to match the
y-axis spacing?
I decided to hide the name values of the diagonal
Hi Guys,
I am trying to sample 100 numbers from 1:100.
I did it like this:
sample(1:100, 100, replace = TRUE)
but I also want to include a constraint that their sum must equal 50.
How could I do it?
Cheers
Frederic Ntirenganya
Maseno University,
African Maths Initiative,
Kenya.
Try this:
Reduce(modifyList, list(x, y, z))
On Tue, Jul 19, 2016 at 12:34 PM, Luca Cerone wrote:
> Dear all,
> I would like to know if there is a function to concatenate two lists
> while replacing elements with the same name.
>
> For example:
>
> x <-
concatfun <- function(...) {
  Reduce(f = function(a, b) { a[names(b)] <- b; a }, x = list(...), init = list())
}
Bill Dunlap
TIBCO Software
wdunlap tibco.com
On Tue, Jul 19, 2016 at 9:34 AM, Luca Cerone wrote:
> Dear all,
> I would like to know if there is a function to
Dear all,
I would like to know if there is a function to concatenate two lists
while replacing elements with the same name.
For example:
x <- list(a=1,b=2,c=3)
y <- list( b=4, d=5)
z <- list(a = 6, b = 8, e= 7)
I am looking for a function "concatfun" so that
u <- concatfun(x,y,z)
returns:
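For completeness, running either proposal on the example lists gives the same result, with later lists winning on name clashes (a quick check, not part of the original mail):

```r
# Check: concatenate lists, letting later lists overwrite clashing names.
x <- list(a = 1, b = 2, c = 3)
y <- list(b = 4, d = 5)
z <- list(a = 6, b = 8, e = 7)
u <- Reduce(modifyList, list(x, y, z))
str(u)   # a = 6, b = 8, c = 3, d = 5, e = 7
```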
Presumably it disappears because there is a unique value of ID for each
combination of S*x1, so they are indistinguishable.
On 19/07/2016 12:53, Justin Thong wrote:
Why does the S:x1 column disappear (presumably S:x1 goes into ID, but I don't
know why)? S is a factor, x1 is a covariate and ID is a
Hi All,
From the forecast library I have fitted a regression model with ARMA residuals
on a transformed variable, diff(log(Y), 1).
What code must I use to get the fitted and forecast values on the level
(original) scale of Y, without doing my own manual manipulation?
Please
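The poster's model and data are not shown, but the inversion itself is mechanical: undo diff() with cumsum() anchored at the last observed value, then undo log() with exp(). A sketch using base arima() on made-up data (the same idea applies to forecasts from the forecast package):

```r
# Sketch: back-transform forecasts of diff(log(Y)) to the original scale of Y.
set.seed(3)
Y   <- exp(cumsum(rnorm(60, mean = 0.01, sd = 0.05)))  # made-up positive series
fit <- arima(diff(log(Y)), order = c(1, 0, 0))         # ARMA on the transformed scale
fc  <- predict(fit, n.ahead = 5)$pred                  # forecasts of diff(log(Y))
Y_fc <- tail(Y, 1) * exp(cumsum(fc))                   # invert diff(), then log()
Y_fc
```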
Hi,
I have a data frame like below.
11,15,12,25
11,12
15,25
134,45,56
46
45,56
15,12
66,45,56,24,14,11,25,12,134
I want to identify the frequency of pairs/triplets (or larger sets) that occur
in the data. For example, in the above data the occurrence of pairs looks like
below:
item No of
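No solution was quoted in this digest, but counting co-occurring pairs in transaction-style rows like these can be sketched with combn(); the rows variable below just re-enters the posted data:

```r
# Sketch: count how often each pair of items co-occurs within a row.
rows <- list(c(11, 15, 12, 25), c(11, 12), c(15, 25), c(134, 45, 56),
             46, c(45, 56), c(15, 12), c(66, 45, 56, 24, 14, 11, 25, 12, 134))
pair_counts <- table(unlist(lapply(rows, function(r) {
  if (length(r) < 2) return(NULL)              # single-item rows contribute no pairs
  combn(sort(r), 2, paste, collapse = ",")     # canonical "small,large" pair keys
})))
sort(pair_counts, decreasing = TRUE)[1:5]      # e.g. "11,12" and "45,56" occur 3 times
```

The same combn() call with 3 in place of 2 counts triplets.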
Thank you, Carlos. What I want is to show clearly that the data
collected every 24 hours are not normally distributed and that the
dispersion is very large, especially at the beginning of the experiment. I have
also thought about making a dot plot. The expected audience for these
Ideally, you would use a more functional programming approach:
minimal <- function(rows, cols){
  x <- matrix(NA_integer_, ncol = cols, nrow = 0)
  for (i in seq_len(rows)){
    x <- rbind(x, rep(i, 10))
  }
  x
}
minimaly <- function(rows, cols){
x <- matrix(NA_integer_, ncol = cols, nrow =
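The minimaly definition is cut off above; presumably it preallocates the matrix instead of growing it with rbind(). A reconstruction of that idea (my guess, not the original code):

```r
# Sketch: preallocate the full matrix and fill rows in place, avoiding the
# repeated copying that rbind() inside a loop causes.
minimal_prealloc <- function(rows, cols) {
  x <- matrix(NA_integer_, nrow = rows, ncol = cols)
  for (i in seq_len(rows)) {
    x[i, ] <- rep(i, cols)
  }
  x
}
m <- minimal_prealloc(5, 10)
dim(m)   # 5 10
```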
Dear Brent,
I can confirm your timings with
library(microbenchmark)
microbenchmark(
mkFrameForLoop(100, 10),
mkFrameForLoop(200, 10),
mkFrameForLoop(400, 10)
)
but profiling your code showed that rbind uses only a small fraction of the
CPU time used by the function.
Subtitle: or, more likely, am I benchmarking wrong?
I am new to R, but I have read that R is a hive of performance pitfalls. A
very common case is trying to accumulate results into a typical immutable R
data structure.
Exhibit A for this confusion is this StackOverflow question on an
Thanks! That's just what I was looking for!
A
On Mon, Jul 18, 2016 at 5:56 PM, Rolf Turner
wrote:
> On 19/07/16 01:16, Adrienne Wootten wrote:
>
>> All,
>>
>> Greetings! I hope things are going well for all! I apologize if someone's
>> already answered this and I'm
Dear Pamela,
I'm afraid that this question seems confused. Type III sums of squares are
computed for a linear model -- it's the *linear model* that you'd check, in the
usual manner, for outliers, etc., and this has nothing to do with how the ANOVA
table for the model is computed.
Best,
John
Hello everyone
I have gridded output data from a climate model. I would like to take the
average of these grids and plot them. I have tried to plot it with ggplot or
image.plot (fields/raster packages). I have to do this extensively, meaning
that a manual way is not what I am looking for. My data
Why does the S:x1 column disappear (presumably S:x1 goes into ID, but I don't
know why)? S is a factor, x1 is a covariate and ID is a factor.
rich.side<-aov(y~S*x1+ID)
summary(rich.side)
Below is the model frame
model.frame(~S*x1+ID)
S x1 ID
1 1 12 A
2 1 12 A
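The explanation earlier in the digest can be reproduced on a small made-up data set shaped like the model frame above (each ID sits inside one S level with a single x1 value, so the S:x1 columns are linear combinations of the ID dummies):

```r
# Sketch: when ID is unique for each S*x1 combination, the S:x1 columns are
# aliased with the ID dummies, so the S:x1 term drops out of summary(aov(...)).
set.seed(7)
d <- data.frame(
  S  = factor(rep(1:2, each = 6)),
  x1 = rep(rep(c(12, 15), each = 3), 2),   # x1 is constant within each ID
  ID = factor(rep(c("A", "B", "C", "D"), each = 3))
)
d$y <- rnorm(nrow(d))
fit <- aov(y ~ S * x1 + ID, data = d)
summary(fit)   # no S:x1 row appears
```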
Hello all,
I tried posting on r-sig-ME, but it's not going through, so I thought I
would try here. Apologies for the cross-post if it eventually goes through
there.
I am parameterizing exponential fits for some metabolic scaling models.
I have done this in lmer already,
> Jim Lemon on Tue, 19 Jul 2016 16:20:49 +1000 writes:
> Hi Daniel,
> Judging by the numbers you mention, the distribution is either very
> skewed or not at all normal. If you look at this:
>
Dear list members,
is anyone aware of a (simple) strategy to prevent update() from
introducing parentheses around a conditional rhs of a formula
when updating, e.g., only its lhs? Simple example:
update( y ~ a|b, log(.) ~ .)
gives log(y) ~ (a | b), but I want/need log(y) ~ a | b
I do know
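One workaround (a sketch; it edits the formula's lhs directly instead of going through update(), so the rhs is never deparsed and reparsed):

```r
# Sketch: a formula is the call `~`(lhs, rhs); replacing the lhs element
# directly leaves the rhs untouched, so no parentheses are introduced.
f <- y ~ a | b
f[[2]] <- call("log", f[[2]])   # wrap the existing lhs in log()
f
# log(y) ~ a | b
```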
On 17.07.2016 05:33, Christopher Kelvin via R-help wrote:
Dear R-User,
I have written a simple code to analyze some data using Bayesian logistic
regression via the R2WinBUGS package. The code stops WinBUGS when run in
WinBUGS directly, and running it through the package returns no results either.
Hi Daniel,
Judging by the numbers you mention, the distribution is either very
skewed or not at all normal. If you look at this:
plot(c(0,0.012,0.015,0.057,0.07),c(0,0.05,0.4,0.05,0),type="b")
you will see the general shape of whatever distribution produced these
summary statistics. Did the
Hi all,
I need to draw density curves based on some published data, but the article
gives only the mean value (0.015) and quantiles (Q0.15 = 0.012, Q0.85 = 0.057).
So I was wondering whether it is possible to plot density curves solely from
the mean value and quantiles. The dnorm(x, mean, sd,
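As Jim notes, these numbers don't look normal (the quantiles are far from symmetric about the mean). But if one were willing to assume normality anyway, the sd can be backed out of one quantile; a sketch of that assumption-laden shortcut:

```r
# Sketch: solve for a normal sd from the 85% quantile, then draw the density.
# Assumes normality, which the asymmetric quantiles here argue against.
m   <- 0.015
q85 <- 0.057
s   <- (q85 - m) / qnorm(0.85)   # from Q(0.85) = m + qnorm(0.85) * s
curve(dnorm(x, mean = m, sd = s), from = m - 3 * s, to = m + 3 * s,
      ylab = "density")
```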
I am wondering if there is a way to plot results and model diagnostics (to
check for outliers, homoscedasticity, normality, collinearity) using type III
sums of squares in R
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
Thank you Marc,
The typo was causing the problem, solved now.
Regards,
Fipou
On Mon, Jul 18, 2016 at 8:38 PM, Marc Schwartz wrote:
>
> > On Jul 18, 2016, at 1:06 PM, Abdoulaye Sarr
> wrote:
> >
> > I am doing a basic bar plot which works but the