Thanks.
As to the warpbreaks data: if I want to analyze the impact of
tension (L, M, H) on breaks, should I order the tension factor or not?
Many thanks.
At 2013-05-21 20:55:18,David Winsemius dwinsem...@comcast.net wrote:
On May 20, 2013, at 10:35 PM, meng wrote:
Hi all:
If the
On May 22, 2013, at 01:54 , David Winsemius wrote:
{
test0 <- transform(test, (get(var.names[i])) = 0)
There is no `get<-` function. You need to use assign().
Also note that transform's left hand sides are really names of function
arguments and therefore syntactically cannot be expressions.
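A minimal sketch of that point (the data frame and column names here are hypothetical stand-ins for the poster's `test` and `var.names`): `transform()` cannot take a computed left-hand side, but indexing with `[[` and the name string works.

```r
test <- data.frame(Y = 1:3, X1 = 4:6)
var.names <- c("Y", "X1")
i <- 2
## transform(test, get(var.names[i]) = 0)  # syntax error: the LHS must be a literal name
test0 <- test
test0[[var.names[i]]] <- 0   # assign by computed column name instead
test0$X1                     # 0 0 0
```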
On 22.05.2013 07:09, meng wrote:
Thanks.
As to the warpbreaks data: if I want to analyze the impact of
tension (L, M, H) on breaks, should I order the tension factor or not?
No homework questions on this list, please ask your teacher.
Best,
Uwe Ligges
Many thanks.
At
Hello R experts,
I am having a weird problem saving my R plot after I use the identify
option.
Points are identified properly, but when I try to save that image I get
error as:
Error: first argument must be a string (of length 1) or native symbol
reference and the image without identified points
So you just want to compare the distances from each point of your new
data to each of the Centres and assign the corresponding number of the
centre as in:
clust <- apply(NewData, 1, function(x) which.min(colSums((x - tCentre)^2)))
but since the apply loop is rather long here for lots of new
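A self-contained sketch of that assignment step (the centre and data matrices below are made-up examples; `tCentre` is assumed to be the transposed centre matrix, with variables in rows and one column per centre):

```r
Centre  <- rbind(c(0, 0, 0), c(5, 5, 5))           # 2 centres x 3 variables
NewData <- rbind(c(0.1, -0.2, 0.3), c(4.8, 5.1, 5.2))
tCentre <- t(Centre)                               # 3 x 2: one column per centre
clust <- apply(NewData, 1, function(x)
  which.min(colSums((x - tCentre)^2)))             # squared distance to each centre
clust                                              # 1 2
```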
Dear Branwen,
This means that your design matrix is not of full rank (in a more recent
version of the metafor package, the error message is a bit more informative;
i.e., please upgrade). Since continent is a factor, this should imply that
one of the levels never actually occurs, leading to a
Suparna Mitra suparna.mitra.sm at gmail.com writes:
Hello R experts,
I am having a weird problem saving my R plot after I use the identify
option.
Points are identified properly, but when I try to save that image I get
error as:
Error: first argument must be a string (of length 1) or
Well spotted. I've changed that now (it was an artifact from previous
work), so there's no devicegroup--just device and group, as in the model.
So now table(data.AMIDS_d[,2:5]) produces
, , width = fat, length = long

       device
group   Dingo SNAR
   NR      12   12
   NV      12   12
   RA      12
Hi,
I guess you meant this:
dat2 <- read.table(text="
patient_id t scores
1 0 1.6
1 1 2.6
1 2 2.2
1 3 1.8
2 0
Using arun's data set, dat1
Is this type of thing what you want?
dat2 <- subset(dat1, dat1$Play != "B" & dat1$Offense == "Y")
with(dat2,table(Play, Offense))
John Kane
Kingston ON Canada
-Original Message-
From: smartpink...@yahoo.com
Sent: Tue, 21 May 2013 18:57:56 -0700 (PDT)
To:
Inline ...
On Wed, May 22, 2013 at 6:40 AM, Krysta Chauncey
kry...@essential-design.com wrote:
Well spotted. I've changed that now (it was an artifact from previous
work), so there's no devicegroup--just device and group, as in the model.
So now table(data.AMIDS_d[,2:5]) produces
, , width
Hi,
I am interested in forecasting an MA model. Therefore I created a
very simple data set (three values) and fitted an MA(1) model
to it. The results are:
x <- c(2, 5, 3)
m <- arima(x, order = c(0, 0, 1))
Series: x
ARIMA(0,0,1) with non-zero mean
Coefficients:
Hi R user,
I was trying to develop a model (logistic regression) for 4001 dependent
variables using 15 environmental variables (45000 rows), and then to use
the models to predict the future. I used the following code, but it took so much
time and consumed 100% of the PC memory. Even though-
Hello,
In predict.glm, there's no argument 'data', it's 'newdata'.
As for your problem, maybe try doing one prediction at a time, write the
results to file, then the next.
Hope this helps,
Rui Barradas
Em 22-05-2013 15:20, Kristi Glover escreveu:
Hi R user,
I was trying to develop a model
Hello,
Since R is open source, you can look at the source code of package
forecast to know exactly how it is done. My guess would be
x - m$residuals
Time Series:
Start = 1
End = 3
Frequency = 1
[1] 3.060660 4.387627 3.00
Hope this helps,
Rui Barradas
Em 22-05-2013 15:13, Neuman Co
Hi Rui,
Yes, you are right, there is no argument 'data'.
Mistakenly I wrote 'data' in the email, but in the R script I have
written newdata.
As you suggested, I tried doing one prediction at a time, but still the same
problem. It is a really big data set. Maybe I should go to 'Matlab' for this
One approach is to use the rms package's cph and Mean.cph functions.
Mean.cph (cph calls coxph and can compute Kaplan-Meier and other survival
estimates) can compute mean restricted life.
Frank
Dinesh W wrote
I am using survfit to generate a survival curve. My population is such
that my x axis
Hi,
You didn't provide a reproducible example
(http://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example)
So, it is difficult to comment.
Using the dataset(sweetpotato)
library(agricolae)
data(sweetpotato)
m1 <- aov(yield ~ virus, data = sweetpotato)
df1 <- df.residual(m1)
Hi Jim,
Thank you very much. I have started another conversation in the ggplot
list. My current code is
plot_data <- data.frame(
xmin = c(1, 1.5, 3, 3.5, 5, 5.5, 7, 7.5, 9, 9.5, 11, 11.5, 13, 13.5)
, ymin = c(16.7026, 17.20, 14.9968, 16.32, 16.0630, 15.86, 17.7510, 18.12,
-5.01694, -8.86,
Thank you very much - now I see my mistake in loop - your hint helped. Thank
you again!
--
View this message in context:
http://r.789695.n4.nabble.com/problems-with-saving-plots-from-loop-tp4667616p4667682.html
Sent from the R help mailing list archive at Nabble.com.
Hi!
I am trying to use a 3-parametric Weibull model. I used mselect to find the
best-fit model which was W1.3 in one and W2.3 in another case. When
searching the internet for formulas, I only find a formula for W1.3, which
is:
f(x) = 0 + (d-0)\exp(-\exp(b(\log(x)-e)))
But what is the formula
Dear Uwe
Just wanted to say thank you so much for this: whilst waiting for a reply
from r-help I had been writing a piece of ugly code (below) to do the job,
and yours looks MUCH smarter. I especially like the use of the '-apply()'
bit, as there is no 'min.col()' function.
p.s. my data has 144
Thanks, but this does not help me. First of all, I do not know
how to look at the source code (just entering fitted() or
getAnywhere(fitted()) does not help);
second, your suggestion x - m$residuals is not really a solution, because
then the question becomes: where do the residuals come from?
Hello all,
Running the msprep function of the mstate package
R shows me the following error:
Error in hlpsrt[, 1] - hlpsrt[, 2] :
non-numeric argument to binary operator
What is hlpsrt?
Hello Rlisters!
In my codes, I need to import a matrix:
v <- read.table("/home/tiago/matrix.txt", header=FALSE)
v <- as.matrix(v)
v
V1 V2 V3 V4 V5 V6
[1,] 1. -0.89847480 -0.73929292 -0.99055335 -0.04514469 0.04056137
[2,]
So I mean: How can I calculate them manually?
2013/5/22 Neuman Co neumanc...@gmail.com:
Thanks, but this does not help me, because first of all, I do not know
how to look at the source code (just entering fitted() or
getAnywhere(fitted()) does not help,
second, your solution x-m$residuals
mh, this is interesting, I would have expected that the following is valid:
If we look at the second value:
4.387627 = 3.5 + (-1)*(5-3.060660)
or
4.387627 = 3.5 - (-1)*(5-3.060660)
but this does not work. Surprise. Nice question!
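Rui's rule can be checked directly on the thread's toy series (this just restates the example already in the thread; the printed values come from the thread, so treat them as approximate):

```r
x <- c(2, 5, 3)
m <- arima(x, order = c(0, 0, 1))
x - m$residuals   # the fitted values: approx. 3.06 4.39 3.00 per the thread
```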
Von: Neuman Co
I have a couple of large data sets, on the order of 4GB. They come in .csv
files, with about 50 columns and lots of rows. A couple have weird NA
values, such as "C" and "B", in numeric columns.
I am wondering how well read.csv() deals with this stuff on the first
pass.
d <- read.csv("t.csv",
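One standard first-pass answer (a sketch; the file contents and column names below are made up): declare the stray codes as NA strings so the affected columns come in as numeric rather than character/factor.

```r
tmp <- tempfile(fileext = ".csv")
writeLines(c("a,b", "1,2", "C,3", "4,B"), tmp)   # "C" and "B" pollute numeric columns
d <- read.csv(tmp, na.strings = c("NA", "C", "B"))
sapply(d, class)   # both columns now integer, with NA where "C"/"B" were
```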
HI GG,
I thought you were referring to the solution that I sent today. Sorry, I
didn't check it.
Try this:
dat1 <- read.table(text="
patient_id number responsed_at delais scores1 scores2
scores3
1 1 2010-05-26 NA 2.6
Hi Dennis,
Thank you for your constructive advice on plotting CIs in point ranges. I
combined the advices from you, Jim, Kaori, and bosie (from IRC) and my own
research and produced the following figure:
WTP_labels <- c("Wild cod", "Farmed cod", "Farmed salmon", "Wild monk",
                "Farmed pangasius", "MSC label",
Rui responded to your first question graciously with a very simple
default answer -- subtract the residuals from your observations.
That's about as manual as you can be without using pencil and paper.
If you can't understand the source code but want to so that you can
understand how the
Hi,
May be this helps:
month.dayslong <- rep(31, 7)
names(month.dayslong) <-
c("January","March","May","July","August","October","December")
month.dayslong
# January   March     May    July  August October December
#      31      31      31      31      31      31      31
unname(month.dayslong)
#[1]
Dear Rxperts,
Using the above example, I have been playing around with viewport under the
panel = function(...) {...} block in conjunction with the
panel.groups = function(...) {...} code block. I have not been
successful so far. I was wondering if it is possible to pass user-defined
functions (including
Hi, Jeff.
Thanks for your thoughtful suggestions.
I do not plan to wait for the hash package to be redesigned to meet my
expectations. As a matter of fact, I have:
a) Submitted a report of unexpected behavior in hash::values, which the package
maintainer quickly replied to and said would
Hi,
Couldn't reproduce your error.
It is better to dput() the example data:
v <- read.table("/home/tiago/matrix.txt", header=FALSE)
dput(v)
v <- read.table("matrix1.txt", header=FALSE, sep="")
v <- as.matrix(v)
v
# V1 V2 V3 V4 V5 V6
#[1,]
Couldn't exactly explain the subject, so here's the example:
idx <- which(blah[,1] == "xyz")
idx
integer(0)
How do I test that idx has a valid value (namely, 0)?
TiA,
Joe
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
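The replies use any() and sum(); another common idiom (my addition, not from the thread) is to test length(), since which() returns a zero-length integer vector when nothing matches:

```r
blah <- paste0("x", 1:5)
idx <- which(blah == "xyz")
length(idx) == 0    # TRUE: no match, idx is integer(0)
idx <- which(blah == "x3")
length(idx) > 0     # TRUE: a usable index exists
```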
Hi Joe,
Not sure about your expected result
blah <- paste0("x", 1:5)
which(blah == "xyz")
#integer(0)
blah == "xyz"
#[1] FALSE FALSE FALSE FALSE FALSE
any(blah == "xyz")
#[1] FALSE
sum(blah == "xyz")
#[1] 0
sum(blah == "x1")
#[1] 1
A.K.
- Original Message -
From: Joseph Trubisz jtrub...@me.com
To:
Dear Dr. Viechtbauer:
Thank you very much for sparing your precious time to answer my question. I
still want to make sure about the third question below: for studies which only
reported percentage changes (something like: the metabolite concentration
increased by 20% +/- 5% after intervention),
#Some cases:
ifelse(blah != "xyz", paste0(blah, 1), blah)
#[1] "x11" "x21" "x31" "x41" "x51"
if(sum(blah == "xyz") > 0) blah else paste0(blah, 1)
#[1] "x11" "x21" "x31" "x41" "x51"
if(any(blah == "xyz")) blah else as.numeric(factor(blah))
#[1] 1 2 3 4 5
A.K.
- Original Message -
From: Joseph Trubisz jtrub...@me.com
Hello Arnaud,
You posted this question a long, long time ago; however, I found the answer, so I
decided to post it anyway in case somebody else has the same problem as you
and me.
You were actually very close to finding your solution. The function DoWritedbf
is an internal function from the
My perception of illogic was in your adding more data-structure complexity
when faced with this difficulty. R performs best when calculations are
pushed into simple typed vectors where precompiled code can handle the majority
of the work. These are simpler structures, not more
Thanks Bastien,
I completely forgot that I asked this question.
I learned a lot since then ... actually, now I know how to do it, but it
was not the case in 2009 :-)
Arnaud
2013/5/22 bastien.ferland-raym...@mrn.gouv.qc.ca
Hello Arnaud,
You posted this question a long long time ago, however
Joe
Testing if something _exists_ is different from testing whether it has what
you are referring to as a valid value. Here is one way to do what I think you
are doing, versus testing if something exists:
validVal <- function(x, val){
if (!is.numeric(x)) stop('Not a numeric variable')
Hi,
May be this helps:
test1 <- test[1:5, ]
test0 <- test1
for(i in 1:5) test0[, i] <- 0
test0
# Y X1 X2 X3 X4
#1 0 0 0 0 0
#2 0 0 0 0 0
#3 0 0 0 0 0
#4 0 0 0 0 0
#5 0 0 0 0 0
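A shorter equivalent (my addition, assuming a five-column data frame like the one above): assigning with empty brackets overwrites every column while keeping the data-frame shape and names.

```r
test1 <- data.frame(Y = 1:5, X1 = 1:5, X2 = 1:5, X3 = 1:5, X4 = 1:5)
test0 <- test1
test0[] <- 0     # zero all columns at once; names and dimensions preserved
colSums(test0)   # Y X1 X2 X3 X4 all 0
```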
A.K.
- Original Message -
From: Roni Kobrosly roni.kobro...@gmail.com
To: r-help@r-project.org
Cc:
I have been using the NADA package to do some statistical analysis, however I
have just found that the package is no longer available for install. I've
downloaded an older version ( NADA_1.5-4.tar.gz ) and tried to use
install.packages to install it in two versions of R ( 3.0.0 and
Please let's not turn this into an ad hominem discussion by adding remarks on
what the other thinks or knows, as this will get us nowhere fast. Let's focus
on the issue, ok? :)
Again, the point behind my workaround was to try to change the rest of my
program as little as possible while I
On May 22, 2013, at 5:00 AM, catalin roibu wrote:
Hello all!
I have a problem transforming this list into a data frame. I tried
the command as.data.frame, but unsuccessfully.
Please help me!
thank you very much!
structure(list(fns = list(structure(list(r = c(0, 0.048828125,
0.09765625,
Gents:
You've both been polite and thoughtful, but I think you should take
your discussion private, no?
-- Bert
On Wed, May 22, 2013 at 12:57 PM, Alexandre Sieira
alexandre.sie...@gmail.com wrote:
Please let's not turn this into an ad hominem discussion by adding remarks on
what the other
Dear Rxperts..
Just figured how to add a text at a custom location in panel.groups..
use grid.text(x = unit(value1, "npc"), y = unit(value2, "npc"), label = "label content")
With this, I hope to stop flogging such a valuable black horse! :)
On Wed, May 22, 2013 at 10:35 AM, Santosh santosh2...@gmail.com
On Wed, 22 May 2013, rwillims wrote:
I have been using the NADA package to do some statistical analysis,
however I have just found that the package is no longer available for
install. I've downloaded an older version ( NADA_1.5-4.tar.gz ) and tried
to use install.packages to install it in two
First suggestion is to ask the question on r-sig-geo.
There is the over()
function in the sp package, though it may require you to put your points
in a spatial class object.
For a crude brute-force approach that does not easily generalize, but
might be the quickest short-term solution for you
http://cran.r-project.org/web/packages/data.table/index.html
On Wed, May 22, 2013 at 12:31 PM, ivo welch ivo.we...@anderson.ucla.eduwrote:
I have a couple of large data sets, on the order of 4GB. they come in .csv
files, with about 50 columns and lots of rows. a couple have weird NA
On May 20, 2013, at 2:09 PM, Alexandre Sieira wrote:
I was trying to convert a vector of POSIXct into a list of POSIXct; however,
I ran into a problem that I wanted to share with you.
Works fine with, say, numeric:
v = c(1, 2, 3)
v
[1] 1 2 3
str(v)
num [1:3] 1 2 3
l = as.vector(v,
Hi everyone,
I'm having some difficulty getting the linebreaks I want with the cat()
function.
I currently have the following function:
lab1 <- function(iv, dv){
  a <- anova(lm(dv ~ iv))
  cat("df(between) is", a[1,1])
  cat("df(within) is", a[2,1])
  cat("ss(between) is", a[1,2])
  cat("ss(within) is", a[2,2])
The only permutation you likely didn't try:
cat("df(between) is", a[1,1], "\ndf(within) is", a[2,1])
\n has to be inside the quotes. HTH. Bryan
On May 22, 2013, at 4:34 PM, jordanbrace r24...@mun.ca wrote:
Hi everyone,
I'm having some difficulty getting the linebreaks I want with the cat()
You may not have expected it, but that result is what is described: From the
help page for `as.vector`:
For as.vector, a vector (atomic or of type list). All attributes are removed
from the result if it is of an atomic mode, ...
And since POSIXct vectors _are_ of atomic mode, all of
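A quick sketch of the behavior being described (the dates are arbitrary): as.vector() drops the class attribute of a POSIXct vector, leaving the underlying numeric seconds.

```r
v <- as.POSIXct(c("2013-05-20 10:00", "2013-05-21 10:00"), tz = "UTC")
class(v)            # "POSIXct" "POSIXt"
x <- as.vector(v)   # atomic mode, so ALL attributes (incl. class) are removed
class(x)            # "numeric": the raw seconds since the epoch
```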
HI,
Try:
lab2 <- function(var1, var2, data1){
  a <- anova(lm(var2 ~ var1, data = data1))
  cat("df(between) is", a[1,1], "\n")
  cat("df(within) is", a[2,1], "\n")
  cat("ss(between) is", a[1,2], "\n")
  cat("ss(within) is", a[2,2], "\n")
}
#or
lab3 <- function(var1, var2, data1){
  a <- anova(lm(var2 ~ var1, data = data1))
That's a very interesting point, David. The part I actually didn't know is that
the class of an object is an attribute, and that by removing all attributes
the function would in effect unclass it.
Thank you.
--
Alexandre Sieira
CISA, CISSP, ISO 27001 Lead Auditor
The truth is rarely pure and
hey, I want to divide my data into three groups based on the value in one
column with group name.
dat:
Var
0
0.2
0.5
1
4
6
I tried:
dat <- cbind(dat, group = cut(dat$Var, breaks = c(0.1, 0.6)))
But it doesn't work. I want to group those < 0.1 as group A, 0.1-0.6 as group
B, and > 0.6 as group C.
Thanks for
Hi,
Try:
dat <- read.table(text="
Var
0
0.2
0.5
1
4
6
", sep="", header=TRUE)
res1 <- within(dat, group <- factor(findInterval(Var, c(-Inf, 0.1, 0.6), rightmost.closed=TRUE), labels=LETTERS[1:3]))
res1
# Var group
#1 0.0 A
#2 0.2 B
#3 0.5 B
#4 1.0 C
#5 4.0 C
#6 6.0 C
#or
HI,
directory <- "/home/arunksa111/NewData"
GetFileList <- function(directory, number){
  setwd(directory)
  filelist1 <- dir()[file.info(dir())$isdir]
  direct <- dir(directory, pattern = paste("MSMS_", number, "PepInfo.txt", sep=""),
                full.names = FALSE, recursive = TRUE)
  direct <- lapply(direct, function(x)
dat$group <- cut(dat$Var, breaks = c(-Inf, 0.1, 0.6, Inf))
levels(dat$group) <- LETTERS[1:3]
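The two steps above can be collapsed into one call (a sketch using the poster's example values): cut() accepts the labels directly.

```r
dat <- data.frame(Var = c(0, 0.2, 0.5, 1, 4, 6))
dat$group <- cut(dat$Var, breaks = c(-Inf, 0.1, 0.6, Inf), labels = LETTERS[1:3])
dat$group    # A B B C C C, with levels A B C
```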
---
Jeff Newmiller
DCN: jdnew...@dcn.davis.ca.us
Hello,
To see the source code, type fitted.Arima (no parentheses) at an R
prompt. It will show that they are computed as I've said.
So your second question is in order: where do the residuals come from?
To see the source code for arima(), type the function name without the
parentheses, and it
Dear Bernhard,
I am using your R package var. I am interested in running impulse
response analysis (using irf) and error variance decomposition (using
fevd).
I have two questions:
-What decomposition method do you use in your package Cholesky?
-What is the order of entry of the
Greetings. My wife is teaching an introductory stat class at UC Davis. The
class emphasizes the use of simulations, rather than mathematics, to get
insight into statistics, and R is the mandated tool. A student in the class
recently inquired about different approaches to sampling from a
Hello,
I have a 'zoo' object containing dates [dd/mm/yr] in the first column and
values in the second column.
I tried as.Date but did not succeed.
Here is an example of date format
01/01/2000
01/02/2000
...
01/12/2000
01/01/2001
01/02/2001
...
01/12/2001
etc.
I would like to sort all Jans
It's not homework.
I met this question during my practical work with R.
The boss is an expert in biology, but he doesn't know statistics. So I must find
the right method for this work.
At 2013-05-22 17:30:34,Uwe Ligges lig...@statistik.tu-dortmund.de wrote:
On 22.05.2013 07:09, meng
Hi,
May be this helps.
date1-seq.Date(as.Date(01/01/2000,format=%d/%m/%Y),as.Date(31/12/2010,format=%d/%m/%Y),by=day)
set.seed(24)
value- rnorm(4018,25)
dat1- data.frame(date1,value)
dat2-
do.call(rbind,split(dat1,as.numeric(gsub(.*\\-(.*)\\-.*,\\1,dat1$date1
library(zoo)
z1-
On May 22, 2013, at 7:44 PM, meng wrote:
It's not homework.
I met this question during my practical work with R.
The boss is an expert in biology, but he doesn't know statistics. So I must
find the right method for this work.
Yes, you must. Unfortunately, the R-help mailing list is for
Hi Ak,
It seems to work correctly with your data. Please try it on my data (attached).
My data is monthly.
The y in column 1 is not provided when I convert my data to zoo.
So, delete the header, y and try to work with the data.
Sort all Jans, Febs, etc just as you did.
Atem.
I'm comparing a variety of datasets with over 4M rows. I've solved this
problem 5 different ways using a for/while loop but the processing time is
murder (over 8 hours doing this row by row per data set). As such I'm
trying to find whether this solution is possible without a loop or one in
which
You seem to be building an elaborate structure for testing the reproducibility
of the random number generator. I suspect that rbinom is calling the random
number generator a different number of times when you pass prob=0.5 than
otherwise.
On May 23, 2013, at 07:01 , Jeff Newmiller wrote:
You seem to be building an elaborate structure for testing the
reproducibility of the random number generator. I suspect that rbinom is
calling the random number generator a different number of times when you pass
prob=0.5 than otherwise.
Hi,
dat1 <- read.csv("me.csv", header=TRUE, stringsAsFactors=FALSE)
dat1$y <- as.Date(dat1$y, format="%d/%m/%Y")
library(zoo)
z1 <- zoo(dat1[,-1], dat1[,1])
library(xts)
library(lubridate)
lst1 <- lapply(split(z1, month(index(z1))), as.xts)
head(lst1[[1]])
# [,1]
#1961-01-01 7.45
#1962-01-01 17.63
HI Atem,
No problem.
If you want to do some calculations based on monthly data:
aggregate(z1, month(index(z1)), mean)
#        1        2        3        4        5        6        7        8
#21.52044 12.64133 20.06044 27.28711 43.41489 70.15022 41.18756 34.28689
#        9       10       11