On Sep 2, 2010, at 1:00 AM, aftar wrote:
Hi
Does anyone know how I could find the cross spectrum?
The spectrum function only gives the spectrum for each individual
series.
I read the manual very differently. It is telling me that you _do_ get
cross spectrum results in the coh
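For anyone following the thread, a sketch of how the cross-spectrum can be recovered from spec.pgram() output on a bivariate series (the simulated series here are purely illustrative):

```r
set.seed(1)
x <- arima.sim(list(ar = 0.5), n = 256)
y <- 0.7 * x + rnorm(256)
sp <- spec.pgram(cbind(x, y), spans = c(5, 5), plot = FALSE)
# sp$coh is the squared coherency; the cross-amplitude spectrum is
# |f_xy| = sqrt(coherency^2) * sqrt(f_xx * f_yy)
cross_amp <- sqrt(sp$coh[, 1]) * sqrt(sp$spec[, 1] * sp$spec[, 2])
# Complex cross-spectrum, combining amplitude and phase
cross_spec <- cross_amp * exp(1i * sp$phase[, 1])
```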
David,
Thanks a lot!!! It works, and that's what I need. Good night.
Sean
From: David Winsemius [mailto:dwinsem...@comcast.net]
Sent: Wed 9/1/2010 9:50 PM
To: Xiang Li
Cc: r-help@r-project.org
Subject: Re: [R] How to access some elements of a S4 object?
Hi, folks,
runif(n, min, max) is the typical call for generating random variates from a
uniform distribution. But what if we need to fix the mean at 20, and we want
the values to be
Thanks
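One reading of the question (a discrete uniform on integers, symmetric about 20) can be sketched with sample():

```r
# Integers drawn uniformly from a range symmetric about 20,
# so the theoretical mean is fixed at (15 + 25) / 2 = 20
n <- 1000
x <- sample(15:25, n, replace = TRUE)
mean(x)  # close to 20
```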
__
R-help@r-project.org
Hi,
I would like to extract monthly data automatically using a code. Given are my
sample data.
I know how to do one by one:
jan_data1 <- Pooraka_data[Pooraka_data$Month==1,4]
feb_data1 <- Pooraka_data[Pooraka_data$Month==2,4]
mar_data1 <- Pooraka_data[Pooraka_data$Month==3,4]
apr_data1 <-
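The twelve monthly assignments can be collapsed with split(); the column layout below is an assumption mirroring the code in the question:

```r
# Mock stand-in for the poster's data: a Month column and values in column 4
Pooraka_data <- data.frame(Year = 2000, Month = rep(1:12, each = 2),
                           Day = 1, Rain = runif(24))
monthly <- split(Pooraka_data[[4]], Pooraka_data$Month)
monthly[["1"]]  # equivalent to Pooraka_data[Pooraka_data$Month == 1, 4]
```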
Hi,
I've built my own package in windows and when I run R CMD check Package-Name I
get,
* install options are ' --no-html'
* installing *source* package 'AceTest' ...
** libs
making DLL ...
g++ ...etc.
installing to PATH
... done
** R
** preparing package for lazy loading
** help
Warning:
Hi,
I have to compute the singular value decomposition of rather large
matrices. My test matrix is 10558 by 4255 and it takes about three
minutes in R to decompose on a 64-bit quad-core Linux machine. (R is
running svd in parallel, all four cores are at their maximum load while
doing this.) I
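If only the leading singular triplets are needed, a truncated SVD is far cheaper than the full decomposition; a sketch using the irlba package (its availability is an assumption):

```r
library(irlba)
set.seed(1)
A <- matrix(rnorm(2000 * 500), 2000, 500)  # small stand-in for 10558 x 4255
s <- irlba(A, nv = 10)                     # only the top 10 singular triplets
s$d
```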
Hi Dejian,
You're right on this! Do you know how to pass those two arguments into
lower.panel? Thanks!
...Tao
From: Dejian Zhao zha...@ioz.ac.cn
To: r-help@r-project.org
Sent: Tue, August 31, 2010 6:10:16 PM
Subject: Re: [R] pairs with same xlim and ylim
On Thu, Sep 2, 2010 at 7:17 AM, Yi liuyi.fe...@gmail.com wrote:
Hi, folks,
runif(n, min, max) is the typical call for generating random variates from a
uniform distribution. But what if we need to fix the mean at 20, and we want
the values to be integers only?
It's not clear what you want. Uniformly random
Hi
I'm sorry, but I don't think that coherence is the same as the cross
spectrum. People use coherence since it is much easier to deal with. I know
how to plot and calculate the coherence and phase using R, but what I
didn't know is how to calculate the cross spectrum using R.
Regards
Hi,
I am using vim to edit my files... I do not know what I pressed, but
my R code got converted to garbage like this:
%PDF-1.4
%81â81ã81Ï81Ó\r
1 0 obj
/CreationDate (D:20100902122215)
/ModDate (D:20100902122215)
/Title (R Graphics Output)
/Producer (R 2.11.1)
/Creator (R)
how to
Hello,
I have a data.frame with the following format:
head(clin2)
Study Subject Type Obs Cycle Day Date Time
1 A001101 10108 ALB 44.098 1 2004-03-11 14:26
2 A001101 10108 ALP 95.098 1 2004-03-11 14:26
3 A001101 10108 ALT 61.098 1
On Sep 2, 2010, at 09:16 , khush wrote:
Hi,
I am using vim to edit my files... I do not know what I pressed, but
my R code got converted to garbage like this:
%PDF-1.4
%81â81ã81Ï81Ó\r
1 0 obj
/CreationDate (D:20100902122215)
/ModDate (D:20100902122215)
/Title (R
I test the null that the coin is fair (p(succ) = p(fail) = 0.5) with
one trial and get a p-value of 1. Actually I want to prove the
alternative H that the estimate is different from 0.5, which certainly
cannot be proven here. But in reverse the p-value of 1 says that I
can be 100% sure that
On 02-Sep-10 07:16:54, khush wrote:
Hi,
I am using vim to edit my files... I do not know what I
pressed, but my R code got converted to garbage like this:
%PDF-1.4
%81â81ã81Ï81Ó\r
1 0 obj
/CreationDate (D:20100902122215)
/ModDate (D:20100902122215)
/Title (R Graphics
Dear Greg,
First convert your data.frame to a long format using melt(). Then use
ddply() to calculate the averages. Once you get to this point it should
be rather straightforward.
library(ggplot2)
v1 <- c(1,2,3,3,4)
v2 <- c(4,3,1,1,9)
v3 <- c(3,5,7,2,9)
gender <- c("m","f","m","f","f")
d.data <-
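Completing the truncated example above, a sketch of the melt()/ddply() route Thierry describes, assuming the reshape and plyr packages:

```r
library(reshape)  # melt()
library(plyr)     # ddply()
d.data <- data.frame(v1 = c(1,2,3,3,4), v2 = c(4,3,1,1,9),
                     v3 = c(3,5,7,2,9), gender = c("m","f","m","f","f"))
long <- melt(d.data, id.vars = "gender")
# Average of each variable within each gender
ddply(long, .(gender, variable), summarise, avg = mean(value))
```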
Hi,
I didn't find any post on this subject, so I'll ask for some advice.
Let's say that I have two library trees.
Number 1 is the default R library tree on path1
Number 2 is another library tree on a server with all packages on path2.
When I set library(aaMI,lib.loc=paths2) it loads the
You state: "in reverse the p-value of 1 says that I can be 100% sure
that the estimate of 0.5 is true." This is where your logic about
significance tests goes wrong.
The general logic of a significance test is that a test statistic
(say T) is chosen such that large values represent a discrepancy
Hi ,
I use library(gee), library(geepack) and library(yags) to perform GEE data
analyses, but none of them can do variable selection! Both step and stepAIC
can do variable selection based on the AIC criterion for linear regression
and glm, but they cannot work when the model is based on GEE.
I want to
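Picking up on this: while there is no step() for GEE, candidate models can be compared by hand with QIC. A minimal sketch, assuming the QIC() function shipped with recent geepack versions:

```r
library(geepack)
data(dietox)  # example data bundled with geepack
m1 <- geeglm(Weight ~ Time, id = Pig, data = dietox,
             corstr = "exchangeable")
m2 <- geeglm(Weight ~ Time + Cu, id = Pig, data = dietox,
             corstr = "exchangeable")
QIC(m1); QIC(m2)  # prefer the model with the smaller QIC
```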
Hi:
I did the following test using function ddply() in the plyr package on a toy
data frame with 5 observations using five studies, 20 subjects per
study, 25 cycles per subject, five days per cycle and four observations by
type per day. No date-time variable was included.
# Test data frame
On 09/02/2010 04:56 AM, josef.kar...@phila.gov wrote:
I have a simple barplot of 4 mean values, each mean value has an
associated 95% confidence interval drawn on the plot as an error bar.
I want to make a legend on the plot that uses the error bar symbol, and
explains 95% C.I.
How do I show the
Dear Dennis,
cast() is in this case much faster.
system.time(bigtab <- ddply(big, .(study, subject, cycle, day),
                            function(x) xtabs(obs ~ type, data = x)))
   user  system elapsed
  35.36    0.12   35.53
system.time(bigtab2 <- cast(data = big, study + subject + cycle + day
                            ~ type, value = obs,
On 02/09/2010 2:29 AM, raje...@cse.iitm.ac.in wrote:
Hi,
I've built my own package in windows and when I run R CMD check Package-Name I
get,
* install options are ' --no-html'
* installing *source* package 'AceTest' ...
** libs
making DLL ...
g++ ...etc.
installing to PATH
... done
** R
picking up on Thierry's example, I don't think you need any function because
you are just reshaping
(not aggregating). Therefore:
bigtab2 <- cast(data = big, study + subject + cycle + day ~ type,
                value = obs)
head(bigtab2)
study subject cycle day ALB ALP ALT AST
1 1 1 1 1 66
Hi,
I am looking for a way to add labels, i.e. absolute values, into a
stacked bar chart using the basic plot functions of R. The labels should
be inside the stacked bars.
For example,
### I have this dataset
height = cbind(x = c(465, 91) / 465 * 100,
y = c(840, 200)
On 02/09/2010 6:46 AM, raje...@cse.iitm.ac.in wrote:
It is dependent on another dll but it did not give compilation errors. It seemed to link fine at that point. Why does it have a problem at this stage?
Windows needs to be able to find the other DLL at load time. It will
find it if it's in
Dear r-help,
I am using CAH (hierarchical clustering). I would like to cut my dendrogram.
What is the command in R that allows drawing a graph of the semi-partial
R-squared?
Best Regards
It is dependent on another dll but it did not give compilation errors. It
seemed to link fine at that point. Why does it have a problem at this stage?
From: Duncan Murdoch murdoch.dun...@gmail.com
To: raje...@cse.iitm.ac.in
Cc: r-help r-help@r-project.org
Sent: Thursday, September 2, 2010
Hi,
I'm trying to run an anderson-darling test for normality on a given variable
'Y':
ad.test(Y)
I think I need the 'nortest' package, but since it does not appear in any of
the Ubuntu repositories for 2.10.1, I am wondering if it goes by the name of
something else now?
Thanks
On 09/02/2010 08:50 PM, Jens Oldeland wrote:
...
I am looking for a way to add labels, i.e. absolute values, into a
stacked bar chart using the basic plot functions of R. The labels
should be inside the stacked bars.
barpos <- barplot(height, beside = FALSE,
                  horiz = TRUE, col = c(2, 3))
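A sketch of placing labels at the segment midpoints (the two-segment data here are made up; with horiz = TRUE the stacking direction is x):

```r
height <- cbind(x = c(65, 35), y = c(80, 20))  # hypothetical two-segment bars
barpos <- barplot(height, beside = FALSE, horiz = TRUE, col = c(2, 3))
# Midpoint of each segment along the stacking (x) direction
mids <- apply(height, 2, cumsum) - height / 2
text(x = mids, y = rep(barpos, each = nrow(height)), labels = height)
```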
Dear R-community,
I'm analysing some noise using the nlme-package. I'm writing in order
to get my usage of lme verified.
In practice, a number of samples have been processed by a machine
measuring the same signal at four different channels. I want to model
the noise. I have taken the noise (the
Hi:
Thanks, David and Thierry. I knew what I did was inefficient, but I'm not
very adept with cast() yet. Thanks for the lesson!
The time is less than half without the fun = mean statement, too. On my
system, the timing of David's call was 2.06 s elapsed; with Thierry's, it
was 4.88 s. Both big
Hi:
One option to search for functions in R is to download the sos package:
library(sos)
findFn('Anderson-Darling')
On my system (Win 7), it found 51 matches. The two likeliest packages are
nortest and ADGofTest. To answer your question, nortest still exists on
CRAN. I can't comment on Ubuntu,
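For completeness, installing nortest directly from CRAN sidesteps the Ubuntu repositories entirely:

```r
install.packages("nortest")  # fetched from CRAN, independent of apt
library(nortest)
ad.test(rnorm(100))          # Anderson-Darling test for normality
```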
Dear Mikkel,
You need to do some reading on terminology.
In your model the fixed effects are channels 1, 2 and 3; samplenumber is
a random effect, and the residual error is the error term.
The model you described has the notation below. You do not need to
create the grouped data structure.
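Since the notation itself was elided above, here is one hedged lme() sketch of a model of that shape, with hypothetical data (noise measured on four channels for each sample):

```r
library(nlme)
dat <- data.frame(noise        = rnorm(40),
                  channel      = factor(rep(1:4, 10)),
                  samplenumber = factor(rep(1:10, each = 4)))
# channel as fixed effect, a random intercept per sample
fit <- lme(noise ~ channel, random = ~ 1 | samplenumber, data = dat)
summary(fit)
```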
Hello all
I wonder what I can use to cluster vectors composed of several
factors.
Let's say around 30 different factors compose a vector; if a factor is
present it is encoded as 1, if not present it is encoded as 0.
I was thinking of using hierarchical clustering, as I
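For what it's worth, base R already handles this via a binary (Jaccard-type) distance; a sketch with simulated 0/1 vectors:

```r
set.seed(1)
m <- matrix(rbinom(300, 1, 0.3), nrow = 10)  # 10 vectors x 30 binary factors
d <- dist(m, method = "binary")              # Jaccard-type distance on 0/1 data
hc <- hclust(d, method = "average")
plot(hc)
```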
The absence of stepwise methods works to your advantage, as these
yield invalid statistical inference and inflated regression
coefficients.
Frank E Harrell Jr Professor and ChairmanSchool of Medicine
Department of Biostatistics Vanderbilt University
On Thu,
Dear Thierry,
Thanks for the quick answer. I'm moving this to r-sig-mixed-models
(but also posting on r-help to notify).
I reserved Mixed-effects models in S and S-PLUS by Pinheiro and
Bates, New York : Springer, 2000. Do you know any other good
references?
Cheers, Mikkel.
2010/9/2 ONKELINX,
Thanks a lot for the elaborations.
Your explanations made clear to me that either
binom.test(1, 1, 0.5, "two.sided") or binom.test(0, 1, 0.5) giving a
p-value of 1 simply indicates I have absolutely no assurance to reject H0.
Considering binom.test(0, 1, 0.5, alternative = "greater") and
I think I found a simple solution, although it requires some tinkering to
find the x,y coordinates of the plot region to place the text...
barplot( )
text(x = 2.9, y = 0.43, srt = 90, labels = "H", cex = 1.5, col = "blue")
# srt rotates the "H" character, so that it resembles an error bar
text(x = 3.5,
Dear all,
mydata <- data.frame(x1 = c(1,4,6), x2 = c(3,1,2), x3 = c(2,1,3))
cor(mydata)
           x1         x2        x3
x1  1.0000000 -0.5960396 0.3973597
x2 -0.5960396  1.0000000 0.5000000
x3  0.3973597  0.5000000 1.0000000
I wonder if it is possible to fill only the lower triangle of this
correlation
I was just testing out ks.test()
y <- runif(1, min=0, max=1)
ks.test(y, runif, min=0, max=1, alternative="greater")
One-sample Kolmogorov-Smirnov test
data: y
D^+ = 0.9761, p-value < 2.2e-16
alternative hypothesis: the CDF of x lies above the null hypothesis
It seems that every time
Like this?
mydata <- data.frame(x1 = c(1,4,6), x2 = c(3,1,2), x3 = c(2,1,3))
as.dist(cor(mydata))
           x1        x2
x2 -0.5960396
x3  0.3973597 0.5000000
Sarah
On Thu, Sep 2, 2010 at 9:51 AM, Olga Lyashevska o...@herenstraat.nl wrote:
Dear all,
Hi,
Are you sure you don't want to do ks.test(y, punif, min=0, max=1,
alternative="greater") instead of what you tried?
Alain
On 02-Sep-10 15:52, Samsiddhi Bhattacharjee wrote:
ks.test(y, runif, min=0, max=1, alternative="greater")
--
Alain Guillet
Statistician and Computer Scientist
try lower.tri
and see
??lower.tri
Steve Friedman Ph. D.
Spatial Statistical Analyst
Everglades and Dry Tortugas National Park
950 N Krome Ave (3rd Floor)
Homestead, Florida 33034
steve_fried...@nps.gov
Office (305) 224 - 4282
Fax (305) 224 - 4147
Oops, sorry... really careless. Thanks!
On Thu, Sep 2, 2010 at 10:03 AM, Alain Guillet
alain.guil...@uclouvain.be wrote:
Hi,
Are you sure you don't want to do ks.test(y, punif, min=0, max=1,
alternative="greater") instead of what you tried?
Alain
On 02-Sep-10 15:52, Samsiddhi
if I try as.dist I get the following error:
On Thu, 2010-09-02 at 09:57 -0400, Sarah Goslee wrote:
mydata <- data.frame(x1 = c(1,4,6), x2 = c(3,1,2), x3 = c(2,1,3))
as.dist(cor(mydata))
           x1        x2
x2 -0.5960396
x3  0.3973597 0.5000000
print(xtable(as.dist(cor(mydata)), digits = 3))
...I'd like to add that I actually wanted to test the location of differences
of paired samples coming from a non-normal asymmetric distribution. The
alternative hypothesis was that negative differences occur more often than in
0.5 of all cases. Thus I tested
Hello all,
I've two strings representing the start and end values of a date and
time.
For example,
time1 <- c("21/04/2005","23/05/2005","11/04/2005")
time2 <- c("15/07/2009","03/06/2008","15/10/2005")
as.difftime(time1, time2)
Time differences in secs
[1] NA NA NA
attr(,"tzone")
[1]
How can I calculate the
Hi,
Have you gotten help on this?
Lexi
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of ma...@unizar.es
Sent: Saturday, August 28, 2010 5:08 PM
To: r-help@r-project.org
Subject: [R] star models
Hi,
I am trying to implement a STAR
sorry to bump in late, but I am doing similar things now and was browsing.
IMHO anova is not appropriate here. It applies when the richer model has p
more variables than the simpler model. That is not the case here; the
competing models use different variables.
You are left with IC.
by
Hi Dunia,
You need to convert the character strings to Dates.
time1 <- as.Date(c("21/04/2005","23/05/2005","11/04/2005"), "%d/%m/%Y")
time2 <- as.Date(c("15/07/2009","03/06/2008","15/10/2005"), "%d/%m/%Y")
time2 - time1
Best,
Ista
On Thu, Sep 2, 2010 at 10:32 AM, Dunia Scheid dunia.sch...@gmail.com wrote:
Good morning gentlemen!
How does one use a weighted model in nls2? The values from nls are plausible,
while those from nls2 are not. I believe this discrepancy is because I did
not include the weights argument in nls2.
Here's an example:
MOISTURE <- c(28.41640, 28.47340, 29.05821, 28.52201,
Dear all,
I'm trying to use the optim function to replicate the results from the glm
using an example from the help page of glm, but I could not get the optim
function to work. Would you please point out where I went wrong? Thanks a lot.
The following is the code:
# Step 1: fit the glm
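Since the code itself was cut off, here is one hedged reconstruction of the exercise, using the Poisson example from ?glm: maximize the log-likelihood with optim() and compare against glm(). The coefficients should agree closely.

```r
# Data from the example in ?glm
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
X <- model.matrix(~ outcome + treatment)

# Negative Poisson log-likelihood (dropping the constant log(y!) term)
negll <- function(beta) {
  eta <- X %*% beta
  -sum(counts * eta - exp(eta))
}

fit.optim <- optim(rep(0, ncol(X)), negll, method = "BFGS")
fit.glm   <- glm(counts ~ outcome + treatment, family = poisson())
cbind(optim = fit.optim$par, glm = coef(fit.glm))
```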
Hi,
Sorry for a basic question on linear models.
I am trying to adjust raw data for both fixed and mixed effects. The data that
I
output should account for these effects, so that I can use the adjusted data
for
further analysis.
For example, if I have the blood sugar levels for 30
On Thu, 2 Sep 2010, Zhang,Yanwei wrote:
Dear all,
I'm trying to use the optim function to replicate the results from the glm using an example
from the help page of glm, but I could not get the optim function to work. Would you please
point out where I went wrong? Thanks a lot.
The following
Thomas,
Thanks a lot. This solves my problem.
Wayne (Yanwei) Zhang
Statistical Research
CNA
-Original Message-
From: Thomas Lumley [mailto:tlum...@u.washington.edu]
Sent: Thursday, September 02, 2010 11:24 AM
To: Zhang,Yanwei
Cc: r-help@r-project.org
Subject: Re: [R] Help on glm
Why should height be a random effect?
I suspect you may need a tutorial on what a random effect in a mixed
model is. I see no obvious reason why one would cluster on height.
Perhaps if you clarify, it may become obvious either what your
concerns are (and that your model is correct) or that you
Hi Bert,
Thanks for the reply.
Height, was just as an example. Perhaps, I should have said 'x' instead of
height.
Essentially, what I want to do is adjust the data for known effects. After I've
done this, I can conduct further analysis on the data (for example, if another
variable 'z' has an
Sorry, forgot to mention that the processed data will be used as input for a
classification algorithm. So, I need to adjust for known effects before I can
use the data.
thanks
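One common (if debatable, per the replies in this thread) sketch of "adjusting for known effects": regress each variable on the confounders and keep the residuals. All names and data here are hypothetical:

```r
set.seed(1)
dat <- data.frame(expr  = rnorm(30),
                  age   = runif(30, 20, 70),
                  batch = factor(rep(1:3, 10)))
# Residuals = data with the linear age and batch effects removed
dat$expr_adj <- residuals(lm(expr ~ age + batch, data = dat))
```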
From: Bert Gunter gunter.ber...@gene.com
Cc: r-help@r-project.org
Sent: Thu,
Just to add to Ted's addition to my response. I think you are moving towards
better understanding (and your misunderstandings are common), but to further
clarify:
First, make sure that you understand that the probability of A given B, p(A|B),
is different from the probability of B given A,
James Nead james_nead at yahoo.com writes:
Sorry, forgot to mention that the processed data will be used as input for a
classification algorithm. So, I need to adjust for known effects before I can
use the data.
I am trying to adjust raw data for both fixed and mixed effects.
The
On Sep 2, 2010, at 2:01 PM, Greg Snow wrote:
snipped much good material
The real tricky bit about hypothesis testing is that we compute a
single p-value, a single observation from a distribution, and based
on that try to decide if the process that produced that observation
is a uniform
My apologies - I have made this more confusing than it needs to be.
I have microarray gene expression data which I want to use for classification
algorithms. However, I want to 'adjust' the data for all confounding factors
(such as age, experiment number, etc.) before I can use the data as
On 10-09-02 02:26 PM, James Nead wrote:
My apologies - I have made this more confusing than it needs to be.
I had microarray gene expression data which I want to use for
classification algorithms. However, I want to 'adjust' the data for
all confounding factors (such as age, experiment number
David,
The original poster was not looking at distributions and testing distributions,
I referred to the distribution of the p-value to help them understand (in
reference to the paper mentioned).
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
Perhaps even more to the point, covariate adjustment and
classification should not be separate. One should fit the
appropriate model that does both.
-- Bert
On Thu, Sep 2, 2010 at 11:34 AM, Ben Bolker bbol...@gmail.com wrote:
On 10-09-02 02:26 PM, James Nead wrote:
My apologies - I have made
Tal wrote:
A friend recently brought to my attention that vector assignment actually
recreates the entire vector on which the assignment is performed.
...
I brought this up in r-devel a few months ago. You can read my posting,
and the various replies, at
Dear all,
I am relatively new to R and have had some difficulty in understanding the
user manual for a package that I have downloaded to evaluate non-linear
simultaneous equations.
The package is called systemfit.
Does anyone have any experience of this package?
What I am trying to do is
benhartley903 wrote:
Dear all,
I am relatively new to R and have had some difficulty in understanding the
user manual for a package that I have downloaded to evaluate non-linear
simultaneous equations.
The package is called systemfit.
Does anyone have any experience of this package?
On 09/02/2010 09:25 PM, benhartley903 wrote:
Dear all,
I am relatively new to R and have had some difficulty in understanding the
user manual for a package that I have downloaded to evaluate non-linear
simultaneous equations.
The package is called systemfit.
Does anyone have any
On Thu, 2 Sep 2010, stephenb wrote:
sorry to bump in late, but I am doing similar things now and was browsing.
IMHO anova is not appropriate here. It applies when the richer model has p
more variables than the simpler model. That is not the case here; the
competing models use different
I am aware this is fairly simple, but it is currently driving me mad! Could
someone help me out with conducting a post-hoc power analysis on a Wilcoxon
test? I am being driven slightly mad by some conflicting advice!
Thanks in advance,
Lewis
Thanks Peter
Actually I should have specified: these are not the functions I ultimately
want to solve; those can't be rearranged explicitly and substituted, so I
do need a way to solve simultaneously.
Ben
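Beyond systemfit, a general nonlinear system can be solved simultaneously with a root finder; a sketch using the nleqslv package (the package choice and both equations are assumptions, not Ben's actual system):

```r
library(nleqslv)
f <- function(p) {
  x <- p[1]; y <- p[2]
  c(x^2 + y^2 - 1,   # hypothetical equation 1
    x - y^3)         # hypothetical equation 2
}
nleqslv(c(0.5, 0.5), f)$x  # simultaneous root of both equations
```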
Hi listers,
I could order data like this:
x <- c(2,6,8,8,1)
y <- c(1,6,3,5,4)
o <- order(x)
frame <- rbind(x,y)[,o]
But I would like to know if there is a way to order my data without setting
up a data frame. I would like to keep independent vectors x and y.
Any suggestions?
Thanks in advance,
Marcio
Hi Marcio,
Is this what you want?
x <- c(2,6,8,8,1)
y <- c(1,6,3,5,4)
o <- order(x)
# If you want each vector ordered by x
x[o]
y[o]
You can also use sort(), but then each vector would be sorted by
itself, not both by x.
HTH,
Josh
On Thu, Sep 2, 2010 at 1:48 PM, Mestat mes...@pop.com.br wrote:
On Thu, Sep 2, 2010 at 11:09 AM, Ivan Allaman ivanala...@yahoo.com.br wrote:
Good morning gentlemen!
How does one use a weighted model in nls2? The values from nls are plausible,
while those from nls2 are not. I believe this discrepancy is because I did
not include the weights argument in nls2.
Totally agreed.
I made a mistake in calling the categorization a GAM. If we apply a step
function to the continuous age we get a limited-range ordinal variable.
Categorizing is creating several binary variables from the continuous one
(with treatment contrasts).
Stephen B
-Original
Be happy, don't do post-hoc power analyses.
The standard post-hoc power analysis is actually counterproductive. It is
much better to just create confidence intervals. Or give a better
description/justification showing that your case is not the standard/worse than
useless version.
--
Gregory
Suggestion: use the power of R.
If x and y are independent then sorting y based on x is meaningless.
If sorting y based on x is meaningful, then they are not independent.
Trying to force non-independent things to pretend that they are independent
just causes future headaches.
Part of the
On 02-Sep-10 18:01:55, Greg Snow wrote:
Just to add to Ted's addition to my response. I think you are moving
towards better understanding (and your misunderstandings are common),
but to further clarify:
[Wise words about P(A|B), P(B|A), P-values, etc., snipped]
The real tricky bit about
Hi,
I'm doing a generalized linear mixed model, and I currently use an R function
called glmm. However, this function uses a standard normal distribution for
the random effect, which doesn't fit my case; i.e. my random effect also
follows a normal distribution, but observations
These data are kilojoules of energy consumed by starving fish over a
time period (Days). The KJ reach a lower asymptote and level off, and I
would like to use a non-linear fit to show this leveling off. The data are
noisy and the sample sizes not the largest. I have tried self-starting
Qiu, Weiyu weiyu at ualberta.ca writes:
Hi,
I'm doing a generalized linear mixed model, and I currently use an
R function called glmm. However, in
this function they use a standard normal distribution for the random
effect, which doesn't satisfy my
case, i.e. my random effect also
Is there a complete list of these very handy and powerful functions in
base R?
When pairs draws plots, lower.panel invokes f.xy. Maybe there is
something in f.xy incompatible with pairs. You can read the code of
pairs to see what happens.
pairs has two methods, as you can see in the help message (?pairs).
According to your code, pairs is supposed to invoke Default S3
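A sketch of the usual way to get extra arguments into a panel function: accept them through '...' or fix them in a closure (f.xy and the chosen arguments here are illustrative, not the poster's actual function):

```r
# A panel function that forwards graphical arguments via '...'
f.xy <- function(x, y, ...) points(x, y, ...)
# Fix extra arguments by wrapping the panel in a closure
pairs(iris[1:4],
      lower.panel = function(x, y, ...) f.xy(x, y, col = "blue", ...))
```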
William,
Thanks. I adapted your example by doing:
library(psych)
pdf(file = "myfile.pdf", width = 30, height = 30)
pairs.panels(data, gap = 0)
dev.off()
The R part worked. I could see it doing so when I replaced the call to
'pdf' by
windows(width=30,height=30)
. The remaining problem was that
Greg, thanks for the suggestion. That's useful to know for future work.
It's not so good in this case, because I'm making the plots for a
colleague who doesn't know R, and it would be a bother for me to have to
send him several files and him reassemble them. What I did was to use
pairs.panels,
On 2010-08-31 13:49, Greg Snow wrote:
Look at the pairs2 function in the TeachingDemos package, this lets you produce
smaller portions of the total scatterplot matrix at a time (with bigger plots),
you could print the smaller portions then assemble the full matrix on a large
wall, or just use
Hello group
I'm trying to plot in 3d with a scatterplot package, and I got an error
saying length(color) must be equal to length(x) or 1. My data have
dimensions (lon, lat, lev, time), with time in months. I want to
calculate the monthly mean over time. How can I do that? Is there any
function
Agree with Greg's point. In fact it does not make logical sense in many
cases. Similar to the use of the statistically unreliable reliability
measure Cronbach's alpha in some non-statistical fields.
Hi,
I'm doing a generalized linear mixed model, and I currently use an R function
called glmm. However, this function uses a standard normal distribution for
the random effect, which doesn't fit my case; i.e. my random effect also
follows a normal distribution, but observations
try to use difftime() instead of as.difftime().
On Thu, Sep 2, 2010 at 10:32 PM, Dunia Scheid dunia.sch...@gmail.com wrote:
Hello all,
I've 2 strings that representing the start and end values of a date and
time.
For example,
time1 <- c("21/04/2005","23/05/2005","11/04/2005")
time2 <-
On Thu, Sep 2, 2010 at 7:39 PM, Marlin Keith Cox marlink...@gmail.com wrote:
These data are kilojoules of energy consumed by starving fish over a
time period (Days). The KJ reach a lower asymptote and level off, and I
would like to use a non-linear fit to show this leveling off. The
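A hedged sketch of the self-starting asymptotic fit on simulated "starving fish" style data:

```r
set.seed(1)
Days <- rep(c(0, 5, 10, 20, 40, 60), each = 3)
KJ   <- 5 + 10 * exp(-0.1 * Days) + rnorm(length(Days), sd = 0.5)
# SSasymp model: Asym + (R0 - Asym) * exp(-exp(lrc) * Days)
fit <- nls(KJ ~ SSasymp(Days, Asym, R0, lrc))
coef(fit)              # Asym estimates the lower leveling-off value
plot(Days, KJ)
lines(Days, fitted(fit))
```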
On 2010-09-02 22:16, Jocelyn Paine wrote:
Greg, thanks for the suggestion. That's useful to know for future work.
It's not so good in this case, because I'm making the plots for a
colleague who doesn't know R, and it would be a bother for me to have to
send him several files and him reassemble
On 2010-09-02 22:32, Gabor Grothendieck wrote:
On Thu, Sep 2, 2010 at 7:39 PM, Marlin Keith Coxmarlink...@gmail.com wrote:
This data are kilojoules of energy that are consumed in starving fish over a
time period (Days). The KJ reach a lower asymptote and level off and I
would like to use a
Hi,
I'd like to be able to run a .exe in the background whenever library(package-x)
is called. Is this possible?
~Aks
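Yes: a package can launch a program from its load hook. A sketch (the executable name and its location under inst/bin are hypothetical):

```r
# In the package's R source; runs when library(packagex) loads the namespace
.onLoad <- function(libname, pkgname) {
  exe <- system.file("bin", "mytool.exe", package = pkgname)
  if (nzchar(exe))
    system(shQuote(exe), wait = FALSE)  # launch in background, don't block R
}
```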