Hi,
one comment: Claeskens and Hjort define AIC as 2*log L - 2*p for a model
with likelihood L and p parameters; consequently, they look for models
with *maximum* AIC in model selection and averaging. This differs from
the vast majority of authors (and R), who define AIC as -2*log L + 2*p
and
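For reference, R's sign convention can be checked directly on any fitted model with a logLik() method (a toy lm() fit is used here purely for illustration):

```r
# R's AIC() is -2*logLik + 2*df, so *smaller* is better
fit <- lm(dist ~ speed, data = cars)
ll <- logLik(fit)
aic_manual <- -2 * as.numeric(ll) + 2 * attr(ll, "df")
stopifnot(isTRUE(all.equal(aic_manual, AIC(fit))))
```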
Mike,
I am slightly unclear on what you want to do. Do you want to check rows
1 and 7 or 1 *to* 7? Should c1 be at least 100 for *any one* or *all*
rows you are looking at, and same for c2?
You can sort your data like this:
data <- data[order(data$ds),]
Type ?order for the help page.
Michael
Stephan Kolassa 07/17/10 4:50 PM
Hi,
simulating would still require you to operationalize the lack of
normality. Are the tails too heavy? Is the distribution skewed? Does it
have multiple peaks? I suspect that the specific choices you would make
here would *strongly* influence the result.
My condolences on the client you
Dear all,
I am stumped at what should be a painfully easy task: predicting from an lm
object. A toy example would be this:
XX <- matrix(runif(8), ncol=2)
yy <- runif(4)
model <- lm(yy ~ XX)
XX.pred <- data.frame(matrix(runif(6), ncol=2))
colnames(XX.pred) <- c("XX1", "XX2")
predict(model, newdata=XX.pred)
I
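One common resolution of this kind of problem (a sketch, not necessarily what was needed here): fit via a data.frame so that predict() can match the newdata columns by name rather than relying on a matrix term in the formula.

```r
# Columns of data.frame(matrix(...)) are auto-named X1, X2, ...
set.seed(1)
dat <- data.frame(matrix(runif(8), ncol = 2))
dat$yy <- runif(4)
model <- lm(yy ~ X1 + X2, data = dat)
newdat <- data.frame(matrix(runif(6), ncol = 2))  # also gets X1, X2
preds <- predict(model, newdata = newdat)
stopifnot(length(preds) == 3)
```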
Hi,
basically, you know 5 periods later. If you use a good error measure,
that is.
I am a big believer in AIC for model selection. I believe that arima()
also gives you the AIC of a fitted model, or try AIC(arima1).
Other ideas include keeping a holdout sample or some such.
I'd recommend
Hi Alex,
I'm slightly unclear as to why you would want to restructure your nice
six-column data.frame (why six? One column for the data and four for the
factors should make five, shouldn't it? I guess you have a subject ID in
one column?) into some monstrosity which I assume you would fill
Hi Alex,
help.search("uniform")
HTH,
Stephan
On 08.09.2010 15:36, Alaios wrote:
Hello,
I would like to uniformly distribute values from 0 to 200. Can someone help me
find the appropriate uniform distribution generator?
I would like to thank you in advance for your help.
Best Regards
Alex
Try this:
boxplot.stats(x[x[,2] > 0, 2], do.conf=FALSE)
HTH,
Stephan
e-letter wrote:
Readers,
I have a data set as follows:
1,1
2,2
3,3
4,4
5,3
6,2
7,-10
8,-9
9,-3
10,2
11,3
12,4
13,5
14,4
15,3
16,2
17,1
I entered this data set using the command 'read.csv'. I want to
exclude values fewer than
Hi Luna,
you may want to look at the IIF website, http://www.forecasters.org
They have a mailing list for forecasters - you may get more of a
response there than on a dedicated R list.
HTH,
Stephan
Luna Moon wrote:
Hi all,
Could anybody please shed some lights on me about good
Or:
weekdays(as.Date("2010-05-24"))
HTH,
Stephan
Wu Gong wrote:
?strptime will help.
d <- as.Date("01/05/2007", "%m/%d/%Y")
format(d, "%A, %b %d, %Y")
[1] "Friday, Jan 05, 2007"
-
A R learner.
__
R-help@r-project.org mailing list
Hi Bob,
Muenchen, Robert A (Bob) wrote:
Does anyone have a program that graphs the growth of R packages? I don't
know if that historical data is around.
John Fox had a slide on this in his useR 2008 talk "The Social
Organization of the R Project" (page 7), with package counts up to March
Hi,
from what I understand, you may be interested in text mining, so perhaps
you want to look at the tm package.
Then again, depending on what you are really trying to do, you may be
better served with perl, awk and similar tools than with R...
HTH,
Stephan
Schwan wrote:
Dear all,
i
?predict
HTH
Stephan
DispersionMap wrote:
I have some data that i ran a regression on and got the usual r output, with
the a intercept and b coefficient for my independent variable.
i want to forecast the number of future events using these parameters.
What function / packages in R would
Hi Ray,
First possibility: just select those combinations that contain AL:
combos.with.AL <- possible.combos[rowSums(possible.combos == "AL") > 0, ]
Second possibility: create all 3-combos *without* AL:
bands.without.AL <- c("B", "DB", "DG", "G", "K", "LB", "LG", "MG", "O",
"P", "PI", "PK", "PU", "R", "V", "W", "Y")
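The second possibility might be completed along these lines (a sketch; the band list is abbreviated here for illustration):

```r
# All 3-combos drawn from bands that exclude "AL"
bands.without.AL <- c("B", "DB", "DG", "G")   # abbreviated for illustration
combos <- t(combn(bands.without.AL, 3))       # one 3-combo per row
stopifnot(nrow(combos) == choose(4, 3))
```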
Hi Jonathan,
grep() returns a vector giving either the indices of the elements of
'x' that yielded a match or, if 'value' is 'TRUE', the matched elements
of 'x' (quoting from the help page, see ?grep).
So you probably want to test whether this vector is empty or not - in
other words,
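For instance:

```r
x <- c("apple", "banana", "cherry")
stopifnot(length(grep("an", x)) > 0)   # a match exists (in "banana")
stopifnot(length(grep("zz", x)) == 0)  # no match: grep() returns integer(0)
```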
Hi teo,
try lines() instead of points().
HTH
Stephan
teo wrote:
Hi:
Could you please guide me how to plot weekly.t2. Below are my values and I
am only able to plot weekly t1 but failing to add a second line
representing weekly t2.
Thanks.
weekly.t1
[1] 228.5204 326.1224 387.2449 415.4082
Dear guRus,
is there a version of boot() that deals with array data and array-valued
statistics? For example:
foo <- array(rnorm(120), dim=c(3,5,8))
means <- apply(foo, MARGIN=c(2,3), FUN=mean)
means contains the means over the first dimension of foo: a 5x8 array.
Now I would like to bootstrap
Hi Stephanie,
it sounds like R's exception handling may help, something like this:
foo <- try(eblest(i, dir5, sterr5, weight5, aux5))
if (class(foo) == "try-error") next
Take a look at ?try.
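A minimal, self-contained sketch of the pattern (using stop() in place of the poster's eblest() call):

```r
# try() catches the error and returns an object of class "try-error"
foo <- try(stop("simulated failure"), silent = TRUE)
stopifnot(inherits(foo, "try-error"))
```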
HTH,
Stephan
Stephanie Coffey wrote:
Hi all,
I'm running R version 2.9.2 on a PC.
I'm having a
Dear useRs,
I am trying to read a tab-delimited Unicode text file containing both
latin and cyrillic characters and failing miserably. The file looks like
this (I hope it comes across right):
A B C
3 foo ФОО
5 bar БАР
read.table("foo.txt", sep="\t", header=TRUE)
I
Hi Jean-Baptiste,
two points:
1) Your variable df is a *local* variable which you define in your
function myfunc(), so it is not known outside myfunc(). When you ask
is.data.frame(df), R looks at the global definition of df - which is the
density function of the F distribution. To make your
Hi,
you can permute array dimensions using aperm():
x <- 1:24
z <- array(x, dim=c(6,2,2))
y <- aperm(z, perm=c(3,2,1))
y[1,1,]
HTH,
Stephan
Kohleth Chia wrote:
Dear all,
When I coerce a vector into a multi dimensional array, I would like R to start
filling the array along the last
Hi,
David Winsemius wrote:
snip
This would imply that ozon is a list or dataframe.
snip
And you tried to give the whole list to a function that only wants a
vector.
And whenever you suspect that your data types clash, try str() to find
out just what kind of thing your data is. Here:
Hi Kristina,
Thierry's solution is certainly the correct one in terms of keeping
within R's philosophy... but I personally find a series of conditional
assignments easier to understand - see below for an example.
HTH,
Stephan
#
# Example
Hi Helga,
did you load the boot library, which contains glm.diag(), by calling
library(boot)?
HTH
Stephan
Helga margrete holmestad wrote:
I have made a poisson regression by using glm.
Then I want to check if I have over-dispersion in my model. I tried to use
glm.diag(fit.1), but then R
Hi Jon,
does the empirical cumulative distribution function do what you want?
dat$q.score <- ecdf(dat$score)(dat$score)
?ecdf
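A toy check:

```r
dat <- data.frame(score = c(10, 20, 30, 40))
dat$q.score <- ecdf(dat$score)(dat$score)  # empirical quantile of each score
stopifnot(all(dat$q.score == c(0.25, 0.5, 0.75, 1)))
```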
HTH
Stephan
Jonathan Beard wrote:
Hello all,
Thanks in advance for your attention.
I would like to generate a third value that represents the quantile
value of a
Have you set the correct working directory?
?setwd
?getwd
HTH
Stephan
Robert Tsutakawa wrote:
I am trying to read a source program into a mac pro laptop, which uses
Snow Leopard. R is unable to find the file containing my source
program. I'm using the function source( file name). I
Hi Phani,
to get the best Holt's model, I would simply wrap a suitable function
calling ets() within optim() and optimize for alpha and beta - the
values given by ets() without constraints would probably be good
starting values, but you had better start the optimization with a
variety of
this.
And I did find a way around could this allow me to set MAPE as a criteria?
Phani
On Tue, Jun 29, 2010 at 12:47 AM, Stephan Kolassa stephan.kola...@gmx.dewrote:
Hi Phani,
to get the best Holt's model, I would simply wrap a suitable function
calling ets() within optim() and optimize for alpha
Hi Elaine,
in general, stepwise selection is a very bad idea:
Whittingham, M. J.; Stephens, P. A.; Bradbury, R. B. & Freckleton, R. P.
Why do we still use stepwise modelling in ecology and behaviour? Journal
of Animal Ecology, 2006, 75, 1182-1189
HTH
Stephan
elaine kuo wrote:
Dear list,
Hi,
one possibility would be to calculate the convex hull using chull(). I
believe that the hull points are returned by chull() in a clockwise
order (?), so the points between the rightmost and the leftmost point in
the chull() result are the lower half of the convex hull. Remove these
the
percentage of points outside it?
Thanks!
Asha
Stephan Kolassa wrote:
Hi,
one possibility would be to calculate the convex hull using chull(). I
believe that the hull points are returned by chull() in a clockwise
order (?), so the points between the rightmost and the leftmost point
Hi,
your problem is called string matching. Search for that term on
rseek.org, there are a couple of functions and packages. And Wikipedia
can tell you everything you ever wanted to know about string matching
(and more).
HTH,
Stephan
Cable, Samuel B Civ USAF AFMC AFRL/RVBXI wrote:
Hi,
I recommend that you look at the following help pages and experiment a
little (maybe create a toy directory with only three or four files with
a few lines each):
?files
?dir
?grep
?strsplit
Good luck!
Stephan
jd6688 wrote:
Here are what i am going to accomplish:
I have 400 files
Hi Trafim,
take a look at FAQ 7.31.
HTH
Stephan
Trafim Vanishek wrote:
Dear all,
Does anybody know the probable reason why == gives FALSE when it should give
true?
These two variables are of the same type, and everything works in the cycle
but then it stops when they are equal.
this is
Hi,
does this do what you want?
d <- cbind(d, apply(d[,c(2,3,4)], 1, mean), apply(d[,c(2,3,4)], 1, sd))
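On a toy data.frame this gives:

```r
d <- data.frame(name = c("a", "b"), v1 = c(1, 2), v2 = c(3, 4), v3 = c(5, 6))
# row-wise mean and sd over the value columns
d <- cbind(d, mean = apply(d[, c(2, 3, 4)], 1, mean),
              sd   = apply(d[, c(2, 3, 4)], 1, sd))
stopifnot(all(d$mean == c(3, 4)), all(d$sd == c(2, 2)))
```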
HTH,
Stephan
Abhishek Pratap wrote:
Hi All
I have a data frame in which there are 4 columns .
Column 1 : name
Column 2-4 : values
I would like to calculate mean/Standard error of values
Hi Aaron,
try the argument statistic=mean. Then boot() will give you the mean
turn angle in your actual data (which appears to be 6 degrees, judging
from what you write), as well as the means of the bootstrapped data.
Then you can get (nonparametric) bootstrap CIs by
Hi,
it looks like when you read in your data.frames, you didn't tell R to
expect dates, so it treats the Date columns as factors. Judicious use of
something along these lines before doing your comparisons may help:
arr$Date <- as.Date(as.character(arr$Date), format="something")
Then again, it
Hi David,
str(g) gives you a ton of output, and the @fit slot has a $ics
component, part of which has the promising name of AIC...
(g@fit)$ics[1]
HTH,
Stephan
David Rubins wrote:
Hi,
Is there anyway to extract the AIC and BIC from the summary statistics
in fGarch package?
g -
Hi Karthik,
I think you will need to do something like
jpeg("histograms.jpg")
hist(rnorm(100))
dev.off()
HTH
Stephan
Karthik wrote:
Hello Tal,
This is the code.
hist(rnorm(100))
jpeg("histogram.jpeg")
---
Even when I decrease the quality, I
Hi Alex,
I personally have had more success with the (more complicated)
collinearity diagnostics proposed by Belsley, Kuh & Welsch in their book
Regression Diagnostics than with Variance Inflation Factors. See also:
Belsley, D. A. A Guide to using the collinearity diagnostics.
Computational
Hi Tim,
Variance proportions (and condition indices) are exactly the tools
described in Belsley, Kuh & Welsch, Regression Diagnostics - see my
previous post. Good to see I'm not the only one to use them! BKW also
describe in detail how to calculate all this using SVD, so you don't
need to use
Hi,
I like RSeek:
http://www.rseek.org/
And of course searching the R-help list archives, e.g., via
http://www.nabble.com/R-f13819.html
Good hunting!
Stephan
Carl Witthoft wrote:
One thing I'll say: it's going to be much easier to Google for
references to Rstat than to R .
I've been
Dear guRus,
I would like to loop over a medium amount of Sweave code, including both R and
LaTeX chunks. Is there any way to do so? As an illustration, can I create a
.tex file like this using a loop within a .Rnw file, where the 1,2,3 comes
from some iteration variable in R?
internet.use[internet.use=="Never" | internet.use=="Don't know"] <- 0
internet.use[internet.use != 0] <- 1
HTH,
Stephan
Spencer wrote:
Hi All,
I'm relatively new to R. I have a variable, internet use, which ranges
from Almost everyday, Several times a week, Several times a month,
Seldom, Never,
Dear guRus,
I am trying to replace ~ by $\sim$ for TeX. However, I can't get the
backslash to work. I would like to turn DV~IV into DV$\sim$IV.
sub("~", "$\sim$", "DV~IV")     gives "DV$sim$IV"
sub("~", "$\\sim$", "DV~IV")    gives "DV$sim$IV"
sub("~", "$\\\sim$", "DV~IV")   gives "DV$sim$IV"
sub("~", "$\\\\sim$", "DV~IV")  gives "DV$\\sim$IV"
Alternatives
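One resolution: the last variant is in fact correct; the doubled backslash in the printed result is only print() escaping, and cat() shows the string as TeX will see it.

```r
# Four backslashes: two collapse in the string literal, two in sub()'s
# replacement processing, leaving a single literal backslash.
out <- sub("~", "$\\\\sim$", "DV~IV")
stopifnot(out == "DV$\\sim$IV", nchar(out) == 10)
cat(out, "\n")  # shows: DV$\sim$IV
```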
On Mon, 14 Jul 2008, Stephan Kolassa wrote:
Dear guRus,
I am trying to replace ~ by $\sim$ for TeX. However, I can't get
the backslash to work. I would like to turn DV~IV into DV$\sim$IV.
sub(~,$\sim$,DV~IV) = DV$sim$IV
sub(~,$\\sim$,DV~IV) = DV$sim$IV
sub(~,$\\\sim
Kevin,
By default, many functions only *return* a result, they don't explicitly
*print* it. There is no difference in interactive mode, but there is in
batch mode (e.g., in loops). Use print() or cat() for explicit printing
to console.
for (i in 1:100) {
  cat(i, "\n")
}
HTH,
Stephan
?print
?cat
HTH
Stephan
[EMAIL PROTECTED] wrote:
Hi,
I know this must be a stupid question, and sorry in advance for being such a
noob. But, is there way to get R to display only certain variables when running
a script. I know if you want to see the value of a variable when using the
Hi Bertolt,
by(test,INDICES=test$groupID,FUN=mean)
And today's a holiday in Switzerland, so stop working already ;-)
HTH
Stephan
Bertolt Meyer wrote:
Dear R users,
I have a newbie-question that I couldn't resolve after reading through
several pieces of documentation and searching the
Hi Thomas,
I have looked through several R books and searched the web to find
answers to my questions with no results. I have an ensemble of time
series data (what are essentially Monte Carlo simulations) which I would
like to summarize as a time series of boxplots, by date/time at 6-hr
Hi Tobias,
If you want to do inferential statistics with groups differing
systematically on the covariate, you will need to be extra careful in
your interpretation. See, e.g., Miller, G. A. & Chapman, J. P.
Misunderstanding Analysis of Covariance, Journal of Abnormal Psychology,
2001, 110,
Dennis,
I assume that there is a set.seed() somewhere in your script, possibly
in something you source()d (hopefully not in anything library()d).
Have you tried successively removing/commenting parts of the script
before the sample() command until the problem goes away? That way you
should
Hi Elke,
the matrix you are trying to create has 5^21 = 476837158203125 rows and 21
columns. I'm afraid Thierry's proposal with n=21 will not fit into memory. And
the file you are writing is 5^21*21*8 bytes big, about 80108643 GB.
Perhaps you want to think a little more about what you are
Hi Ashish,
I am rather more concerned about whether what you outlined is legitimate
(your question 1 below). If you are looking at children, higher AGE will
be associated with higher TIV, so both variables would essentially
measure the same thing (see Miller & Chapman, Misunderstanding
Hi,
The CRAN Task View on Optimization may help:
http://stat.ethz.ch/CRAN/web/views/Optimization.html
HTH,
Stephan
barbara.r...@uniroma1.it wrote:
I have to solve one constrained minimization problem with equality
constraints, and another with both equality and inequality constraints.
What can I
Hi Vishal,
re 1]: Ben Bolker very kindly shared an R reimplementation of Kaplan's
Matlab code a little while ago:
http://www.nabble.com/Approximate-Entropy--to21144062.html#a21149402
Best wishes
Stephan
Vishal Belsare wrote:
Is there any existing implementation in R/S of :
1] Pincus
Hi Paul,
you do *not* want to do this, it takes too long and may lead to rounding
errors. Vectorize everything, e.g., use sum(meanrotation). And look into
?apply, and google for the R Inferno.
And no, there is no +=...
Good luck!
Stephan
pgseye wrote:
Hi,
I'm learning to write some
Hi Simeon,
?gsub
HTH,
Stephan
simeon duckworth wrote:
I am trying to simplify a text variable by matching and replacing it with a
string in another vector
so for example in
colours <- paste(letters, colours(), "stuff", LETTERS)
find and replace with (red,blue,green,gray,yellow,other) -
)
}
On Sat, Mar 28, 2009 at 9:45 AM, Stephan Kolassa stephan.kola...@gmx.dewrote:
Hi Simeon,
?gsub
HTH,
Stephan
simeon duckworth wrote:
I am trying to simplify a text variable by matching and replacing it with
a
string in another vector
so for example in
colours - paste(letters,colours
red xx xxx xx
xx xxx xxx xx blue xx xx xx xx x
x xx xx xx xx red
red xx xx xx xx xx
xx xx xx xx xx xx
xx x x x x
which i'd like to replace with
red
blue
red
other
other
thanks
On Sat, Mar 28, 2009 at 2:38 PM, Stephan Kolassa stephan.kola...@gmx.dewrote:
Hi Simeon,
I'm
Hi,
if you are looking for *natural* cubic splines (linear beyond the outer
knots), you could use rcs() in Frank Harrell's Design package.
HTH,
Stephan
David Winsemius wrote:
If one enters:
??spline
... You get quite a few matches. The one in the stats functions that
probably answers
Hi,
ets() in Hyndman's forecast package allows you to specify which one of
the many smoothing variants (additive/multiplicative season, damped
trend, additive/multiplicative errors) you want.
HTH,
Stephan
minben wrote:
I want to use double-exponential smoothing to forecast time series
Hi Alina,
your approach sounds problematic - you can always get a smaller RSS if
you add terms to your model, so your approach will always go for larger
models, and you will end up overfitting. Consider information criteria,
e.g., AIC or BIC, which penalize larger models. References for AIC
Hi David,
David Winsemius wrote:
The splinefun documentation indicates that natural is one of the types
of cubic spline options available.
That sounds good, didn't know that... rcs() has the advantage of coming
with a book (Harrell's Regression Modeling Strategies).
Does rcs actually do
Hi Perry,
my impression after a very cursory glance: this looks like noise.
Perhaps you should think a little more about your series - what kind of
seasonality there could be (is this weekly data? or monthly?), whether
the peaks and troughs could be due to some kind of external driver,
citation()
HTH,
Stephan
Tom Backer Johnsen wrote:
What is the correct citation to R in BibTeX format? I have looked in
the R pages but so far without any luck.
Tom
Hi,
Gavin Simpson wrote:
I bemoan the apparent inability of those asking such questions to use
the resources provided to solve these problems for themselves...
Looking at all the people who quite obviously do NOT read the posting
guide and provide commented, minimal, self-contained,
f[rowSums(f <= 1) > 0, colSums(f <= 1) > 0]
Judging from your result, you want less than or equal to 1.
HTH,
Stephan
Crosby, Jacy R wrote:
How can I delete both rows and columns that do not meet a particular cut off
value.
Example:
d - rbind(c(0,1,6,4),
+ c(2,5, 7,5),
+ c(3,
Hi Wacek,
Wacek Kusnierczyk wrote:
... or gain a bit on performance by doing the threshold comparison on
the whole matrix just once at once:
dd = d <= 1
d[apply(d, 1, any), apply(d, 2, any)]
d[apply(dd, 1, any), apply(dd, 2, any)]
Or not?
Cheers,
Stephan
Hi Marko,
this may be helpful:
http://www.ingentaconnect.com/content/bpl/rssb/2008/0070/0001/art5;jsessionid=an2la3spa0n5h.alexandra?format=print
Happy modeling!
Stephan
useR wrote:
Hi R helpers,
One rather statistical question?
What would be the best startegy to shortlist
Hi JD,
do you have the pdf open in some app, e.g., Acrobat Reader? If the file
is open, R can't write on it. My (German) errors in this case look like
yours:
Fehler in pdf(paste(pic.directory, /full_map.pdf, sep = )) :
unable to start device pdf
Zusätzlich: Warning message:
In
take all of the matrix *except* those indices, so XW4[,-2] is the
matrix XW4 with the 2nd column *deleted*.
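For example:

```r
XW4 <- matrix(1:12, ncol = 3)
# negative index: drop column 2, keep columns 1 and 3
stopifnot(identical(XW4[, -2], XW4[, c(1, 3)]))
```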
Cheers,
Stephan
Itziar Frades Alzueta wrote:
Hi,
Does anyone know what the negative indexing of a matrix mean?
I am using the RWeka and this evaluate classifier does not work
Hi,
fish.new <- fish[fish$GeoArea==1 & fish$Month==10, ]
HTH,
Stephan
pfc_ivan wrote:
I am a beginner using this R software and have a quick question.
I added a file into the R called fish.txt using this line.
fish <- read.table("fish.txt", head=T, fill=T)
The .txt file looks like this.
Hi Adam,
My (and, judging from previous traffic on R-help about power analyses,
also some other people's) preferred approach is to simply simulate an
effect size you would like to detect a couple of thousand times, run
your proposed analysis and look how often you get significance. In your
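A sketch of that approach for a two-sample t-test (effect size, group size, and number of simulations are illustrative assumptions):

```r
set.seed(1)
nsim <- 2000
# simulate data under the assumed effect (Cohen's d = 0.5), run the test
pvals <- replicate(nsim, t.test(rnorm(30), rnorm(30, mean = 0.5))$p.value)
power <- mean(pvals < 0.05)  # estimated power at alpha = 0.05
stopifnot(power > 0, power < 1)
```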
Try
pdf("foo.pdf")
plot(x)
dev.off()
Other possibilities are jpeg(), tiff(), postscript() etc.
HTH,
Stephan
julien cuisinier wrote:
Hi List,
My apologies in advance if question is simplistic, I am quite new to R graphics capabilities and I could not find anything in past threads...
I
Assuming your data are in a data.frame called dataset,
apply(dataset,2,median)
should work. Look at
?apply
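For example:

```r
dataset <- data.frame(a = c(1, 2, 3), b = c(10, 20, 30))
# column-wise medians
stopifnot(all(apply(dataset, 2, median) == c(2, 20)))
```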
HTH,
Stephan
Frank Zhang wrote:
I am new to R. How can I get column median? Thanks.Frank
. Kramer wrote:
On Mon, 26 Jan 2009, Stephan Kolassa wrote:
My (and, judging from previous traffic on R-help about power analyses,
also some other people's) preferred approach is to simply simulate an
effect size you would like to detect a couple of thousand times, run your
proposed analysis and look
Thank you, Rolf, for this well-deserved spanking :-)
I promise to amend my ways and think before I send in the future.
Best,
Stephan
Rolf Turner wrote:
On 29/01/2009, at 8:39 AM, Stephan Kolassa wrote:
Assuming your data are in a data.frame called dataset,
apply(dataset,2,median
Hi Thomas,
Thomas Mang wrote:
I have a question here: I am not sure if I understand your 'fit the full
model ... to the permuted data set'. Am I correct to suppose that once
the residuals of the reduced-model fit have been permuted and added back
to the fitted values, the values obtained
Hi Cleber,
there is no hard-and-fast magic number here. Ill-conditioning also
depends on what you are trying to do (inference? prediction?). The
condition number is only one of a number of conditioning/collinearity
diagnostics commonly used. Take a look at:
Golub, G. H., Van Loan, C. F.
Hi Erika,
the bootstrap is more of a tool to assess variability and create
confidence intervals, and people like me prefer permutation tests for
testing hypotheses, perhaps something like this:
###
sample1 <- rnorm(20)
sample2 <- rnorm(20)
n.perms
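The truncated example might continue along these lines (a sketch of a two-sample permutation test on the difference in means; the simulated group difference is an illustrative assumption):

```r
set.seed(1)
sample1 <- rnorm(20)
sample2 <- rnorm(20, mean = 1)
obs <- mean(sample1) - mean(sample2)
pooled <- c(sample1, sample2)
n.perms <- 1000
# re-split the pooled data at random and recompute the statistic
perm.diffs <- replicate(n.perms, {
  idx <- sample(length(pooled), length(sample1))
  mean(pooled[idx]) - mean(pooled[-idx])
})
p.value <- mean(abs(perm.diffs) >= abs(obs))
stopifnot(p.value >= 0, p.value <= 1)
```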
Hi Evrim,
chisq.test() performs chi^2 GOF tests.
However, the chi^2 test may be sensitive to how you bin your data if you
are working with continuous data (as I infer from your mentioning
cutting). You may want to look at other GOF tests. Perhaps the NIST
statistics handbook is a good starting point:
?shell
HTH,
Stephan
Aurelie Labbe, Dr. wrote:
Hi,
I am trying to use the R command system() under Windows (XP). If I try the simple command system("mkdir toto") to create a directory toto, it tells me that it cannot find the command mkdir...
Does anybody knows how it works ? Is it a path
Hi,
does this help?
http://www.nabble.com/factor-question-to18638814.html#a18638814
HTH,
Stephan
Ine wrote:
Hi all,
I have got a seemingly simple problem (I am an R starter) with subsetting my
data set, but cannot figure out the solution: I want to subset a data set
from six to two
Hi Mihai,
one (very bad style) way would be
if (FALSE) {
comment
comment
comment
}
But putting a # in front of every line is easier to spot in the code.
HTH,
Stephan
mihai.mira...@bafin.de wrote:
Hi everybody,
I use for the moment # at the begining of each line for comments.
Is
Hi Juliet,
Juliet Hannah schrieb:
One simple thing to try would be to form categories
Simple but problematic. Frank Harrell put together a wonderful page
detailing all the issues with categorizing continuous data:
http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/CatContinuous
So:
Hi Juliet,
Juliet Hannah wrote:
I should have emphasized, I do not intend to categorize -- mainly
because of all the discussions I have seen on R-help arguing against
this.
Sorry that we all jumped on this ;-)
I just thought it would be problematic to include the variable by
itself. Take
Hi Farrel,
I usually simulate in cases like this - pick the effect size and
distributions you conjecture, simulate your data 10,000 times and look
how often t.test() gets you a significant difference.
Good luck,
Stephan
Farrel Buchinsky wrote:
I have used the function power.t.test()
Hi Emma,
unfortunately, rounding variables before taking the difference will not
solve your problem, because the *rounded* variables are subject to the
same (effectively) random internal representation. Examples:
round(8.3, 20) - round(7.3, 20) == 1
[1] TRUE
round(2.3, 20) - round(1.3, 20) == 1
[1] FALSE
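The usual remedy (see R FAQ 7.31) is to compare with a tolerance rather than exactly:

```r
x <- round(2.3, 20) - round(1.3, 20)
stopifnot(x != 1)                   # exact floating-point comparison fails...
stopifnot(isTRUE(all.equal(x, 1)))  # ...but all.equal() succeeds
```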
Hi Mauricio,
Mauricio Calvao wrote:
1) I would like very much to use R for processing some big data files
(around 1.7 or more GB) for spatial analysis, wavelets, and power
spectra estimation; is this possible with R? Within IDL, such a big data
set seems to be tractable...
There are some
Dear guRus,
is there a package that calculates the Approximate Entropy (ApEn) of a
time series?
RSiteSearch only gave me a similar question in 2004, which appears not
to have been answered:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/28830.html
RSeek.org didn't yield any results at
Ben,
thanks a lot for that! I have a (reasonably) good idea about what ApEn
should be, and I'll try to understand your translation of Kaplan's
Matlab code.
Best,
Stephan
Ben Bolker wrote:
Stephan Kolassa wrote:
Dear guRus,
is there a package that calculates the Approximate Entropy
you for your time!
Stephan Kolassa
Hi Alex,
you can have R execute OS commands using system(), perhaps you can call
pdfmerge that way.
Or (admittedly less elegantly), you can use LaTeX with the pdfpages
package, either using Sweave or system("pdflatex ...").
Good luck,
Stephan
Alex Pine schrieb:
Hello all,
My question has
Dear useRs,
I have a list, each entry of which is a matrix of constant dimensions.
Is there a good way (i.e., not using a for loop) to apply a mean to each
matrix entry *across list entries*?
Example:
foo <- list(rbind(c(1,2,3),c(4,5,6)), rbind(c(7,8,9),c(10,11,12)))
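One loop-free possibility (a sketch): add the matrices with Reduce() and divide by the list length.

```r
foo <- list(rbind(c(1, 2, 3), c(4, 5, 6)), rbind(c(7, 8, 9), c(10, 11, 12)))
# element-wise mean across list entries
m <- Reduce(`+`, foo) / length(foo)
stopifnot(all(m == rbind(c(4, 5, 6), c(7, 8, 9))))
```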
Statistical Computing Facility
Department of Statistics
UC Berkeley
spec...@stat.berkeley.edu
On Tue, 30 Dec 2008, Stephan Kolassa wrote:
Dear useRs,
I have a list, each entry of which is a matrix of constant
for your help!
Stephan
Marc Schwartz wrote:
on 12/30/2008 08:33 AM Stephan Kolassa wrote:
Dear useRs,
I have a list, each entry of which is a matrix of constant dimensions.
Is there a good way (i.e., not using a for loop) to apply a mean to each
matrix entry *across list entries*?
Example:
foo
Hi Nidhi,
in your very last line, you set
Observed_Scores = GenData[1]
After that, Observed_Scores is not a matrix any more, but a list with a
single entry Observed_Scores$model, which is in fact a data.frame, but
with different dimensions than you initially set Observed_Scores to.
HTH,
Hi Jörg,
?by
here probably something like
by(data=mydata,INDICES=mydata$group, FUN=sd, ...)
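For example:

```r
mydata <- data.frame(group = rep(c("a", "b"), each = 3), x = c(1:3, 4:6))
# sd of x within each group
res <- by(data = mydata$x, INDICES = mydata$group, FUN = sd)
stopifnot(all(unlist(res) == c(1, 1)))
```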
HTH,
Stephan
Jörg Groß wrote:
Hi,
I have a data frame and would like to have summary statistics for
grouped data.
With summary() I get the central tendencies for the overall data.
How can I