Hi Martin,
it sounds like you want the difference between the first and the last
observation per user, not, e.g., all the date differences between
successive observations of each separate user. Correct me if I'm wrong.
That said, let's build some toy data:
set.seed(1)
dataset <-
Hi,
it usually is a good idea to look at the output of citation() (which,
however, also often is auto-generated) or at the authors listed in
package vignettes.
And thanks for citing R package authors. When I review papers, I often
have to remind authors of this...
Best
Stephan
On
Have you looked at ?save and ?load?
As I already wrote here:
http://stackoverflow.com/questions/14761496/saving-and-loading-a-model-in-r
Best,
Stephan
On 07.02.2013 22:33, James Jong wrote:
Say I train a model in caret, e.g.:
RFmodel <- train(X, Y, method='rf', trControl=myCtrl, tuneLength=1)
Hi Katja,
try fitting the original model using ML (not REML) with the parameter
method = "ML":
PModell1 <- lme(sqrt(Earthwormsm.2) ~
Treatment+Pflanzenfrischmasse+aBodenfeuchte+bBodenfeuchte+Gfrischmasse+Ltrockenmasseanteil+KCN+I+Eindringtiefe,
random = ~1 | Block/Treatment/Cluster/Patch,
?ecdf
Best,
Stephan
On 03.03.2012 13:37, drflxms wrote:
Dear all,
I am familiar with obtaining the value corresponding to a chosen
probability via the quantile function.
Now I am facing the opposite problem: I have a value and want to know its
corresponding percentile in the distribution. So
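A minimal sketch of the ?ecdf route suggested above (the sample values are made up for illustration):

```r
# ecdf() returns a function: the empirical cumulative distribution
x <- c(2, 5, 7, 7, 9, 12)
F <- ecdf(x)
F(7)   # fraction of observations <= 7, here 4 of 6
```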
Hi Alex,
could you be a little more specific as to what exactly you mean by
plotting many x's and y's with one legend per plot?
Please note what appears at the bottom of every R-help mail:
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented,
Hi,
you could set a dummy variable to FALSE outside the outermost loop. If
the break condition is met in the inner loop, set the dummy variable to
TRUE before breaking and test its truth status in the outer loop.
HTH
Stephan
Am 01.06.2011 21:25, schrieb Salih Tuna:
Hi,
I am looking for a
Hi Salih,
here you go:
dummy <- FALSE
for ( ii in 1:5 ) {
  for ( jj in 3:6 ) {
    cat("ii=", ii, "; jj=", jj, "\n", sep="")
    if ( ii == jj ) {
      dummy <- TRUE
      break
    }
  }
  if ( dummy ) break
}
###
Note
Hi Nat,
I guess something like
as.Date(as.character("3/4/2007"), format="%d/%m/%Y")
should work - as.character() coerces the factors to characters, which
the as.Date() function can work with, given the right format argument.
HTH
Stephan
Am 01.06.2011 22:59, schrieb Struckmeier, Nathanael:
I'm
Or just include na.rm=TRUE in the definition of kurtosis():
kurtosis <- function(x) {
m4 <- sum((x - mean(x, na.rm=TRUE))^4, na.rm=TRUE) / sum(!is.na(x))
s4 <- var(x, na.rm=TRUE)^2
m4/s4 - 3 }
HTH
Stephan
Am 29.05.2011 11:34, schrieb Jim Holtman:
kurtosis(fem[!is.na(fem)])
Sent from my iPad
On May 29,
Dear all,
may I suggest the acronym IOTT for the inter-ocular trauma test?
Now we just need someone to implement iot.test(). I assume it will
appear on CRAN within the next 24 hours.
Looking forward to yet another base package,
Stephan
Am 25.05.2011 23:36, schrieb Greg Snow:
How can
Hi,
this sounds like a standard problem in Computational Geometry - I guess
game developers have to deal with something like this all the time. You
may want to look at a textbook or two.
An article with the promising title "On fast computation of distance
between line segments" can be found
Hi Clare,
you want to go here:
http://stats.stackexchange.com/questions
HTH
Stephan
Am 04.03.2011 12:08, schrieb Clare Embling:
Hi,
I know this forum is for R-related issues, but the question I have is a statistical
question. I was wondering if anyone could recommend a good statistics
Hi,
this is R FAQ 7.31.
http://cran.r-project.org/doc/FAQ/R-FAQ.html
HTH,
Stephan
Am 01.02.2011 14:49, schrieb mlancee:
Hi,
I have a seemingly easy question that has been keeping me busy for quite a
while. The problem is the following:
0.1 + 0.1 + 0.1 == 0.3
[1] FALSE
Why is this false?
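The standard workaround from the FAQ, sketched: compare with a tolerance rather than exactly.

```r
0.1 + 0.1 + 0.1 == 0.3                    # FALSE: binary floating point
isTRUE(all.equal(0.1 + 0.1 + 0.1, 0.3))   # TRUE: comparison with tolerance
```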
Hi Alex,
help.search("uniform")
HTH,
Stephan
Am 08.09.2010 15:36, schrieb Alaios:
Hello,
I would like to uniformly distribute values from 0 to 200. Can someone help me
find the appropriate uniform distribution generator?
I would like to thank you in advance for your help.
Best Regards
Alex
Hi Alex,
I'm slightly unclear as to why you would want to restructure your nice
six-column data.frame (why six? One column for the data and four for the
factors should make five, shouldn't it? I guess you have a subject ID in
one column?) into some monstrosity which I assume you would fill
Hi,
basically, you know 5 periods later. If you use a good error measure,
that is.
I am a big believer in AIC for model selection. I believe that arima()
also gives you the AIC of a fitted model, or try AIC(arima1).
Other ideas include keeping a holdout sample or some such.
I'd recommend
Dear all,
I am stumped at what should be a painfully easy task: predicting from an lm
object. A toy example would be this:
XX <- matrix(runif(8), ncol=2)
yy <- runif(4)
model <- lm(yy ~ XX)
XX.pred <- data.frame(matrix(runif(6), ncol=2))
colnames(XX.pred) <- c("XX1", "XX2")
predict(model, newdata=XX.pred)
I
Hi,
simulating would still require you to operationalize the lack of
normality. Are the tails too heavy? Is the distribution skewed? Does it
have multiple peaks? I suspect that the specific choices you would make
here would *strongly* influence the result.
My condolences on the client you
Mike,
I am slightly unclear on what you want to do. Do you want to check rows
1 and 7 or 1 *to* 7? Should c1 be at least 100 for *any one* or *all*
rows you are looking at, and same for c2?
You can sort your data like this:
data <- data[order(data$ds),]
Type ?order for help. But also do this
Hi,
one comment: Claeskens and Hjort define AIC as 2*log L - 2*p for a model
with likelihood L and p parameters; consequently, they look for models
with *maximum* AIC in model selection and averaging. This differs from
the vast majority of authors (and R), who define AIC as -2*log L + 2*p
and
Hi,
I recommend that you look at the following help pages and experiment a
little (maybe create a toy directory with only three or four files with
a few lines each):
?files
?dir
?grep
?strsplit
Good luck!
Stephan
jd6688 schrieb:
Here is what I am going to accomplish:
I have 400 files
Hi Elaine,
in general, stepwise selection is a very bad idea:
Whittingham, M. J.; Stephens, P. A.; Bradbury, R. B. Freckleton, R. P.
Why do we still use stepwise modelling in ecology and behaviour? Journal
of Animal Ecology, 2006, 75, 1182-1189
HTH
Stephan
elaine kuo schrieb:
Dear list,
Hi,
one possibility would be to calculate the convex hull using chull(). I
believe that the hull points are returned by chull() in a clockwise
order (?), so the points between the rightmost and the leftmost point in
the chull() result are the lower half of the convex hull. Remove these
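A sketch of the idea on toy data (relying on chull() returning hull indices in clockwise order, as its help page states; variable names are mine):

```r
set.seed(1)
xy <- cbind(runif(20), runif(20))
h  <- chull(xy)                                  # hull indices, clockwise
# rotate the index vector so it starts at the rightmost hull point
h  <- c(h, h)[which.max(xy[h, 1]) + seq_along(h) - 1]
# clockwise from the rightmost to the leftmost point: the lower half
lower <- h[1:which.min(xy[h, 1])]
```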
the
percentage of points outside it?
Thanks!
Asha
Stephan Kolassa wrote:
Hi,
one possibility would be to calculate the convex hull using chull(). I
believe that the hull points are returned by chull() in a clockwise
order (?), so the points between the rightmost and the leftmost point
Hi,
your problem is called string matching. Search for that term on
rseek.org, there are a couple of functions and packages. And Wikipedia
can tell you everything you ever wanted to know about string matching
(and more).
HTH,
Stephan
Cable, Samuel B Civ USAF AFMC AFRL/RVBXI schrieb:
this.
And I did find a way around. Could this allow me to set MAPE as a criterion?
Phani
On Tue, Jun 29, 2010 at 12:47 AM, Stephan Kolassa <stephan.kola...@gmx.de> wrote:
Hi Phani,
to get the best Holt's model, I would simply wrap a suitable function
calling ets() within optim() and optimize for alpha
Hi Phani,
to get the best Holt's model, I would simply wrap a suitable function
calling ets() within optim() and optimize for alpha and beta - the
values given by ets() without constraints would probably be good
starting values, but you had better start the optimization with a
variety of
Have you set the correct working directory?
?setwd
?getwd
HTH
Stephan
Robert Tsutakawa schrieb:
I am trying to read a source program into a mac pro laptop, which uses
Snow Leopard. R is unable to find the file containing my source
program. I'm using the function source( file name). I
Hi Jon,
does the empirical cumulative distribution function do what you want?
dat$q.score <- ecdf(dat$score)(dat$score)
?ecdf
HTH
Stephan
Jonathan Beard schrieb:
Hello all,
Thanks in advance for you attention.
I would like to generate a third value that represents the quantile
value of a
Hi,
David Winsemius schrieb:
snip
This would imply that ozon is a list or dataframe.
snip
And you tried to give the whole list to a function that only wants a
vector.
And whenever you suspect that your data types clash, try str() to find
out just what kind of thing your data is. Here:
Hi Kristina,
Thierry's solution is certainly the correct one in terms of keeping
within R's philosophy... but I personally find a series of conditional
assignments easier to understand - see below for an example.
HTH,
Stephan
#
# Example
Hi Helga,
did you load the boot library, which contains glm.diag(), by calling
library(boot)?
HTH
Stephan
Helga margrete holmestad schrieb:
I have made a poisson regression by using glm.
Then I want to check if I have over-dispersion in my model. I tried to use
glm.diag(fit.1), but then R
Or:
weekdays(as.Date("2010-05-24"))
HTH,
Stephan
Wu Gong schrieb:
?strptime will help.
d <- as.Date("01/05/2007", "%m/%d/%Y")
format(d, "%A, %b %d, %Y")
[1] "Friday, Jan 05, 2007"
-
A R learner.
__
R-help@r-project.org mailing list
Hi,
this is FAQ 7.31: pb and pr are floating-point numbers that are coerced
to integer for rep(), and this does not always work the way you want.
HTH
Stephan
Covelli Paolo schrieb:
Hi,
I've got the following code:
p <- 0.34
pb <- p*100
pr <- (1-p)*100
A <- rep(0,pb) # a vector with 34
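A sketch of the failure mode and the usual fix; I use p <- 0.29 here because it shows the truncation reliably on IEEE platforms (the 0.34 case happens to round cleanly):

```r
p  <- 0.29
pb <- p * 100
pb                          # prints 29, but is stored as slightly less
length(rep(0, pb))          # 28: rep() truncates the count toward zero
length(rep(0, round(pb)))   # 29: round() before using it as a count
```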
Hi,
first get the densities without plotting the histogram:
foo <- hist(x, plot=FALSE)
then plot the histogram and feed the rounded densities, converted to
character, to the labels argument (instead of just labels=TRUE):
hist(x, freq=FALSE, xlab='', ylab='Percent of Total', col='skyblue',
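A complete toy version of the same recipe (simulated data, rounding to two digits assumed):

```r
set.seed(1)
x   <- rnorm(100)
foo <- hist(x, plot = FALSE)      # densities without plotting
hist(x, freq = FALSE,
     labels = as.character(round(foo$density, 2)))
```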
Hi Claus,
welcome to the wonderful world of collinearity (or multicollinearity, as
some call it)! You have a near linear relationship between some of your
predictors, which can (and in your case does) lead to extreme parameter
estimates, which in some cases almost cancel out (a coefficient of
Hi Kim,
look at the reshape() command with direction=wide. Or at the reshape
package.
HTH,
Stephan
Kim Jung Hwa schrieb:
Hi All,
Can someone help me reshape following data:
Var1 Var2 Val
A X 1
A Y 2
A Z 3
B X 4
B Y 5
B Z 6
to some kind of matrix/tabular format (preferably as a matrix),
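For the data shown, one base-R route to the matrix form is xtabs() (a different route than reshape(); the data.frame below reconstructs the example):

```r
d <- data.frame(Var1 = rep(c("A", "B"), each = 3),
                Var2 = rep(c("X", "Y", "Z"), times = 2),
                Val  = 1:6)
xtabs(Val ~ Var1 + Var2, data = d)   # 2 x 3 table: rows A, B; columns X, Y, Z
```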
(foo %% 2) == 0
See ?"%%"
HTH
Stephan
tj schrieb:
Hi,
anyone here who knows how to determine if an integer is odd or even in
R?
Thanks.
tj
?by may also be helpful.
Stephan
Steve Murray schrieb:
Dear all,
I have a dataset of 1073 rows, the first 15 which look as follows:
data[1:15,]
date year month day rammday thmmday
1 3/8/1988 1988 3 8 1.43 0.94
2 3/15/1988 1988 3 15 2.86 0.66
3 3/22/1988
Hi Thomas,
%in% does the trick:
vector_1 <- c("Belgium", "Spain", "Greece", "Ireland", "Luxembourg",
"Netherlands", "Portugal")
vector_2 <- c("Denmark", "Luxembourg")
vector_1[!(vector_1 %in% vector_2)]
HTH,
Stephan
Thomas Jensen schrieb:
Dear R-list,
I have a problem which I think is quite basic, but so far
Hi Mike,
the following works for me:
SITE <- ordered(c(101,102,103,104))
WDAY <-
ordered(c("MON","TUE","WED","THR","FRI"), levels=c("MON","TUE","WED","THR","FRI"))
TOD <- ordered(c("MORN","AFTN"), levels=c("MORN","AFTN"))
foo <- expand.grid(SITE=SITE, WDAY=WDAY, TOD=TOD)
foo[order(foo$SITE),]
If this doesn't solve your
Hi Matteo,
just use forecast.Arima() with h=2 to get forecasts up to 2 steps ahead.
R will automatically use forecast.Arima() if you call forecast() with an
Arima object.
library(forecast)
model <- auto.arima(AirPassengers)
forecast(model,h=2)
HTH,
Stephan
Matteo Bertini schrieb:
Hello
Hi Martin,
it is slightly unclear to me what you are trying to achieve... are you
trying to tabulate how often each value appears in datjan[,4]? Then
table(datjan[,4]) may be what you want.
HTH
Stephan
Schmidt Martin schrieb:
Hello
I have been working with R for a few months and still have many
Hi Tal,
basically, by summing over the (pointwise) density, you are
approximating the integral over the density (which should be around 1)
- but to really do a rectangular approximation, you will of course need
to multiply each function value by the width of the corresponding
rectangle. I'd
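The point in numbers, sketched on a standard normal (grid and step width are my choices):

```r
x <- seq(-5, 5, by = 0.01)
sum(dnorm(x))          # ~100: pointwise densities only
sum(dnorm(x)) * 0.01   # ~1: each value times the rectangle width
```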
Hi Hyo,
how about as.vector(ttx1) or as.vector(t(ttx1))?
HTH
Stephan
Hyo Lee schrieb:
Hi guys,
I have a very simple question.
I'm trying to make multiple columns to a single column.
For example,
ttx1 is a 46*72 matrix.
so, I tried this.
d1 = ttx1[,1]
d2 = ttx1[,2]
...
d72 = ttx1[,72]
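A small illustration of the two variants on a toy 2x3 matrix:

```r
m <- matrix(1:6, nrow = 2)   # columns: (1,2), (3,4), (5,6)
as.vector(m)      # column by column: 1 2 3 4 5 6
as.vector(t(m))   # row by row:       1 3 5 2 4 6
```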
Hi Mike,
take an index vector that selects Monday and Tuesday out of each week,
and then run a restricted random permutation on this vector which only
permutes indices within each week. rperm() is in the sna package.
library(sna)
foo <- rep(c(TRUE,TRUE,FALSE,FALSE,FALSE),26)
Hi,
use dnorm() for the density and polygon() to shade the area underneath,
with suitably many x values so your density looks smooth.
HTH,
Stephan
claytonmccandless schrieb:
I want to shade the area under the curve of the standard normal density.
Specifically color to the left of -2 and
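One way the dnorm()/polygon() recipe could look for the cutoff at -2 mentioned in the question (plot range and colors are my choices):

```r
curve(dnorm(x), from = -4, to = 4, ylab = "density")
xx <- seq(-4, -2, length.out = 100)
polygon(c(-4, xx, -2), c(0, dnorm(xx), 0),
        col = "skyblue", border = NA)   # shades the area left of -2
```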
Hi,
data.frame(x=x,y=as.numeric(x%in%y))
HTH,
Stephan
joseph schrieb:
hello
can you show me how to create a data.frame from two factors x and y. column 1
should be equal to x and column 2 is 1 if it is common to y and 0 if it is not.
x = factor(c("A","B","C","D","E","F","G"))
y = factor(c("B","C","G"))
the
Hi,
In that case, I'd recommend reading a good book on time series analysis.
"Forecasting: Methods and Applications" by Makridakis, Wheelwright and
Hyndman is very accessible. Alternatively, there are probably tons of
webpages on ARIMA, so google around.
Best,
Stephan
testuser schrieb:
Hi,
the help page for arima() suggests looking at predict.Arima(), so take a
look at ?predict.Arima(). You will probably not use the coefficients,
but just feed it the output from arima(). And take a look at
auto.arima() in the forecast package.
HTH
Stephan
testuser schrieb:
I would like
Hi Karthik,
I think you will need to do something like
jpeg("histograms.jpg")
hist(rnorm(100))
dev.off()
HTH
Stephan
Karthik schrieb:
Hello Tal,
This is the code.
hist(rnorm(100))
jpeg("histogram.jpeg")
---
Even when I decrease the quality, I
Hi David,
str(g) gives you a ton of output, and the @fit slot has a $ics
component, part of which has the promising name of AIC...
(g@fit)$ics[1]
HTH,
Stephan
David Rubins schrieb:
Hi,
Is there anyway to extract the AIC and BIC from the summary statistics
in fGarch package?
g -
Hi,
it looks like when you read in your data.frames, you didn't tell R to
expect dates, so it treats the Date columns as factors. Judicious use of
something along these lines before doing your comparisons may help:
arr$Date <- as.Date(as.character(arr$Date), format=something)
Then again, it
Hi Aaron,
try the argument statistic=mean. Then boot() will give you the mean
turn angle in your actual data (which appears to be 6 degrees, judging
from what you write), as well as the means of the bootstrapped data.
Then you can get (nonparametric) bootstrap CIs by
Hi Trafim,
take a look at FAQ 7.31.
HTH
Stephan
Trafim Vanishek schrieb:
Dear all,
Does anybody know the probable reason why == gives FALSE when it should give
true?
These two variables are of the same type, and everything works in the cycle
but then it stops when they are equal.
this is
Hi,
does this do what you want?
d <- cbind(d, apply(d[,c(2,3,4)], 1, mean), apply(d[,c(2,3,4)], 1, sd))
HTH,
Stephan
Abhishek Pratap schrieb:
Hi All
I have a data frame in which there are 4 columns .
Column 1 : name
Column 2-4 : values
I would like to calculate mean/Standard error of values
Dear useRs,
I am trying to read a tab-delimited Unicode text file containing both
latin and cyrillic characters and failing miserably. The file looks like
this (I hope it comes across right):
A B C
3 foo ФОО
5 bar БАР
read.table("foo.txt", sep="\t", header=TRUE)
I
Hi Jean-Baptiste,
two points:
1) Your variable df is a *local* variable which you define in your
function myfunc(), so it is not known outside myfunc(). When you ask
is.data.frame(df), R looks at the global definition of df - which is the
density function of the F distribution. To make your
Hi,
you can permute array dimensions using aperm():
x - 1 : 24
z - array(x, dim=c(6,2,2))
y - aperm(z,perm=c(3,2,1))
y[1,1,]
HTH,
Stephan
Kohleth Chia schrieb:
Dear all,
When I coerce a vector into a multi dimensional array, I would like R to start
filling the array along the last
Hi Stephanie,
it sounds like R's exception handling may help, something like this:
foo <- try(eblest(i, dir5, sterr5, weight5, aux5))
if ( class(foo) == "try-error" ) next
Take a look at ?try.
HTH,
Stephan
Stephanie Coffey schrieb:
Hi all,
I'm running R version 2.9.2 on a PC.
I'm having a
Dear guRus,
is there a version of boot() that deals with array data and array-valued
statistics? For example:
foo <- array(rnorm(120), dim=c(3,5,8))
means - apply(foo, MARGIN=c(2,3), FUN=mean)
means contains the means over the first dimension of foo: a 5x8 array.
Now I would like to bootstrap
Hi teo,
try lines() instead of points().
HTH
Stephan
teo schrieb:
Hi:
Could you please guide me how to plot weekly.t2. Below are my values and I
am only able to plot weekly t1 but failing to add a second line
representing weekly t2.
Thanks.
weekly.t1
[1] 228.5204 326.1224 387.2449 415.4082
Hi Ray,
First possibility: just select those combinations that contain AL:
combos.with.AL <- possible.combos[rowSums(possible.combos == "AL") > 0,]
Second possibility: create all 3-combos *without* AL:
bands.without.AL <- c("B", "DB", "DG", "G", "K", "LB", "LG", "MG", "O",
"P", "PI", "PK", "PU", "R", "V", "W", "Y")
Hi Jonathan,
grep() returns a vector giving either the indices of the elements of
'x' that yielded a match or, if 'value' is 'TRUE', the matched elements
of 'x' (quoting from the help page, see ?grep).
So you probably want to test whether this vector is empty or not - in
other words,
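In other words, something like this (toy vector and patterns are mine):

```r
x <- c("apple", "banana", "cherry")
length(grep("an", x)) > 0   # TRUE: at least one element matches
any(grepl("zz", x))         # FALSE: grepl() gives a logical per element
```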
?predict
HTH
Stephan
DispersionMap schrieb:
I have some data that i ran a regression on and got the usual r output, with
the a intercept and b coefficient for my independent variable.
i want to forecast the number of future events using these parameters.
What function / packages in R would
Hi,
from what I understand, you may be interested in text mining, so perhaps
you want to look at the tm package.
Then again, depending on what you are really trying to do, you may be
better served with perl, awk and similar tools than with R...
HTH,
Stephan
Schwan schrieb:
Dear all,
i
Hi Bob,
Muenchen, Robert A (Bob) wrote:
Does anyone have a program that graphs the growth of R packages? I don't
know if that historical data is around.
John Fox had a slide on this in his useR 2008 talk "The Social
Organization of the R Project" (page 7), with package counts up to March
Hi Luna,
you may want to look at the IIF website, http://www.forecasters.org
They have a mailing list for forecasters - you may get more of a
response there than on a dedicated R list.
HTH,
Stephan
Luna Moon schrieb:
Hi all,
Could anybody please shed some lights on me about good
Try this:
boxplot.stats(x[x[,2] > 0, 2], do.conf=FALSE)
HTH,
Stephan
e-letter schrieb:
Readers,
I have a data set as follows:
1,1
2,2
3,3
4,4
5,3
6,2
7,-10
8,-9
9,-3
10,2
11,3
12,4
13,5
14,4
15,3
16,2
17,1
I entered this data set using the command 'read.csv'. I want to
exclude values fewer than
Hi,
Rob Hyndman's forecast package does exponential smoothing forecasting
based on state space models (and lots of other stuff, ARIMA et al.).
It's not exactly the companion package to his book, but it comes close.
The book's ("Forecasting with Exponential Smoothing: The State Space
Dear guRus,
I am starting to work with the ggplot2 package and have two very dumb
questions:
1) deterministic position_jitter - the jittering is stochastic; is there
any way to get a deterministic jittering? For instance:
example.data <-
Are you looking for reshape()?
HTH,
Stephan
Edward Chen schrieb:
Hi all,
I have a mxn matrix that consists of 28077 rows of features and 30 columns
of samples. I want to normalize each row for the samples for each feature.
I have tried normalize and scale functions but they don't seem to
Hi,
Try kernel smoothing via the density() function. And take a look at ecdf().
HTH,
Stephan
sendona essile schrieb:
How can I plot the graph of a density of a sample with an unknown distribution?
I can provide any sample size which is required. I want to have a smooth
density graph of my
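A minimal sketch of both suggestions with simulated data:

```r
set.seed(1)
x <- rnorm(200)
plot(density(x))   # kernel-smoothed density estimate
plot(ecdf(x))      # empirical cumulative distribution function
```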
strsplit("1041281__2009_08_20_.lev", split="_")[[1]][1]
HTH,
Stephan
stephen sefick schrieb:
x <- "1041281__2009_08_20_.lev"
I would like to split this string up and only extract the leading numbers.
1041281
to use as a label for a data column in a bigger for loop function to
read in data.
Hi Rakknar,
I believe that the menu command File - Save to File (or some such, I
use the German version) in the R GUI for Windows (I'm unclear on your
OS) has not yet been suggested. This writes a file containing the entire
R session. Does this create the kind of log you are looking for?
Hi,
Mao Jianfeng schrieb:
plot(c(min(myda$traits),max(myda$traits)),c(-0.03,0.5), xlab='State',
ylab='ylab')
Here, you are plotting two data points: (min(myda$traits),-0.03) and
(max(myda$traits),0.5). Try this:
plot(c(min(myda$traits),max(myda$traits)),c(-0.03,0.5), xlab='State',
Hi Alex,
I personally have had more success with the (more complicated)
collinearity diagnostics proposed by Belsley, Kuh & Welsch in their book
"Regression Diagnostics" than with Variance Inflation Factors. See also:
Belsley, D. A. "A Guide to Using the Collinearity Diagnostics."
Computational
Hi Tim,
Variance proportions (and condition indices) are exactly the tools
described in Belsley, Kuh & Welsch, "Regression Diagnostics" - see my
previous post. Good to see I'm not the only one to use them! BKW also
describe in detail how to calculate all this using SVD, so you don't
need to use
Hi,
The CRAN Task View on Optimization may help:
http://stat.ethz.ch/CRAN/web/views/Optimization.html
HTH,
Stephan
barbara.r...@uniroma1.it schrieb:
I have to solve a constrained minimization problem with equality constraints, and
another with both equality and inequality constraints. What can I
f[rowSums(f <= 1) > 0, colSums(f <= 1) > 0]
Judging from your result, you want less than or equal to 1.
HTH,
Stephan
Crosby, Jacy R schrieb:
How can I delete both rows and columns that do not meet a particular cut off
value.
Example:
d <- rbind(c(0,1,6,4),
+ c(2,5, 7,5),
+ c(3,
Hi Wacek,
Wacek Kusnierczyk schrieb:
... or gain a bit on performance by doing the threshold comparison on
the whole matrix just once at once:
dd = d <= 1
d[apply(d, 1, any), apply(d, 2, any)]
d[apply(dd, 1, any), apply(dd, 2, any)]
Or not?
Cheers,
Stephan
Hi,
Gavin Simpson wrote:
I bemoan the apparent inability of those asking such questions to use
the resources provided to solve these problems for themselves...
Looking at all the people who quite obviously do NOT read the posting
guide and provide commented, minimal, self-contained,
citation()
HTH,
Stephan
Tom Backer Johnsen schrieb:
What is the correct citation to R in BibTeX format? I have looked in
the R pages but so far without any luck.
Tom
Hi Perry,
my impression after a very cursory glance: this looks like noise.
Perhaps you should think a little more about your series - what kind of
seasonality there could be (is this weekly data? or monthly?), whether
the peaks and troughs could be due to some kind of external driver,
Hi,
if you are looking for *natural* cubic splines (linear beyond the outer
knots), you could use rcs() in Frank Harrell's Design package.
HTH,
Stephan
David Winsemius schrieb:
If one enters:
??spline
... You get quite a few matches. The one in the stats functions that
probably answers
Hi,
ets() in Hyndman's forecast package allows you to specify which one of
the many smoothing variants (additive/multiplicative season, damped
trend, additive/multiplicative errors) you want.
HTH,
Stephan
minben schrieb:
I want to use double-exponential smoothing to forecast time series
Hi Alina,
your approach sounds problematic - you can always get a smaller RSS if
you add terms to your model, so your approach will always go for larger
models, and you will end up overfitting. Consider information criteria,
e.g., AIC or BIC, which penalize larger models. References for AIC
Hi David,
David Winsemius schrieb:
The splinefun documentation indicates that natural is one of the types
of cubic spline options available.
That sounds good, didn't know that... rcs() has the advantage of coming
with a book (Harrell's Regression Modeling Strategies).
Does rcs actually do
Hi Paul,
you do *not* want to do this, it takes too long and may lead to rounding
errors. Vectorize everything, e.g., use sum(meanrotation). And look into
?apply, and google for the R Inferno.
And no, there is no +=...
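The contrast, sketched (meanrotation here is a stand-in vector, since the original data is not shown):

```r
set.seed(1)
meanrotation <- runif(1e5)
# loop version: slow, and R has no += operator
total <- 0
for (m in meanrotation) total <- total + m
# vectorized version: one call
sum(meanrotation)
```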
Good luck!
Stephan
pgseye schrieb:
Hi,
I'm learning to write some
Hi Simeon,
?gsub
HTH,
Stephan
simeon duckworth schrieb:
I am trying to simplify a text variable by matching and replacing it with a
string in another vector
so for example in
colours <- paste(letters, colours(), "stuff", LETTERS)
find and replace with c("red", "blue", "green", "gray", "yellow", "other") -
)
}
On Sat, Mar 28, 2009 at 9:45 AM, Stephan Kolassa <stephan.kola...@gmx.de> wrote:
Hi Simeon,
?gsub
HTH,
Stephan
simeon duckworth schrieb:
I am trying to simplify a text variable by matching and replacing it with
a
string in another vector
so for example in
colours <- paste(letters, colours
red xx xxx xx
xx xxx xxx xx blue xx xx xx xx x
x xx xx xx xx red
red xx xx xx xx xx
xx xx xx xx xx xx
xx x x x x
which i'd like to replace with
red
blue
red
other
other
thanks
On Sat, Mar 28, 2009 at 2:38 PM, Stephan Kolassa <stephan.kola...@gmx.de> wrote:
Hi Simeon,
I'm
Hi Vishal,
re 1]: Ben Bolker very kindly shared an R reimplementation of Kaplan's
Matlab code a little while ago:
http://www.nabble.com/Approximate-Entropy--to21144062.html#a21149402
Best wishes
Stephan
Vishal Belsare schrieb:
Is there any existing implementation in R/S of :
1] Pincus
Hi Juliet,
Juliet Hannah schrieb:
One simple thing to try would be to form categories
Simple but problematic. Frank Harrell put together a wonderful page
detailing all the issues with categorizing continuous data:
http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/CatContinuous
So:
Hi Juliet,
Juliet Hannah schrieb:
I should have emphasized, I do not intend to categorize -- mainly
because of all the discussions I have seen on R-help arguing against
this.
Sorry that we all jumped on this ;-)
I just thought it would be problematic to include the variable by
itself. Take
Hi Mihai,
one (very bad style) way would be
if (FALSE) {
comment
comment
comment
}
But putting a # in front of every line is easier to spot in the code.
HTH,
Stephan
mihai.mira...@bafin.de schrieb:
Hi everybody,
I use for the moment # at the begining of each line for comments.
Is
?shell
HTH,
Stephan
Aurelie Labbe, Dr. schrieb:
Hi,
I am trying to use the R command system() under Windows (XP). If I try the simple command system("mkdir toto") to create a directory toto, it tells me that it cannot find the command mkdir...
Does anybody know how it works? Is it a path
Hi,
does this help?
http://www.nabble.com/factor-question-to18638814.html#a18638814
HTH,
Stephan
Ine schrieb:
Hi all,
I have got a seemingly simple problem (I am an R starter) with subsetting my
data set, but cannot figure out the solution: I want to subset a data set
from six to two