Hi Nils,
I would say, pnorm is faster and has a higher precision.
Best,
Matthias
- original message
Subject: Re: [R] Integration + Normal Distribution + Directory Browsing
Processing Questions
Sent: Mon, 22 Jan 2007
From: Nils Hoeller[EMAIL PROTECTED]
Thank you,
both work
see:
?commandArgs
or more detail for R startup mechanisms:
?Startup
On 1/22/07, Deepak Chandra [EMAIL PROTECTED] wrote:
Hi All,
A simple and naive question from a newbie. How can one access command-line
arguments in R i.e. equivalent of argv in C?
Have spent a lot of time on finding it.
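The pointers above (?commandArgs, ?Startup) can be sketched in a couple of lines; the script name args.R is hypothetical:

```r
# args.R -- a minimal sketch, run as:  Rscript args.R alpha beta
args <- commandArgs(trailingOnly = TRUE)  # drop R's own options, keep user args
cat("got", length(args), "argument(s):", paste(args, collapse = ", "), "\n")
```

With trailingOnly = TRUE only the arguments after --args (or after the script name under Rscript) are returned, which is the closest analogue of C's argv.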
the reason is that it is more natural to use pnorm(), which should also be a
more efficient approximation of the Normal integral than integrate().
You may even use
diff(pnorm(0:1, mean = 0.5, sd = 1.2))
However, the point I meant to make was to use the '...' argument that
can be found in many R
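A small sketch of the two routes, using the N(0.5, 1.2) example above:

```r
# Both compute P(0 <= X <= 1) for X ~ N(mean = 0.5, sd = 1.2):
# pnorm() via a closed-form CDF approximation, integrate() via
# adaptive quadrature on the density.
via_pnorm     <- diff(pnorm(0:1, mean = 0.5, sd = 1.2))
via_integrate <- integrate(function(x) dnorm(x, mean = 0.5, sd = 1.2),
                           lower = 0, upper = 1)$value
all.equal(via_pnorm, via_integrate)
```

The two agree to numerical tolerance, but pnorm() avoids the quadrature loop entirely.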
Hi,
List, I have 6 predictor variables and I want to make all possible combinations
of these 6 predictors; all the data is in matrix form.
With 6 predictors the number of possible subsets is 64 (2 to the power 6),
or 63 excluding the empty set; whatever it may be, I want to store the result in another variable to
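One way to enumerate the 63 non-empty subsets is combn() over each subset size; a sketch (the predictor names x1..x6 are assumptions):

```r
predictors <- paste0("x", 1:6)   # assumed predictor names
# all 2^6 - 1 = 63 non-empty subsets, as a list of character vectors
subsets <- unlist(
  lapply(1:6, function(k) combn(predictors, k, simplify = FALSE)),
  recursive = FALSE
)
length(subsets)
```

Each list element names one predictor subset, which can then drive model fitting or be stored as needed.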
Hi!
I've constructed a small menu-driven interface to a couple of R functions
using the possibilities offered by the tcltk package. When the user runs some
specific analyses, I would then like to disable some of the menus (or menu
choices) that are not applicable after the performed analysis. I
Dear Tomas
You can produce the results in Montgomery with lme. Please note
that you should indicate the nesting with the levels of your
nested factor. Therefore I recreated your data, but used 1,...,12
for the levels of batch instead of 1,...,4.
purity <- c(1,-2,-2,1,-1,-3,0,4, 0,-4,
The following three lines will do what you want. You will probably
want to change some of the default behaviour; just look at the
relevant help pages.
plot(x,y)
text(x,y,ID)
grid(2)
On 21/01/07, gnv shqp [EMAIL PROTECTED] wrote:
Hi my friends,
I'm trying to make a scatterplot like this.
1)
try the lines below after your code and look at the tk window:
tkentryconfigure(editMenu, 0, state = "disabled")
tkentryconfigure(editMenu, 0, state = "active")
tkentryconfigure(topMenu, 1, state = "disabled")
tkentryconfigure(topMenu, 1, state = "active")
HTH
On 1/22/07, Jarno Tuimala [EMAIL PROTECTED] wrote:
Hi!
I've
I have been running this script regularly for some time. This morning the
following error message appeared.
Any suggestions to correct this change would be appreciated.
EWL <- get.hist.quote("EWL", start = (today <- Sys.Date()) - 350, quote = "Cl")
trying URL 'http://chart.yahoo.com/table.csv?s=EWLa=1bc
On 1/22/07, Dirk Eddelbuettel [EMAIL PROTECTED] wrote:
On 22 January 2007 at 00:05, Ramon Diaz-Uriarte wrote:
| On 1/20/07, Dirk Eddelbuettel [EMAIL PROTECTED] wrote:
| Just confirms my suspicion that even after all these years, I barely
| scratched the surface of ess. That '2+ years' old
Jerry Pressnell wrote:
I have been running this script regularly for some time. This morning the
following error message appeared.
EWL <- get.hist.quote("EWL", start = (today <- Sys.Date()) - 350, quote = "Cl")
Error in if (dat[n] != start) cat(format(dat[n], "time series starts
%Y-%m-%d\n")) :
missing
Hello,
I'd like to know if the D'Agostino test of normality is reliable,
because some of our results are not really consistent.
This test seems to be very sensitive. Even compared to a normal
distribution generated by R, the results are not very clear.
thanks for any help
Matthieu.
Dear Prof. Ripley and Christoph,
thank you very much for your comments. You have helped me a lot.
Thanks,
Tomas Goicoa
Dear Prof. Ripley
Thank you for your email. Yes, this is of course the correct
syntax to save us the extra calculation. And I forgot the
lower.tail = FALSE for pf()
You don't say what model you want to fit. It isn't necessary to store
each combination of predictors in a separate matrix unless you really
need to do this for some other reason, in which case I imagine you
could adopt this idea. I dare say there are better ways, but this
should work (assuming you
You might find the following code useful. It's part of a package I'm
developing for interactive model exploration.
Hadley
# Generate all models
# Fit all combinations of x variables ($2^p$)
#
# This technique generalises \code{\link{fitbest}}. While it is much
# slower it will work for any
Dear R useres,
I'm interested in getting a series of time-varying correlation, simply
between two random variables.
Could you please introduce a package to do this task?
Thank you so much for any help.
Amir
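If a dedicated package is not required, a rolling-window correlation can be sketched in base R (the window width of 30 is an arbitrary assumption):

```r
# time-varying correlation between x and y via a sliding window
set.seed(1)
n <- 200
width <- 30
x <- rnorm(n)
y <- 0.5 * x + rnorm(n)
roll_cor <- sapply(seq_len(n - width + 1), function(i) {
  idx <- i:(i + width - 1)     # the current window of observations
  cor(x[idx], y[idx])
})
length(roll_cor)               # one correlation per window position
```

Each entry of roll_cor is the sample correlation over one window, so plotting it against time shows how the dependence evolves.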
On Jan 21, 2007, at 8:11 PM, John Fox wrote:
Dear Haris,
Using lapply() et al. may produce cleaner code, but it won't
necessarily
speed up a computation. For example:
X <- data.frame(matrix(rnorm(1000*1000), 1000, 1000))
y <- rnorm(1000)
mods <- as.list(1:1000)
system.time(for (i in
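The quoted timing example is truncated; a smaller-scale sketch of the same comparison (an explicit loop versus lapply() for per-column fits; the sizes are scaled down here) might look like:

```r
set.seed(1)
X <- data.frame(matrix(rnorm(100 * 50), 100, 50))
y <- rnorm(100)
# explicit for loop over columns
t_loop <- system.time({
  mods1 <- vector("list", 50)
  for (i in 1:50) mods1[[i]] <- lm(y ~ X[[i]])
})
# lapply over the same columns
t_lapply <- system.time(mods2 <- lapply(X, function(col) lm(y ~ col)))
length(mods2)   # 50 fits either way
```

The point of the thread stands: both produce the same 50 fits, and the dominant cost is lm() itself, not the looping construct.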
Dear helpeRs,
I'm estimating a series of linear models (using lm) in which in every
new model variables are added. I want to test to what degree the new
variables can explain the effects of the variables already present in
the models. In order to do that, I simply observe whether these
Aimin:
The problem is that the columns you choose for training (only 4
variables) do not match the ones used for prediction (all except y).
David
I try to use naiveBayes
p.nb.90 <- naiveBayes(y ~ aa_three + bas + bcu + aa_ss, data = training)
You can use the anova function a la:
anova(model1, model2)
Analysis of Variance Table
Model 1: y ~ x
Model 2: y ~ x + z
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     13 4.4947
2     12 4.4228  1    0.0720 0.1952 0.6665
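A self-contained version of this comparison on simulated data (the variable names mirror the snippet above):

```r
# compare two nested linear models with an F-test via anova()
set.seed(42)
x <- rnorm(15)
z <- rnorm(15)
y <- 1 + 2 * x + rnorm(15)   # z has no true effect here
model1 <- lm(y ~ x)
model2 <- lm(y ~ x + z)
cmp <- anova(model1, model2) # row 2 tests the extra term z
cmp[["Pr(>F)"]][2]           # p-value for adding z
```

The second row of the table tests whether the additional term z significantly reduces the residual sum of squares.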
I would suggest
On Sun, 21 Jan 2007, Lynette wrote:
Dear all,
I am using Rdqags in C to perform the integration. It seems that for a
continuous C function I can get correct results. However, for step functions,
the results are not correct. For example, the following one, when integrated
from 0 to 1 gives 1
It can't be done with the current code.
In a nutshell, you are trying to use a feature that I never got around to
coding. It's been on my to do list, but may never make it to the top.
Terry
__
R-help@stat.math.ethz.ch mailing list
?box.cox
?boxcox
On Jan 22, 2007, at 2:25 AM, Arun Kumar Saha wrote:
I have a dataset 'data' and I want to see the effect of Box-Cox
transformation on it Interactively for different lambda values. I
already
got a look on function vis.boxcoxu in package TeachingDemos. But I
didn't find
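For a non-interactive starting point, MASS::boxcox() profiles the log-likelihood over a grid of lambda values; a sketch on simulated data (the model and lambda grid here are assumptions):

```r
library(MASS)   # boxcox() ships with R's recommended packages
set.seed(1)
x <- rexp(100) + 0.1
y <- (3 + x)^2 * exp(rnorm(100, sd = 0.1))   # positive response, roughly lambda = 0.5
fit <- lm(y ~ x)
# profile log-likelihood over candidate lambda values, without plotting
bc <- boxcox(fit, lambda = seq(-2, 2, 0.05), plotit = FALSE)
best <- bc$x[which.max(bc$y)]                # lambda with highest likelihood
```

With plotit = TRUE the same call draws the familiar profile curve, which is the usual way to eyeball the effect of different lambda values.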
On Mon, 22 Jan 2007, Charilaos Skiadas wrote:
On Jan 21, 2007, at 8:11 PM, John Fox wrote:
Dear Haris,
Using lapply() et al. may produce cleaner code, but it won't
necessarily
speed up a computation. For example:
X <- data.frame(matrix(rnorm(1000*1000), 1000, 1000))
y <- rnorm(1000)
Dear Haris,
My timings were on a 3 GHz Pentium 4 system with 1 GB of memory running Win
XP SP2 and R 2.4.1.
I'm no expert on these matters, and I wouldn't have been surprised by
qualitatively different results on different systems, but this difference is
larger than I would have expected. One
Dear R experts
I am looking for a package which gives me latin hyper cube samples
from the grid of values produced from the command expand.grid. Any
pointers to this issue might be very useful. Basically, I am doing the
following:
a <- (1:10)
b <- (20:30)
dataGrid <- expand.grid(a, b)
Now, is there a
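If installing a dedicated package is not an option, a Latin hypercube sample over the ranges of a and b can be sketched in base R (one shuffled draw per stratum in each dimension; the function name lhs_sample is made up):

```r
set.seed(1)
lhs_sample <- function(n, ranges) {
  # per dimension: one uniform point in each of n strata, in shuffled order
  sapply(ranges, function(r) {
    u <- (sample(n) - runif(n)) / n   # stratum k contributes a point in ((k-1)/n, k/n)
    r[1] + u * (r[2] - r[1])          # rescale to the dimension's range
  })
}
pts <- lhs_sample(10, list(a = c(1, 10), b = c(20, 30)))
dim(pts)   # 10 draws x 2 dimensions
```

Unlike expand.grid(), which enumerates the full grid, this returns n points such that every one-dimensional stratum is hit exactly once per dimension.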
Thank you Alain and Max for your swift responses.
It might be that I'm misunderstanding your responses, but aren't
you testing if there is a difference between the two full models?
What I want to know is whether the effect of a specific predictor
(x) differs between model1 and model2. I'm
Dear Brian,
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Prof
Brian Ripley
Sent: Monday, January 22, 2007 11:06 AM
To: Charilaos Skiadas
Cc: John Fox; r-help@stat.math.ethz.ch
Subject: Re: [R] efficient code. how to reduce running time?
Hello,
thanks for help and code.
We did a lot of work to speed up our function in R. We have a nested
loop; vectorizing is the fastest way, but then we get a very big matrix
and problems with memory. So we want to stay with loops and speed up with C.
My code is similar to this. (my_c is code from
Well,
I have no idea either. I can get correct answers for continuous functions but
incorrect for step functions.
Sigh, I have been trying to implement the integration in C for a long time.
Thank you for your answering.
Best,
Lynette
- Original Message -
From: Thomas Lumley [EMAIL
Dear R-user,
I am trying to use the R nlme function to fit a non linear mixed
effects model. The method has some problem to reach the convergence.
I am trying to understand causes of the problem by following step by
step evolution of the iterative algorithm (verbose=TRUE command).
However, I
On Jan 22, 2007, at 10:39 AM, John Fox wrote:
One thing that seems particularly
striking in your results is the large difference between elapsed
time and
user CPU time, making me wonder what else was going on when you ran
these
examples.
Yes, indeed there were a lot of other things
On Mon, 22 Jan 2007, Markus Schmidberger wrote:
Hello,
thanks for help and code.
We did a lot of work to speed up our function in R. We have a nested loop;
vectorizing is the fastest way, but then we get a very big matrix and
problems with memory. So we want to stay with loops and speed up
On Mon, 22 Jan 2007, Lynette wrote:
Well,
I have no idea either. I can get correct answers for continuous functions but
incorrect for step functions.
I have just tried using Rdqags from C for the function x0 and it worked
fine (once I had declared all the arguments correctly). The code is
This is to submit a commented example function for use in the data
argument to the bigglm(biglm) function, when you want to read the data
from a file (instead of a URL), or rescale or modify the data before
fitting the model. In the hope that this may be of help to someone out
there.
make.data
Dear all, especially to Thomas,
I have figured out the problem. For the step function, something was wrong with
my C code. I should use the expression ((x >= 0.25) && (x <= 0.75)) ? 2 : 1
instead of ((x >= 1/4) && (x <= 3/4)) ? 2 : 1. I have no idea why 0.25 makes a
difference from 1/4 in C. But now I can go ahead with
On Mon, 22 Jan 2007, Lynette wrote:
Dear all, especially to Thomas,
I have figured out the problem. For the step function, something was wrong with
my C code. I should use the expression ((x >= 0.25) && (x <= 0.75)) ? 2 : 1
instead of ((x >= 1/4) && (x <= 3/4)) ? 2 : 1. I have no idea why 0.25 makes a difference from
Nils Hoeller wrote:
for example.
BUT what can I do for dynamic m and sd?
I want something like integrate(dnorm(,0.6,0.15),0,1), with the first
dnorm parameter open for the
integration but fixed m and sd.
integrate(function(x)dnorm(x,0.1,1.2), 0, 1)
is a way of fixing additional
Hi,
I am new to R (and not really a stats expert) and am having trouble
interpreting its output. I am running a human learning experiment, with
6 scores per subject in both the pretest and the posttest. I believe I
have fitted the correct model for my data: a mixed-effects design, with
In the two solutions for the repeated measures problem given in the
original reply below, the F and p values given by aov() with the error
strata defined by Error() are different from those given by lme().
However, when one does the problem by hand using the standard split
plot model, the results
Hi
I am a newbie to R and am using the lm function to
fit my data.
This optimization is to be performed for around 45000
files not all of which lend themselves to
optimization. Some of these will and do crash.
However, how do I ensure that the program simply goes
to the next file in line
I am trying to implement a simple r-svm example using the iris data (only two
of the classes are taken and data is within the code). I am running into some
errors. I am not an expert on svm's. If any one has used it, I would appreciate
their help. I am appending the code below.
Thanks../Murli
Hi
Thanks for your response.
However I seem to be doing something wrong regarding
the try block resulting in yet another error described
below.
I have a function that takes in a file name and
does the fit for the data in that file.
Hence based on your input, I tried
try ( (fit = lm(y~x, data =
Hi Lalitha,
Use
try()
or
tryCatch()
Cheers
Andrew
On Mon, Jan 22, 2007 at 12:43:28PM -0800, lalitha viswanath wrote:
Hi
I am a newbie to R and am using the lm function to
fit my data.
This optimization is to be performed for around 45000
files not all of which lend themselves to
One option for processing very large files with R is split:
## split a large file into pieces
#--parameters: the folder, file and number of parts
FLD=/home/user/data
F=very_large_file.dat
parts=50
#---split
cd $FLD
fn=`echo $F | awk -F\. '{print $1}'` #file name without extension
Hi,
Say I have
z <- data.frame(y = runif(190),
x = runif(190),
f = gl(5, 38),
g = gl(19, 10))
plot <- xyplot(y ~ x | g,
data = z,
layout = c(5, 4),
groups = f,
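The xyplot() call above is truncated; a completed sketch (the key settings are assumptions, since the original message cuts off) would be:

```r
library(lattice)   # ships with R's recommended packages
z <- data.frame(y = runif(190),
                x = runif(190),
                f = gl(5, 38),    # grouping factor within panels
                g = gl(19, 10))   # conditioning factor: one panel per level
p <- xyplot(y ~ x | g,
            data = z,
            layout = c(5, 4),
            groups = f,
            auto.key = list(columns = 5))   # assumed: a legend for groups
print(p)   # trellis objects are drawn when printed
```

Creating the trellis object and printing it are separate steps, which is why assigning the result (as in the original snippet) does not draw anything by itself.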
Dear All
I would like to use rpart to obtain a regression tree for a dataset
like the following:
Y          X1 X2 X3 X4
5.500033   B  A  3  2
0.35625148 D  B  6  5
0.8062546  E  C  4  3
5.100014   C  A  3
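On simulated data of the same shape (numeric response, two factor and two integer predictors; the sizes are assumptions), a regression tree via rpart (which ships with R) can be sketched as:

```r
library(rpart)
set.seed(1)
dat <- data.frame(
  Y  = rnorm(80),
  X1 = factor(sample(LETTERS[1:5], 80, replace = TRUE)),
  X2 = factor(sample(LETTERS[1:3], 80, replace = TRUE)),
  X3 = sample(1:6, 80, replace = TRUE),
  X4 = sample(1:5, 80, replace = TRUE)
)
# method = "anova" requests a regression (not classification) tree
tree <- rpart(Y ~ ., data = dat, method = "anova",
              control = rpart.control(minsplit = 10))
```

printcp(tree) then shows the complexity table, and plot(tree); text(tree) draws the fitted tree.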
So, I take it, given that the use of a pipe is suggested for
sequential reading, that the standard approach to processing a data
frame is to load the entire file? Please correct me if wrong.
BTW, I am not interested in finding direct translations of SAS data
step statements to R, but instead in
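For sequential processing without loading everything, a file connection can be read in chunks; a sketch using a temporary file (the chunk size of 30 rows is arbitrary):

```r
# build a small CSV to read back in pieces
tmp <- tempfile(fileext = ".csv")
write.csv(data.frame(x = 1:100), tmp, row.names = FALSE)

con <- file(tmp, "r")
header <- readLines(con, n = 1)   # consume the header line once
total <- 0
repeat {
  chunk <- readLines(con, n = 30) # 30 data rows at a time
  if (length(chunk) == 0) break   # connection exhausted
  # ... process the chunk here instead of holding the whole file ...
  total <- total + length(chunk)
}
close(con)
total   # all 100 data rows seen, never more than 30 in memory
```

read.table() accepts an open connection in the same way (with its nrows argument), which is closer in spirit to a SAS data step than loading the full data frame.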
Benjamin Tyner said the following on 1/22/2007 3:18 PM:
Hi,
Say I have
z <- data.frame(y = runif(190),
x = runif(190),
f = gl(5, 38),
g = gl(19, 10))
plot <- xyplot(y ~ x | g,
data = z,
why not use lda (in MASS)? It has a CV=TRUE option; it does leave-one-out, though.
Or use randomForest.
If you have to use lrm, then the following code might help:
n.fold <- 5    # 5-fold cv
n.sample <- 50 # assumed 50 samples
s <- sample(1:n.fold, size = n.sample, replace = TRUE)
for (i in 1:n.fold) {
# create your
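The loop body above is truncated; a fuller, balanced variant of the same split (plain lm as a stand-in for the model, which is an assumption) might be:

```r
set.seed(1)
n.fold <- 5
n.sample <- 50
X <- matrix(rnorm(n.sample * 3), n.sample, 3)
y <- X %*% c(1, -1, 0.5) + rnorm(n.sample)
# shuffled balanced assignment: every fold gets exactly 10 samples
s <- sample(rep(1:n.fold, length.out = n.sample))
cv.err <- numeric(n.fold)
for (i in 1:n.fold) {
  train <- s != i
  fit  <- lm.fit(cbind(1, X[train, , drop = FALSE]), y[train])
  pred <- cbind(1, X[!train, , drop = FALSE]) %*% fit$coefficients
  cv.err[i] <- mean((y[!train] - pred)^2)   # test MSE on the held-out fold
}
mean(cv.err)   # cross-validated error estimate
```

Using sample(rep(...)) instead of sample(..., replace = TRUE) guarantees no fold is empty, which the original draw does not.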
Hello,
Does anyone know of an R version of loess that allows more than 4
predictors and/or allows the specification of offsets? For that matter,
does anyone know of _any_ version of loess that does either of the
things I mention?
Thanks,
Paul Louisell
650-833-6254
[EMAIL PROTECTED]
Research
I have been using the wonderful xtable package lately, in combination
with Sweave, and I have a couple of general questions along with a
more particular one.
I'll start with the particular question. I basically have a 1x3 array
with column names but no row names. I want to create a latex
Usually (that is, not limited to the R language), when an error occurs in
try, the stack is rolled back, so the variables defined inside try no
longer exist after calling try.
One non-elegant solution is:
fit <- NULL
try((fit <- lm(y ~ x, data = data_fitting)), silent = TRUE)
if (!is.null(fit)) {
coeffs =
TeamInfo
    TEAM TEAMNAME LEVEL WORKTIME BONUS
1  batch    sunan     B      135 9,818
2  batch   Chenqi     E      121 6,050
3  batch  jiangxu     F       97 4,189
4 online   zhouxi     F       63 2,720
5 online   chenhe     H       36 1,064
## try:
factor(TeamInfo$TEAM)
[1] batch batch batch
On Mon, 22 Jan 2007, Paul Smith wrote:
Dear All
I would like to use rpart to obtain a regression tree for a dataset
like the following:
Y          X1 X2 X3 X4
5.500033   B  A  3  2
0.35625148 D  B  6  5
0.8062546  E  C  4  3
On Mon, 22 Jan 2007, Louisell, Paul wrote:
Hello,
Does anyone know of an R version of loess that allows more than 4
predictors and/or allows the specification of offsets? For that matter,
does anyone know of _any_ version of loess that does either of the
things I mention?
Why would you
Am Montag, 22. Januar 2007 12:33 schrieb Matthieu Mourroux:
Hello,
I'd like to know if the D'Agostino test of normality is reliable,
The test is not consistent. The test statistic
can be used for testing the hypothesis of uniformity.
See the paper
Baringhaus, L.; Henze, N.
A test for
Hi,
we noticed there was an error in the arules package.
After reading the source code, we saw that the Dice similarity index was
miscalculated in the dissimilarity function: a copy-paste from the Jaccard
index was not corrected (the code uses 2*(a+b+c) in the denominator instead
of 2*a + b + c!).
After