Dear R users,
I need to generate random integer(s) in a range (say, between 1 and
100) in R.
Any help is deeply appreciated.
Kind Regards
Seyit Ali
Dr. Seyit Ali KAYIS
Selcuk University,
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of skayis selcuk
Sent: Saturday, April 11, 2009 11:24 PM
To: r-help@r-project.org
Subject: [R] Generating random integers
Hello
I have a problem like this:
Glass$RI[is.outlier(Glass$RI)]
Error: could not find function is.outlier
Which command should I run, or which package should I load, so that is.outlier is available?
--
View this message in context:
http://www.nabble.com/How-may-I-add-%22is.outer%22-method-tp23006216p23006216.html
Sent from the R help mailing list archive at Nabble.com.
stephalope wrote:
Hi there,
I am plotting relative warp scores (equivalent to pca scores) and I want to
label (color code and shape) the points by group. I can't figure out how to
do this beyond simple plotting.
plot(RW1, RW2);
Do I need to make vectors of each group and then plot them
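No separate vectors per group are needed; in base R the usual idiom is to index colour and symbol vectors by the grouping factor. A minimal sketch (the names RW1, RW2, and group are assumed stand-ins for the real data):

```r
# Hypothetical example data standing in for the relative warp scores
set.seed(1)
RW1 <- rnorm(30)
RW2 <- rnorm(30)
group <- factor(rep(c("A", "B", "C"), each = 10))

# one colour and one plotting symbol per group level, indexed by the factor
cols <- c("red", "blue", "darkgreen")[as.integer(group)]
pchs <- c(16, 17, 15)[as.integer(group)]

pdf(NULL)                       # null device so nothing is written to disk
plot(RW1, RW2, col = cols, pch = pchs)
legend("topright", legend = levels(group),
       col = c("red", "blue", "darkgreen"), pch = c(16, 17, 15))
dev.off()
```

A single plot() call then colour-codes and shape-codes all points at once.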
I checked with my KDE 4.2 (Mandriva 2009 system, kde.org binaries)
with no font rendering issues in running demo(graphics).
It is quite possible that pango was not installed with Fedora (as it
is unnecessary for KDE 4.2 systems). Installing the relevant rpm may
fix things.
The packaging of the
floor(runif(1000, 1,101))
On Sun, Apr 12, 2009 at 2:24 AM, skayis selcuk ska...@selcuk.edu.tr wrote:
Dear R users,
I need to generate random integer(s) in a range (say, between 1 and
100) in R.
Any help is deeply appreciated.
Kind Regards
Seyit Ali
I am not quite sure how you define your outlier, but the definition
that I am familiar with is that an outlier lies more than 1.5 times the
interquartile range (IQR) above the 3rd quartile or below the 1st quartile
(this is the default for the whiskers in boxplot). This can be easily found
by
x[(x
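A self-contained sketch of that boxplot-style rule, on made-up data (quantile's default definition of the quartiles is assumed):

```r
set.seed(42)
x <- c(rnorm(100), 10)                 # one obvious outlier at 10
q <- quantile(x, c(0.25, 0.75))        # 1st and 3rd quartiles
fence <- 1.5 * IQR(x)
outliers <- x[x < q[1] - fence | x > q[2] + fence]
```

The logical subscript keeps exactly the points beyond either whisker.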
Sorry about some mistakes in the code. The correct one is:
library(rimage)
image   # printing the object shows: size: 458 x 372, type: rgb
laplacian_result <- normalize(laplacian(image))
postscript("laplacian_result.eps")
plot.imagematrix(laplacian_result)
dev.off()
Talita
2009/4/11 Talita Perciano
I'm generating some images in R to put into a document that I'm producing
using Latex. This document in Latex is following a predefined model, which
does not accept compilation with pdflatex, so I have to compile with latex
-> dvi -> pdf. Because of that, I have to generate the images in R with
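For the latex -> dvips -> pdf route, one option (a sketch; file name and figure size are arbitrary) is to write EPS directly with postscript(), which \includegraphics can pick up:

```r
# EPS output suitable for inclusion in a latex -> dvi -> pdf build
f <- file.path(tempdir(), "figure.eps")
postscript(f, horizontal = FALSE, onefile = FALSE,
           paper = "special", width = 5, height = 4)
plot(1:10, (1:10)^2, type = "b", xlab = "x", ylab = "x^2")
dev.off()
```

horizontal = FALSE, onefile = FALSE and paper = "special" give a tightly bounded single-figure EPS rather than a full rotated page.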
I had a similar problem when I used family=quasibinomial with my data, but the
problem disappeared when I used family=binomial. I assumed that Douglas Bates
et al. had amended the lmer program to detect over-dispersion, so that it is no
longer necessary to specify its possible presence with
On 31/03/2009 12:53 PM, Duncan Murdoch wrote:
On 3/31/2009 12:29 PM, hadley wickham wrote:
col2rgb("#0079", TRUE)
      [,1]
red      0
green    0
blue     0
alpha  121
col2rgb("#0080", TRUE)
      [,1]
red    255
green  255
blue   255
alpha    0
col2rgb("#0081", TRUE)
      [,1]
red
Hi,
I would like to run random Forest classification algorithm and check the
accuracy of the prediction according to different training and testing
schemes. For example, extracting 70% of the samples for training and the
rest for testing, or using 10-fold cross validation scheme.
How can I do
Hi Chysanthi,
check out the randomForest package, with the function randomForest. It has a CV
option. Sorry for not providing you with a lengthier response at the moment but
I'm rather busy on a project. Let me know if you need more help.
Also, to split your data into two parts- the training
you need to include in your code something like:
tree <- rpart(result ~ ., data, control = rpart.control(xval = 10))
The xval = 10 requests 10-fold CV.
Best,
Pierre
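To spell out the two-part split mentioned above, here is a minimal base-R sketch of a 70/30 training/test partition (iris is used as a stand-in data set):

```r
set.seed(123)
n <- nrow(iris)
train_idx <- sample(n, size = floor(0.7 * n))  # 70% of the row indices
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]
# then e.g. fit on `train` (randomForest(Species ~ ., data = train))
# and predict() on `test`
```

Sampling row indices without replacement guarantees the two parts are disjoint and together cover the whole data set.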
From: Chrysanthi A. chrys...@gmail.com
To: r-help@r-project.org
Sent: Sunday, 12 April 2009, 17:26
On Sun, Apr 12, 2009 at 1:21 PM, jim holtman jholt...@gmail.com wrote:
floor(runif(1000, 1,101))
I need to generate random integer(s) in a range (say, between 1 and
100) in R.
Another way:
sample(1:100, 1000, replace = TRUE)
Paul
One variable contains values such as 1.30 (one hour and thirty minutes) and
1.2 (which is supposed to be 1.20, one hour and twenty minutes). I would
like to convert this to a minutes variable, so that 1.20 becomes 80
minutes. How?
__
R-help@r-project.org mailing list
Hello list:
I generate by simulation (using different procedures) two sample vectors of
size N, each corresponding to a discrete variable, and I want to test whether
these samples can be considered as having the same probability distribution
(which is unknown). What is the best test for that?
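For two discrete samples, one standard option is a chi-squared test of homogeneity on the pair of frequency tables. A sketch with simulated data (for small expected counts, chisq.test(..., simulate.p.value = TRUE) or fisher.test would be preferable):

```r
set.seed(7)
s1 <- sample(1:5, 200, replace = TRUE)
s2 <- sample(1:5, 200, replace = TRUE)

lev <- 1:5                                  # common support for both samples
tab <- rbind(table(factor(s1, levels = lev)),
             table(factor(s2, levels = lev)))
res <- chisq.test(tab)   # H0: both samples come from one distribution
```

Note that ks.test assumes continuous distributions, so with discrete data its p-values are only approximate; the table-based tests avoid that issue.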
On 12 April 2009 at 21:00, Peter Kraglund Jacobsen wrote:
| One variable contains values (1.30 - one hour and thirty minutes, 1.2
| (which is supposed to be 1.20 - one hour and twenty minutes)). I would
| like to convert to a minute variable so 1.2 is converted to 80
| minutes. How?
The littler
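A base-R sketch of the conversion being asked for, treating the digits after the decimal point as minutes:

```r
x <- c(1.30, 1.2, 0.45, 2.05)
hours <- floor(x)
mins  <- round((x - hours) * 100)   # fractional part read as minutes
total <- hours * 60 + mins
total   # 90 80 45 125
```

The round() matters: 1.2 - 1 is not exactly 0.2 in floating point, so truncating instead of rounding would occasionally be off by one minute.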
Dear stats experts:
Me and my little brain must be missing something regarding bootstrapping. I
understand how to get a 95%CI and how to hypothesis test using bootstrapping
(e.g., reject or not the null). However, I'd also like to get a p-value from
it, and to me this seems simple, but it seems
There is also the train function in the caret package. The
trainControl function can be used to try different resampling schemes.
There is also a package vignette with details.
Max
On Apr 12, 2009, at 12:26 PM, Chrysanthi A. chrys...@gmail.com
wrote:
Hi,
I would like to run random
Dear all,
I am a newbie to R and practising at the moment.
Here is my problem:
I have a programme with 2 loops involved.
The inner loop gets me matrices as output and saves all the values for me.
Now once I wrote a 2nd loop around the other loop in order to
repeat the inner loop a couple
I am really new to R and ran across a need to take a data matrix and
calculate an approximation of the first derivative of the data. I am more
than happy to do an Excel kind of calculation (deltaY/deltaX) for each
pair of rows down the matrix, but I don't know how to get R to do that kind
of
Hi,
I am trying to figure out the observed acceptance rate and M, using generalised
rejection sampling to generate a sample from the posterior distribution for p.
I have been told my code doesn't work because I need to take the log of the
expression for M, evaluate it and then
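The original code is not shown, but here is a sketch of the log-scale advice for a Beta-like posterior in p (alpha = 3 and beta = 5 are assumed example values): the expression for M is evaluated at the mode entirely on the log scale, so no huge powers are ever formed.

```r
set.seed(1)
alpha <- 3; beta <- 5                      # assumed example values
log_target <- function(p) (alpha - 1) * log(p) + (beta - 1) * log(1 - p)

# The unnormalised density peaks at the mode (alpha-1)/(alpha+beta-2),
# so log M is simply the log-density evaluated there.
p_mode <- (alpha - 1) / (alpha + beta - 2)
logM <- log_target(p_mode)

# Rejection sampling with Uniform(0, 1) proposals, accepting on the log scale
draws <- numeric(0)
while (length(draws) < 1000) {
  p <- runif(1)
  if (log(runif(1)) < log_target(p) - logM) draws <- c(draws, p)
}
mean(draws)   # should be close to alpha/(alpha+beta) = 0.375
```

Comparing log(u) with the log acceptance ratio is equivalent to the usual u < target/M test but immune to overflow in the numerator or denominator.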
Dear unbekannt;
The construction that would append a number to a numeric vector would
be:
vec <- c(vec, number)
You can create an empty vector with vec <- c() or vec <- NULL
--
David Winsemius
On Apr 12, 2009, at 2:10 PM, unbekannt wrote:
Dear all,
I am a newbie to R and practising at
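One caveat worth adding: c(vec, number) inside a loop re-copies the vector on every iteration, so when the final length is known in advance, pre-allocating is the usual idiom. A small sketch of both:

```r
# growing (fine for small n; O(n^2) copying for large n)
vec <- c()
for (i in 1:5) vec <- c(vec, i^2)

# pre-allocated equivalent
vec2 <- numeric(5)
for (i in 1:5) vec2[i] <- i^2
```

Both produce 1 4 9 16 25; only the cost differs as n grows.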
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Mary Winter
Sent: Sunday, April 12, 2009 1:39 PM
To: r-help@r-project.org
Subject: [R] taking the log then later exponentiate the result query
Hi,
I am trying to
Johan Jackson wrote:
Dear stats experts:
Me and my little brain must be missing something regarding bootstrapping. I
understand how to get a 95%CI and how to hypothesis test using bootstrapping
(e.g., reject or not the null). However, I'd also like to get a p-value from
it, and to me this seems
delta(index) is identically 1, so taking first differences is all that
is needed. If the data frame's name is df then:
df$dacflong_dx <- c(NA, diff(df$acflong)) # the slash would not be a
legal character in a variable name unless you jumped through some
hoops that appear entirely without
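Spelled out on a small data frame (the column names x and y are placeholders), the Excel-style deltaY/deltaX down the rows is:

```r
df <- data.frame(x = seq(0, 1, by = 0.1))
df$y <- df$x^2
# forward difference; NA in the first row since there is no previous point
df$dydx <- c(NA, diff(df$y) / diff(df$x))
```

diff() returns one fewer element than its input, so padding with NA keeps the derivative column the same length as the data frame.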
Your problem is that with the alpha and beta you've specified
(((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2))
is
Inf/Inf
which is NaN.
On Sun, Apr 12, 2009 at 5:39 PM, Mary Winter statsstud...@hotmail.com wrote:
Hi,
I am trying to figure out the observed
Hi Johan,
Interesting question. I'm (trying) to write a lecture on this as we
speak. I'm no expert, but here are my two cents.
I think that your method works fine WHEN the sampling distribution
doesn't change its variance or shape depending on where it's centered.
Of course, for normally, t-, or
However, estimating derivatives from differencing data amplifies
minor errors. Less noisy estimates can be obtained by first smoothing
and then differentiating the smooth. The fda package provides
substantial facilities for this.
Hope this helps.
Spencer Graves
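Staying in base R, smooth.spline() plus predict(..., deriv = 1) illustrates the smooth-then-differentiate idea on simulated data (the fda package offers far more control over the smoothing):

```r
set.seed(2)
x <- seq(0, 2 * pi, length.out = 200)
y <- sin(x) + rnorm(200, sd = 0.05)     # noisy signal
fit <- smooth.spline(x, y)              # smooth first...
d1 <- predict(fit, x, deriv = 1)$y      # ...then differentiate the smooth
# d1 estimates cos(x), far less noisy than diff(y)/diff(x) would be
```

Differencing the raw y would amplify the noise by roughly 1/deltaX; differentiating the fitted spline sidesteps that.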
David
On Apr 12, 2009, at 3:09 PM, jose romero wrote:
Hello list:
I generate by simulation (using different procedures) two sample
vectors of size N, each corresponding to a discrete variable and I
want to text if these samples can be considered as having the same
probability distribution
Hi,
As part of an R code assignment I have been asked to find a quantitative
procedure for assessing whether or not the data are normal.
I have previously used the graphical procedure via the qqnorm command.
Any help/tips would be greatly appreciated as to how I should start going
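A common quantitative starting point (one of several formal tests; ks.test against pnorm or the nortest package are alternatives) is the Shapiro-Wilk test, which is in base R:

```r
set.seed(5)
x_norm <- rnorm(100)
res <- shapiro.test(x_norm)   # H0: the data are normally distributed
res$p.value                   # a large p-value: no evidence against normality
```

It complements qqnorm(): the plot shows where the departure is, the test quantifies whether it is more than chance.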
Back in 2005 I had been doing most of my work in Mathcad for over 10 years.
For a number of reasons I decided to switch over to R. After much effort and
help from you folks I am finally thinking in R rather than thinking in
Mathcad and trying to translate to R. Anyway, the only task I still use
David,
Thank you!
-Chris
--
View this message in context:
http://www.nabble.com/First-Derivative-of-Data-Matrix-tp23012026p23015941.html
Sent from the R help mailing list archive at Nabble.com.
There is really nothing wrong with this approach, which differs
primarily from the permutation test in that sampling is with
replacement instead of without replacement (multinomial vs. multiple
hypergeometric).
One of the issues that permutation tests don't have is bias in the statistic.
In
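For concreteness, a sketch of the without-replacement (permutation) version being contrasted here, with two made-up groups and the difference in means as the statistic:

```r
set.seed(9)
g1 <- rnorm(20, mean = 0)
g2 <- rnorm(20, mean = 1)
obs <- mean(g2) - mean(g1)          # observed statistic
pooled <- c(g1, g2)

# shuffle group labels: sampling WITHOUT replacement from the pooled data
perm <- replicate(2000, {
  idx <- sample(40, 20)
  mean(pooled[-idx]) - mean(pooled[idx])
})
p_val <- mean(abs(perm) >= abs(obs))   # two-sided permutation p-value
```

Swapping sample(40, 20) for sample(40, 20, replace = TRUE) on each group would give the bootstrap (multinomial) analogue discussed above.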
Hi,
I am trying to figure out exactly what the bootcov() function in the Design
package is doing within the context of clustered data. From reading the
documentation/source code it appears that using bootcov() with the cluster
argument constructs standard errors by resampling whole clusters of
www.rseek.org
normality test
On Sun, Apr 12, 2009 at 8:45 PM, Henry Cooper henry.1...@hotmail.co.uk wrote:
Hi,
As part of an R code assignment I have been asked to find a quantitative
procedure for assessing whether or not the data are normal.
I have previously used the graphical
Philippe Grosjean wrote:
..I would be happy to receive your comments and suggestions to improve
this document.
All the best,
PhG
LaTeX is my personal tool of choice and the vector format I use most often
is http://sourceforge.net/projects/pgf/ PGF (Portable Graphics Format),
At 08:00 PM 4/12/2009, Tom La Bone wrote:
Back in 2005 I had been doing most of my work in Mathcad for over 10 years.
For a number of reasons I decided to switch over to R. After much effort and
help from you folks I am finally thinking in R rather than thinking in
Mathcad and trying to
Dear R friends,
I have a data frame and I need to get the time interval between two columns.
The times are recorded on a 24-hour clock. My data frame is called
version.one.
my commands are:
t.s.one <- paste(version.one[, 9])
t.s.two <- paste(version.one[, 61])
x <- strptime(t.s.one, format = "%H:%M")
x
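Once both columns parse, difftime() gives the interval. A sketch with literal times (note that strptime() without a date puts everything on today's date, so intervals that cross midnight come out negative and need a day added):

```r
t1 <- strptime(c("09:15", "23:50"), format = "%H:%M")
t2 <- strptime(c("10:45", "00:10"), format = "%H:%M")
d  <- as.numeric(difftime(t2, t1, units = "mins"))
d[d < 0] <- d[d < 0] + 24 * 60   # fix pairs that cross midnight
d   # 90 20
```

If the real data also carry a date column, pasting date and time together before strptime() avoids the midnight adjustment entirely.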
Dear all,
I have a dataset with students nested in schools, and schools in turn nested
in districts. The data are explicitly nested, as in previous examples.
In my case, I don't care about the variance between schools or districts; I
just want to assess the effect of different teaching
Is there anything available off the shelf in R for this? I don't think so.
It is, however, an interesting problem and there are the tools there to handle
it. Basically you need to create a class for each kind of measure you want to
handle (length, area, volume, weight, and so on) and then
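A toy version of that class-per-measure idea (all names here are illustrative, not an existing package): an S3 class for lengths with a conversion generic.

```r
# constructor for a "metres" measure
metres <- function(x) structure(x, class = "metres")

# conversion generic plus a method for the metres class
as_feet <- function(x) UseMethod("as_feet")
as_feet.metres <- function(x) unclass(x) * 3.28084

# a print method so measures display with their unit
print.metres <- function(x, ...) cat(unclass(x), "m\n")

d <- metres(10)
as_feet(d)   # 32.8084
```

Each additional measure (area, weight, ...) would get its own constructor and methods, and arithmetic between incompatible classes can be made an error via Ops methods.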
Thanks a lot for your help..
But, using this function, how can I identify the size of the training set? And
how will I identify my data? There is no example and I am a bit
confused.
Many thanks,
Chrysanthi
2009/4/12 Max Kuhn mxk...@gmail.com
There is also the train function in the caret
Hi Pierre,
Thanks a lot for your help..
So, using that script, I just separate my data into two parts, right? To
use 70% of the data as the training set and the rest as the test set, should I
multiply n by 0.70 (in this case)?
Many thanks,
Chrysanthi
2009/4/12 Pierre Moffard
Mitra;
On Apr 12, 2009, at 8:09 PM, Mitra Jazayeri wrote:
Dear R friends,
I have a data frame, I need to get a time interval between the two
columns.
The times are recorded in 24 hour clock. My data frame is called
version.one.
my commands are:
t.s.one <- paste(version.one[, 9])
I would like to trace functions, displaying their arguments and return
value, but I haven't been able to figure out how to do this with the
'trace' function.
After some thrashing, I got as far as this:
fact <- function(x) if (x < 1) 1 else x * fact(x - 1)
tracefnc <- function()
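One way to get the arguments and return value printed (a sketch; returnValue() requires R >= 3.2.0) is to pass tracer and exit expressions to trace():

```r
fact <- function(x) if (x < 1) 1 else x * fact(x - 1)

trace(fact,
      tracer = quote(cat("fact called with x =", x, "\n")),
      exit   = quote(cat("fact returning", returnValue(), "\n")),
      print  = FALSE)               # suppress the default "Tracing..." lines
res <- fact(3)   # one enter/exit pair is printed per recursive call
untrace(fact)
```

The tracer runs on entry with the call's frame in scope (hence plain x works), and the exit expression runs via on.exit, where returnValue() can see the value being returned.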