Marie Sivertsen wrote:
I am relatively new to R, so maybe I am missing something, but I have now
tried as.Date and have problems understanding how it works (or
doesn't work, as it seems).
Brian D Ripley wrote:
On Thu, 22 Jan 2009, Terry Therneau wrote:
One idea is to use the as.date function,
Hi,
I am trying to plot the following data such that both variables y and z are plotted against x
(i.e. two lines on a single plot). As the x variable is not numeric, how do I
go about it? I'd appreciate it if any expert could help.
I know I use plot() followed by lines() to add another line to the plot. But
my problem is
stats787 wrote:
Hi,
I am trying to plot the following data such that both variables y and z are plotted against x
(i.e. two lines on a single plot). As the x variable is not numeric, how do I
go about it? I'd appreciate it if any expert could help.
I know I use plot() followed by lines() to add another line to the plot.
Hi experts,
I was graciously offered a function to enhance abline by restricting the
extent of the line to less than the plotting region. This seems a useful
idea, and it looked like the easiest way to program it was to set up a
clipping region with clip, draw the abline and then restore the
Jim Lemon wrote:
stats787 wrote:
Hi,
I am trying to plot the following data such that both variables y and
z are plotted against x
(i.e. two lines on a single plot). As the x variable is not numeric,
how do I
go about it? I'd appreciate it if any expert could help. I know I use plot()
followed by lines() to add
Bob,
Your point is well taken, but it also raises a number of issues
(post-install testing to name one) for which the R-devel list would be
more suitable. Could we move the discussion there?
-Peter
Muenchen, Robert A (Bob) wrote:
Hi All,
We have all had to face skeptical
Dear R useRs,
I have the function f1(x, y, z), which I want to integrate over x and y.
On the one hand I do this by first integrating over x and then over y; on
the other hand I do it the other way round, and I am wondering why I
don't get the same result each way.
z <- c(80, 20, 40, 30)
f1 <-
I wish to write a new link function for a GLM. R's glm routine does
not supply the loglog link. I modified the make.link function adding
the code:
}, loglog = {
linkfun <- function(mu) -log(-log(mu))
linkinv <- function(eta) exp(-exp(-eta))
mu.eta <- function(eta)
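For context, the same link can also be packaged as a standalone "link-glm" object rather than editing make.link; the sketch below is an assumed completion (the mu.eta line above is cut off; mu.eta is just the derivative of linkinv with respect to eta), not the poster's actual patch:

```r
# A self-contained log-log link sketch (an assumed completion).
# mu.eta(eta) = d/d(eta) exp(-exp(-eta)) = exp(-eta - exp(-eta)).
loglog_link <- function() {
  linkfun  <- function(mu)  -log(-log(mu))
  linkinv  <- function(eta) exp(-exp(-eta))
  mu.eta   <- function(eta) exp(-eta - exp(-eta))
  valideta <- function(eta) TRUE
  structure(list(linkfun = linkfun, linkinv = linkinv, mu.eta = mu.eta,
                 valideta = valideta, name = "loglog"),
            class = "link-glm")
}
ll <- loglog_link()
all.equal(ll$linkinv(ll$linkfun(0.3)), 0.3)  # round trip recovers mu
```

In recent versions of R a "link-glm" object with these components can be passed directly as a family's link.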
Dear John,
thank you for your answer. You are right, I also would not have expected
a divergent result.
I have double-checked it again. No, I got type-III tests.
When I use type II, I get the same results in SPSS as in 'Anova' (using
also type-II tests).
My guess was that the somehow weighted
On Fri, 23 Jan 2009, Garza, Hortencia [BEELINE] wrote:
I work in export compliance and would like to know if you would please
answer the questions below in regards to your application, R for Windows
1.3.1. The information I'm requesting is for export compliance.
I'm surprised that you would
Hello Deepayan,
Thanks for your help - it works great now. Indeed, there was also a
problem with the way I specified the curve. Once I made the changes
you suggested I ended up with an error:
//Error using packet 1: argument "subscripts" is missing, with no default
//which I fixed by adding
Can I call R from SAS? I tried the command below in SAS, but it's not
working...
OPTIONS XWAIT XSYNC;
X C:\Program Files\R\R-2.7.1\bin\R.exe --no-save -quiet
C:\TEMPO\program.r C:\TEMPO\program.log;
I had done this before and it was working perfectly... But now it isn't...
Sorry if this query was
More generally, if you want to do a two-dimensional integral, you will
do better to use a 2-D integration algorithm, such as those in package
'adapt'.
Also, these routines are somewhat sensitive to scaling, so if the
correct answer is around 5e-9, you ought to rescale. You seem to be
in the
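To illustrate the iterated alternative mentioned earlier in the thread, here is a hedged sketch with a made-up, well-behaved integrand; for smooth functions on a finite box the two orders of integration agree:

```r
# 2-D integral via nested integrate() calls; sapply() is needed because
# integrate() passes its integrand a whole vector of points.
f <- function(x, y) exp(-x^2 - y^2)
x_inner <- integrate(function(y)
  sapply(y, function(yy) integrate(function(x) f(x, yy), 0, 1)$value),
  0, 1)$value
y_inner <- integrate(function(x)
  sapply(x, function(xx) integrate(function(y) f(xx, y), 0, 1)$value),
  0, 1)$value
all.equal(x_inner, y_inner)  # both orders give the same answer here
```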
Got it working... It was just that the R code file's extension was missing in the
physical folder...
Shubha Karanth | Amba Research
Ph +91 80 3980 8031 | Mob +91 94 4886 4510
Bangalore * Colombo * London * New York * San José * Singapore *
www.ambaresearch.com
Thank you all, for the very helpful advice.
I want to estimate the parameters omega and beta of the gamma-NHPP model
with numerical integration.
One step in doing this is to compute the normalizing constant,
but as you see below I get different values.
## some reliability data,
The category widths are not equal. Surely, what you mean is:
x <- c(15, 25, rep(10/3, 3), rep(5/5, 5))
names(x) <- c('0-10', '10-20', '', '20-50', rep('', 3), '50-100', '', '')
barplot(x,space=0, xlab='Size', ylab='Count', border = NA,
col=c(1,2, rep(3,3), rep(4,5)))
:-)
Jon Anson
Jorge Ivan Velez
Each of the two integrals (g1, g2) seems to be divergent (or at least is
considered to be so by R :) )
Try this:
z <- c(80, 20, 40, 30)
f1 <- function(x, y, z) {dgamma(cumsum(z)[-length(z)], shape=x, rate=y)}
g1 <- function(y, z) {integrate(function(x) {f1(x=x, y=y, z=z)}, 0.1,
0.5,
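For what it's worth, integrate() makes divergence visible either through an error or a large abs.error; a small sketch with a classic divergent integrand (unrelated to the data above):

```r
# 1/x is not integrable on (0, 1]; integrate() normally stops with an
# error along the lines of "the integral is probably divergent".
res <- tryCatch(integrate(function(x) 1/x, 0, 1),
                error = function(e) conditionMessage(e))
is.character(res)  # TRUE: the call failed rather than returning a value
```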
I have created a 5-column matrix of the data and here have shown the
variable names of each column. I do not understand the error message given
when I try to run the cloud function. Thank you in advance for your help.
V1 <- data[,1]
V2 <- data[,2]
V3 <- data[,3]
V4 <- data[,4]
Gender <- data[,5]
You may want to consider a dotchart instead of a barplot. Then you can
distinguish between groups by using symbols, grouping, and labels rather than
depending on colors/shades of grey.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
Dear Felipe,
You will need to do some reading on factors. Although the labels of a
factor can be 'numeric', they are actually just strings representing
numbers. Internally, factors are coded as numbers: 1 for the first level,
2 for the second, and so on. That's why you run into trouble with
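A quick illustration of the point about internal codes (toy data):

```r
# The labels look numeric, but as.numeric() on a factor returns the
# internal level codes, not the labels.
f <- factor(c(10, 20, 20, 30))
as.numeric(f)                # 1 2 2 3  (the level codes)
as.numeric(as.character(f))  # 10 20 20 30  (what was probably wanted)
```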
glmnet 1.1-3 is on CRAN now.
glmnet fits lasso and elastic net regularization paths for squared
error, binomial and multinomial
models via coordinate descent. It is extremely fast and can work on
large scale problems.
See the paper: Regularized Paths for Generalized Linear Models via
or how about
lm(myData$response~as.matrix(myData[2:4]))
hth, david
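A toy, self-contained version of that one-liner (the data frame and its column names are made up here):

```r
# A matrix on the right-hand side of the formula fits one slope per column.
set.seed(1)
myData <- data.frame(response = rnorm(10),
                     a = rnorm(10), b = rnorm(10), c = rnorm(10))
fit <- lm(myData$response ~ as.matrix(myData[2:4]))
length(coef(fit))  # intercept plus three slopes
```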
Juliet Hannah wrote:
Hi All,
I had posted a question on a similar topic, but I think it was not
focused. I am posting a modification that I think better accomplishes
this.
I hope this is ok, and I apologize if it is
Steven Mortimer wrote:
I have created a 5-column matrix of the data and here have shown the
variable names of each column. I do not understand the error message given
when I try to run the cloud function. Thank you in advance for your help.
V1 <- data[,1]
V2 <- data[,2]
V3 <- data[,3]
V4 <- data[,4]
Karun Gahlawat wrote:
Hi!
Trying to build R-2.8.1. While configuring, it throws an error:
./configure
checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... yes
checking whether iconv accepts UTF-8, latin1 and UCS-*... no
checking
Thibault Grava wrote:
Hello,
I am working on extracting data from a sound recording (the fee-bee song of the black-capped chickadee). I obtained a two-column matrix with x = time (s) and y = frequency (kHz). The part of the recording that interests me is where the frequency is stable. Does somebody
Hello,
We have created an interface between R and Hadoop so that the user
can, after a fashion, interact with very large datasets
using the MapReduce programming model. We also use IBM's TSpaces to
provide a shared-memory implementation that can be
accessed via R (somewhat like
I have been using R and Tinn-R for the last few weeks. While I prefer Tinn-R to
Emacs as my text editor, I have been unable to configure Tinn-R so that the R
output is sent back to Tinn-R.
For example, if I create a data vector, e.g. pizza <- c(1,2,3), and
then insert the
Barry Rowlingson wrote:
2009/1/23 Jorge Ivan Velez jorgeivanve...@gmail.com:
See ?.Last.value
Or put this inside your Rprofile:
makeActiveBinding("prev", function(...) .Last.value, .GlobalEnv)
I like this 'cause I don't need to include the () for prev to work
properly.
Dear All,
A very quick newbie question sorry.
I am running R on Windows and would like to know if I can store a .R file in
a location that it gets run on start up. Or is there some file I can alter
please?
Regards
Glenn
Dear Nils,
I don't currently have a copy of SAS on my computer, so I asked Michael
Friendly to run the problem in SAS and he kindly supplied the following
results:
--- snip
The SAS System
1
Dear Thierry:
Thanks for the factor/level advice. I was actually reading about it last night
and playing further with my data I was able to make it work with the code below:
options(scipen=3)
bargraph <- qplot(factor(Week, levels=c(27:52, 1:26)), FryPassage,
This is a known issue: the documentation for clip notes that some plotting
functions reset the clipping region and some don't. abline is apparently
one of those that plots first, then resets the clipping region (so the first
time it doesn't respect the region, but does afterwards). The
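In practice the workaround is simply to call clip() immediately before each abline(); a minimal sketch (coordinates invented):

```r
# clip() restricts drawing to a sub-rectangle of the plot region, but
# abline() resets the clipping region after it draws, so clip() must
# precede every abline() call.
plot(1:10, 1:10, type = "n")
clip(2, 8, 2, 8)                    # x1, x2, y1, y2 in user coordinates
abline(a = 0, b = 1)                # drawn only inside the clipped box
do.call(clip, as.list(par("usr")))  # restore the full plot region
```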
See ?Startup
Uwe Ligges
glenn wrote:
Dear All,
A very quick newbie question sorry.
I am running R on Windows and would like to know if I can store a .R file in
a location that it gets run on start up. Or is there some file I can alter
please?
Regards
Glenn
If you do ?Startup at the command line, you will get a help file that will give
you several options for doing this.
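As a concrete sketch, a file named .Rprofile in your home (or startup) directory can define .First(), which R runs at the start of each session; the sourced path below is hypothetical:

```r
# Example .Rprofile contents -- .First() runs automatically at startup.
.First <- function() {
  message("Loading personal startup code")
  # source("C:/my_scripts/startup.R")  # hypothetical path; adjust to taste
}
```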
Duncan Murdoch murdoch at stats.uwo.ca writes:
shell("set JAVA_HOME")
JAVA_HOME=C:\Program Files\Java\jre6
That doesn't last beyond the shell call, as far as I know. It starts a
process to run the shell, sets the environment variable in that process,
then the process dies
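If the variable only needs to be visible to the R session itself (and to processes R launches afterwards), Sys.setenv() sets it for the rest of the session; a sketch using the same path:

```r
# Set the variable in R's own environment rather than in a transient shell.
Sys.setenv(JAVA_HOME = "C:\\Program Files\\Java\\jre6")
Sys.getenv("JAVA_HOME")  # remains set for the rest of this session
```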
On 24/01/2009 3:56 PM, Dieter Menne wrote:
Duncan Murdoch murdoch at stats.uwo.ca writes:
shell("set JAVA_HOME")
JAVA_HOME=C:\Program Files\Java\jre6
That doesn't last beyond the shell call, as far as I know. It starts a
process to run the shell, sets the environment variable in that
Dear John,
thank you again! You replicated the type III result I got in SPSS! When I
calculate Anova() type II:
Univariate Type II Repeated-Measures ANOVA Assuming Sphericity
SS num Df Error SS den Df F Pr(>F)
between 4.8000 1 9. 8 4.2667
rak1304 rkeyes87 at hotmail.com writes:
I am new to R and I'm having
some trouble with the following question...
I'm starting to study stats and R again after almost a year, so I
thought this is interesting. I think I have the answer. Here is how I
arrived at it:
Generate 100 standard normal
Nils Skotara wrote:
Dear John,
thank you again! You replicated the type III result I got in SPSS! When I
calculate Anova() type II:
Univariate Type II Repeated-Measures ANOVA Assuming Sphericity
SS num Df Error SS den Df F Pr(>F)
between 4.8000 1
Monte,
For a list of online sources that may be useful, go to
http://www.uvm.edu/~dhowell/methods/Websites/Archives.html
and check out some of the material referenced there. Clay Helberg's site
is particularly helpful. Unfortunately it is virtually impossible to
keep links current, so some are
On 23/01/2009, at 8:59 PM, Jim Lemon wrote:
Rolf Turner wrote:
...
It always gets fussy and fiddly whenever legal issues arise. It
would
be nice if there
were no such thing as ``intellectual property'' (which has always
seemed to me to be
an oxymoron) and no such thing as lawyers.
Hey,
Hello -
I need to read in some tables that are embedded within data files like this:
line 1
line 2
data table
01000
10110
00011
end table
line 3
line 4
Is there any way to read just the data by telling an input device to start
reading when it encounters the keyword "data table" and stop reading
Here is one way to do it:
x <- readLines(textConnection("line 1
+ line 2
+ data table
+ 01000
+ 10110
+ 00011
+ end table
+ line 3
+ line 4"))
start <- grep("data table", x)
end <- grep("end table", x)
# now read in only the data between the limits
input <- read.table(textConnection(x[(start + 1):(end -
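The last line above is cut off; here is a complete, self-contained version of the same idea (I assume the intended subscript ran to the line before "end table", and I read the rows as character so the leading zeros survive):

```r
x <- readLines(textConnection("line 1
line 2
data table
01000
10110
00011
end table
line 3
line 4"))
start <- grep("data table", x)
end   <- grep("end table", x)
# keep only the rows strictly between the two marker lines
input <- read.table(textConnection(x[(start + 1):(end - 1)]),
                    colClasses = "character")
input  # three rows: 01000, 10110, 00011
```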
Dear Peter and Nils,
In my initial message, I stated misleadingly that the contrast coding didn't
matter for the type-III tests here since there is just one
between-subjects factor, but that's not right: The between type-III SS is
correct using contr.treatment(), but the within SS is not. As is
Hi Jesse,
Here is another option using a function whose argument is the file's
name:
# A function
rmf <- function(filename){
x <- readLines(filename, warn=FALSE)
res <- which(x == 'data table' | x == 'end table')
x <- x[seq(res[1]+1, res[2]-1)]
mydata <- read.table(textConnection(x))
closeAllConnections()
mydata
Dear R-helpers,
I wonder if you can give me advice about the best way to use help().
(1) If I type ?normal because I forgot the name dnorm() I get a long
list of relevant pages. Getting to right page is laborious.
(2) If I remember dnorm() and want to be reminded of the call, I also
get a
On 25/01/2009, at 2:33 PM, Michael Kubovy wrote:
Dear R-helpers,
I wonder if you can give me advice about the best way to use help().
(1) If I type ?normal because I forgot the name dnorm(), I get a long
list of relevant pages. Getting to the right page is laborious.
(2) If I remember dnorm()
Dear Uwe and folks,
I connect to CRAN daily and don't have problems with most packages,
but some do fail. The installed directory is:
C:\Documents and Settings\R\library\downloaded_packages
I have not set a library path other than the R default. I tried to download
lme4 and packages that
Dear all,
Is there a simple way to strip the leading 0's from R output? For example,
I want
Data <- data.frame(x=rnorm(10), y=x*rnorm(10), z = x+y+rnorm(10))
cor(Data)
to give me
x y z
x 1.000 -.1038904 -.3737842
y -.1038904 1.000 .4414706
z -.3737842
Hello,
I've started to learn about neural networks, and the first examples
I've seen are the implementation of an OR logic gate, as well as the
AND gate. I've implemented it as a perceptron with no hidden layers,
and I've done it this way because so far it is the only way I've learned.
The R file
The only comprehensive way to do this would be to change R's internal
print mechanisms. (Note that changing '0.' to '.' breaks the layout;
changing '0.' to ' .' would be better.)
But you haven't told us why you would want to do this. Leaving off
leading zeroes makes output harder to read for
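Following the ' .' suggestion, one cosmetic post-processing sketch (my own helper, not a built-in R option) that keeps column widths intact:

```r
# Replace a leading "0." with " ." in the formatted strings; the sign is
# preserved and every string keeps its original width.
strip0 <- function(x, digits = 4) {
  s <- format(round(x, digits), nsmall = digits)
  sub("^(\\s*)(-?)0\\.", "\\1 \\2.", s)
}
strip0(c(1, -0.1038904, 0.4414706))
```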
??'normal distribution' seems to do rather well. If you know that you
want results from 'stats' you can confine the search to that package.
'normal' is such an overloaded word that searching for it is going to
be overwhelming. At least 'Normal' is AFAIR only used in one sense in
statistics.