Johannes Huesing wrote:
chaogai chao...@xs4all.nl [Thu, Mar 05, 2009 at 07:04:19PM CET]:
I'm having similar experiences on my Acer Aspire One. Everything works
well. The only thing that takes a lot of time is compiling R, if you are
in the habit of doing so.
On the Fedora version
I think the interaction is not so strong anymore if you do what glm
does: use a logit transformation.
testdata <-
matrix(c(rep(0:1, times = 4), rep(c("FLC", "FLC", "free", "free"), times = 2),
rep(c("no", "yes"), each = 4), 3, 42, 1, 44, 27, 20, 3, 42), ncol = 4)
colnames(testdata) <- c("spot", "constr", "vernalized", "Freq")
testdata <-
Steven Lubitz slubitz1 at yahoo.com writes:
x <- data.frame(item1 = c(NA, NA, 3, 4, 5), item2 = c(1, NA, NA, 4, 5), id = 1:5)
y <- data.frame(item1 = c(NA, 2, NA, 4, 5, 6), item2 = c(NA, NA, 3, 4, 5, NA), id = 1:6)
merge(x, y, by = c("id", "item1", "item2"), all.x = TRUE, all.y = TRUE) # my rows are duplicated
and the NA values are
Lars Bishop lars52r at gmail.com writes:
I'd appreciate your help on this. Do you know of any package that can be
used to solve optimization problems subject to general *non-linear* equality
constraints?
Package DEoptim
Dieter
John Poulsen jpoulsen at ufl.edu writes:
I know I am forgetting to do something silly. I typed coordinates in
vectors (as below) but when I call them in R they come out as integers,
and I want them to be real numbers. I have tried using as.numeric,
as.real, etc... but they are still read
I'd recommend using this script instead. It uses screen to let
R and Vim communicate; it works well.
http://www.vim.org/scripts/script.php?script_id=2551
Best,
-Jose
--
Jose Quesada, PhD.
Max Planck Institute,
Center for Adaptive Behavior and cognition,
Berlin
Dear List,
I am trying to solve a problem: I have approximately 100 Excel
spreadsheets, each with approximately 4 sheets, that I would like to
download and import into R for analysis.
Unfortunately I realized (I also sent an email to the author of
xlsReadWrite()) that read.xls() doesn't
Could somebody share some tips on implementing multivariate integration and
partial differentiation in R?
For example, for a trivariate joint cumulative distribution function
F(x,y,z), how does one differentiate with respect to x and get the bivariate
distribution (probability density
Jacopo Anselmi wrote:
Dear List,
I am trying to solve a problem: I have approximately 100 Excel
spreadsheets, each with approximately 4 sheets, that I would like to
download and import into R for analysis.
Unfortunately I realized (I also sent an email to the author of
xlsReadWrite()) that
Steven Lubitz wrote:
Hello, I'm switching over from SAS to R and am having trouble merging data
frames. The data frames have several columns with the same name, and each has a
different number of rows. Some of the values are missing from cells with the
same column names in each data frame. I
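For what it's worth, one way to avoid the duplicated rows described above is to merge on the id column only and then collapse the overlapping item columns. A minimal sketch using the example data from the earlier post (the helper coalesce2 is made up for illustration):

```r
x <- data.frame(item1 = c(NA, NA, 3, 4, 5), item2 = c(1, NA, NA, 4, 5), id = 1:5)
y <- data.frame(item1 = c(NA, 2, NA, 4, 5, 6), item2 = c(NA, NA, 3, 4, 5, NA), id = 1:6)

# merge on the shared key only, keeping all rows from both frames
m <- merge(x, y, by = "id", all = TRUE, suffixes = c(".x", ".y"))

# fill the NAs in one frame's column from the other frame's column
coalesce2 <- function(a, b) ifelse(is.na(a), b, a)
m$item1 <- coalesce2(m$item1.x, m$item1.y)
m$item2 <- coalesce2(m$item2.x, m$item2.y)
m[, c("id", "item1", "item2")]
```

Merging by all three columns instead treats an NA in one frame as a non-match, which is what produces the duplicated rows.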
The adapt package has multivariate integration.
However, I am not sure you need multivariate integration for the
example you describe: you only need one-dimensional integration. For
this, you can check out
?integrate
For differentiation, depending on how well behaved the cdf is, you
could
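To make that concrete, a minimal sketch of one-dimensional integration with integrate(), which (unlike adapt) accepts infinite limits:

```r
# integrate the standard normal density over the whole real line
res <- integrate(dnorm, lower = -Inf, upper = Inf)
res$value  # very close to 1
```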
On Fri, 6 Mar 2009, joris meys wrote:
Dear all,
I have a dataset where the interaction is more than obvious, but I was asked
to give a p-value, so I ran a logistic regression using glm. Funnily enough,
in the output the interaction term is NOT significant, although that's
completely
Try this:
library(gdata)
ciao <- read.xls(pattern = "TOTALE",
"http://www.giustizia.it/statistiche/statistiche_dap/det/seriestoriche/corsi_proff.xls")
Downloading...
trying URL
'http://www.giustizia.it/statistiche/statistiche_dap/det/seriestoriche/corsi_proff.xls'
Content type
How do I recode a factor into a binary data frame according to the
factor levels:
### example:start
set.seed(20)
l <- sample(rep.int(c("locA", "locB", "locC", "locD"), 100), 10,
replace = TRUE)
# [1] "locD" "locD" "locD" "locD" "locB" "locA" "locA" "locA" "locD"
# "locA"
### example:end
What I want in the end is the
Hi Manli. Try the replace() function as below:
replace(a,is.na(a),0) #where a is the name of your 50 x 50 matrix
Below is an example:
a <- matrix(sqrt(-2:3), nrow = 2) # produces a 2 x 3 matrix some of whose
# elements are NaN (or NA)
# due to the square root operator on negative integers
replace(a, is.na(a), 0)
one way is:
set.seed(20)
l <- sample(rep.int(c("locA", "locB", "locC", "locD"), 100), 10, replace = TRUE)
f <- factor(l, levels = paste("loc", LETTERS[1:4], sep = ""))
m <- as.data.frame(model.matrix(~ f - 1))
names(m) <- levels(f)
m
I hope it helps.
Best,
Dimitris
soeren.vo...@eawag.ch wrote:
How to I recode
Hi R Users:
Could somebody share some tips on implementing multivariate integration and
partial differentiation in R?
For example, for a trivariate joint cumulative distribution function
F(x,y,z), how does one differentiate with respect to x and get the bivariate
distribution
Sören:
You need to somehow add back the information that l was sampled
from a set with 4 elements. Since you didn't sample from a factor,
the level information was lost. Otherwise, you could create that
list with unique(l), which in this case only returns 3
Hi - I'd like to construct and plot the percents by year in a small data set
(d) that has values between 1988 and 2007. I'd like to have a breakpoint
(but no discontinuity) at 1996. Is there a better way to do this than in
the code below?
d
  year percent   se
1 1988    30.6 0.32
2 1989
Hi,
I have been meaning to get back to you sooner on this. I have posted
goldbach5, which is a bit faster, on my blog.
http://romainfrancois.blog.free.fr/
Any takers for the next step ?
Cheers,
Romain
Folks,
I put up a brief note describing my naive attempts to compute Goldbach
It actually looked reasonably economical but the output certainly is
ugly. I see a variety of approaches in the r-help archives. This
thread discusses two other approaches, degree-one splines from Berry
and hard coded-coefficients from Lumley:
Hi R users,
I am looking for a date function that will give the following:
- The number-of-week value is in the range 01-53
- Weeks begin on a Monday and week 1 of the year is the week that
includes both January 4th and the first Thursday of the year.
If the
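That definition is the ISO 8601 week number. A minimal sketch using format() with the "%V" conversion, assuming your platform's strftime supports it (it is not available everywhere, notably on some Windows builds):

```r
# January 4th is, by the ISO 8601 definition, always in week 01
d <- as.Date("2009-01-04")
format(d, "%V")  # ISO 8601 week number as a string, "01".."53"
```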
Hi, This is not an R question, but I've seen opinions given on non R
topics, so I wanted
to give it a try. :)
How would one treat a variable that was measured once, but is known to
fluctuate a lot?
For example, I want to include a hormone in my regression as an
explanatory variable. However, this
I am not seeing anything, but that proves nothing of course. You could
write your own function and stick it in the .First of your .Rprofile
file, which gets loaded at startup.
Details here:
http://cran.r-project.org/doc/contrib/Lemon-kickstart/kr_first.html
week.dBY <- function(x)
If you form categories, you add even more error, specifically, the
variation in the distance between each number and the category
boundary.
What's wrong with just including it in the regression?
Yes, the measure X1 will account for less variance than the underlying
variable of real interest (T1,
Hi Juliet,
Juliet Hannah schrieb:
One simple thing to try would be to form categories
Simple but problematic. Frank Harrell put together a wonderful page
detailing all the issues with categorizing continuous data:
http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/CatContinuous
So:
Thank you for your responses.
I should have emphasized, I do not intend to categorize -- mainly
because of all the discussions I have seen on R-help arguing against
this.
I just thought it would be problematic to include the variable by
itself. Take other variables, such as a genotype or BMI. If
I was wondering if there was a way to add the standardized
coefficients from SEM that I get from running std.coef() to my graph
that I create with path.diagram() for graphviz? Right now the only way
I know how is to edit the values in a text editor after creating the
graph.
Thanks,
Hi Juliet,
Juliet Hannah schrieb:
I should have emphasized, I do not intend to categorize -- mainly
because of all the discussions I have seen on R-help arguing against
this.
Sorry that we all jumped on this ;-)
I just thought it would be problematic to include the variable by
itself. Take
On Sat, Mar 7, 2009 at 11:49 AM, Juliet Hannah juliet.han...@gmail.com wrote:
Hi, This is not an R question, but I've seen opinions given on non R
topics, so I wanted
to give it a try. :)
How would one treat a variable that was measured once, but is known to
fluctuate a lot?
For example, I
I would like to get some idea of which R-packages are popular, and what R is
used for in general. Are there any statistics available on which R packages
are downloaded often, or is there something like a package-survey? Something
similar to http://popcon.debian.org/ maybe? Any tips are welcome!
Dear Christopher,
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On
Behalf Of Christopher David Desjardins
Sent: March-07-09 1:38 PM
To: r-help@r-project.org
Subject: [R] Standardized coefficients (std.coef) in graphviz from
On Sat, 7 Mar 2009, Juliet Hannah wrote:
Hi, This is not an R question, but I've seen opinions given on non R
topics, so I wanted
to give it a try. :)
How would one treat a variable that was measured once, but is known to
fluctuate a lot?
For example, I want to include a hormone in my
Dear all,
is it possible to estimate a standard error for the median?
And is there a function in R for this?
I want to use it to describe a skewed distribution.
Thanks in advance,
Ralph
This function will show which other packages depend on a particular
package:
dep <- function(pkg, AP = available.packages()) {
    pkg <- paste("\\b", pkg, "\\b", sep = "")
    cat("Depends:", rownames(AP)[grep(pkg, AP[, "Depends"])], "\n")
    cat("Suggests:", rownames(AP)[grep(pkg, AP[, "Suggests"])], "\n")
}
Hi everyone,
I'm quite new to R and I have the following question:
I would like to define variables which I can add and multiply etc., and have
R simplify the terms.
The variables should stand for integers.
For example I would like to have an entry in an array with variable z
and if I
Dear list,
I am a biologist who needs to do some t-tests between disease and non-disease,
sex, genotype and the serum levels of proteins on a large number of
individuals. I have been using Excel for a long time but it is very tedious
and time consuming. I am posting the data below and ask your
It is an example (both via asymptotic theory and the bootstrap) in
chapter 5 of MASS (the book). The functions used are in the scripts
of the MASS package, but you will need to understand the theory being
used as described in the book.
On Sat, 7 Mar 2009, Ralph Scherer wrote:
Dear all,
is
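For readers without the book at hand, the bootstrap route can be sketched in a few lines of base R (a skewed example sample is simulated here; substitute your own data for x):

```r
set.seed(1)
x <- rexp(100)  # a skewed example sample

# resample the data B times and take the median of each resample
B <- 2000
meds <- replicate(B, median(sample(x, replace = TRUE)))

# the standard deviation of the bootstrap medians estimates the SE of the median
sd(meds)
```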
Dear Prof. Ripley,
thank you for your fast answer.
But which book do you mean?
I can't find an MASS book.
Do you mean the R book?
Best wishes,
Ralph
Am Samstag, den 07.03.2009, 21:24 + schrieb Prof Brian Ripley:
It is an example (both via asymptotic theory and the bootstrap) in
Ok, found it.
Thanks.
Am Samstag, den 07.03.2009, 22:34 +0100 schrieb Ralph Scherer:
Dear Prof. Ripley,
thank you for your fast answer.
But which book do you mean?
I can't find an MASS book.
Do you mean the R book?
Best wishes,
Ralph
Am Samstag, den 07.03.2009, 21:24 +
Ralph Scherer scherer...@googlemail.com [Sat, Mar 07, 2009 at 10:34:28PM CET]:
Dear Prof. Ripley,
thank you for your fast answer.
But which book do you mean?
I can't find an MASS book.
Try
library(MASS)
citation(package = "MASS")
--
Johannes Hüsing There is something
Hello friend.
I believe anova might be a better solution for you.
You might have a look here:
http://www.personality-project.org/r/r.anova.html
A simple R session that will work for you is:
# getting the data in:
data1 <- read.table("enter the path of the file here") # look at ?read.table
for
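Once the data are in, the ANOVA itself could look like the sketch below. The column names serum, disease and sex are hypothetical, and simulated data stand in for the real file so the example is self-contained:

```r
# hypothetical data standing in for the poster's serum measurements
set.seed(1)
dat <- data.frame(
  serum   = rnorm(40),
  disease = rep(c("yes", "no"), times = 20),
  sex     = rep(c("m", "f"), each = 20)
)

# two-way ANOVA of serum level on disease status, sex, and their interaction
fit <- aov(serum ~ disease * sex, data = dat)
summary(fit)
```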
Steve,
I don't know if R has such a function to perform the task you were asking. I
wrote one myself. Try the following to see if it works for you. The new
function merge.new has one additional argument col.ID, which is the column
number of ID column. To use your x, y as examples, type:
Subject: Re: [R] merge data frames with same column names of different lengths
and missing values
To: Phil Spector spec...@stat.berkeley.edu
Date: Saturday, March 7, 2009, 5:01 PM
Phil,
Thank you - this is very helpful. However I realized that with my real data
sets (not the example I have
P.S.: since the data (the Y variable) is very much not normal (also after a log
transform), I would consider going with a non-parametric test.
Check:
?wilcox.test
(for a non-parametric t test)
OR
(for a non-parametric simple ANOVA)
?kruskal.test
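A minimal sketch of both calls on simulated skewed data (the grouping variable here is made up for illustration):

```r
set.seed(42)
g <- factor(rep(c("disease", "control"), each = 20))
y <- c(rexp(20, rate = 1), rexp(20, rate = 2))  # skewed responses

wilcox.test(y ~ g)   # rank-based alternative to the two-sample t test
kruskal.test(y ~ g)  # rank-based alternative to one-way ANOVA
```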
On Sat, Mar 7, 2009 at 11:51 PM, Tal Galili
Hi,
Has anyone used this package? Could you please share your thoughts on it?
Thanks!
L.
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
myArray[, 'z'] <- myArray[, 'z'] + b
Is this what you want?
On Sat, Mar 7, 2009 at 9:52 AM, David1234 danielth...@web.de wrote:
Hi everyone,
I'm quite new to R and I have the following question:
I would like to define variables which I can add and multiply etc., and have
R simplify the
I fear that you are looking for a symbolic algebra system, and R is
not that sort of platform. If I am correct and you still want to
access a symbolic algebra system from R, then you should look at YACAS
and the interface to it, Ryacas.
--
David Winsemius
On Mar 7, 2009, at 9:52 AM,
When the question arises How many R-users there are?, the consensus
seems to be that there is no valid method to address the question. The
thread R-business case from 2004 can be found here:
https://stat.ethz.ch/pipermail/r-help/2004-March/047606.html
I did not see any material revision to
I don't think "At least one of the participants in the 2004 thread
suggested that it would be a good thing to track the numbers of
downloads by package." is reasonable, because I download R packages for 2
home computers (laptop and desktop) and 2 at work (1 Linux, 1 Mac). There
must be many such
I agree with Thomas, over the years I have installed R on at least 5
computers.
BTW: does anyone know how the website statistics of r-project are
being analyzed?
Since I can't see any Google Analytics or other tracking code on the main
website, I am guessing someone might be running some
Quite so. It certainly is the case that Dirk Eddelbuettel suggested it
would be very desirable, and I think Dirk's track record speaks for
itself. I never said (and I am sure Dirk never intended) that one
could take the raw numbers as a basis for blandly asserting that
copies of ttt
I agree with Thomas, over the years I have installed R on at least 5
computers.
I don't see why per-machine statistics would not be useful. When you
install a package on five machines, you probably use it a lot, and it is
more important to you than packages that you only installed once.
I have kept R installed on more than ten computers during the past few
years, some of them running Windows plus more than one Linux distro, all of
them having R, most often installed from a separate download.
I know of many cases where students download R for the purpose of a
course in statistics --
hi,
is there an easy way to plot the confidence lines or confidence
area of the beta weight in a scatterplot?
like in this plot:
http://www.ssc.wisc.edu/sscc/pubs/screenshots/4-25/4-25_4.png
thanks!
Try the package Rdonlp2, which can handle general, nonlinear, equality and
inequality constraints for smooth optimization problems.
Ravi.
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and
Hi,
The adapt package might work, but note that it cannot handle infinite limits.
So, integrate() is your best bet. Since you need to integrate out only one
dimension, integrate() would work just fine.
As for differentiation, you might try the grad() function in the numDeriv
package.
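A minimal sketch of grad(), assuming the numDeriv package is installed:

```r
library(numDeriv)

f <- function(x) sum(x^2)  # the exact gradient is 2 * x
grad(f, c(1, 2, 3))        # numerically close to c(2, 4, 6)
```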
How about
library(ggplot2)
qplot(wt, mpg, data = mtcars, geom = c("point", "smooth"), method = "lm")
On Sat, Mar 7, 2009 at 6:19 PM, Martin Batholdy batho...@googlemail.com wrote:
hi,
is there an easy way to plot the confidence lines or confidence area of
the beta weight in a scatterplot?
like
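If you prefer base graphics, the same kind of confidence band can be drawn with predict(..., interval = "confidence") on an lm fit; a sketch using the built-in mtcars data:

```r
fit <- lm(mpg ~ wt, data = mtcars)

# evaluate the fit and its confidence band on a fine grid of x values
new <- data.frame(wt = seq(min(mtcars$wt), max(mtcars$wt), length.out = 100))
ci  <- predict(fit, newdata = new, interval = "confidence")

plot(mpg ~ wt, data = mtcars)
lines(new$wt, ci[, "fit"])           # fitted regression line
lines(new$wt, ci[, "lwr"], lty = 2)  # lower confidence limit
lines(new$wt, ci[, "upr"], lty = 2)  # upper confidence limit
```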
hi,
I don't know what I am doing wrong,
but with that code;
x1 <- c(1.60, 0.27, 0.17, 1.63, 1.37, 2.00, 0.90, 1.07, 0.89, 0.43,
0.37, 0.59,
0.47, 1.83, 1.79, 0.90, 0.72, 1.83, 0.23, 1.97, 2.03, 2.19, 2.03, 0.86)
x2 <- c(1.30, 0.24, 0.20, 0.50, 1.33, 1.87, 1.30, 0.75, 1.07, 0.43,
0.37,
I just did RSiteSearch("library(xxx)") with xxx = the names of 6
packages familiar to me, with the following numbers of hits:
hits package
169 lme4
165 nlme
6 fda
4 maps
2 FinTS
2 DierckxSpline
Software could be written to (1) extract the names of current
packages from
It is not an error but rather a warning. As you should have seen, R
went ahead and returned estimates for 24 predicted values for x1 for
arguments to the formula of x2. In R errors and warnings are very
different. You are expected to post full console messages to prevent
this sort of
hi,
First, thanks for the help on getting confidence intervals in R.
Now I have a pure statistical question.
I hope you don't mind me asking ...
I have an expectation of how large my beta-weight in a regression
should be - so I have an ideal or expected regression line.
Now the real
Hi David - I will try that..
Thanks for your suggestion!
David Winsemius wrote:
I am not seeing anything, but that proves nothing of course. You could
write your own function and stick it in the .First of your .Rprofile
file, which gets loaded at startup.
Details here:
Hi all,
I wanted to let you know about our training seminar on predictive analytics
- coming April, May, Oct, and Nov in San Jose, NYC, Stockholm, Toronto and
other cities. This is intensive training for marketers, managers and
business professionals to make actionable sense of customer data
Hi Spencer,
XLSolutions is currently analyzing r-help archived questions to rank
packages for the upcoming R-PLUS 3.3 Professional version and we will be
happy to share the outcome with interested parties. Please email
d...@xlsolutions-corp.com
Regards -
Sue Turner
Senior Account Manager
Hi all,
I'm kind of amazed at the answers suggested for the relatively simple
question, "How many times has each R package been downloaded?". Some
have veered off in another direction, like working out how many packages
a package depends upon, or whether someone downloads more than one copy.
Leo Guelman leo.guelman at gmail.com writes:
Hi,
Did anyone used this package? Could you please share your thought on it?
What exactly do you mean by "share your thoughts on it"? It has its pros and
cons, as always.
Sure, Rdonlp2 has been used, and it has been requested and discussed