Subject: [R] Inconsistent computation of an integral
From: Aurélien Philippot
Sent: Thursday, December 19, 2013 10:38 AM
To: R-help@r-project.org

Dear R experts,
I computed the same integral in two different ways, and find different
values in R. The difference is due to the max function that is part of
the integrand. In the first case, I keep it as such; in the second case,
I split it in two depending on the values of the variable of

William Dunlap, TIBCO Software
wdunlap tibco.com

From: Aurélien Philippot [mailto:aurelien.philip...@gmail.com]
Sent: Friday, December 20, 2013 8:59 AM
To: William Dunlap; R-help@r-project.org
Subject: Re: [R] Inconsistent computation of an integral

Thanks William.
I was convinced by pmax, until I played
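The usual culprit in threads like this: integrate() evaluates the integrand on a whole vector of points at once, so max() collapses that vector to a single number while pmax() works elementwise. A minimal sketch with a made-up integrand (not the original poster's):

```r
# integrate() passes a vector of abscissae to the integrand; max() collapses
# it to one number, pmax() is elementwise. Hypothetical integrand with a
# kink at 0.5:
f_good <- function(x) pmax(x - 0.5, 0)        # vectorized, correct
integrate(f_good, 0, 1)$value                 # 0.125, the exact 1/8

# Splitting the range at the kink, as in the second approach, agrees:
integrate(function(x) x - 0.5, 0.5, 1)$value  # also 0.125

# f_bad <- function(x) max(x - 0.5, 0) is NOT evaluated pointwise inside
# integrate(), which is where the inconsistency comes from.
```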
Hello again, in the task view I see that direct support in R is
starting with release 2.14.0, which includes a new package 'parallel'.
However I cannot get any access to it. When I type
ls('package:parallel'), I get the following error:
ls('package:parallel')
Error in as.environment(pos) :
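The as.environment() error just means the package is not on the search path; 'parallel' ships with R >= 2.14.0 but must be attached first:

```r
# ls('package:parallel') fails with "Error in as.environment(pos)" when the
# package has not been attached; attach it first:
library(parallel)
head(ls("package:parallel"))  # the exported objects are now visible
detectCores()                 # e.g. the number of cores R can see
```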
To: r-help
Subject: [R] Symbolic computation in R-solving system of equations

Hi,
To my knowledge, Ryacas can do symbolic computation in R, like
Mathematica or Maple.
I wonder if it (or any other R package) can solve symbolically a system
of equations? I have a system of four equations in four variables
(non-linear, but not very hard to solve) and wonder if Ryacas
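For what it's worth, a sketch of how a symbolic solve looks with Ryacas, assuming a recent Ryacas with the yac_str() interface is installed; the exact Solve() syntax for systems of equations depends on the underlying yacas version, so the second call is left as an untested suggestion:

```r
# Hedged sketch; requires the Ryacas package and a working yacas:
library(Ryacas)
yac_str("Solve(x^2 - 4 == 0, x)")   # a single equation, solved symbolically
# yacas takes lists of equations and unknowns for systems; verify the exact
# syntax against your yacas version:
# yac_str("Solve({x + y == 3, x - y == 1}, {x, y})")
```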
],dat1[1,1],units="secs")),format="f",digits=4)
[1] 0.0061
#or,
formatC(as.numeric(diff(dat1$datetime)),format="f",digits=4)
[1] 0.0061
A.K.
----- Original Message -----
From: Julia julia.schm...@gmx.de
To: r-h...@stat.math.ethz.ch
Cc:
Sent: Friday, June 15, 2012 12:49 PM
Subject: [R] Wrong computation

Hello,
I wanted to compute the time difference between two times:
first = as.POSIXct("2012-06-15 16:32:39.0025 CEST")
second = as.POSIXct("2012-06-15 16:32:39.0086 CEST")
second - first
The result is
Time difference of 0.006099939 secs
instead of just 0.0061 secs
So R adds additional numbers
Hello,
A classic of floating-point accuracy is
3/5 - 3/5
[1] 0
3/5 - (2/5 + 1/5)
[1] -1.110223e-16
3/5 - 2/5 - 1/5
[1] -5.551115e-17
Rui Barradas
On Jun 15, 2012, at 20:33 , Rui Barradas wrote:
Yes. There are only about 16 significant digits in (64 bit) floating point. One
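In the POSIXct case both effects combine: neither .0025 nor .0086 is exactly representable in binary, and a POSIXct stores seconds since 1970 in a double, leaving only around microsecond resolution at 2012 dates. Rounding for display recovers the intended value:

```r
# POSIXct keeps seconds-since-1970 in a double, so fractional seconds
# inherit binary representation error; round (or formatC) for display:
first  <- as.POSIXct("2012-06-15 16:32:39.0025", tz = "UTC")
second <- as.POSIXct("2012-06-15 16:32:39.0086", tz = "UTC")
second - first                                        # ~0.006099939 secs
round(as.numeric(second - first, units = "secs"), 4)  # 0.0061
```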
Hello Everybody,
The code:
dfmed <- lapply(unique(colnames(df)), function(x)
rowMedians(as.matrix(df[,colnames(df) == x]), na.rm=TRUE))
takes really long time to execute ( in hours). Is there a faster way to do
this?
Thanks!
On Tue, May 22, 2012 at 3:46 PM, Preeti pre...@sci.utah.edu wrote:
Assuming your original matrix IS a matrix, call it yourmat, and not a
data frame (whose columns *must* have unique names if you haven't
messed with the check.names default) then maybe:
### UNTESTED!!! ###
thenames <- unique(dimnames(yourmat)[[2]])
ans <- lapply(thenames, function(nm) {
apply(
I wonder how you do this (or maybe on what kind of machine you execute it).
I tried it out of curiosity and get
df = as.data.frame(lapply(1:300, function(x) sample(200, 25, TRUE)))
colnames(df) = sample(letters[1:20], 300, TRUE)
system.time(dfmed <- lapply(unique(colnames(df)), function(x)
+
Hmm.. that is interesting... I did this on our server machine which has
about 200 cores. So memory is not an issue. Also, building the dataframe
takes about a few minutes maximum for me. My code is similar to yours but
for the fact that I create my dataframe from read.delim(filename) and
then I
Just adding a few cents to this:
rowMedians(x) is roughly 4-10 times faster than apply(x, MARGIN=1,
FUN=median) - at least on my local Windows 7 64bit tests. You can do
these simple benchmark runs yourself via the
matrixStats/tests/rowMedians.R system test, cf. http://goo.gl/YCJed
[R-forge].
Yes, thanks Henrik. I neglected to mention that rowMedians could just
be plugged in instead of apply (..,1,...)
However, my main point is that that's probably not what matters, as
Benno points out. Maybe it's the data frames instead of the matrices,
but the process should execute in a few
Hi,
I have a 250,000 by 300 matrix. I am trying to calculate the median of
those columns (by row) with column names that are identical. I would like
this to be efficient since apply(x,1,median) where x is created by choosing
only those columns with same column name and looping on this is taking a
See rowMedians() of the matrixStats package for replacing apply(x,
MARGIN=1, FUN=median). /Henrik
Thanks Henrik! Here is the one-liner that I wrote:
dfmed <- lapply(unique(colnames(df)), function(x)
rowMedians(as.matrix(df[,colnames(df) == x]), na.rm=TRUE))
Thanks again!
On Tue, May 22, 2012 at 3:23 PM, Henrik Bengtsson h...@biostat.ucsf.eduwrote:
See rowMedians() of the matrixStats package
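A small self-checking version of the pattern, with rowMedians swapped out for base apply() so it runs without matrixStats installed; drop rowMedians back in for the speedup discussed above:

```r
# The same grouping-by-duplicate-column-name pattern in miniature;
# matrixStats::rowMedians(m[, cols, drop = FALSE]) can replace the
# apply(..., 1, median) call for a several-fold speedup:
set.seed(1)
m <- matrix(rnorm(25 * 6), nrow = 25,
            dimnames = list(NULL, c("a", "b", "a", "c", "b", "a")))
meds <- sapply(unique(colnames(m)), function(nm)
  apply(m[, colnames(m) == nm, drop = FALSE], 1, median))
dim(meds)  # 25 x 3: one column of row medians per unique column name
```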
Please see https://github.com/hadley/plyr/issues/60
Hadley
On Thu, Jan 12, 2012 at 11:54 AM, abhagwat bhagwatadi...@gmail.com wrote:
Dear all,
I have a question regarding the possibility of parallel computation in plyr
version 1.7.
The help files of the following functions mention the argument '.parallel':
ddply, aaply, llply, daply, adply, dlply, alply, ldply, laply
However, the help files of the following functions do
Additional question: the plyr package mentions that parallel computation can
be set up using the parallel computing backend from 'foreach'.
Is this limited to executing the following two lines on a UNIX machine
(source
http://cran.r-project.org/web/packages/doMC/.../gettingstartedMC.pdf
The code below shows that
(1) the way to activate the parallel backend indeed is to use 'registerDoMC'
(2) the function d_ply does NOT accept the argument '.parallel', while the
function ddply does. Perhaps it would be interesting to add this feature to
d_ply, l_ply and a_ply too? As a workaround one can
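The registration step and the workaround look roughly like this; doMC is only available on Unix-alikes (on Windows a doParallel backend plays the same role), and this is an untested sketch of the approach described above, not a verified recipe:

```r
# Sketch of activating the foreach parallel backend for plyr (Unix-alikes):
library(doMC)
registerDoMC(cores = 2)   # the 'registerDoMC' step discussed above
library(plyr)
res <- ddply(mtcars, "cyl", summarise, m = mean(mpg), .parallel = TRUE)
# Workaround for d_ply lacking '.parallel': run ddply(..., .parallel = TRUE)
# for the side effects and discard its result.
```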
Hi all,
After reading this interesting discussion I delved a bit deeper into the
subject matter. The following snippet of code (see the end of my mail)
compares three ways of performing this task, using ddply, ave and one
yet unmentioned option: data.table (a package). The piece of code
Hello there,
I'm computing the total value of an order from the price of the order items
using a for loop and the ifelse function. I do this on a large dataframe
(close to 1m lines). The computation of this function is painfully slow: in
1min only about 90 rows are calculated.
The
Sent: Wednesday, 3 August 2011 15:26
To: r-help@r-project.org
Subject: [R] slow computation of functions over large datasets
On Aug 3, 2011, at 9:25 AM, Caroline Faisst wrote:
I'm computing the total value of an order from the price of the order
items using a for loop and the ifelse function.
Ouch. Schools really should stop teaching SAS and BASIC as a first
language.
--
david.
Best regards,
Thierry
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On behalf of Caroline Faisst
Sent: Wednesday, 3 August 2011 15:26
To: r-help@r-project.org
Subject: [R] slow computation of functions over large datasets
Hello
This takes about 2 secs for 1M rows:
n <- 1e6  # 1M rows
exampledata <- data.frame(orderID = sample(floor(n / 5), n, replace = TRUE),
itemPrice = rpois(n, 10))
require(data.table)
# convert to data.table
ed.dt <- data.table(exampledata)
system.time(result <- ed.dt[
+ ,
On Aug 3, 2011, at 2:01 PM, Ken wrote:
Got any code? The OP offered a reproducible example, after all.
--
David.
Sorry about the lack of code, but using Davids example, would:
tapply(itemPrice, INDEX=orderID, FUN=sum)
work?
-Ken Hutchison
On Aug 3, 2554 BE, at 2:09 PM, David Winsemius dwinsem...@comcast.net wrote:
Hello,
Perhaps transpose the table attach(as.data.frame(t(data))) and use colSums()
function with order id as header.
-Ken Hutchison
On Aug 3, 2011, at 3:05 PM, Ken wrote:
Sorry about the lack of code, but using Davids example, would:
tapply(itemPrice, INDEX=orderID, FUN=sum)
work?
Doesn't do the cumulative sums or the assignment into column of the
same data.frame. That's why I used ave, because it keeps the sequence
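The ave() idiom being referred to, sketched on data shaped like the example: it computes per-order cumulative sums and assigns them straight back into the same data.frame, preserving row order:

```r
# Per-order cumulative sums with ave(); row order is preserved, so the
# result can be assigned back into the data.frame directly:
set.seed(42)
exampledata <- data.frame(orderID   = sample(20, 100, replace = TRUE),
                          itemPrice = rpois(100, 10))
exampledata$running <- ave(exampledata$itemPrice, exampledata$orderID,
                           FUN = cumsum)
# The last row of each orderID now holds that order's total.
```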
Hi,
I have been trying to use the new .parallel argument with the most recent
version of plyr [1] to speed up some tasks. I can run the example in the NEWS
file [1], and it seems to be working correctly. However, R will only use a
single core when I try to apply this same approach with
Yes, this was a little bug that will be fixed in the next release.
Hadley
On Thu, Sep 16, 2010 at 1:11 PM, Dylan Beaudette
debeaude...@ucdavis.edu wrote:
Sent: Tuesday, February 24, 2009 1:02 AM
To: Greg Snow; r-help@r-project.org
Subject: Re: [R] matrix computation???

Hi Greg
Thanks for responding.
When I scale stackloss and compute the fitted values of the scaled data
frame, the values are not the same:
A <- data.frame(scale(stackloss
Hello
Can anyone tell me what I am doing wrong below? My Y and y_hat are the same.
A <- scale(stackloss)
n1 <- dim(A)[1]; n2 <- dim(A)[2]
X <- svd(A)
Y <- matrix(A[,"stack.loss"], nrow=n1)
Y
y_hat <- matrix((X$u %*% t(X$u)) %*% Y, nrow=n1, byrow=TRUE)
y_hat
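Nothing is wrong with the arithmetic itself: svd(A)$u here has as many columns as A, so u %*% t(u) is the projection onto the full column space of A, and Y, being a column of A, is projected onto itself. A fitted value only differs from Y when fewer singular vectors than columns are kept:

```r
# With the full svd, u %*% t(u) projects onto the whole column space of A,
# and Y is a column of A, so y_hat == Y by construction:
A <- scale(stackloss)   # stackloss ships with base R
X <- svd(A)
Y <- A[, "stack.loss"]
all.equal(drop(X$u %*% t(X$u) %*% Y), Y, check.attributes = FALSE)  # TRUE

# Keeping only k < ncol(A) singular vectors gives a genuine approximation:
k <- 2
U <- X$u[, 1:k]
y_hat <- drop(U %*% t(U) %*% Y)
```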
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
I have an external data (.txt) for
annual peak flood. The first column is the year, second column is the
observation date, and the last is the observed discharge. My task is to
calculate the mean, skewness and kurtosis of the said data. I was advised to use
read.table() to read the entire
?mean
kurtosis
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/110186.html
--- On Thu, 11/6/08, christabel_jane prudencio [EMAIL PROTECTED] wrote:
From: christabel_jane prudencio [EMAIL PROTECTED]
Subject: [R] mean computation for external data
To: r-help@r-project.org
Received: Thursday
Hi Christabel,
Take a look at the function basicStats in the fBasics package. Here is an
example:
library(fBasics)
set.seed(123)
x=rnorm(20,24,2)
basicStats(x)
#x
#nobs 20.00
#NAs 0.00
#Minimum 20.066766
#Maximum 27.573826
#1. Quartile 23.012892
#3. Quartile
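If installing fBasics is not an option, the three statistics can also be computed directly; note these are the simple moment (population) forms, and different packages apply different definitions and bias corrections:

```r
# Moment-based skewness and excess kurtosis (population form; fBasics and
# e1071 use related but not always identical definitions):
skewness <- function(x) { z <- x - mean(x); mean(z^3) / mean(z^2)^1.5 }
kurtosis <- function(x) { z <- x - mean(x); mean(z^4) / mean(z^2)^2 - 3 }

set.seed(123)
dat <- rnorm(20, 24, 2)   # stand-in for the read.table() discharge column
c(mean = mean(dat), skewness = skewness(dat), kurtosis = kurtosis(dat))
```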
You haven't given us the 'at a minimum' information that the posting guide
requested, so we don't know your timezone. But in America/Toronto on my
F8 machine:
d <- "2007-11-04 01:30:00"
dd <- as.POSIXct(d)
c(dd, dd+3600)
[1] "2007-11-04 01:30:00 EDT" "2007-11-04 01:30:00 EST"
Note the change of
Hi,
A friend just introduced me to R today and I think it is really nice after
browsing its web site. I'm eager to cook up some web-based interface to use
R at work for more platform-independent access. I plan to implement it in
a Python-based framework such as TurboGears or Django on either
Check out http://www.rpad.org
Samu,
Thanks, you are a life saver. Your efforts saved me a ton of time. I went
through basically exactly the same process as you describe. To summarize:
with the same issue you have, I get the following phenomenon:
spawning appears to not happen the same every time. How can you specify how
the nodes
Hello,
I want to compute an OR (odds ratio) from a simulation.
I simulate an outcome y and a binary covariate, and I repeat it 1000 times.
I build a logistic model, and want to compute the OR for every sample.
At the end I'm looking for the mean(OR) of the 1000 repetitions.
Can I use cc() (from epicalc)?
example: if yall is
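cc() from epicalc works on 2x2 tables; for a simulated covariate the more direct route is exp() of the glm coefficient. A hedged sketch (the sample size and coefficients below are made up, not from the original post):

```r
# Per-sample OR as exp(coef) from a logistic model, repeated 1000 times:
set.seed(1)
ors <- replicate(1000, {
  x <- rbinom(200, 1, 0.5)                     # binary covariate
  y <- rbinom(200, 1, plogis(-0.5 + 0.7 * x))  # true log-OR is 0.7
  exp(coef(glm(y ~ x, family = binomial))[["x"]])
})
mean(ors)   # note: ORs are right-skewed, so exp(mean(log(ors)))
            # is often the better summary
```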
Hello!
I finally got MPICH 1.06 + R 2.6.1 + Rmpi 0.5-5 working with multiple
computers. The key was to realize that the number of processes should be
one when launching Rgui using mpiexec and not the number of
master+slaves, as I had first wrongly understood.
However, I seem to have a new
Some progress in my problem:
Samu Mäntyniemi wrote:
With MPICH2 I managed to connect my computers so that I was able to
remotely launch Rgui on both machines but R hanged when calling
library(Rmpi). If only one Rgui was launched on the localhost,
library(Rmpi) worked without errors, but
R-users,
My question is related to earlier posts about benefits of quadcore over
dualcore computers; I am trying to set up a cluster of Windows XP
computers so that eventually I could make use of 10-20 CPUs, but for
learning how to do this, I am playing around with two laptops.
I thought that
Hi All.
I would like to compute a separate covariance matrix for a set of
variables for each of the levels of a factor and then compute the
average covariance matrix over the factor levels. I can loop through
this computation but I need to perform the calculation for a large
number of levels and
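One loop-free way, sketched on iris since the original data isn't shown: split the rows by the factor, cov() each piece, then average the matrices with Reduce():

```r
# Per-level covariance matrices, then their elementwise average:
vars   <- iris[, 1:4]                             # the numeric columns
covs   <- lapply(split(vars, iris$Species), cov)  # one matrix per level
avgcov <- Reduce(`+`, covs) / length(covs)        # average over levels
dim(avgcov)  # 4 x 4
```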