Hi there,
I have a largish optimisation problem (10 years of daily observations).
I want to optimise between 4 and 6 parameters.
I'd like to utilise parallel computing if I can as I will have to run it
with different starting values etc.
I have a quad-core PC with 16GB RAM running Windows 7.
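For a multi-start setup like this, one option is base R's parallel package, whose PSOCK clusters work on Windows. A minimal sketch, assuming a toy 4-parameter objective (`objective` and the random starting values are stand-ins for the real problem; a custom differential-evolution routine could be slotted in the same way):

```r
## Sketch: run one optimisation per starting value, 4 workers in parallel.
library(parallel)

set.seed(42)
objective <- function(par) sum((par - c(1, 2, 3, 4))^2)  # toy stand-in

## Several random starting values, one optim() run per start.
starts <- replicate(8, runif(4, -10, 10), simplify = FALSE)

cl <- makeCluster(4)                      # PSOCK cluster: works on Windows
clusterExport(cl, "objective")            # make the objective visible to workers
fits <- parLapply(cl, starts, function(s) optim(s, objective))
stopCluster(cl)

## Keep the best of the multi-start runs.
best <- fits[[which.min(sapply(fits, `[[`, "value"))]]
```

Each worker is an independent R process, so the objective function and any data it needs must be exported explicitly with `clusterExport`.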
On Fri, Sep 14, 2012 at 6:00 PM, Bazman76 h_a_patie...@hotmail.com wrote:
Thanks for that, I hadn't realised parallel could run on Windows PCs.
The code is differential evolution, but it's not part of a package.
Still, I would like to be able to use cloud computing if possible; any
thoughts on the easiest way to achieve that using a Windows-based PC?
Found this blog
On Fri, Sep 14, 2012 at 7:22 PM, Bazman76 h_a_patie...@hotmail.com wrote:
In addition to Michael's suggestions, you can also check out this
tutorial, which shows how to run lapply on EC2.
http://www.rinfinance.com/agenda/2012/workshop/WhitArmstrong.pdf
Unfortunately, rzmq is not available on windows, so this may not be
the best solution for your setup.
-Whit
For cross-validation, the caret package was designed to easily go
between sequential and parallel processing (using nws, mpi or anything
else).
See the last examples in ?train.
Max
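caret wires the parallel backend in for you; the underlying idea is just to score each cross-validation fold on a separate worker. A sketch of that idea using only base R's parallel package (with `lm` on `iris` as a stand-in model, not caret's actual internals):

```r
## Sketch: parallel 5-fold cross-validation, one fold per worker.
library(parallel)

set.seed(1)
folds <- split(sample(nrow(iris)), rep(1:5, length.out = nrow(iris)))

cv_one_fold <- function(test_idx) {
  ## Fit on everything except the held-out fold, score on the fold.
  fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length,
            data = iris[-test_idx, ])
  pred <- predict(fit, iris[test_idx, ])
  mean((pred - iris$Sepal.Length[test_idx])^2)   # fold MSE
}

cl <- makeCluster(4)
fold_mse <- parSapply(cl, folds, cv_one_fold)
stopCluster(cl)

mean(fold_mse)   # cross-validated error estimate
```

Folds are independent of one another, which is why CV parallelizes so cleanly; caret's `train` does essentially this dispatch for you once a backend is registered.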
On Jun 26, 2009, at 11:28 AM, Michael comtech@gmail.com wrote:
Hi all,
Lots of big IT companies are renting out their computing facilities.
Amazon has one such service. In my understanding, this will
dramatically improve the speed of my R program -- currently the cross
validation and model selection part is the bottleneck. It takes a few
days just to finish
On 26 June 2009 at 07:40, Michael wrote:
I guess when we move to Amazon AWS,
do we have to rewrite our whole R programs?
On Fri, Jun 26, 2009 at 8:05 AM, Dirk Eddelbuettel e...@debian.org wrote:
losemind wrote:
Moreover, locally I have a 4-core PC; is there anything we
could do in R to speed up my CV programs?
I have seen one very nice paper that compared parallelization options for R:
http://epub.ub.uni-muenchen.de/8991/
On Fri, Jun 26, 2009 at 8:28 AM, Michael comtech@gmail.com wrote:
Not necessarily. I use foreach (currently available in our REvolution
R Enterprise distribution and coming very soon to CRAN), and test out
the
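The selling point of the foreach idiom mentioned here (now long since on CRAN) is that the same loop runs sequentially or in parallel depending on which backend is registered. A minimal sketch, assuming the doParallel backend:

```r
## Sketch: a backend-agnostic parallel loop with foreach.
library(foreach)
library(doParallel)

cl <- makeCluster(4)
registerDoParallel(cl)   # %dopar% now dispatches to the 4 workers

## Same loop body would run serially with %do% / no registered backend.
squares <- foreach(i = 1:8, .combine = c) %dopar% i^2

stopCluster(cl)
```

So moving between a laptop, a multicore server, or a cluster (e.g. via an MPI backend) is a one-line change in backend registration, not a rewrite of the loop.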
Just out of curiosity, what system do you have?
These are the results on my machine:
system.time(exp(m), gcFirst=TRUE)
user system elapsed
0.52 0.04 0.56
library(pnmath)
system.time(exp(m), gcFirst=TRUE)
user system elapsed
0.660 0.016 0.175
Juan Pablo
Juan Pablo Romero Méndez [EMAIL PROTECTED] writes:
pnmath currently uses up to 8 threads (i.e. 1, 2, 4, or 8).
getNumPnmathThreads() should tell you the maximum number used on your
system, which should be 8 if the number of processors is being
identified correctly. With the size of m this calculation should be
using 8 threads, but the exp
Thanks!
It turned out that Rmpi was a good option for this problem after all.
Nevertheless, pnmath seems very promising, although it doesn't load on my system:
library(pnmath)
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared library
Hello,
The problem I'm working on now requires operating on big matrices.
I've noticed that there are some packages that allow running some
commands in parallel. I've tried snow and NetWorkSpaces, without much
success (they are far slower than the normal functions).
My problem is very simple,
Hi,
I have access to an HPC cluster and want to parallelize some of my R code. I
looked at the snow, nws, RScaLAPACK documentation but was unable to make out how
I should submit my job to the HPC, and how I should code a simple program. For
example, if I had 10 matrices, and 10 processors, how
Hi Tim,
I think you should have a look at this Rmpi Tutorial
http://ace.acadiau.ca/math/ACMMaC/Rmpi/
and to Luke Tierney's webpage:
http://www.cs.uiowa.edu/~luke/R/cluster/uiowasnow.html
Best,
Markus
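The tutorials above cover cluster setup; the "10 matrices, 10 processors" part of the question maps naturally onto the snow-style API (here via base R's parallel package, which absorbed snow's interface): one matrix per worker. A sketch, with matrix inversion as a stand-in task:

```r
## Sketch: distribute 10 independent matrix tasks across 10 workers.
library(parallel)

set.seed(7)
## 10 well-conditioned symmetric matrices (crossprod + ridge keeps them invertible).
mats <- replicate(10, crossprod(matrix(rnorm(100), 10, 10)) + diag(10),
                  simplify = FALSE)

cl <- makeCluster(10)          # on an MPI cluster, makeCluster(10, type = "MPI")
invs <- parLapply(cl, mats, solve)   # one solve() per matrix, spread over workers
stopCluster(cl)
```

On a managed HPC system the same script would typically be submitted through the scheduler (e.g. a batch job requesting 10 cores), with only the `makeCluster` call changing.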
Tim Smith wrote: