[R] Display Multiple page lattice plots

2007-06-07 Thread rhelp . 20 . trevva
Gudday,

I am generating a series of lattice contourplots that are conditioned on a 
variable (Year) with 27 different levels. If I try to put them all on one 
plot, it ends up pretty messy and you can't really read anything, so instead I 
have set the layout to 3x3, which generates three pages of nine plots each. The 
problem is that I can't display all of these on screen at once, because each 
subsequent page overwrites the previous one. I have found in the mailing lists 
how to print them to separate files without any problems, e.g.:

  p <- contourplot(log10(x) ~ lat * long | Year,
                   data = data.tbl,
                   layout = c(3, 3))
  png(file = "Herring Distribution%02d.png", width = 800, height = 800)
  print(p)
  dev.off()

but there doesn't seem to be anything about how to output multiple pages to the 
screen... I suspect that I may need to use the page=... option in the 
contourplot call, but I can't seem to make it work. It's a simple, and not 
particularly important, problem, but it sure is bugging me!
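
One workaround I can think of is to open a fresh graphics device for each page 
and print a subset of the trellis object to it - a minimal sketch, assuming 
Year is a factor in data.tbl and nine panels per page:

  n.panels <- nlevels(data.tbl$Year)            # 27 levels -> 3 pages of 9
  n.pages  <- ceiling(n.panels / 9)
  for (i in seq_len(n.pages)) {
    dev.new()                                   # or windows() / x11()
    idx <- (9 * (i - 1) + 1):min(9 * i, n.panels)
    print(p[idx])                               # "[" subsets the conditioning levels
  }

The "[" method for trellis objects extracts the given conditioning levels, so 
each print() call draws just one page's worth of panels on its own device.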

Thanks for the advice in advance.

Cheers,

Mark

[R] How to utilise dual cores and multi-processors on WinXP

2007-03-06 Thread rhelp . 20 . trevva
Hello,

I have a question that I hope someone has a fairly straightforward answer to: 
what is the quickest and easiest way to take advantage of the extra cores and 
processors that are now commonplace on modern machines? And how do I do that 
on Windows?

I realise that this is a complex question that is not answered easily, so let 
me refine it some more. The type of scripts that I'm dealing with are well 
suited to parallelisation - often they involve mapping out parameter space by 
changing a single parameter and then re-running the simulation 10 (or n) times, 
and then bringing all the results back together at the end for analysis. If I 
can distribute the runs over all the processors available in my machine, I'm 
going to roughly halve the run time. The question is, how to do this?
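
To make that concrete, the serial version of a typical run looks something like 
this (run.simulation() and the parameter values are just placeholders for my 
own code):

  param.grid <- seq(0.1, 1, length.out = 10)       # the parameter being swept
  runs <- lapply(param.grid, run.simulation)       # n independent runs
  results <- do.call(rbind, runs)                  # bring them back together

Each call to run.simulation() is independent of the others, which is why the 
runs should split so cleanly across processors.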

I've looked at many of the packages in this area: Rmpi, snow, snowFT, rpvm, and 
taskPR - these all seem to have the functionality that I want, but don't exist 
for Windows. The best solution is to switch to Linux, but unfortunately that's 
not an option.

Another option is to divide the task in half from the beginning, spawn two 
slave instances of R (e.g. via Rcmd), let them run, and then collate the 
results at the end. But how exactly does one do this, and how do I know when 
they're done?
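
Here is a rough sketch of the kind of thing I mean, assuming two pre-written 
scripts worker1.R and worker2.R that each run half of the parameter sweep and 
end with save(res, file = "results1.RData") and "results2.RData" respectively 
(all file and object names here are illustrative):

  ## launch both slaves without waiting, so they run side by side
  for (id in 1:2) {
    cmd <- sprintf("Rcmd BATCH --no-save worker%d.R worker%d.Rout", id, id)
    system(cmd, wait = FALSE)
  }

  ## crude completion check: poll until both result files appear
  res.files <- sprintf("results%d.RData", 1:2)
  while (!all(file.exists(res.files))) Sys.sleep(5)

  ## collate the results for analysis
  results <- lapply(res.files, function(f) {
    e <- new.env()
    load(f, envir = e)
    e$res
  })

The file-existence check is obviously crude; a tidier way of knowing when the 
slaves are done is part of what I'm asking about.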

Can anyone recommend a nice solution? I'm sure that I'm not the only one who'd 
love to double their computational speed...

Cheers,

Mark
