Re: [R] spatstat rmh problem

2014-09-16 Thread Rolf Turner
OK. Two things are going wrong. (1) There is an error in your code. You are passing the new.coef argument to density() and not to rmh(). The function density() has no such argument, but has a ... argument, so new.coef simply gets ignored. You should use:

Re: [R] apply block of if statements with menu function

2014-09-16 Thread rl
On 2014-09-15 14:22, David L Carlson wrote: I think switch() should work for you here, but it is not clear how much flexibility you are trying to have (different tests based on the first response; different tests based on first, then second response; different tests based on each successive

Re: [R] spatstat rmh problem

2014-09-16 Thread Rolf Turner
There was indeed a bug in rmh() w.r.t. the new.coef argument. The bug has been fixed and will not be present in the next release of spatstat. cheers, Rolf Turner On 16/09/14 16:30, Sebastian Schutte wrote: Thanks so much for your comments. Sorry for not having sent a running example from

Re: [R] apply block of if statements with menu function

2014-09-16 Thread PIKAL Petr
Hi Selection: 1 Error in if (menu1 == 1) menu1a <- menu(c("1a", "2a", "3a", "4a"), graphics = FALSE, : argument is of length zero How can I correct this error for the zero-length argument, please? Instead of writing functions which you are unable to debug and resolve, look at what elements of
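
For readers following along, a hedged illustration of how the "argument is of length zero" error arises; the object names are hypothetical, not the poster's:

    menu1 <- integer(0)               # e.g. an assignment that never actually happened
    if (menu1 == 1) cat("ok\n")       # Error in if (menu1 == 1): argument is of length zero
    if (length(menu1) == 1 && menu1 == 1) cat("ok\n")   # guard the condition before branching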

[R] R's memory limitation and Hadoop

2014-09-16 Thread Barry King
Is there a way to get around R’s memory-bound limitation by interfacing with a Hadoop database or should I look at products like SAS or JMP to work with data that has hundreds of thousands of records? Any help is appreciated.

Re: [R] R's memory limitation and Hadoop

2014-09-16 Thread John McKown
On Tue, Sep 16, 2014 at 6:40 AM, Barry King barry.k...@qlx.com wrote: Is there a way to get around R’s memory-bound limitation by interfacing with a Hadoop database or should I look at products like SAS or JMP to work with data that has hundreds of thousands of records? Any help is

Re: [R] apply block of if statements with menu function

2014-09-16 Thread rl
On 2014-09-16 10:50, PIKAL Petr wrote: switch(menu(c("List letters", "List LETTERS")) + 1, cat("Nothing done\n"), letters, LETTERS) why is the result changed if the '+ 1,' is removed? Because the + 1 belongs to switch, not to menu. You can translate the above to: The help pages ?switch, ?menu do not
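
The pattern under discussion, taken from the example on the ?menu help page; the + 1 shifts menu()'s return value so that 0 (user exits the menu) selects the first switch() branch:

    choice <- menu(c("List letters", "List LETTERS"))
    switch(choice + 1,
           cat("Nothing done\n"),   # choice == 0: user exited the menu
           letters,                 # choice == 1
           LETTERS)                 # choice == 2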

Re: [R] R's memory limitation and Hadoop

2014-09-16 Thread Jeff Newmiller
If you need to start your question with a false dichotomy, by all means choose the option you seem to have already chosen and stop trolling us. If you actually want an answer here, try Googling on the topic first (is R hadoop so un-obvious?) and then phrase a specific question so someone has a

Re: [R] apply block of if statements with menu function

2014-09-16 Thread PIKAL Petr
Hi One comment: I have never used menu or switch; this is just how I understand their function. So you are probably on a wrong track and do not understand what objects and functions are in the R language. menu Description menu presents the user with a menu of choices labelled from 1 to the number of

Re: [R] R's memory limitation and Hadoop

2014-09-16 Thread peter dalgaard
Not sure trolling was intended here. Anyways: Yes, there are ways of working with very large datasets in R, using databases or otherwise. Check the CRAN task views. SAS will for _some_ purposes be able to avoid overflowing RAM by using sequential file access. The biglm package is an example
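
A minimal sketch of the chunked-fitting approach that the biglm package enables; the file names, formula, and column names are placeholders:

    library(biglm)
    fit <- biglm(y ~ x1 + x2, data = read.csv("chunk1.csv"))
    fit <- update(fit, read.csv("chunk2.csv"))   # feed further chunks sequentially
    summary(fit)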

Re: [R] R's memory limitation and Hadoop

2014-09-16 Thread Hadley Wickham
Hundreds of thousands of records usually fit into memory fine. Hadley On Tue, Sep 16, 2014 at 12:40 PM, Barry King barry.k...@qlx.com wrote: Is there a way to get around R’s memory-bound limitation by interfacing with a Hadoop database or should I look at products like SAS or JMP to work with
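
A rough back-of-the-envelope check of that claim: a numeric data frame with 500,000 rows and 20 columns occupies on the order of 80 MB.

    x <- as.data.frame(matrix(rnorm(5e5 * 20), ncol = 20))
    print(object.size(x), units = "MB")   # roughly 5e5 * 20 * 8 bytes ~ 80 MB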

Re: [R] spatstat rmh problem

2014-09-16 Thread Sebastian Schutte
You should use: plot(density(rmh(mod,new.coef=c(1,200 Sorry, my bad, typo in the example code. (2) However, even when the correct call is given you still wind up with identical densities!!! Hm. I think this may be a bug; I will check with the other authors of spatstat and

[R] R write strange behavior in huge file

2014-09-16 Thread Maxime Vallee
Hello, In my script I have one list of 1,132,533 vectors (each vector contains 381 elements). When I use write to save this list in a flat text file (I unlist my list, separate by tabs, and set ncol to 381), I end up with a file of 1,132,535 lines (2 additional lines). I checked back, my R
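
One hedged way to diagnose this (big_list stands in for the poster's list): if any element does not contain exactly 381 values, unlist() shifts the layout and write() emits extra lines.

    lens <- sapply(big_list, length)   # element lengths of the list being written
    table(lens)                        # anything other than 381 would explain extra lines
    ## write() wraps the flattened vector every ncolumns values:
    ## write(unlist(big_list), file = "out.txt", ncolumns = 381, sep = "\t")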

[R] ReadMM does not support array format

2014-09-16 Thread Manoj Kumar
Hello, I have a file x.mtx stored as an array-format MatrixMarket file. The header says %%MatrixMarket matrix array real general However when I do readMM(x.mtx), it raises an error saying Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, : scan() expected 'an
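
Matrix::readMM() handles the sparse "coordinate" MatrixMarket format only. A hedged workaround sketch for a dense "array real general" file, written as a small helper that is not part of any package:

    read_mm_array <- function(path) {
      lines <- readLines(path)
      body  <- lines[!grepl("^%", lines)]           # drop %%MatrixMarket header and % comments
      body  <- body[nzchar(body)]                   # drop blank lines
      dims  <- scan(text = body[1], quiet = TRUE)   # "nrow ncol"
      vals  <- scan(text = paste(body[-1], collapse = "\n"), quiet = TRUE)
      matrix(vals, nrow = dims[1], ncol = dims[2])  # array format lists values column-major
    }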

Re: [R] R's memory limitation and Hadoop

2014-09-16 Thread Prof Brian Ripley
On 16/09/2014 13:56, peter dalgaard wrote: Not sure trolling was intended here. Anyways: Yes, there are ways of working with very large datasets in R, using databases or otherwise. Check the CRAN task views. SAS will for _some_ purposes be able to avoid overflowing RAM by using sequential

Re: [R] Using R to get updated access token on FB Graph API?

2014-09-16 Thread Chichi Shu
Thanks, Ista. I think the RFacebook package is using httr to get the access token too, so I don’t have to use it separately to get the access token. Once someone creates an app ID and secret, do they ever expire? I can’t find any information online about the expiration of the App ID and App secret, so I’m

Re: [R] ncdf size error

2014-09-16 Thread Hernan A. Moreno Ramirez
Sure, here it is. Thanks for any help with this Dr. Pierce: # Exporting to NETCDF files # # define the netcdf coordinate variables library(ncdf) dim1 = dim.def.ncdf("Nodes", "", seq(1,19000)) dim2 = dim.def.ncdf("Time", "Hours since

Re: [R] R's memory limitation and Hadoop

2014-09-16 Thread William Dunlap
[*] I recall a student fitting a GLM with about 30 predictors to 1.5m records: at the time (ca R 2.14) it did not fit in 4GB but did in 8GB. You can easily run out of memory when a few of the variables are factors, each with many levels, and the user looks for interactions between them. This
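
A rough illustration of that point: crossing two factors with many levels makes the model matrix very wide, and memory use grows with rows times columns.

    f1 <- factor(sample(letters, 1e4, replace = TRUE))
    f2 <- factor(sample(LETTERS, 1e4, replace = TRUE))
    mm <- model.matrix(~ f1 * f2)          # main effects plus all interaction columns
    ncol(mm)                               # on the order of 26 * 26 columns
    print(object.size(mm), units = "MB")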

Re: [R] ncdf size error

2014-09-16 Thread David W. Pierce
On Tue, Sep 16, 2014 at 11:36 AM, Hernan A. Moreno Ramirez hmore...@uwyo.edu wrote: Sure, here it is. Thanks for any help with this Dr. Pierce: # Exporting to NETCDF files # # define the netcdf coordinate variables

Re: [R] ncdf size error

2014-09-16 Thread David W. Pierce
Remember, you have to *also* set force_v4=TRUE in the nc_create() call. Regards, --Dave On Tue, Sep 16, 2014 at 2:01 PM, Hernan A. Moreno Ramirez hmore...@uwyo.edu wrote: Hi Professor, Thanks for your valuable help. I did change my code to use the ncdf4 package and still I get the same error:
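
A minimal sketch of the fix being described, using the ncdf4 package; the time dimension size and variable names are placeholders, not the poster's actual values:

    library(ncdf4)
    dim_nodes <- ncdim_def("Nodes", "", seq_len(19000))
    dim_time  <- ncdim_def("Time", "hours", seq_len(8760))
    var_q     <- ncvar_def("Q", "m3/s", list(dim_nodes, dim_time), missval = -9999)
    nc <- nc_create("out.nc", var_q, force_v4 = TRUE)   # NetCDF-4 files may exceed 2 GB
    ## ncvar_put(nc, var_q, data_matrix)
    nc_close(nc)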

Re: [R] ncdf size error

2014-09-16 Thread Hernan A. Moreno Ramirez
Hi Professor, Thanks for your valuable help. I did change my code to use the ncdf4 package and still I get the same error: # Exporting to NETCDF files # # define the netcdf coordinate variables -- note these have values!

[R] Problem when estimating through dlm package

2014-09-16 Thread Daniel Miquelluti
I'm trying to set up an AR(2) model in the dlm context. I've generated a time series utilizing the code:
am = 800    # sample size
des = 200   # initial values to be discarded
V = 0.5
v = rnorm((am+des+1), 0, sqrt(V))
W = 0.9
w = rnorm((am+des+1), 0, sqrt(W))
U = 0.9
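
For context, a hedged sketch of one way to set up an AR(2) model in the dlm framework using dlmModARMA(); the coefficients and variances are placeholders, not the poster's values:

    library(dlm)
    mod  <- dlmModARMA(ar = c(0.5, 0.3), sigma2 = 0.5)    # AR(2) in state-space form
    y    <- arima.sim(model = list(ar = c(0.5, 0.3)), n = 800)
    filt <- dlmFilter(y, mod)                             # Kalman filter the simulated series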

[R] Changepoint analysis--is it possible to attribute changepoints to explanatory variables?

2014-09-16 Thread Jones, Kristopher@DWR
Hello, I would like to evaluate the relationship between flows and phytoplankton abundance (or Chlorophyll a concentrations) using a changepoint analysis.  Specifically, I have two study questions: Study Question 1: Are there certain flow thresholds that result in spikes in phytoplankton

[R] Error in predict.lm() for models with no intercept?

2014-09-16 Thread isabella
Hi everyone, Could there be an error in the predict() function in R for models without intercepts which include one or more predictors? When using the predict() function to obtain the standard errors of the fitted values produced by a linear model (lm), the behaviour of the

Re: [R] Error in predict.lm() for models with no intercept?

2014-09-16 Thread isabella
Hi Antonio, I've just sent an e-mail to the r-help list with some R code which shows that the standard errors of the fitted values are indeed computed incorrectly by R (please see below). Let's hope that there will be at least one helpful answer to the question.

Re: [R] Changepoint analysis--is it possible to attribute changepoints to explanatory variables?

2014-09-16 Thread Bert Gunter
This is primarily a statistical issue and is off-topic here. I would strongly suggest that you consult with a local statistical expert. The answer is almost certainly yes: this is regression (perhaps quantile regression) in which the error structure is not iid (the response is an autocorrelated

Re: [R] Error in predict.lm() for models with no intercept?

2014-09-16 Thread Ista Zahn
Hi, Your example didn't come through, probably because you sent your message in HTML. Please re send it in plain text (messages sent to this list should always be plain text, as explained in the posting guide). Best, Ista Hi everyone, Could there be an error in the predict()

[R] Error in predict.lm() for models without intercept?

2014-09-16 Thread isabella
Hi everyone, It appears my R code didn't come through the first time (thanks for letting me know, Ista). Here is my message again: Could there be an error in the predict() function in R for models without intercepts which include one or more predictors? When using the predict() function to

Re: [R] Error in predict.lm() for models without intercept?

2014-09-16 Thread Rolf Turner
When I run your code (in the single predictor case) I get exactly what I would expect. In particular the standard errors are indeed proportional to the (absolute) value of x, and the standard error is indeed 0 at x = 0. The proportionality constant is exactly what it should be, explicitly
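
A small check of that explanation: for a through-the-origin fit y ~ x - 1, the standard error of the fitted value at x0 is |x0| times the standard error of the slope estimate, and therefore 0 at x0 = 0. Simulated data, not the original poster's:

    set.seed(1)
    x <- runif(50, -2, 2)
    y <- 3 * x + rnorm(50)
    fit <- lm(y ~ x - 1)
    new <- data.frame(x = c(-1, 0, 1))
    p   <- predict(fit, newdata = new, se.fit = TRUE)
    cbind(x = new$x, se_fit = p$se.fit,
          abs_x_times_se_slope = abs(new$x) * summary(fit)$coefficients[1, 2])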

[R] Unexpected behaviour of plyr::ddply

2014-09-16 Thread walmes .
Hello R users, I'm writing a brief tutorial on getting statistical measures by splitting according to strata and over columns. When I used plyr::ddply I got an unexpected result, with NA/NaN for non-existing cells. Below is a minimal reproducible code with the result that I got. For comparison, the
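
One hedged way such NA/NaN cells can appear (whether this matches the poster's exact case is uncertain): with .drop = FALSE, ddply() applies the summary function to factor combinations that have no rows, and mean() of an empty vector is NaN.

    library(plyr)
    d <- data.frame(g1 = factor(c("a", "a", "b"), levels = c("a", "b")),
                    g2 = factor(c("x", "x", "x"), levels = c("x", "y")),
                    y  = c(1, 2, 3))
    ddply(d, .(g1, g2), summarise, m = mean(y), .drop = FALSE)   # empty cells give NaN
    ddply(d, .(g1, g2), summarise, m = mean(y))                  # default drops empty cells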

Re: [R] Error in predict.lm() for models without intercept?

2014-09-16 Thread isabella
Hi Rolf, Thanks very much for your response. You are right - my simulated example works as intended, so it can't be used to get to the bottom of this problem (if it is a problem at all). Here is