Re: [R] no results

2015-11-10 Thread Johannes Huesing

Alaa Sindi  [Tue, Nov 10, 2015 at 08:47:23PM CET]:

Hi All,

I am not getting any summary results and I do not have any error. What could
the problem be?







sem <- mlogit.optim(LL, Start, method = 'nr', iterlim = 2000, tol = 1e-05,
                    ftol = 1e-08, steptol = 1e-10, print.level = 0)


I see some expressions which are undefined, for instance mlogit.optim, LL
and Start.


This causes the code not to run on my machine.


R.Version()

$platform
[1] "i686-pc-linux-gnu"

$arch
[1] "i686"

$os
[1] "linux-gnu"

$system
[1] "i686, linux-gnu"

$status
[1] ""

$major
[1] "3"

$minor
[1] "0.2"

$year
[1] "2013"

$month
[1] "09"

$day
[1] "25"

$`svn rev`
[1] "63987"

$language
[1] "R"

$version.string
[1] "R version 3.0.2 (2013-09-25)"

$nickname
[1] "Frisbee Sailing"
--
Johannes Hüsing   
http://derwisch.wikidot.com

Threema-ID: VHVJYH3H

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] Identifying attributes of specific foreach task on error.

2015-11-10 Thread nevil amos
I am running foreach to reclassify a large number of rasters.

I am getting: Error in { : task 1359 failed - "cannot open the
connection"

How do I get the attributes (in this case, the files being read and written)
for that specific task?


It runs fine until that point; presumably there is a problem with an input
or output file name, but I cannot determine which file it refers to.
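One way to recover the failing file names is to catch errors per task and return the task's context instead of letting the whole loop abort. This is only a sketch: `infiles`, `outfiles` and `reclassify_one()` are hypothetical stand-ins for your raster file vectors and per-raster worker function.

```r
library(foreach)

# Sketch: record which files each failing task was touching.
res <- foreach(i = seq_along(infiles), .errorhandling = "pass") %dopar% {
  tryCatch(
    reclassify_one(infiles[i], outfiles[i]),   # hypothetical worker function
    error = function(e) {
      # Return the task context so the failing file can be identified later
      list(task = i, infile = infiles[i], outfile = outfiles[i],
           error = conditionMessage(e))
    }
  )
}

# Tasks whose result carries an $error element are the ones that failed
failed <- Filter(function(x) is.list(x) && !is.null(x$error), res)
```

`.errorhandling = "pass"` keeps the loop running past a failure; the tryCatch wrapper additionally records which files task i was working on.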

cheers

Nevil Amos




[R] Revolutions blog: October 2015 roundup

2015-11-10 Thread David Smith
Since 2008, Revolution Analytics (and now Microsoft) staff and guests have 
written about R every weekday at the
Revolutions blog:
http://blog.revolutionanalytics.com
and every month I post a summary of articles from the previous month of 
particular interest to readers of r-help.

In case you missed them, here are some articles related to R from the month of 
October:

A way of dealing with confounding variables in experiments: instrumental 
variable analysis with the ivmodel package for
R: http://blog.revolutionanalytics.com/2015/10/instrumental-variables.html

The new dplyrXdf package 
http://blog.revolutionanalytics.com/2015/10/the-dplyrxdf-package.html allows 
you to manipulate
large, out-of-memory data sets in the XDF format (used by the RevoScaleR 
package) using dplyr syntax:
http://blog.revolutionanalytics.com/2015/10/using-the-dplyrxdf-package.html

Some guidelines for using explicit parallel programming (e.g. the parallel 
package) with the implicit multithreading
provided by Revolution R Open:
http://blog.revolutionanalytics.com/2015/10/edge-cases-in-using-the-intel-mkl-and-parallel-programming.html

Ross Ihaka was featured in a full-page advertisement for the University of 
Auckland in The Economist:
http://blog.revolutionanalytics.com/2015/10/ross-ihaka-in-the-economist.html

A video from the PASS 2015 conference in Seattle shows R running within SQL 
Server 2016:
http://blog.revolutionanalytics.com/2015/10/demo-r-in-sql-server-2016.html . The 
preview for SQL Server 2016 includes
Revolution R Enterprise (as SQL Server R Services)
http://blog.revolutionanalytics.com/2015/10/revolution-r-now-available-with-sql-server-community-preview.html

A comparison of fitting decision trees in R with the party and rpart packages:
http://blog.revolutionanalytics.com/2015/10/party-with-the-first-tribe.html

The foreach suite of packages for parallel programming in R has been updated, 
and now includes support for progress bars
when using doSNOW: 
http://blog.revolutionanalytics.com/2015/10/updates-to-the-foreach-package-and-its-friends.html

The "reach" package allows you to call Matlab functions directly from R:
http://blog.revolutionanalytics.com/2015/10/reach-for-your-matlab-data-with-r.html

A review of support vector machines (SVMs) in R:
http://blog.revolutionanalytics.com/2015/10/the-5th-tribe-support-vector-machines-and-caret.html

A presentation (with sample code) shows how to call Revolution R Enterprise 
from SQL Server 2016:
http://blog.revolutionanalytics.com/2015/10/previewing-using-revolution-r-enterprise-inside-sql-server.html

A tutorial on using the miniCRAN package to set up packages for use with R in 
Azure ML:
http://blog.revolutionanalytics.com/2015/10/using-minicran-in-azure-ml.html

Asif Salam shows how to use the RDCOMClient package to construct interactive 
Powerpoint slide shows with R:
http://blog.revolutionanalytics.com/2015/10/programmatically-create-interactive-powerpoint-slides-with-r.html

A directory of online R courses for all skill levels:
http://blog.revolutionanalytics.com/2015/10/learning-r-oct-2015.html

Using R's nls() optimizer to solve a problem in Bayesian inference:
http://blog.revolutionanalytics.com/2015/10/parameters-and-percentiles-the-gamma-distribution.html

A professor uses the miniCRAN package to deliver R packages to offline 
facilities in Turkey and Iran:
http://blog.revolutionanalytics.com/2015/10/using-minicran-on-site-in-iran.html

Amanda Cox, graphics editor at the New York Times, calls R "the greatest 
software on Earth" in a podcast:
http://blog.revolutionanalytics.com/2015/10/amanda-cox-on-using-r-at-the-nyt.html

Hadley Wickham answered many questions in a Reddit "Ask Me Anything" session:
http://blog.revolutionanalytics.com/2015/10/hadley-wickhams-ask-me-anything-on-reddit.html

A roundup of several talks given at R user group meetings around the world:
http://blog.revolutionanalytics.com/2015/10/r-user-groups-highlight-r-creativity.html

General interest stories (not related to R) in the past month included: 
visualizing the movements of chess pieces
(http://blog.revolutionanalytics.com/2015/10/chess-piece-moves.html), real-time 
face replication
(http://blog.revolutionanalytics.com/2015/10/because-its-friday-faceon.html), a 
world map of antineutrinos
(http://blog.revolutionanalytics.com/2015/10/because-its-friday-mapping-antineutrinos.html),
 a transformation
(http://blog.revolutionanalytics.com/2015/10/because-its-friday-a-transformation.html),
 and a warning about "big data"
applications 
(http://blog.revolutionanalytics.com/2015/10/because-its-friday-are-we-selling-radium-underpants.html).

Meeting times for local R user groups 
(http://blog.revolutionanalytics.com/local-r-groups.html) can be found on the
updated R Community Calendar at: 
http://blog.revolutionanalytics.com/calendar.html

If you're looking for more articles about R, you can find summaries from 
previous months at
http://blog.revolutionanalytics.com/roundups/. You can recei

[R] no results

2015-11-10 Thread Alaa Sindi
Hi All,

I am not getting any summary results and I do not have any error. What could
the problem be?



sem <- mlogit.optim(LL, Start, method = 'nr', iterlim = 2000, tol = 1e-05,
                    ftol = 1e-08, steptol = 1e-10, print.level = 0)
summary(sem)
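If mlogit.optim, LL and Start are all defined, one possible cause (a guess, since no reproducible example was posted) is that the script is being run via source() or Rscript rather than typed at the prompt: top-level auto-printing is suppressed there, so the summary is computed but never shown.

```r
# Auto-printing does not happen inside source()/Rscript, so be explicit:
print(summary(sem))

# or keep the object and inspect its pieces interactively:
s <- summary(sem)
str(s)
```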

thanks



Re: [R] MAc-package, on OS-X 10.11.1: Warning Message In sqrt(var.T.agg.tau) : NaNs produced

2015-11-10 Thread Roebuck,Paul L
Maybe change the warnings to errors temporarily, and decide for yourself about 
the problem.

...
R> options(warn = 2)  # Turn warnings into errors
R> omni(es=z, var=var.z, data=dat, type="weighted", method="fixed", ztor=TRUE)
R> traceback()



From: R-help [r-help-boun...@r-project.org] on behalf of Kostadin Kushlev 
[kushl...@gmail.com]
Sent: Tuesday, November 10, 2015 2:14 AM
To: r-help@r-project.org
Subject: [R] MAc-package, on OS-X 10.11.1: Warning Message In
sqrt(var.T.agg.tau) : NaNs produced

I am trying to run a mini meta-analysis for three studies within one of my 
papers. I am using the MAc package for meta-analyzing correlation coefficients.

I am using the following command: omni(es = z, var = var.z, data = dat, 
type="weighted", method = "fixed", ztor = TRUE)

This produces the expected values, but also a warning message: In 
sqrt(var.T.agg.tau) : NaNs produced

This message is repeated twice.

I am wondering how to interpret this warning message. Is this because the program 
can’t estimate certain parameters due to the small number of correlation 
coefficients I am trying to analyze (only 3)?

Most importantly, can I report and interpret the output despite those messages?



Re: [R] conditionally disable evaluation of chunks in Rmarkdown...

2015-11-10 Thread Henrik Bengtsson
On Tue, Nov 10, 2015 at 9:00 AM, Yihui Xie  wrote:
> The short answer is you cannot. Inline R code is always evaluated.
> When it is not evaluated, I doubt if your output still makes sense,
> e.g. "The value of x is `r x`." becomes "The value of x is ." That
> sounds odd to me.
>
> If you want to disable the evaluation of inline code anyway, you may use
> a custom function to do it, e.g.
>
> cond_eval = function(x) {
>   if (isTRUE(knitr::opts_chunk$get('eval'))) x
> }
>
> Then `r cond_eval(x)` instead of `r x`.
>
> Regards,
> Yihui
> --
> Yihui Xie 
> Web: http://yihui.name
>
>
> On Tue, Nov 10, 2015 at 4:40 AM, Witold E Wolski  wrote:
>> I do have an Rmd where I would like to conditionally evaluate the second 
>> part.
>>
>> So far I am working with :
>>
>> ```{r}
>> if(length(specLibrary@ionlibrary) ==0){
>>   library(knitr)
>>   opts_chunk$set(eval=FALSE, message=FALSE, echo=FALSE)
>> }
>> ```
>>
>> Which disables the evaluation of subsequent chunks.
>>
>> However my RMD file contains also these kind of snippets : `r `
>>
>> How do I disable them?

Just an FYI, and maybe/maybe not an option for you: this is one of the
use cases where RSP (https://cran.r-project.org/package=R.rsp) is
handy, because it does not require code snippets (aka "code chunks" as
originally defined by weave/tangle literate programming) to contain
complete expressions.  With RSP-embedded documents, you can do things
such as

<% if (length(specLibrary@ionlibrary) > 0) { %>

[... code and text blocks to conditionally include ...]

<% } # if (length(specLibrary@ionlibrary) > 0) %>

or include from a separate file, e.g.

<% if (length(specLibrary@ionlibrary) > 0) { %> <%@include
file="extras.md.rsp"%> <% } %>

You can also use loops over a mixture of code and text blocks etc.

Depending on when 'specLibrary@ionlibrary' gets assigned, you could
preprocess your R Markdown file with RSP, but for this to work out of
the box you basically need to know the value
length(specLibrary@ionlibrary) before your R Markdown code is
evaluated, i.e. before you compile the Rmd file.  Your build pipeline
would then look something like:

rmd <- R.rsp::rfile("report.rmd.rsp")
rmarkdown::render(rmd)

/Henrik
(author of R.rsp)

>>
>> regards
>>
>>
>>
>> --
>> Witold Eryk Wolski
>



Re: [R] [GGplot] Geom_smooth with formula "power"?

2015-11-10 Thread Bert Gunter
See sections 8.4-8.5 of MASS, 4th ed. (the book) and the use of
profile.nls() and friends for profiling the log-likelihood surface.

Warning: standard F and t approximations may be poor if the log
likelihood is not close enough to quadratic. The whole issue of what
df to use is also contentious/unresolved.

-- Bert

Bert Gunter

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
   -- Clifford Stoll


On Tue, Nov 10, 2015 at 9:47 AM, David Winsemius  wrote:
>
>> On Nov 9, 2015, at 7:19 AM, Catarina Silva 
>>  wrote:
>>
>> I've tried other starting solutions, and the fit to the power
>> model worked in ggplot - geom_smooth.
>> But with "nls" I can't get the confidence interval with ggplot -
>> geom_smooth? I read that with "nls" we have to force "se=FALSE". Is this
>> true?
>
> Well, sort of. Setting `se = FALSE` prevents the ggplot2 functions from 
> trying to force nls and nls.predict to do something that is not in their 
> design.
>
>> How can I draw confidence interval in the plot?
>>
>> I've done this:
>>> ggplot(data,aes(x = idade,y = v_mt)) +
>> +   geom_point(alpha=2/10, shape=21,fill="darkgray", colour="black", size=3) 
>> +
>> +   geom_smooth(method = 'nls', formula = y ~ a * x^b, start = 
>> list(a=1,b=2),se=FALSE)
>>>
>> And then I don't have the confidence interval.
>>
>> If I do:
>>> ggplot(data,aes(x = idade,y = v_mt)) +
>> +   geom_point(alpha=2/10, shape=21,fill="darkgray", colour="black", size=3) 
>> +
>> +   geom_smooth(method = 'nls', formula = y ~ a * x^b, start = list(a=1,b=2))
>> Error in pred$fit : $ operator is invalid for atomic vectors
>>>
>>
>> Return error…
>>
> Read the help page for nls. The 'se.fit' parameter is set to FALSE and 
> efforts to make it TRUE will be ignored. So `predict.nls` simply does not 
> return std-error estimates in the typical manner of other predict.* 
> functions. I believe this is because the authors of `nls` did not think there 
> was a clear answer to the question of what confidence bounds should be 
> returned.
>
>  If you want to add confidence bounds to an nls, then you need to decide what 
> bounds to add, and then use the ggplot2 line-drawing functions to overlay 
> them on your own. I found posts in R-help that pointed me to the 'nls2'
> package, but when I tried to run the code I got messages saying that the 
> `as.lm` function could not be found.
>
> http://markmail.org/message/7kvolf5zzpqyb7l2?q=list:org%2Er-project%2Er-help+as%2Elm+is+a+linear+model+between+the+response+variable+and+the+gradient
>
>> require(nls2)
> Loading required package: nls2
> Loading required package: proto
>> fm <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD)
>> predict(as.lm(fm), interval = "confidence")
> Error in predict(as.lm(fm), interval = "confidence") :
>   could not find function "as.lm"
>> getAnywhere(as.lm)
> no object named ‘as.lm’ was found
>
>
> I also found a couple of posts on R-bloggers pointing me to the 'propagate'
> package which has two different methods for constructing confidence intervals.
>
> http://rmazing.wordpress.com/2013/08/14/predictnls-part-1-monte-carlo-simulation-confidence-intervals-for-nls-models/
> http://rmazing.wordpress.com/2013/08/26/predictnls-part-2-taylor-approximation-confidence-intervals-for-nls-models/
>
>
>
> —
> David.
>
>> Ty,
>> Catarina Silva
>>
>> -Original Message-
>> From: Jeff Newmiller [mailto:jdnew...@dcn.davis.ca.us]
>> Sent: sábado, 7 de Novembro de 2015 01:09
>> To: bolseiro.raiz.csi...@portucelsoporcel.com; R mailling list
>> Subject: Re: [R] [GGplot] Geom_smooth with formula "power"?
>>
>> Does  [1] help?
>>
>> [1] 
>> http://stackoverflow.com/questions/10528631/add-exp-power-trend-line-to-a-ggplot
>> ---
>> Jeff Newmiller  DCN: jdnew...@dcn.davis.ca.us
>> Research Engineer (Solar/Batteries/Software/Embedded Controllers)
>> ---
>> Sent from my phone. Please excuse my brevity.
>>
>> On November 6, 2015 2:41:18 AM PST, Catarina Silva 
>>  wrote:
>>> Hi,
>>>
>>> Is it possible to use ggplot and geom_smooth to fit a power curve to
>>> the data?
>>>
>>> Initially I did the fit with nls and the formula 'a*x^b',
>>> but it resulted in a singular-matrix error for the starting values.
>>> Alternatively I used the log transformation and got correct results,
>>> but I can't draw a power curve on the graphic.
>>>
>>> Does someone know how to solve this problem?
>>>
>>>
>
> David Winsemius
> Alameda, CA, USA
>

Re: [R] MAc-package, on OS-X 10.11.1: Warning Message In sqrt(var.T.agg.tau) : NaNs produced

2015-11-10 Thread Michael Dewey

Dear Kostadin

You could give us the data (after all, there are only three studies) so
we can run it through a different package, or you could look at the code
for omni and see where it computes sqrt(var.T.agg.tau) and how
var.T.agg.tau was computed in the first place.
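For example, the function source can be printed or stepped through directly (assuming the MAc package is installed and omni is exported, as the call above suggests):

```r
library(MAc)   # assumes the MAc package is installed
print(omni)    # show the function body; search it for var.T.agg.tau

# Step through the call that produced the warning:
# debugonce(omni)
# omni(es = z, var = var.z, data = dat, type = "weighted",
#      method = "fixed", ztor = TRUE)
```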


And you could turn off HTML in your mailer, as otherwise your mail can
get corrupted.


On 10/11/2015 08:14, Kostadin Kushlev wrote:

Hi there,

I am trying to run a mini meta-analysis for three studies within one of my 
papers. I am using the MAc package for meta-analyzing correlation coefficients.

I am using the following command: omni(es = z, var = var.z, data = dat, type="weighted", 
method = "fixed", ztor = TRUE)

This produces the expected values, but also a warning message: In 
sqrt(var.T.agg.tau) : NaNs produced

This message is repeated twice.

I am wondering how to interpret this warning message. Is this because the program 
can’t estimate certain parameters due to the small number of correlation 
coefficients I am trying to analyze (only 3)?

Most importantly, can I report and interpret the output despite those messages?

Kosta







--
Michael
http://www.dewey.myzen.co.uk/home.html


Re: [R] [GGplot] Geom_smooth with formula "power"?

2015-11-10 Thread David Winsemius

> On Nov 9, 2015, at 7:19 AM, Catarina Silva 
>  wrote:
> 
> I've tried other starting solutions, and the fit to the power
> model worked in ggplot - geom_smooth.
> But with "nls" I can't get the confidence interval with ggplot - geom_smooth?
> I read that with "nls" we have to force "se=FALSE". Is this true?

Well, sort of. Setting `se = FALSE` prevents the ggplot2 functions from trying 
to force nls and nls.predict to do something that is not in their design.

> How can I draw confidence interval in the plot?
> 
> I've done this:
>> ggplot(data,aes(x = idade,y = v_mt)) +
> +   geom_point(alpha=2/10, shape=21,fill="darkgray", colour="black", size=3) 
> + 
> +   geom_smooth(method = 'nls', formula = y ~ a * x^b, start = 
> list(a=1,b=2),se=FALSE) 
>> 
> And then I don't have the confidence interval.
> 
> If I do:
>> ggplot(data,aes(x = idade,y = v_mt)) +
> +   geom_point(alpha=2/10, shape=21,fill="darkgray", colour="black", size=3) 
> + 
> +   geom_smooth(method = 'nls', formula = y ~ a * x^b, start = list(a=1,b=2)) 
> Error in pred$fit : $ operator is invalid for atomic vectors
>> 
> 
> Return error…
> 
Read the help page for nls. The 'se.fit' parameter is set to FALSE and efforts 
to make it TRUE will be ignored. So `predict.nls` simply does not return 
std-error estimates in the typical manner of other predict.* functions. I 
believe this is because the authors of `nls` did not think there was a clear 
answer to the question of what confidence bounds should be returned. 

 If you want to add confidence bounds to an nls, then you need to decide what 
bounds to add, and then use the ggplot2 line-drawing functions to overlay them 
on your own. I found posts in R-help that pointed me to the 'nls2' package, but 
when I tried to run the code I got messages saying that the `as.lm` function 
could not be found. 

http://markmail.org/message/7kvolf5zzpqyb7l2?q=list:org%2Er-project%2Er-help+as%2Elm+is+a+linear+model+between+the+response+variable+and+the+gradient

> require(nls2)
Loading required package: nls2
Loading required package: proto
> fm <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD)
> predict(as.lm(fm), interval = "confidence")
Error in predict(as.lm(fm), interval = "confidence") : 
  could not find function "as.lm"
> getAnywhere(as.lm)
no object named ‘as.lm’ was found


I also found a couple of posts on R-bloggers pointing me to the 'propagate' 
package which has two different methods for constructing confidence intervals.

http://rmazing.wordpress.com/2013/08/14/predictnls-part-1-monte-carlo-simulation-confidence-intervals-for-nls-models/
http://rmazing.wordpress.com/2013/08/26/predictnls-part-2-taylor-approximation-confidence-intervals-for-nls-models/
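For completeness, a rough pointwise band can also be built by hand from coef() and vcov() via the delta method. This is a sketch under the usual large-sample assumptions, not a definitive recipe; it uses the built-in BOD data and the same SSasympOrig fit as above (whose mean function is A * (1 - exp(-exp(lrc) * x))):

```r
# Sketch: pointwise ~95% delta-method confidence band for an nls fit.
fm  <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD)
est <- coef(fm); V <- vcov(fm)
mean_fun <- function(p, x) p[["A"]] * (1 - exp(-exp(p[["lrc"]]) * x))

band <- t(sapply(seq(min(BOD$Time), max(BOD$Time), length.out = 25), function(x) {
  # numeric gradient of the mean function with respect to the parameters
  g <- sapply(seq_along(est), function(k) {
    h  <- 1e-6 * max(abs(est[k]), 1)
    ep <- est; ep[k] <- ep[k] + h
    (mean_fun(ep, x) - mean_fun(est, x)) / h
  })
  fit <- mean_fun(est, x)
  se  <- sqrt(drop(t(g) %*% V %*% g))
  c(x = x, fit = fit, lwr = fit - 1.96 * se, upr = fit + 1.96 * se)
}))
```

The lwr/upr columns can then be overlaid with lines() or ggplot2's geom_ribbon() on top of the fitted curve.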



— 
David.

> Ty,
> Catarina Silva  
> 
> -Original Message-
> From: Jeff Newmiller [mailto:jdnew...@dcn.davis.ca.us] 
> Sent: sábado, 7 de Novembro de 2015 01:09
> To: bolseiro.raiz.csi...@portucelsoporcel.com; R mailling list
> Subject: Re: [R] [GGplot] Geom_smooth with formula "power"?
> 
> Does  [1] help? 
> 
> [1] 
> http://stackoverflow.com/questions/10528631/add-exp-power-trend-line-to-a-ggplot
> ---
> Jeff Newmiller  DCN: jdnew...@dcn.davis.ca.us
> Research Engineer (Solar/Batteries/Software/Embedded Controllers)
> ---
> Sent from my phone. Please excuse my brevity.
> 
> On November 6, 2015 2:41:18 AM PST, Catarina Silva 
>  wrote:
>> Hi,
>> 
>> Is it possible to use ggplot and geom_smooth to fit a power curve to
>> the data?
>>
>> Initially I did the fit with nls and the formula 'a*x^b',
>> but it resulted in a singular-matrix error for the starting values.
>> Alternatively I used the log transformation and got correct results,
>> but I can't draw a power curve on the graphic.
>>
>> Does someone know how to solve this problem?
>> 
>> 

David Winsemius
Alameda, CA, USA


[R] alignments in random points

2015-11-10 Thread Denis Francisci
Dear forum,
I have a number of random points in a polygon. I would like to find the
points that lie on the same line.
Is there an R function to find alignments in points (something like this:
https://en.wikipedia.org/wiki/Alignments_of_random_points)?
And is it possible to do the same thing while varying the width of the lines?

If it is useful, I have attached an example of the R code that I am using.

Thanks in advance,

DF


CODE:

library(sp)
poly<-matrix(c(16,17,25,22,16,58,55,55,61,58),ncol=2,nrow=5)

> poly
 [,1] [,2]
[1,]   16   58
[2,]   17   55
[3,]   25   55
[4,]   22   61
[5,]   16   58

p=Polygon(poly)
ps=Polygons(list(p),1)
sps=SpatialPolygons(list(ps))

p.rd=spsample(sps,n=150,"random")
plot(sps)
points(p.rd,pch=16, col='blue',cex=0.3)
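A brute-force sketch of the alignment idea (not a packaged function): for every pair of points, count how many points fall within a chosen band around the line through the pair. The width argument plays the role of the "line width" asked about above; the thresholds here are arbitrary illustrations.

```r
# Assumes p.rd from the code above (a SpatialPoints object).
xy <- sp::coordinates(p.rd)

find_alignments <- function(xy, width = 0.5, min_pts = 4) {
  n <- nrow(xy); hits <- list()
  for (i in 1:(n - 1)) for (j in (i + 1):n) {
    d   <- xy[j, ] - xy[i, ]
    nrm <- c(-d[2], d[1]) / sqrt(sum(d^2))      # unit normal to the line i-j
    # perpendicular distance of every point to the line through i and j
    dst <- abs(sweep(xy, 2, xy[i, ]) %*% nrm)
    idx <- which(dst <= width / 2)
    if (length(idx) >= min_pts) hits[[length(hits) + 1]] <- idx
  }
  hits
}

aligned <- find_alignments(xy, width = 0.5)
```

This is O(n^2) in the number of pairs times n for the distance check, so it is fine for ~150 points but will not scale to very large point sets.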




[R] MAc-package, on OS-X 10.11.1: Warning Message In sqrt(var.T.agg.tau) : NaNs produced

2015-11-10 Thread Kostadin Kushlev
Hi there, 

I am trying to run a mini meta-analysis for three studies within one of my 
papers. I am using the MAc package for meta-analyzing correlation coefficients. 

I am using the following command: omni(es = z, var = var.z, data = dat, 
type="weighted", method = "fixed", ztor = TRUE)

This produces the expected values, but also a warning message: In 
sqrt(var.T.agg.tau) : NaNs produced

This message is repeated twice.

I am wondering how to interpret this warning message. Is this because the program 
can’t estimate certain parameters due to the small number of correlation 
coefficients I am trying to analyze (only 3)? 

Most importantly, can I report and interpret the output despite those messages? 

Kosta 
 




Re: [R] conditionally disable evaluation of chunks in Rmarkdown...

2015-11-10 Thread Yihui Xie
The short answer is you cannot. Inline R code is always evaluated.
When it is not evaluated, I doubt if your output still makes sense,
e.g. "The value of x is `r x`." becomes "The value of x is ." That
sounds odd to me.

If you want to disable the evaluation of inline code anyway, you may use
a custom function to do it, e.g.

cond_eval = function(x) {
  if (isTRUE(knitr::opts_chunk$get('eval'))) x
}

Then `r cond_eval(x)` instead of `r x`.

Regards,
Yihui
--
Yihui Xie 
Web: http://yihui.name


On Tue, Nov 10, 2015 at 4:40 AM, Witold E Wolski  wrote:
> I do have an Rmd where I would like to conditionally evaluate the second part.
>
> So far I am working with :
>
> ```{r}
> if(length(specLibrary@ionlibrary) ==0){
>   library(knitr)
>   opts_chunk$set(eval=FALSE, message=FALSE, echo=FALSE)
> }
> ```
>
> Which disables the evaluation of subsequent chunks.
>
> However my RMD file contains also these kind of snippets : `r `
>
> How do I disable them?
>
> regards
>
>
>
> --
> Witold Eryk Wolski



Re: [R] ts.intersect() not working

2015-11-10 Thread Giorgio Garziano
It appears to be a numerical precision issue introduced while computing the
"end" value of a time series, when "end" is not already specified as a ts()
input parameter.

You may want to download the R source code:

  https://cran.r-project.org/src/base/R-3/R-3.2.2.tar.gz

and look into R-3.2.2/src/library/stats/R/ts.R, specifically at code lines
52-64, where "end" is handled.

Then look at code lines 140-142 of ts.R, where a comparison is performed in
order to determine whether the time series overlap.

In your second scenario (b2, b3) it happens that:

> tsps
   [,1]   [,2]
[1,] 2009.58333 2009.7
[2,] 2009.5 2009.7
[3,]   12.0   12.0
> st <- max(tsps[1,])
> en <- min(tsps[2,])
> st
[1] 2009.7
> en
[1] 2009.5

And (st > en) triggers the "non-intersecting series" warning.

That issue has origin inside the ts() function in the "end" computation based 
on start, ndata and frequency.

What happens can basically be replicated as follows:

start = c(2009, 8)
end = c(2009,9)
frequency=12
ndata=2

start <- start[1L] + (start[2L] - 1)/frequency
start
[1] 2009.58333

end <- end[1L] + (end[2L] - 1)/frequency
end
[1] 2009.7

end <- start + (ndata - 1)/frequency
end
[1] 2009.5

Note the difference between the two "end" values above.
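The effect can be seen in isolation: because of floating-point rounding, the two arithmetic routes to "end" need not compare exactly equal, which is why an exact comparison inside ts.R can misfire (a minimal illustration, not the exact ts() code path):

```r
a <- 2009 + (9 - 1)/12                   # end computed from end = c(2009, 9)
b <- (2009 + (8 - 1)/12) + (2 - 1)/12    # end computed from start + (ndata - 1)/frequency
a == b                    # may be FALSE: the two routes round differently
isTRUE(all.equal(a, b))   # TRUE: equal within numerical tolerance
```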

As a workaround, you can specify the "end" parameter in the ts() call.


> b2 <- ts(data = c(10, 20), start = c(2009, 8), end = c(2009,9), frequency = 
> 12);
> b2
 Aug Sep
2009  10  20
>
> b3 <- ts(data = matrix(data = 4:6, nrow = 1), start = c(2009, 9), end = 
> c(2009,9), frequency = 12);
> b3
         Series 1 Series 2 Series 3
Sep 2009        4        5        6
>
> bb <- ts.intersect(b2, b3);
> bb
         b2 b3.Series 1 b3.Series 2 b3.Series 3
Sep 2009 20           4           5           6


Hope this helps

--
GG




Re: [R] conditionally disable evaluation of chunks in Rmarkdown...

2015-11-10 Thread Jeff Newmiller
Well, strictly speaking knitr and rmarkdown are contributed packages, and the 
official contact for any contributed package is found via the maintainer() 
function. The Posting Guide indicates that R-help should only be used as a 
backup if that avenue is a dead end.

To be fair to the OP, there are a lot of useful contributed packages that get
discussed here anyway. But Bert is right that if there is a more appropriate
forum for a contributed package then it should be preferred; any question
posed here should mention the dead ends encountered, and good netiquette
would be to link to any publicly-visible record of those attempts.
---
Jeff Newmiller  DCN: jdnew...@dcn.davis.ca.us
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---
Sent from my phone. Please excuse my brevity.

On November 10, 2015 6:44:16 AM PST, Bert Gunter  wrote:
>Strictly speaking, wrong email list.
>
>Markdown is R Studio software, and you should post on their support
>site.
>
>Cheers,
>Bert
>
>
>Bert Gunter
>
>"Data is not information. Information is not knowledge. And knowledge
>is certainly not wisdom."
>   -- Clifford Stoll
>
>
>On Tue, Nov 10, 2015 at 2:40 AM, Witold E Wolski 
>wrote:
>> I do have an Rmd where I would like to conditionally evaluate the
>second part.
>>
>> So far I am working with :
>>
>> ```{r}
>> if(length(specLibrary@ionlibrary) ==0){
>>   library(knitr)
>>   opts_chunk$set(eval=FALSE, message=FALSE, echo=FALSE)
>> }
>> ```
>>
>> Which disables the evaluation of subsequent chunks.
>>
>> However my RMD file contains also these kind of snippets : `r `
>>
>> How do I disable them?
>>
>> regards
>>
>>
>>
>> --
>> Witold Eryk Wolski
>>
>



Re: [R] conditionally disable evaluation of chunks in Rmarkdown...

2015-11-10 Thread Bert Gunter
Strictly speaking, wrong email list.

R Markdown is RStudio software, and you should post on their support site.

Cheers,
Bert


Bert Gunter

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
   -- Clifford Stoll


On Tue, Nov 10, 2015 at 2:40 AM, Witold E Wolski  wrote:
> I do have an Rmd where I would like to conditionally evaluate the second part.
>
> So far I am working with :
>
> ```{r}
> if(length(specLibrary@ionlibrary) ==0){
>   library(knitr)
>   opts_chunk$set(eval=FALSE, message=FALSE, echo=FALSE)
> }
> ```
>
> Which disables the evaluation of subsequent chunks.
>
> However my RMD file contains also these kind of snippets : `r `
>
> How do I disable them?
>
> regards
>
>
>
> --
> Witold Eryk Wolski
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] conditionally disable evaluation of chunks in Rmarkdown...

2015-11-10 Thread Witold E Wolski
I have an Rmd file in which I would like to conditionally evaluate the second part.

So far I am working with:

```{r}
if(length(specLibrary@ionlibrary) ==0){
  library(knitr)
  opts_chunk$set(eval=FALSE, message=FALSE, echo=FALSE)
}
```

Which disables the evaluation of subsequent chunks.

However, my Rmd file also contains inline snippets of this kind: `r `

How do I disable them?
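[Editor's note: a possible workaround, not from the thread. `opts_chunk$set(eval = FALSE)` governs chunks only, not inline `r` snippets, so each inline snippet can be guarded with a flag instead; `show_results` below is a hypothetical variable name.]

```r
# Sketch, assuming knitr: define a flag once in an early chunk ...
show_results <- length(specLibrary@ionlibrary) > 0
```

... and then write each inline snippet as `r if (show_results) my_value else ""`, which renders as empty text when the flag is FALSE.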

regards



-- 
Witold Eryk Wolski

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R units of margins, text and points in a figure

2015-11-10 Thread Jim Lemon
Hi Gudrun,
Let's see. First, the "cex" argument means character _expansion_, not an
absolute size. So as the symbols and letters start out different sizes,
that proportional difference remains as they are expanded. The size of text
depends a bit upon the font that is being used. Perhaps if you determine
the expansions necessary to get the "A" the same size as the pch=20 symbol
(I get about two and one half times the expansion for the symbol as for the
letter) you can write this into your code.

Margins are another unit, the number of lines (vertical space required for
a line of text). This is somewhat larger than the actual vertical extent of
the letters as there is a space left between lines. If you change the
device (e.g. from PNG to PDF) you will probably encounter yet another change
in the apparent size of things. I suggest using the eyeball method to work
out the proportions for the device you want to use and be prepared to
change things if you move to another graphics device.
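[Editor's note: the eyeball method above can also be replaced by measurement, since the device reports sizes in inches. A sketch, assuming the base graphics device; the filename is arbitrary:]

```r
# Open the target device and measure text, symbol and margin-line sizes,
# all in inches, so the three share one unit.
png("measure.png", units = "in", width = 8, height = 8, res = 100)
plot(1:10, 1:10)
line_height   <- par("csi")                                 # approx. height of one margin line, inches
letter_height <- strheight("A", units = "inches", cex = 5)  # height of the letter at cex = 5
letter_width  <- strwidth("A", units = "inches", cex = 5)   # width of the letter at cex = 5
dev.off()
```

The ratio `line_height / letter_height` then gives the expansion factor for that particular device and font.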

Jim


On Tue, Nov 10, 2015 at 7:15 PM, Gygli, Gudrun  wrote:

> Dear R-help,
>
>
> I am trying to plot some data in a plot where I leave a big margin free to
> add other information, namely text and points.
>
> I am now struggling with keeping the size of the margin, text and points
> in a fixed ratio. I want this so that the layout of the figure does not
> change every time I plot new data.
>
> I have:
>
> a<-8
> counter<-1
> spacing<-a/(3*a)
> adding<-a/(3*a)
> start<-a*(a/3)
>
> alphabet<-c("A","A","A","A","A","A","A","A")
> x<-c(1,2,3,4,5,6,7,8)
> y<-c(10,20,30,40,50,60,70,80)
> png(file="TESTING.png", units="in", width=a, height=a, res=a*100)
> par(xpd=NA,mar=c(0,0,0,0),oma=c(0,0,0,(3*a))) #bottom, left,top, right
>
> plot.new()
> plot(x,y)
> points(pch=20,10, 10, cex=5)
> text(10,10, alphabet, cex=5, col="blue")
>
> What I do not understand is why the size of the point and the text is not
> the same and why the margin can be "bigger" than the width of the figure.
>
> BASICALLY, the units of the margins, the points and the text are not the
> same...
>
> What I need is a way to make the size of the point and text AND the margin
> independent of the data I plot so that the figure always looks the same
> although there is more data in it in some cases (for example 10 or 20
> points with text in the margin).
>
> This could be done by setting the units of points, text and margin to
> inches so that a is the same for all of them... Or to know the ratio
> between the different units used by R for margin, points and text...
>
> Any ideas on how to solve that?
>
> Please let me know if clarifications are needed!
>
>
> Gudrun
>
>
>
>
>
>
>
> Gudrun Gygli, MSc
>
> PhD candidate
>
> Wageningen University
> Laboratory of Biochemistry
> Dreijenlaan 3
> 6703 HA Wageningen
> The Netherlands
>
> Phone  31 317483387
> e-mail: gudrun.gy...@wur.nl
>
> - - - - - - - - - - - - - - - - - -
>
> Project information:
> http://www.wageningenur.nl/en/show/Bioinformatics-structural-biology-and-molecular-modeling-of-Vanillyl-Alcohol-Oxidases-VAOs.htm
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Subsetting dataframe by the nearest values of a vector elements

2015-11-10 Thread Harun Rashid via R-help
Hi Jean,
Here is part of my data. As you can see, I have cross-section points and 
the corresponding elevation of a river. Now I want to select cross-section 
points at 50 m intervals. But the real cross-section data might not have 
points at exactly 0, 50, 100, and so on. Therefore, I need to take the 
points closest to those values.

cross_section elevation
1: 5.608 12.765
2: 11.694 10.919
3: 14.784 10.274
4: 20.437 7.949
5: 22.406 7.180
101: 594.255 7.710
102: 595.957 7.717
103: 597.144 7.495
104: 615.925 7.513
105: 615.890 7.751

I checked some suggestions [particularly here] and finally did it like this.

intervals <- c(5,50,100,150,200,250,300,350,400,450,500,550,600)
dt <- data.table(real.val = w$cross_section, w)
setattr(dt, 'sorted', 'cross_section')
dt[J(intervals), roll = "nearest"]

And it gave me what I wanted.

dt[J(intervals), roll = "nearest"]
cross_section real.val elevation
1: 5 5.608 12.765
2: 50 49.535 6.744
3: 100 115.614 8.026
4: 150 152.029 7.206
5: 200 198.201 6.417
6: 250 247.855 4.497
7: 300 298.450 11.299
8: 350 352.473 11.534
9: 400 401.287 10.550
10: 450 447.768 9.371
11: 500 501.284 8.984
12: 550 550.650 16.488
13: 600 597.144 7.495

I don’t know whether there is a smarter way to accomplish this!
Thanks in advance.
Regards,
Harun
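[Editor's note: a base-R alternative sketch, not from the thread, assuming `w` is a data frame with a numeric `cross_section` column: for each target value, pick the row whose `cross_section` is nearest.]

```r
# For each interval value, find the index of the nearest cross_section,
# then subset those rows (no data.table required).
intervals    <- seq(0, 600, by = 50)
nearest_rows <- sapply(intervals, function(v) which.min(abs(w$cross_section - v)))
w_subset     <- w[nearest_rows, ]
```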

On 11/10/15 11:17 AM, David Winsemius wrote:

>> On Nov 9, 2015, at 9:19 AM, Adams, Jean  wrote:
>>
>> Harun,
>>
>> Can you give a simple example?
>>
>> If your cross_section looked like this
>> c(144, 179, 214, 39, 284, 109, 74, 4, 249)
>> and your other vector looked like this
>> c(0, 50, 100, 150, 200, 250, 300, 350)
>> what would you want your subset to look like?
>>
>> Jean
>>
>> On Mon, Nov 9, 2015 at 7:26 AM, Harun Rashid via R-help <
>> r-help@r-project.org> wrote:
>>
>>> Hello,
>>> I have a dataset with two columns 1. cross_section (range: 0~635), and
>>> 2. elevation. The dataset has more than 100 rows. Now I want to make a
>>> subset on the condition that the 'cross_section' column will pick up the
>>> nearest cell from another vector (say 0, 50,100,150,200,.,650).
>>> How can I do this? I would really appreciate a solution.
> If you want the "other vector" to define the "cell" boundaries, and using 
> Jean’s example, it is a simple application of `findInterval`:
>
>> inp <- c(144, 179, 214, 39, 284, 109, 74, 4, 249)
>> mids <- c(0, 50, 100, 150, 200, 250, 300, 350)
>> findInterval( inp, c(mids) )
> [1] 3 4 5 1 6 3 2 1 5
>
> On the other hand ...
>
> To find the number of "closest point", this might help:
>
>
>> findInterval(inp, c( mids[1]-.001, head(mids,-1)+diff(mids)/2, 
>> tail(mids,1)+.001 ) )
> [1] 4 5 5 2 7 3 2 1 6
>
>
>
> —
> David Winsemius
> Alameda, CA, USA
>


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

[R] R units of margins, text and points in a figure

2015-11-10 Thread Gygli, Gudrun
Dear R-help,


I am trying to plot some data in a plot where I leave a big margin free to add 
other information, namely text and points.

I am now struggling with keeping the size of the margin, text and points in a 
fixed ratio. I want this so that the layout of the figure does not change every 
time I plot new data.

I have:

a<-8
counter<-1
spacing<-a/(3*a)
adding<-a/(3*a)
start<-a*(a/3)

alphabet<-c("A","A","A","A","A","A","A","A")
x<-c(1,2,3,4,5,6,7,8)
y<-c(10,20,30,40,50,60,70,80)
png(file="TESTING.png", units="in", width=a, height=a, res=a*100)
par(xpd=NA,mar=c(0,0,0,0),oma=c(0,0,0,(3*a))) #bottom, left,top, right

plot.new()
plot(x,y)
points(pch=20,10, 10, cex=5)
text(10,10, alphabet, cex=5, col="blue")

What I do not understand is why the size of the point and the text is not the 
same and why the margin can be "bigger" than the width of the figure.

BASICALLY, the units of the margins, the points and the text are not the same...

What I need is a way to make the size of the point and text AND the margin 
independent of the data I plot so that the figure always looks the same 
although there is more data in it in some cases (for example 10 or 20 points 
with text in the margin).

This could be done by setting the units of points, text and margin to inches so 
that a is the same for all of them... Or to know the ratio between the 
different units used by R for margin, points and text...

Any ideas on how to solve that?

Please let me know if clarifications are needed!
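[Editor's note: the "set everything to inches" idea above can be sketched with `par(mai = ...)`, which sets margins in inches directly instead of in lines, so the reserved margin no longer depends on line height or cex. The margin widths below are arbitrary choices.]

```r
# Fix the right margin at a/2 inches regardless of text settings.
a <- 8
png("TESTING.png", units = "in", width = a, height = a, res = 100)
par(xpd = NA, mai = c(1, 1, 1, a/2))  # bottom, left, top, right, all in inches
plot(1:8, seq(10, 80, by = 10))
dev.off()
```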


Gudrun







Gudrun Gygli, MSc

PhD candidate

Wageningen University
Laboratory of Biochemistry
Dreijenlaan 3
6703 HA Wageningen
The Netherlands

Phone  31 317483387
e-mail: gudrun.gy...@wur.nl

- - - - - - - - - - - - - - - - - -

Project information: 
http://www.wageningenur.nl/en/show/Bioinformatics-structural-biology-and-molecular-modeling-of-Vanillyl-Alcohol-Oxidases-VAOs.htm

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.