[R] Bayesian Analysis in GJR-GARCH (p, d) model with Student-t innovations

2018-01-18 Thread RESA MAE SANGCO
Good day Ma'am/Sir, I am Resa Mae R. Sangco, a Master of Statistics student
from the MSU-Iligan Institute of Technology in Iligan City, Philippines. I am
currently writing my thesis, entitled "Bayesian Analysis of the GJR-GARCH(p,d)
Model with Student-t Innovations". Because the posterior distribution is hard
to integrate, I use Markov chain Monte Carlo simulation, particularly the
Metropolis-Hastings algorithm. I searched for a package in R but only found
'bayesGARCH', which performs Bayesian estimation of the GARCH(1,1) model with
Student-t innovations. In line with this, I would like to ask whether a
package or function exists for Bayesian estimation of the GJR-GARCH(p,d)
model with Student-t innovations. Your response is highly appreciated. Thank you!
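For what it's worth, the sampler structure itself is easy to write in base R. The sketch below is a toy random-walk Metropolis-Hastings loop with a stand-in log-posterior (a normal log-density); it is purely illustrative, and a GJR-GARCH(p,d) Student-t log-posterior would have to be substituted for `log_post` in a real application.

```r
# Toy random-walk Metropolis-Hastings loop (base R). The target here is a
# stand-in N(1, 2^2) log-density, NOT a GJR-GARCH posterior.
set.seed(42)
log_post <- function(theta) dnorm(theta, mean = 1, sd = 2, log = TRUE)

n_iter <- 5000
draws  <- numeric(n_iter)
theta  <- 0                                   # starting value
for (i in seq_len(n_iter)) {
  prop <- theta + rnorm(1, sd = 0.5)          # symmetric random-walk proposal
  # accept with probability min(1, posterior ratio), on the log scale
  if (log(runif(1)) < log_post(prop) - log_post(theta)) theta <- prop
  draws[i] <- theta
}
mean(draws)   # should be roughly 1 once the chain has mixed
```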

-- 
---
*DISCLAIMER AND CONFIDENTIALITY NOTICE* The Mindanao Sta...{{dropped:30}}

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Re: [R] error while loading ggplot2

2018-01-18 Thread shijin mathew via R-help
Thanks, Jeff.
Below is the session info you requested.
R version 3.4.3 (2017-11-30)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] grid      stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
 [1] reshape_0.8.7       Formula_1.2-2       survival_2.41-3     lattice_0.20-35
 [5] versions_0.3        VIM_4.7.0           data.table_1.10.4-3 colorspace_1.3-2
 [9] nnet_7.3-12         rpart_4.1-11        fBasics_3042.89     timeSeries_3042.102
[13] timeDate_3042.101   fpc_2.1-11          rattle_5.1.0

loaded via a namespace (and not attached):
 [1] nlme_3.1-131        pbkrtest_0.4-7      RColorBrewer_1.1-2  prabclus_2.2-6
 [5] tools_3.4.3         lazyeval_0.2.1      mgcv_1.8-22         trimcluster_0.1-2
 [9] sp_1.2-6            compiler_3.4.3      quantreg_5.34       SparseM_1.77
[13] diptest_0.75-7      scales_0.5.0        lmtest_0.9-35       DEoptimR_1.0-8
[17] mvtnorm_1.0-6       robustbase_0.92-8   randomForest_4.6-12 spatial_7.3-11
[21] stringr_1.2.0       minqa_1.2.4         lme4_1.1-15         rlang_0.1.6
[25] zoo_1.8-1           mclust_5.4          car_2.1-6           magrittr_1.5
[29] modeltools_0.2-21   Matrix_1.2-12       Rcpp_0.12.14        munsell_0.4.3
[33] stringi_1.1.6       yaml_2.1.16         MASS_7.3-47         flexmix_2.3-14
[37] plyr_1.8.4          parallel_3.4.3      splines_3.4.3       boot_1.3-20
[41] stats4_3.4.3        XML_3.98-1.9        rpart.plot_2.1.2    laeken_0.4.6
[45] vcd_1.4-4           nloptr_1.0.4        MatrixModels_0.4-1  gtable_0.2.0
[49] kernlab_0.9-25      amap_0.8-14         e1071_1.6-8         class_7.3-14
[53] cluster_2.0.6       RGtk2_2.20.33

On Wednesday, January 17, 2018, 11:34:17 PM PST, Jeff Newmiller 
 wrote:  
 
 Please post using plain text... the mailing list will strip HTML anyway and 
mess up what you send. 

Send the output of sessionInfo() so we know what versions of R and packages you 
have. 
-- 
Sent from my phone. Please excuse my brevity.

On January 17, 2018 4:37:06 PM PST, shijin mathew via R-help 
 wrote:
>Getting the following error while loading ggplot2.
>
>> library(ggplot2)
>Error: package or namespace load failed for ‘ggplot2’ in
>loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]):
> object 'vI' not found
>Tried different versions of R and ggplot2, but it still doesn't work.
>Any help or pointers to resolve this would be appreciated.
>-S
>
>    [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.  


Re: [R] reading lisp file in R

2018-01-18 Thread Martin Møller Skarbiniks Pedersen
Here is the beginning of an R program.
I hope it can help you write the rest of the program.

Regards
Martin M. S. Pedersen



filename <- "university.data"

lines <- readLines(filename)

first <- TRUE

for (ALine in lines) {
  ALine <- sub("^ +", "", ALine)               # strip leading spaces
  ALine <- gsub(")", "", ALine, fixed = TRUE)  # drop all closing parens
  if (length(grep("def-instance", ALine))) {
    if (first) {
      first <- FALSE
    } else {
      # flush the previous record as one CSV line
      cat(paste(instance, state, control, no_of_students_thous, sep = ","), "\n")
    }
    instance <- sub("(def-instance ", "", ALine, fixed = TRUE)
    next
  }
  if (length(grep("state ", ALine))) {
    state <- sub("(state ", "", ALine, fixed = TRUE)
    next
  }
  if (length(grep("control ", ALine))) {
    control <- sub("(control ", "", ALine, fixed = TRUE)
    next
  }
  if (length(grep("no-of-students thous:", ALine))) {
    no_of_students_thous <- sub("(no-of-students thous:", "", ALine, fixed = TRUE)
    next
  }
}
# flush the final record
if (!first) cat(paste(instance, state, control, no_of_students_thous, sep = ","), "\n")




[ESS] Sweave with ESS and AUCTeX

2018-01-18 Thread Sparapani, Rodney
Hi Gang:

I’ve run into a problem that I can’t seem to figure out.  I have a vignette 
that I am 
writing based on jss.sty (the JSS style) which comes with R.  From the command 
line 
everything works as expected, i.e., “R CMD Sweave --pdf computing.Rnw” produces 
computing.pdf without issue.  However, I would hate to have to go to the command
line every time that I want to generate a PDF.  So, I’m using the support in 
ESS and
AUCTeX as provided by the setting “(setq ess-swv-plug-into-AUCTeX-p t)”.  But,
this doesn't really work.  I can generate the LaTeX file, computing.tex, but I
can't generate the PDF.  The way the ESS and AUCTeX integration appears to work
is that compiling the .tex file uses the standard LaTeX distribution which, in
my case, is TeX Live 2017.  However, TeX Live knows nothing about jss.sty, so
the compilation fails.  It seems to me that instead of generating a .tex file
and then making a .pdf, this functionality should just run
"R CMD Sweave --pdf computing.Rnw", right?  However, I can't seem to find this
in the docs, but I might have missed it.  Anyone have a suggestion on the best
way to go about this with AUCTeX and/or ESS?

Thanks,

Rodney

__
ESS-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/ess-help

Re: [R] Split charts with ggplot2, tidyquant

2018-01-18 Thread Joshua Ulrich
If you don't want to wait for a ggplot2 solution, here are two
alternatives you can use right now:

chartSeries(SPYxts)
# or (with xts > 0.10):
plot(SPYxts$SPY.Close)
addSeries(SPYxts$SPY.Volume, type = "h")

You might also try autoplot.zoo(), though I've never used it.




On Thu, Jan 18, 2018 at 2:11 PM, Eric Berger  wrote:
> Hi Charlie,
> I am comfortable to put the data in any way that works best. Here are two
> possibilities: an xts and a data frame.
>
> library(quantmod)
> quantmod::getSymbols("SPY")  # creates xts variable SPY
> SPYxts <- SPY[,c("SPY.Close","SPY.Volume")]
> SPYdf  <- data.frame(Date=index(SPYxts),close=as.numeric(SPYxts$SPY.Close),
>  volume=as.numeric(SPYxts$SPY.Volume))
> rownames(SPYdf) <- NULL
>
> head(SPYxts)
> head(SPYdf)
>
> #           SPY.Close SPY.Volume
> #2007-01-03    141.37   94807600
> #2007-01-04    141.67   69620600
> #2007-01-05    140.54   76645300
> #2007-01-08    141.19   71655000
> #2007-01-09    141.07   75680100
> #2007-01-10    141.54   72428000
>
> #Date  close   volume
> #1 2007-01-03 141.37 94807600
> #2 2007-01-04 141.67 69620600
> #3 2007-01-05 140.54 76645300
> #4 2007-01-08 141.19 71655000
> #5 2007-01-09 141.07 75680100
> #6 2007-01-10 141.54 72428000
>
> Thanks,
> Eric
>
>
>
> On Thu, Jan 18, 2018 at 8:00 PM, Charlie Redmon  wrote:
>
>> Could you provide some information on your data structure (e.g., are the
>> two time series in separate columns in the data)? The solution is fairly
>> straightforward once you have the data in the right structure. And I do not
>> think tidyquant is necessary for what you want.
>>
>> Best,
>> Charlie
>>
>> --
>> Charles Redmon
>> GRA, Center for Research Methods and Data Analysis
>> PhD Student, Department of Linguistics
>> University of Kansas
>> Lawrence, KS, USA
>>
>>
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



-- 
Joshua Ulrich  |  about.me/joshuaulrich
FOSS Trading  |  www.fosstrading.com
R/Finance 2018 | www.rinfinance.com



Re: [R] Perform mantel test on subset of distance matrix

2018-01-18 Thread Sarah Goslee
Hi Andrew,

Yes, you cannot have NA values in your matrices.
Instead, you could incorporate a model matrix.

See Legendre, P. & Fortin, M.J. Vegetatio (1989) 80: 107.
https://doi.org/10.1007/BF00048036
for ideas.
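One concrete form of that idea (a hedged sketch, assuming the vegan package is installed; its mantel.partial() implements the partial Mantel test) is to encode the pairs you would have set to NA in a third 0/1 matrix and condition on it, rather than putting NAs in the distance matrices:

```r
# Sketch: condition a Mantel test on a 0/1 indicator matrix marking the
# pairs to set aside, instead of inserting NAs.
library(vegan)                 # assumed installed; provides mantel.partial()

a <- matrix(1:36, nrow = 6)
b <- matrix(1:36, nrow = 6)
flag <- matrix(0, 6, 6)
flag[a == 11] <- 1             # mark the pairs you would have replaced by NA
mantel.partial(as.dist(a), as.dist(b), as.dist(flag))
```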

Sarah

On Sun, Dec 31, 2017 at 12:55 PM, Andrew Marx  wrote:
> I'm trying to perform a mantel test that ignores specific pairs in my
> distance matrices. The reasoning is that some geographic distances
> below a certain threshold suffer from spatial autocorrelation, or
> perhaps ecological relationships become less relevant than stochastic
> processes above a certain threshold.
>
> The problem is that I can't find a way to do it. If I replace values
> in either or both of the distance matrices with NA, mantel.rtest (ade4
> package) gives the following error: Error in if (any(distmat < tol))
> warning("Zero distance(s)") : missing value where TRUE/FALSE needed
>
> Here's a trivial example that tries to exclude elements of the first
> matrix that equal 11:
>
> library(ade4)
> a <- matrix(data = 1:36, nrow = 6)
> b <- matrix(data = 1:36, nrow = 6)
> a[a==11] <- NA
> mantel.rtest(as.dist(a), as.dist(b))
>
> Is there a way to do this, either with this package or another?
>


-- 
Sarah Goslee
http://www.functionaldiversity.org



Re: [R] Time-dependent coefficients in a Cox model with categorical variants

2018-01-18 Thread Jeff Newmiller
Offlist... for your information...

It is unfair to suggest that the mailing list participants are at fault for 
using old software.  Even if the mailing list participants use email programs 
that can handle HTML, any email that goes through the list gets the formatting 
stripped, which leaves it damaged to some degree. It might not seem like this 
because sometimes you CAN see formatting, but that only happens when you are 
listed in the "To" or "Cc" fields... the rest of the list saw a stripped 
version regardless of how good their mail program was. Just go look at the 
archives to confirm this. Net result is the rest of the participants see a more 
or less damaged version of the discussion/code whenever HTML is used on list.
-- 
Sent from my phone. Please excuse my brevity.

On January 18, 2018 11:38:17 AM PST, "Therneau, Terry M., Ph.D." 
 wrote:
>
>First, as others have said, please obey the mailing list rules and turn
>off html; not everyone uses an html email client.
>
>Here is your code, formatted and with line numbers added.  I also fixed
>one error: "y" should be "status".
>
>1. fit0 <- coxph(Surv(futime, status) ~ x1 + x2 + x3, data = data0)
>2. p <- log(predict(fit0, newdata = data1, type = "expected"))
>3. lp <- predict(fit0, newdata = data1, type = "lp")
>4. logbase <- p - lp
>5. fit1 <- glm(status ~ offset(p), family = poisson, data = data1)
>6. fit2 <- glm(status~ lp + offset(logbase), family = poisson, data =
>data1)
>7. group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
>8. fit3 <- glm(status ~ -1 + group + offset(p), family = poisson, data
>= data1)
>
>The key idea of the paper you referenced is that the counterpart to the
>Hosmer-Lemeshow test (wrong if used directly in a Cox model) is to look
>at the predicted values from a Cox model as input to a Poisson
>regression.  That means adding the expected from the Cox model as a
>fixed term in the Poisson.  And like any other poisson that means
>offset(log(expected)) as a term.
>
>The presence of time dependent covariates does nothing to change this,
>per se, since expected for time fixed is the same as for time varying. 
>In practice it does matter, at least philosophically.  Lines 1, 2, 5 do
>this just fine.
>
>If data1 is not the same as data0, a new study say, then the test for
>intercept=0 from fit1 is a test of overall calibration.  Models like
>line 8 try to partition out where any differences actually lie.
>
>The time-dependent covariates part lies in the fact that a single
>subject may be represented by multiple lines in data0 and/or data1.  Do
>you want to collapse that person into a single row before the glm fits?
>If subject "Jones" is represented by 15 lines in the data and "Smith"
>by 2, it does seem a bit unfair to give Jones 15 observations in the
>glm fit.  But full discussion of this is as much philosophy as
>statistics, and is perhaps best done over a beer.
>
>Terry T.
>
>
>From: Max Shell [archerr...@gmail.com]
>Sent: Wednesday, January 17, 2018 10:25 AM
>To: Therneau, Terry M., Ph.D.
>Subject: Re: Time-dependent coefficients in a Cox model with
>categorical variants
>
>Assessing calibration of Cox model with time-dependent
>coefficients
>
>I am trying to find methods for testing and visualizing calibration of
>Cox models with time-dependent coefficients. I have read your nice
>article. In
>this paper, we can fit three models:
>
>fit0 <- coxph(Surv(futime, status) ~ x1 + x2 + x3, data = data0)
>p <- log(predict(fit0, newdata = data1, type = "expected"))
>lp <- predict(fit0, newdata = data1, type = "lp")
>logbase <- p - lp
>fit1 <- glm(y ~ offset(p), family = poisson, data = data1)
>fit2 <- glm(y ~ lp + offset(logbase), family = poisson, data = data1)
>group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
>fit3 <- glm(y ~ -1 + group + offset(p), family = poisson, data = data1)
>
>Here, I simply use data1 <- data0[1:500,]
>
>First, I get following error when running line 5.
>
>Error in eval(predvars, data, env) : object 'y' not found
>
>So I modified the code, replacing y with status, like this:
>
>fit1 <- glm(status ~ offset(p), family = poisson, data = data1)
>fit2 <- glm(status ~ lp + offset(logbase), family = poisson, data = data1)
>group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
>fit3 <- glm(status ~ -1 + group + offset(p), family = poisson, data = data1)
>
>Is this replacing correct?
>
>Second, I try to introduce the time transform using coxph with the
>tt() argument.
>
>My code is:  fit0 <- coxph(Surv(time, status) ~ x1 + x2 + x3 + tt(x3),
>data = data0, function(x, t, ...) x * t) p <- log(predict(fit0, newdata
>= data1, type = "expected")) lp <- predict(fit0, newdata = data1, type
>= 

Re: [R] Split charts with ggplot2, tidyquant

2018-01-18 Thread Eric Berger
Hi Charlie,
I am comfortable to put the data in any way that works best. Here are two
possibilities: an xts and a data frame.

library(quantmod)
quantmod::getSymbols("SPY")  # creates xts variable SPY
SPYxts <- SPY[,c("SPY.Close","SPY.Volume")]
SPYdf  <- data.frame(Date=index(SPYxts),close=as.numeric(SPYxts$SPY.Close),
 volume=as.numeric(SPYxts$SPY.Volume))
rownames(SPYdf) <- NULL

head(SPYxts)
head(SPYdf)

#           SPY.Close SPY.Volume
#2007-01-03    141.37   94807600
#2007-01-04    141.67   69620600
#2007-01-05    140.54   76645300
#2007-01-08    141.19   71655000
#2007-01-09    141.07   75680100
#2007-01-10    141.54   72428000

#Date  close   volume
#1 2007-01-03 141.37 94807600
#2 2007-01-04 141.67 69620600
#3 2007-01-05 140.54 76645300
#4 2007-01-08 141.19 71655000
#5 2007-01-09 141.07 75680100
#6 2007-01-10 141.54 72428000
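For the original question (split price/volume panels), a minimal ggplot2 sketch built on SPYdf above could look like this (assuming ggplot2 is installed; free y scales keep the volume series from flattening the price panel):

```r
library(ggplot2)

# Stack close and volume into long form so each series gets its own facet.
SPYlong <- rbind(
  data.frame(Date = SPYdf$Date, series = "close",  value = SPYdf$close),
  data.frame(Date = SPYdf$Date, series = "volume", value = SPYdf$volume))

ggplot(SPYlong, aes(Date, value)) +
  geom_line() +
  facet_grid(series ~ ., scales = "free_y")
```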

Thanks,
Eric



On Thu, Jan 18, 2018 at 8:00 PM, Charlie Redmon  wrote:

> Could you provide some information on your data structure (e.g., are the
> two time series in separate columns in the data)? The solution is fairly
> straightforward once you have the data in the right structure. And I do not
> think tidyquant is necessary for what you want.
>
> Best,
> Charlie
>
> --
> Charles Redmon
> GRA, Center for Research Methods and Data Analysis
> PhD Student, Department of Linguistics
> University of Kansas
> Lawrence, KS, USA
>
>




Re: [R] Time-dependent coefficients in a Cox model with categorical variants

2018-01-18 Thread Therneau, Terry M., Ph.D.

First, as others have said, please obey the mailing list rules and turn off
html; not everyone uses an html email client.

Here is your code, formatted and with line numbers added.  I also fixed one 
error: "y" should be "status".

1. fit0 <- coxph(Surv(futime, status) ~ x1 + x2 + x3, data = data0)
2. p <- log(predict(fit0, newdata = data1, type = "expected"))
3. lp <- predict(fit0, newdata = data1, type = "lp")
4. logbase <- p - lp
5. fit1 <- glm(status ~ offset(p), family = poisson, data = data1)
6. fit2 <- glm(status~ lp + offset(logbase), family = poisson, data = data1)
7. group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
8. fit3 <- glm(status ~ -1 + group + offset(p), family = poisson, data = data1)

The key idea of the paper you referenced is that the counterpart to the 
Hosmer-Lemeshow test (wrong if used directly in a Cox model) is to look at the 
predicted values from a Cox model as input to a Poisson regression.  That means 
adding the expected from the Cox model as a fixed term in the Poisson.  And 
like any other poisson that means offset(log(expected)) as a term.
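A minimal self-contained sketch of that Poisson-offset check, using the lung data shipped with survival (variable names and the status recoding here are illustrative, not from the thread):

```r
library(survival)

# Cox fit, then its expected event counts used as a Poisson offset.
fit0 <- coxph(Surv(time, status) ~ age + sex, data = lung)
p    <- log(predict(fit0, type = "expected"))   # log cumulative hazard per subject
died <- as.integer(lung$status == 2)            # 0/1 event indicator

# An intercept near 0 indicates good overall calibration (trivially so here,
# since the "new" data are the training data).
fit1 <- glm(died ~ offset(p), family = poisson)
coef(fit1)
```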

The presence of time dependent covariates does nothing to change this, per se, 
since expected for time fixed is the same as for time varying.  In practice it 
does matter, at least philosophically.  Lines 1, 2, 5 do this just fine.

If data1 is not the same as data0, a new study say, then the test for 
intercept=0 from fit1 is a test of overall calibration.  Models like line 8 try 
to partition out where any differences actually lie.

The time-dependent covariates part lies in the fact that a single subject may 
be represented by multiple lines in data0 and/or data1.  Do you want to 
collapse that person into a single row before the glm fits?  If subject "Jones" 
is represented by 15 lines in the data and "Smith" by 2, it does seem a bit 
unfair to give Jones 15 observations in the glm fit.  But full discussion of 
this is as much philosophy as statistics, and is perhaps best done over a beer.

Terry T.


From: Max Shell [archerr...@gmail.com]
Sent: Wednesday, January 17, 2018 10:25 AM
To: Therneau, Terry M., Ph.D.
Subject: Re: Time-dependent coefficients in a Cox model with categorical 
variants

Assessing calibration of Cox model with time-dependent 
coefficients

I am trying to find methods for testing and visualizing calibration of Cox
models with time-dependent coefficients. I have read your nice 
article. In this 
paper, we can fit three models:

fit0 <- coxph(Surv(futime, status) ~ x1 + x2 + x3, data = data0)
p <- log(predict(fit0, newdata = data1, type = "expected"))
lp <- predict(fit0, newdata = data1, type = "lp")
logbase <- p - lp
fit1 <- glm(y ~ offset(p), family = poisson, data = data1)
fit2 <- glm(y ~ lp + offset(logbase), family = poisson, data = data1)
group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
fit3 <- glm(y ~ -1 + group + offset(p), family = poisson, data = data1)

Here, I simply use data1 <- data0[1:500,]

First, I get following error when running line 5.

Error in eval(predvars, data, env) : object 'y' not found

So I modified the code, replacing y with status, like this:

fit1 <- glm(status ~ offset(p), family = poisson, data = data1)
fit2 <- glm(status ~ lp + offset(logbase), family = poisson, data = data1)
group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
fit3 <- glm(status ~ -1 + group + offset(p), family = poisson, data = data1)

Is this replacing correct?

Second, I try to introduce the time transform using coxph with the tt() argument.

My code is:

fit0 <- coxph(Surv(time, status) ~ x1 + x2 + x3 + tt(x3), data = data0,
        function(x, t, ...) x * t)
p <- log(predict(fit0, newdata = data1, type = "expected"))
lp <- predict(fit0, newdata = data1, type = "lp")
logbase <- p - lp
fit1 <- glm(status ~ offset(p), family = poisson, data = data1)
fit2 <- glm(status ~ lp + offset(logbase), family = poisson, data = data1)
group <- cut(lp, c(-Inf, quantile(lp, (1:9) / 10), Inf))
fit3 <- glm(status ~ -1 + group + offset(p), family = poisson, data = data1)

My questions are:

  *   Is the code above correct?
  *   How should I interpret fit1, fit2, and fit3? What is the connection between
the three models and the calibration of the Cox model?
  *   How can I generate the calibration plot using fit3? The article does have a
section discussing this, but no code is provided.

Thank you!

On Mon, Jan 15, 2018 at 9:23 PM, Therneau, Terry M., Ph.D. 
> wrote:
The model formula " ~ Histology" knows how to change your 3 level categorical 
variable into two 0/1 dummy variables for a regression matrix.  The tt() call 
is a simple function, however, and ordinary 

Re: [R] Split charts with ggplot2, tidyquant

2018-01-18 Thread Charlie Redmon
Could you provide some information on your data structure (e.g., are the 
two time series in separate columns in the data)? The solution is fairly 
straightforward once you have the data in the right structure. And I do 
not think tidyquant is necessary for what you want.


Best,
Charlie

--
Charles Redmon
GRA, Center for Research Methods and Data Analysis
PhD Student, Department of Linguistics
University of Kansas
Lawrence, KS, USA



Re: [R] MCMC Estimation for Four Parametric Logistic (4PL) Item Response Model

2018-01-18 Thread ProfJCNash
If you have the expression of the model, package nlsr should be able to
form the Jacobian analytically for nonlinear least squares. Likelihood
approaches allow for more sophisticated loss functions, but the
optimization is generally much less reliable because one is working
with essentially squared quantities and possibly multiple minima where
some are not the ones you want.
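As a hedged illustration of the least-squares route (not MCMC or MML), base R's nls() can recover 4PL parameters from simulated item-response proportions; the parameter names aa-dd are invented for this sketch:

```r
# Simulate proportions from a 4PL curve and refit it with nls().
set.seed(1)
theta  <- seq(-3, 3, length.out = 200)                           # ability grid
p_true <- 0.15 + (0.95 - 0.15) / (1 + exp(-1.2 * (theta - 0.3))) # true 4PL curve
y <- rbinom(length(theta), 20, p_true) / 20                      # observed proportions

fit <- nls(y ~ cc + (dd - cc) / (1 + exp(-aa * (theta - bb))),
           start = list(aa = 1, bb = 0, cc = 0.1, dd = 0.9))
coef(fit)   # estimates of slope, location, lower and upper asymptotes
```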

JN

On 2018-01-18 12:32 PM, Doran, Harold wrote:
> I know of no existing functions for estimating the parameters of this model 
> using MCMC or MML. Many years ago, I wrote code to estimate this model using 
> marginal maximum likelihood. I wrote this based on using nlminb and 
> Gauss-Hermite quadrature points from statmod. 
> 
> I could not find that code to share with you, but I do have code for 
> estimating the 3PL in this way and you could modify the likelihood for the 
> upper asymptote yourself.
> 
> 
> 
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of ALYSSA FATMAH 
> MASTURA
> Sent: Thursday, January 18, 2018 10:15 AM
> To: r-help@r-project.org
> Subject: [R] MCMC Estimation for Four Parametric Logistic (4PL) Item Response 
> Model
> 
> Good day Sir/Ma'am! This is Alyssa Fatmah S. Mastura taking up Master of 
> Science in Statistics at Mindanao State University-Iligan Institute 
> Technology (MSU-IIT), Philippines. I am currently working on my master's 
> thesis titled "Comparing the Three Estimation Methods for the Four Parametric 
> Logistic (4PL) Item Response Model". While I am looking for a package about 
> Markov chain Monte Carlo (MCMC) method for 4PL model in this R forum, I found 
> an sirt package of an MCMC method but only for the three parametric normal 
> ogive (3PNO) item response model. However, my study focuses on the 4PL model.
> In line with this, I would like to know whether a function for the MCMC method
> for the 4PL model exists in R. I am asking for your help to inform me if such
> a function exists. I highly appreciate your response on this matter. Thank you 
> so much. Have a great day ahead!
> 
> 
> 
> Alyssa
> 
> 
> 
> --
> ---
> *DISCLAIMER AND CONFIDENTIALITY NOTICE* The Mindanao Sta...{{dropped:9}}
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



Re: [R] MCMC Estimation for Four Parametric Logistic (4PL) Item Response Model

2018-01-18 Thread Doran, Harold
I know of no existing functions for estimating the parameters of this model 
using MCMC or MML. Many years ago, I wrote code to estimate this model using 
marginal maximum likelihood. I wrote this based on using nlminb and 
Gauss-Hermite quadrature points from statmod. 

I could not find that code to share with you, but I do have code for estimating 
the 3PL in this way and you could modify the likelihood for the upper asymptote 
yourself.



-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of ALYSSA FATMAH 
MASTURA
Sent: Thursday, January 18, 2018 10:15 AM
To: r-help@r-project.org
Subject: [R] MCMC Estimation for Four Parametric Logistic (4PL) Item Response 
Model

Good day Sir/Ma'am! This is Alyssa Fatmah S. Mastura taking up Master of 
Science in Statistics at Mindanao State University-Iligan Institute Technology 
(MSU-IIT), Philippines. I am currently working on my master's thesis titled 
"Comparing the Three Estimation Methods for the Four Parametric Logistic (4PL) 
Item Response Model". While I am looking for a package about Markov chain Monte 
Carlo (MCMC) method for 4PL model in this R forum, I found an sirt package of 
an MCMC method but only for the three parametric normal ogive (3PNO) item 
response model. However, my study focuses on the 4PL model. In line with this, I
would like to know whether a function for the MCMC method for the 4PL model
exists in R. I am asking for your help to inform me if such a function exists. I 
highly appreciate your response on this matter. Thank you so much. Have a great 
day ahead!



Alyssa



--



Re: [R] reading lisp file in R

2018-01-18 Thread peter dalgaard
Yes, and the structure is obviously case-insensitive. More troublesome is 
probably that there can be multiple ACADEMIC-EMPHASIS entries, which can be 
tricky to tidy up. Also, one would need to figure out the meaning of 
lines like

(DEFPROP BOSTON-COLLEGE0 T DUPLICATE)

-pd

> On 18 Jan 2018, at 18:04 , Barry Rowlingson  
> wrote:
> 
> The file also has a bunch of email headers stuck in the middle of it:
> 
> 
> .
> 
> (QUALITY-OF-LIFE SCALE:1-5 4)
>  (ACADEMIC-EMPHASIS HEALTH-SCIENCE)
> )
> ---
> ---
> 
> From lebow...@cs.columbia.edu Mon Feb 22 20:53:02 1988
> Received: from zodiac by meridian (5.52/4.7)
> Received: from Jessica.Stanford.EDU by ads.com (5.58/1.9)
>id AA04539; Mon, 22 Feb 88 20:59:59 PST
> Received: from Portia.Stanford.EDU by jessica.Stanford.EDU with TCP; Mon,
> 22 Feb
> 88 20:58:22 PST
> Received: from columbia.edu (COLUMBIA.EDU.ARPA) by Portia.STANFORD.EDU
> (1.2/Ultrix2.0-B)
>id AA11480; Mon, 22 Feb 88 20:49:53 pst
> Received: from CS.COLUMBIA.EDU by columbia.edu (5.54/1.14)
>id AA10186; Mon, 22 Feb 88 23:48:44 EST
> Message-Id: <8802230448.aa10...@columbia.edu>
> Date: Fri 22 Jan 88 02:50:00-EST
> From: The Mailer Daemon 
> To: lebow...@cs.columbia.edu
> Subject: Message of 18-Jan-88 20:13:54
> Resent-Date: Mon 22 Feb 88 23:44:07-EST
> Resent-From: Michael Lebowitz 
> Resent-To: soud...@portia.stanford.edu
> Resent-Message-Id: <12376918538.25.lebow...@cs.columbia.edu>
> Status: R
> 
> Message undeliverable and dequeued after 3 days:
> souders%merid...@ads.arpa: Cannot connect to host
>
> Date: Mon 18 Jan 88 20:13:54-EST
> From: Michael Lebowitz 
> Subject: bigger file part 3
> To: souders%merid...@ads.arpa
> In-Reply-To: <8801182147.aa08...@ads.arpa>
> Message-ID: <12367705229.11.lebow...@cs.columbia.edu>
> 
> (DEF-INSTANCE GEORGETOWN
>  (STATE MARYLAND)
>  (LOCATION URBAN)
>  (CONTROL PRIVATE)
>  (NO-OF-STUDENTS THOUS:10-15)
>  (MALE:FEMALE RATIO:45:55)
> 
> 
> Which dates it to 1988. Nice.
> 
> Barry
> 
> 
> 
> On Thu, Jan 18, 2018 at 9:20 AM, Peter Crowther > wrote:
> 
>> That's a nice example of why Lisp is both powerful and terrifying - you're
>> looking at a Lisp *program*, not just Lisp *data*, as Lisp makes no
>> distinction between the two.  You just read 'em in.
>> 
>> The two definitions at the bottom are function definitions.  The top one
>> defines the def-instance function.  Reading that indicates that it accepts
>> an atom as a name and a list of key-value or key-range-value lists as
>> properties, where the keys may be repeated to give you multi-valued
>> attributes in your result.  The bottom one defines a function for removing
>> duplicate entries of the same location.
>> 
>> The rest of the file (apart from the included email headers) is a whole
>> load of calls to the def-instance function.  In Lisp, you'd define the
>> functions, then just run the rest of the file.
>> 
>> To my knowledge, there is no generic way to read Lisp "data" into anything
>> else, because of this quirk that data can look like anything.  If anyone
>> can correct me on that, great, but I'd be somewhat surprised.  Therefore,
>> as David intimated, the tools you need are generic tools for handling text,
>> and you'll have to deal with the formatting yourself.  If I were doing a
>> one-off transform of this file, I'd probably reach for vi... but I'm an old
>> Unix hacker.  I certainly wouldn't teach that tooling.  awk or perl could
>> certainly handle it; or if you want to give students a wider view of the
>> world you might wish to try ANTLR and get them to write a grammar to parse
>> the file.  The Clojure grammar (
>> https://github.com/antlr/grammars-v4/blob/master/clojure/Clojure.g4) would
>> be an interesting place to start, although Terence Parr's comment of "match
>> a bunch of crap in parentheses" would probably give a flavour of what to
>> implement.  Depends what else the students are learning.
>> 
>> Hope this helps rather than hinders.
>> 
>> - Peter
>> 
>> On 18 January 2018 at 05:25, Ranjan Maitra  wrote:
>> 
>>> Thanks! I am trying to use it in R. (Actually, I try to give my students
>>> experiences with different kinds of files and I was wondering if there
>> were
>>> tools available for such kinds of files. I don't know Lisp so I do not
>>> actually know what the lines towards the bottom of the file mean.)
>>> 
>>> Many thanks for your response!
>>> 
>>> Best wishes,
>>> Ranjan
>>> 
>>> On Wed, 17 Jan 2018 20:59:48 -0800 David Winsemius <
>> dwinsem...@comcast.net>
>>> wrote:
>>> 
 
> On Jan 17, 2018, at 8:22 PM, Ranjan Maitra  wrote:
> 
> Dear friends,
> 
> Is there a way to read data files written in lisp into R?
> 
> Here is the file: https://archive.ics.uci.edu/
>>> 

[R] MCMC Estimation for Four Parametric Logistic (4PL) Item Response Model

2018-01-18 Thread ALYSSA FATMAH MASTURA
Good day Sir/Ma'am! This is Alyssa Fatmah S. Mastura, taking up a Master of
Science in Statistics at Mindanao State University-Iligan Institute of
Technology (MSU-IIT), Philippines. I am currently working on my master's
thesis titled "Comparing the Three Estimation Methods for the Four
Parametric Logistic (4PL) Item Response Model". While looking for a package
implementing a Markov chain Monte Carlo (MCMC) method for the 4PL model in this
R forum, I found the sirt package, which provides an MCMC method but only for
the three-parameter normal ogive (3PNO) item response model. However, my study
focuses on the 4PL model. In line with this, I would like to know whether a
function implementing an MCMC method for the 4PL model exists in R. I would
highly appreciate your response on this matter. Thank you so much. Have a
great day ahead!



Alyssa
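For what it's worth, a generic random-walk Metropolis sampler for a single 4PL item is short enough to write directly in base R. The sketch below is purely illustrative and carries strong assumptions of my own (abilities treated as known, flat priors on transformed parameters, invented true values); it is not a substitute for a proper estimation routine:

```r
## Minimal random-walk Metropolis sketch for one 4PL item (illustrative only).
## 4PL: P(correct) = c + (d - c) / (1 + exp(-a * (theta - b)))
p4pl <- function(theta, a, b, c, d) c + (d - c) / (1 + exp(-a * (theta - b)))

loglik <- function(par, y, theta) {
  a <- exp(par[1]); b <- par[2]             # a kept positive via log scale
  c <- plogis(par[3]); d <- plogis(par[4])  # c, d kept inside (0, 1)
  sum(dbinom(y, 1, p4pl(theta, a, b, c, d), log = TRUE))
}

set.seed(1)
theta <- rnorm(500)                          # abilities, assumed known here
y     <- rbinom(500, 1, p4pl(theta, a = 1.2, b = 0, c = 0.15, d = 0.95))

par   <- c(0, 0, qlogis(0.2), qlogis(0.9))   # log(a), b, logit(c), logit(d)
ll    <- loglik(par, y, theta)
chain <- matrix(NA_real_, 2000, 4)
for (i in 1:2000) {
  prop <- par + rnorm(4, 0, 0.1)             # random-walk proposal
  llp  <- loglik(prop, y, theta)
  if (log(runif(1)) < llp - ll) {            # Metropolis accept/reject
    par <- prop
    ll  <- llp
  }
  chain[i, ] <- par
}
colMeans(chain[1001:2000, ])                 # crude posterior means (transformed scale)
```

In a real analysis you would also sample the abilities, tune the proposal scale, and place informative priors on c and d, which are weakly identified in the 4PL model.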




__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reading lisp file in R

2018-01-18 Thread Barry Rowlingson
The file also has a bunch of email headers stuck in the middle of it:


.

 (QUALITY-OF-LIFE SCALE:1-5 4)
  (ACADEMIC-EMPHASIS HEALTH-SCIENCE)
)
---
---

From lebow...@cs.columbia.edu Mon Feb 22 20:53:02 1988
Received: from zodiac by meridian (5.52/4.7)
Received: from Jessica.Stanford.EDU by ads.com (5.58/1.9)
id AA04539; Mon, 22 Feb 88 20:59:59 PST
Received: from Portia.Stanford.EDU by jessica.Stanford.EDU with TCP; Mon,
22 Feb
 88 20:58:22 PST
Received: from columbia.edu (COLUMBIA.EDU.ARPA) by Portia.STANFORD.EDU
(1.2/Ultrix2.0-B)
id AA11480; Mon, 22 Feb 88 20:49:53 pst
Received: from CS.COLUMBIA.EDU by columbia.edu (5.54/1.14)
id AA10186; Mon, 22 Feb 88 23:48:44 EST
Message-Id: <8802230448.aa10...@columbia.edu>
Date: Fri 22 Jan 88 02:50:00-EST
From: The Mailer Daemon 
To: lebow...@cs.columbia.edu
Subject: Message of 18-Jan-88 20:13:54
Resent-Date: Mon 22 Feb 88 23:44:07-EST
Resent-From: Michael Lebowitz 
Resent-To: soud...@portia.stanford.edu
Resent-Message-Id: <12376918538.25.lebow...@cs.columbia.edu>
Status: R

Message undeliverable and dequeued after 3 days:
souders%merid...@ads.arpa: Cannot connect to host

Date: Mon 18 Jan 88 20:13:54-EST
From: Michael Lebowitz 
Subject: bigger file part 3
To: souders%merid...@ads.arpa
In-Reply-To: <8801182147.aa08...@ads.arpa>
Message-ID: <12367705229.11.lebow...@cs.columbia.edu>

(DEF-INSTANCE GEORGETOWN
  (STATE MARYLAND)
  (LOCATION URBAN)
  (CONTROL PRIVATE)
  (NO-OF-STUDENTS THOUS:10-15)
  (MALE:FEMALE RATIO:45:55)


Which dates it to 1988. Nice.

Barry



On Thu, Jan 18, 2018 at 9:20 AM, Peter Crowther  wrote:

> That's a nice example of why Lisp is both powerful and terrifying - you're
> looking at a Lisp *program*, not just Lisp *data*, as Lisp makes no
> distinction between the two.  You just read 'em in.
>
> The two definitions at the bottom are function definitions.  The top one
> defines the def-instance function.  Reading that indicates that it accepts
> an atom as a name and a list of key-value or key-range-value lists as
> properties, where the keys may be repeated to give you multi-valued
> attributes in your result.  The bottom one defines a function for removing
> duplicate entries of the same location.
>
> The rest of the file (apart from the included email headers) is a whole
> load of calls to the def-instance function.  In Lisp, you'd define the
> functions, then just run the rest of the file.
>
> To my knowledge, there is no generic way to read Lisp "data" into anything
> else, because of this quirk that data can look like anything.  If anyone
> can correct me on that, great, but I'd be somewhat surprised.  Therefore,
> as David intimated, the tools you need are generic tools for handling text,
> and you'll have to deal with the formatting yourself.  If I were doing a
> one-off transform of this file, I'd probably reach for vi... but I'm an old
> Unix hacker.  I certainly wouldn't teach that tooling.  awk or perl could
> certainly handle it; or if you want to give students a wider view of the
> world you might wish to try ANTLR and get them to write a grammar to parse
> the file.  The Clojure grammar (
> https://github.com/antlr/grammars-v4/blob/master/clojure/Clojure.g4) would
> be an interesting place to start, although Terence Parr's comment of "match
> a bunch of crap in parentheses" would probably give a flavour of what to
> implement.  Depends what else the students are learning.
>
> Hope this helps rather than hinders.
>
> - Peter
>
> On 18 January 2018 at 05:25, Ranjan Maitra  wrote:
>
> > Thanks! I am trying to use it in R. (Actually, I try to give my students
> > experiences with different kinds of files and I was wondering if there
> were
> > tools available for such kinds of files. I don't know Lisp so I do not
> > actually know what the lines towards the bottom of the file mean.)
> >
> > Many thanks for your response!
> >
> > Best wishes,
> > Ranjan
> >
> > On Wed, 17 Jan 2018 20:59:48 -0800 David Winsemius <
> dwinsem...@comcast.net>
> > wrote:
> >
> > >
> > > > On Jan 17, 2018, at 8:22 PM, Ranjan Maitra  wrote:
> > > >
> > > > Dear friends,
> > > >
> > > > Is there a way to read data files written in lisp into R?
> > > >
> > > > Here is the file: https://archive.ics.uci.edu/
> > ml/machine-learning-databases/university/university.data
> > > >
> > > > I would like to read it into R. Any suggestions?
> > >
> > > It's just a text file. What difficulties are you having?
> > > >
> > > >
> > > > Thanks very much in advance for pointers on this and best wishes,
> > > > Ranjan
> > > >
> > > > --
> > > > Important Notice: This mailbox is ignored: e-mails are set to be
> > deleted on receipt. Please respond to the mailing list if appropriate.
> For
> > those needing to send personal or professional e-mail, please use
> > appropriate addresses.
> > 

Re: [R] effects & lme4: error since original data frame not found WAS effects: error when original data frame is missing

2018-01-18 Thread Gerrit Eichner

Thanks, John,

for your hint! (Unfortunately, I was not aware of this vignette,
but I am glad that I seem to have been on the right track.)

Indeed very helpful, in particular of course, the warning regarding
the danger of overwriting already existing objects. That danger might
be reduced by pre-checking the intended name and if necessary
changing it (somehow ...) automatically. (Have to think about that ...)

 Best regards  --  Gerrit

-
Dr. Gerrit Eichner   Mathematical Institute, Room 212
gerrit.eich...@math.uni-giessen.de   Justus-Liebig-University Giessen
Tel: +49-(0)641-99-32104  Arndtstr. 2, 35392 Giessen, Germany
Fax: +49-(0)641-99-32109http://www.uni-giessen.de/eichner
-
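One base-R way to sketch the pre-checking idea mentioned above is to generate a name that is not yet used in the global environment before calling assign(). This is a sketch of my own, not a tested recipe; free_name is a hypothetical helper:

```r
## Sketch: pick a global name that is not already in use before assign()ing,
## to avoid clobbering an existing object (illustrative only).
free_name <- function(stem = "X") {
  nm <- stem
  while (exists(nm, where = globalenv(), inherits = FALSE))
    nm <- paste0(nm, "_")               # append "_" until the name is free
  nm
}

X  <- 1                                 # pretend "X" is already taken
nm <- free_name("X")                    # a free variant of "X"
assign(nm, mtcars, pos = 1)             # safe to assign under the new name
```

Combined with the assign() workaround and an rm() on exit, this would reduce the overwriting risk the vignette warns about, though the temporary object is of course still visible in the global workspace while the model functions run.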

Am 17.01.2018 um 15:55 schrieb Fox, John:

Dear Gerrit,

This issue is discussed in a vignette in the car package (both for functions in the car and effects
packages): vignette("embedding", package="car"). The solution suggested there
is essentially the one that you used.

I hope this helps,
  John

-
John Fox, Professor Emeritus
McMaster University
Hamilton, Ontario, Canada
Web: socialsciences.mcmaster.ca/jfox/



-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Gerrit
Eichner
Sent: Wednesday, January 17, 2018 9:50 AM
To: r-help@r-project.org
Subject: Re: [R] effects & lme4: error since original data frame
not found WAS effects: error when original data frame is missing

Third "hi" in this regard and for the archives:

I found a (maybe "dirty") workaround which at least does what I need by
creating a copy of the required data frame in the .GlobalEnv by means of
assign:

foo <- function() {
assign("X", sleepstudy, pos = 1)
fm <- lmer(Reaction ~ Days + (Days | Subject), data = X)
Effect("Days", fm)
}


   Hth  --  Gerrit

-
Dr. Gerrit Eichner   Mathematical Institute, Room 212
gerrit.eich...@math.uni-giessen.de   Justus-Liebig-University Giessen
Tel: +49-(0)641-99-32104  Arndtstr. 2, 35392 Giessen, Germany
Fax: +49-(0)641-99-32109http://www.uni-giessen.de/eichner
-

Am 17.01.2018 um 15:02 schrieb Gerrit Eichner:

Hi, again,

I have to modify my query since my first (too simple) example doesn't
reflect my actual problem. Second try:

When asking Effect() inside a function to compute an effect of an
lmer-fit which uses a data frame local to the body of the function, as
in the following example (simplifying my actual application), I get
the "Error in is.data.frame(data) :
object 'X' not found":

  > foo <- function() {
+  X <- sleepstudy
+  fm <- lmer(Reaction ~ Days + (Days | Subject), data = X)
+  Effect("Days", fm)
+ }

  > foo()

Error in is.data.frame(data) : object 'X' not found


With lm-objects there is no problem:

  > foo2 <- function() {
+   X <- sleepstudy
+   fm <- lm(Reaction ~ Days, data = X)
+   Effect("Days", fm)
+ }

  > foo2()



Any idea how to work around this problem?
Once again, thx in advance!

   Regards  --  Gerrit

PS: > sessionInfo()
R version 3.4.2 (2017-09-28)
Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows >= 8
x64 (build 9200)

Matrix products: default

locale:
[1]

LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252 [3]

LC_MONETARY=German_Germany.1252 LC_NUMERIC=C [5]
LC_TIME=German_Germany.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] effects_4.0-0   carData_3.0-0   lme4_1.1-14 Matrix_1.2-11
car_2.1-5 [6] lattice_0.20-35

loaded via a namespace (and not attached):
   [1] Rcpp_0.12.13   MASS_7.3-47    grid_3.4.2
MatrixModels_0.4-1
   [5] nlme_3.1-131   survey_3.32-1  SparseM_1.77 minqa_1.2.4
   [9] nloptr_1.0.4   splines_3.4.2  tools_3.4.2
survival_2.41-3 [13] pbkrtest_0.4-7 yaml_2.1.14
parallel_3.4.2 compiler_3.4.2 [17] colorspace_1.3-2   mgcv_1.8-22
nnet_7.3-12 quantreg_5.33

-
Dr. Gerrit Eichner   Mathematical Institute, Room 212
gerrit.eich...@math.uni-giessen.de   Justus-Liebig-University Giessen
Tel: +49-(0)641-99-32104  Arndtstr. 2, 35392 Giessen, Germany
Fax: +49-(0)641-99-32109    http://www.uni-giessen.de/eichner
-

Am 17.01.2018 um 10:55 schrieb Gerrit Eichner:

Hello, everyody,

when asking, e.g., Effect() to compute the effects of a fitted, e.g.,
linear model after having deleted the data frame from the workspace
for which the model was obtained an error is reported:

  > myair <- airquality
  > fm <- lm(Ozone ~ Temp, data = myair)
  > rm(myair)
  > 

Re: [R] request for code

2018-01-18 Thread Marc Schwartz
> On Jan 18, 2018, at 7:49 AM, Anjali Karol Nair  wrote:
> 
> Hi,
> 
> I want to convert my MATLAB programs to R studio programs.
> Kindly guide on the same.

Hi,

Using Google with a search phrase such as "MATLAB to R" will yield a number of 
possible resources for you such as:

  http://www.math.umaine.edu/~hiebeler/comp/matlabR.pdf

which provides a reference for associating MATLAB code to R equivalents.

You can then take it from there.
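For instance, a few one-line correspondences of the kind that such a reference card collects (the R form on the left, the MATLAB idiom in the comment; these particular picks are mine, not from the card):

```r
## R equivalents of some common MATLAB idioms (illustrative picks).
x <- seq(0, 1, length.out = 5)   # MATLAB: linspace(0, 1, 5)
A <- matrix(0, 3, 3)             # MATLAB: zeros(3, 3)
A[2, ] <- 1:3                    # MATLAB: A(2, :) = [1 2 3]
rowSums(A)                       # MATLAB: sum(A, 2)
which(x > 0.4)                   # MATLAB: find(x > 0.4)
```

Note the 1-based indexing in both languages, but R's `[ , ]` subsetting and vectorized helpers replace much of MATLAB's colon notation.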

Also, a clarification on your misunderstanding above, which seems to be 
prevalent from what I can tell on various online fora.

RStudio is a third-party GUI that sits on top of R; it is not R. Thus, "R 
Studio programs" is not accurate. They are R programs that can be created, 
edited and run via the RStudio GUI.

Regards,

Marc Schwartz

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] request for code

2018-01-18 Thread jim holtman
A simple Google search turns up several possible choices. There is a
package 'matconv' that might serve your purposes.


Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.

On Thu, Jan 18, 2018 at 7:49 AM, Anjali Karol Nair 
wrote:

> Hi,
>
> I want to convert my MATLAB programs to R studio programs.
> Kindly guide on the same.
>
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>




[R] request for code

2018-01-18 Thread Anjali Karol Nair
Hi,

I want to convert my MATLAB programs to R studio programs.
Kindly guide on the same.




[R] Web scraping different levels of a website

2018-01-18 Thread David Jankoski
Hey Ilio,

On the main website (the first link that you provided) if you
right-click on the title of any entry and select Inspect Element from
the menu, you will notice in the Developer Tools view that opens up
that the corresponding html looks like this

(example for the same link that you provided)

<div class="survey-row"
     data-url="http://catalog.ihsn.org/index.php/catalog/7118" title="View study">
  ...
  <a href="http://catalog.ihsn.org/index.php/catalog/7118"
     title="Demographic and Health Survey 2015">
    Demographic and Health Survey 2015
  </a>
  ...
</div>

Notice how the number you are after is contained within the
"survey-row" div element, in the data-url attribute, or alternatively
within the <a> element, in the href attribute. It's up to you which
one you want to grab, but the idea would be the same, i.e.

1. read in the html
2. select all list-elements by css / xpath
3. grab the fwd link

Here is an example using the first option.

library(httr)
library(rvest)

url <- "http://catalog.ihsn.org/index.php/catalog#_r=1890=1=100==_by=nation_order==2017==s="

x <-
  url %>%
  GET() %>%
  content()

x %>%
  html_nodes(".survey-row") %>%
  html_attr("data-url")

hth.
david
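To then reach the study-description pages the question asks about, one possibility would be to append "/study-description" to each data-url and read those pages in turn. This is an untested guess of mine that the URL pattern from the question holds throughout; fetch_descriptions is a hypothetical helper:

```r
## Sketch: follow each data-url to its study-description page
## (assumes the "<data-url>/study-description" pattern from the question).
fetch_descriptions <- function(base_urls) {
  lapply(paste0(base_urls, "/study-description"), function(u) {
    Sys.sleep(5)                                   # be polite to the server
    rvest::html_text(
      rvest::html_node(xml2::read_html(u), "body"),
      trim = TRUE)
  })
}

## usage (not run):
## urls  <- html_attr(html_nodes(x, ".survey-row"), "data-url")
## descs <- fetch_descriptions(urls)
```

Scraping a targeted node (rather than the whole body) would need another Inspect Element pass on one of the study-description pages.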



Re: [R] Letters group Games-Howell post hoc in R

2018-01-18 Thread Meyners, Michael
Apologies if I missed any earlier replies - did you check
multcompLetters in package {multcompView}?
It allows you to get connecting-letters reports (if that's what you are after; 
I didn't check what exactly agricolae is providing here). You may have to add some 
manual steps to combine this with any data (means or whatever) you want to 
report.
multcompLetters allows you to use p values or a logical (significant or not).
HTH, Michael
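A minimal illustration of the p-value route (the group names and p values below are invented for the example, not taken from sweetpotato):

```r
## multcompLetters from a named vector of pairwise p values;
## names follow the "group1-group2" convention. Values are made up.
library(multcompView)

pvals <- c("cc-fc" = 0.01, "cc-ff" = 0.30, "fc-ff" = 0.04)
multcompLetters(pvals, threshold = 0.05)$Letters
```

The resulting letters can then be matched by group name against the means from the Games-Howell output to build the usual letters display.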

> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David Bars
> Cortina
> Sent: Dienstag, 16. Januar 2018 13:51
> To: R-help@r-project.org
> Subject: [R] Letters group Games-Howell post hoc in R
> 
> Hello everybody,
> 
> I use the sweetpotato database included in R package:
> 
> data(sweetpotato) This dataset contains two variables: yield(continous
> variable) and virus(factor variable).
> 
> Due to Levene test is significant I cannot assume homogeneity of variances
> and I apply Welch test in R instead of one-way ANOVA followed by Tukey
> posthoc.
> 
> Nevertheless, the problems come from when I apply posthoc test. In Tukey
> posthoc test I use library(agricolae) and displays me the superscript letters
> between virus groups. Therefore there are no problems.
> 
> Nevertheless, to perform Games-Howell posthoc, I use
> library(userfriendlyscience) and I obtain Games-Howell output but it's
> impossible for me to obtain a letter superscript comparison between virus
> groups as it is obtained through library(agricolae).
> 
> The code used it was the following:
> 
> library(userfriendlyscience)
> 
> data(sweetpotato)
> 
> oneway<-oneway(sweetpotato$virus, y=sweetpotato$yield, posthoc =
> 'games-howell')
> 
> oneway
> 
> I try with cld() importing previously library(multcompView) but doesn't work.
> 
> Can somebody could helps me?
> 
> Thanks in advance,
> 
> David Bars.
> 
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.



[R] Web scraping different levels of a website

2018-01-18 Thread Ilio Fornasero
I am web scraping a page at

http://catalog.ihsn.org/index.php/catalog#_r=1890=1=100==_by=nation_order==2017==s=

From this URL, I have built up a data frame through the following code:

library(purrr)
library(rvest)

dflist <- map(.x = 1:417, .f = function(x) {
  Sys.sleep(5)
  url <- "http://catalog.ihsn.org/index.php/catalog#_r=1890=1=100==_by=nation_order==2017==s="
  read_html(url) %>%
    html_nodes(".title a") %>%
    html_text() %>%
    as.data.frame()
}) %>% do.call(rbind, .)

I have repeated the same code in order to get all the data I was interested 
in, and it seems to work perfectly, although it is of course a little slow due 
to the Sys.sleep() calls.

My issue arose once I tried to scrape the individual project descriptions 
that should be included in the data frame.

For instance, the first project description is at

http://catalog.ihsn.org/index.php/catalog/7118/study-description

the second project description is at

http://catalog.ihsn.org/index.php/catalog/6606/study-description

and so forth.

My problem is that I can't find a dynamic way to scrape all the projects' 
pages and insert them in the data frame, since the number in the URLs is 
neither progressive nor at the end of the link.

To make things clearer, this is the structure of the website I am scraping:

1. http://catalog.ihsn.org/index.php/catalog#_r=1890=1=100==_by=nation_order==2017==s=
   1.1. http://catalog.ihsn.org/index.php/catalog/7118
        1.1.a http://catalog.ihsn.org/index.php/catalog/7118/related_materials
        1.1.b http://catalog.ihsn.org/index.php/catalog/7118/study-description
        1.1.c http://catalog.ihsn.org/index.php/catalog/7118/data_dictionary

I have successfully scraped level 1 but not level 1.1.b (study-description), 
the one I am interested in, since the dynamic element of the URL (in this 
case: 7118) is not consistent across the website's more than 6000 pages at 
that level.





Re: [R] reading lisp file in R

2018-01-18 Thread Peter Crowther
That's a nice example of why Lisp is both powerful and terrifying - you're
looking at a Lisp *program*, not just Lisp *data*, as Lisp makes no
distinction between the two.  You just read 'em in.

The two definitions at the bottom are function definitions.  The top one
defines the def-instance function.  Reading that indicates that it accepts
an atom as a name and a list of key-value or key-range-value lists as
properties, where the keys may be repeated to give you multi-valued
attributes in your result.  The bottom one defines a function for removing
duplicate entries of the same location.

The rest of the file (apart from the included email headers) is a whole
load of calls to the def-instance function.  In Lisp, you'd define the
functions, then just run the rest of the file.

To my knowledge, there is no generic way to read Lisp "data" into anything
else, because of this quirk that data can look like anything.  If anyone
can correct me on that, great, but I'd be somewhat surprised.  Therefore,
as David intimated, the tools you need are generic tools for handling text,
and you'll have to deal with the formatting yourself.  If I were doing a
one-off transform of this file, I'd probably reach for vi... but I'm an old
Unix hacker.  I certainly wouldn't teach that tooling.  awk or perl could
certainly handle it; or if you want to give students a wider view of the
world you might wish to try ANTLR and get them to write a grammar to parse
the file.  The Clojure grammar (
https://github.com/antlr/grammars-v4/blob/master/clojure/Clojure.g4) would
be an interesting place to start, although Terence Parr's comment of "match
a bunch of crap in parentheses" would probably give a flavour of what to
implement.  Depends what else the students are learning.

Hope this helps rather than hinders.

- Peter
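As a concrete starting point for the text-tool route, here is a rough base-R regex sketch. The toy record is retyped from the file excerpt quoted in this thread; the real file has many more attributes per record, repeated keys, and the email-header noise would need filtering first:

```r
## Rough sketch: pull one DEF-INSTANCE form apart with plain regexes
## (base R only; a toy record, not the full file).
txt <- c("(DEF-INSTANCE GEORGETOWN",
         "  (STATE MARYLAND)",
         "  (LOCATION URBAN)",
         "  (CONTROL PRIVATE))")

name <- sub(".*DEF-INSTANCE +", "", txt[1])          # instance name
body <- paste(txt, collapse = "\n")
kv   <- regmatches(body, gregexpr("\\(([A-Z:-]+ [^()]*)\\)", body))[[1]]
kv   <- sub("^\\(", "", sub("\\)$", "", kv))         # strip parentheses
vals <- setNames(sub("^\\S+ +", "", kv), sub(" .*", "", kv))
vals                                                 # STATE, LOCATION, CONTROL
```

Repeated keys (the multi-valued attributes mentioned above) and range values like THOUS:10-15 would need extra handling, e.g. collapsing duplicates into list columns before binding the records into a data frame.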

On 18 January 2018 at 05:25, Ranjan Maitra  wrote:

> Thanks! I am trying to use it in R. (Actually, I try to give my students
> experiences with different kinds of files and I was wondering if there were
> tools available for such kinds of files. I don't know Lisp so I do not
> actually know what the lines towards the bottom of the file mean.)
>
> Many thanks for your response!
>
> Best wishes,
> Ranjan
>
> On Wed, 17 Jan 2018 20:59:48 -0800 David Winsemius 
> wrote:
>
> >
> > > On Jan 17, 2018, at 8:22 PM, Ranjan Maitra  wrote:
> > >
> > > Dear friends,
> > >
> > > Is there a way to read data files written in lisp into R?
> > >
> > > Here is the file: https://archive.ics.uci.edu/
> ml/machine-learning-databases/university/university.data
> > >
> > > I would like to read it into R. Any suggestions?
> >
> > It's just a text file. What difficulties are you having?
> > >
> > >
> > > Thanks very much in advance for pointers on this and best wishes,
> > > Ranjan
> > >
> > > --
> > > Important Notice: This mailbox is ignored: e-mails are set to be
> deleted on receipt. Please respond to the mailing list if appropriate. For
> those needing to send personal or professional e-mail, please use
> appropriate addresses.
> > >
> > > __
> > > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> > > and provide commented, minimal, self-contained, reproducible code.
> >
> > David Winsemius
> > Alameda, CA, USA
> >
> > 'Any technology distinguishable from magic is insufficiently advanced.'
>  -Gehm's Corollary to Clarke's Third Law
> >
> > __
> > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>
> --
> Important Notice: This mailbox is ignored: e-mails are set to be deleted
> on receipt. Please respond to the mailing list if appropriate. For those
> needing to send personal or professional e-mail, please use appropriate
> addresses.
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/
> posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

