Re: [R] unable to install source packages with Rtools installed

2021-08-24 Thread Jim Lemon
Hi Xiaoyu,
"non-zero exit status" is just a fancy way of saying something went
wrong. An old computing convention is for a function to return zero
(FALSE) for success or some positive integer that tells you what the
error was. In general, when you have a sequence of interdependent
functions or programs, if one goes wrong, all subsequent ones usually
do, too.

Jim
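
[Editor's note: a minimal illustration of the exit-status convention Jim
describes, using system(), which returns the exit status of the command
it runs. "true" and "false" are standard Unix utilities; on Windows the
same idea applies to any command run under the Rtools shell.]

ok  <- system("true")   # exits with status 0: success
bad <- system("false")  # exits with status 1: failure
ok   # 0
bad  # 1
# install.packages() reports "had non-zero exit status" when the
# underlying R CMD INSTALL process fails in the same way.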

On Wed, Aug 25, 2021 at 11:16 AM Xiaoyu Zeng  wrote:
>
> Thanks for replying, Jim and Bert.
>
> I installed 'proxy' and reinstalled jsonlite, but it failed again. I even tried 
> setting "dependencies = TRUE", and it ended with the same error "had 
> non-zero exit status". Session info and the error messages are enclosed.
>
> I need to make sure Rtools works and that I can install packages from source, 
> as I need to install a package from GitHub. But I have no clue about the error 
> "had non-zero exit status". Any ideas?
>
> Best regards,
> Xiaoyu
>
> Error info:
> install.packages("jsonlite", type = "source",dependencies = TRUE)
> also installing the dependencies ‘e1071’, ‘wk’, ‘tinytex’, ‘R.cache’, ‘classInt’, ‘DBI’, ‘s2’, ‘units’, 
> ‘rmarkdown’, ‘R.rsp’, ‘sf’
>
> trying URL 'https://cran.rstudio.com/src/contrib/e1071_1.7-8.tar.gz'
> Content type 'application/x-gzip' length 581347 bytes (567 KB)
> downloaded 567 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/wk_0.5.0.tar.gz'
> Content type 'application/x-gzip' length 138686 bytes (135 KB)
> downloaded 135 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/tinytex_0.33.tar.gz'
> Content type 'application/x-gzip' length 30085 bytes (29 KB)
> downloaded 29 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/R.cache_0.15.0.tar.gz'
> Content type 'application/x-gzip' length 34692 bytes (33 KB)
> downloaded 33 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/classInt_0.4-3.tar.gz'
> Content type 'application/x-gzip' length 403884 bytes (394 KB)
> downloaded 394 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/DBI_1.1.1.tar.gz'
> Content type 'application/x-gzip' length 743802 bytes (726 KB)
> downloaded 726 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/s2_1.0.6.tar.gz'
> Content type 'application/x-gzip' length 1605788 bytes (1.5 MB)
> downloaded 1.5 MB
>
> trying URL 'https://cran.rstudio.com/src/contrib/units_0.7-2.tar.gz'
> Content type 'application/x-gzip' length 855840 bytes (835 KB)
> downloaded 835 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/rmarkdown_2.10.tar.gz'
> Content type 'application/x-gzip' length 3248794 bytes (3.1 MB)
> downloaded 3.1 MB
>
> trying URL 'https://cran.rstudio.com/src/contrib/R.rsp_0.44.0.tar.gz'
> Content type 'application/x-gzip' length 823684 bytes (804 KB)
> downloaded 804 KB
>
> trying URL 'https://cran.rstudio.com/src/contrib/sf_1.0-2.tar.gz'
> Content type 'application/x-gzip' length 3645982 bytes (3.5 MB)
> downloaded 3.5 MB
>
> trying URL 'https://cran.rstudio.com/src/contrib/jsonlite_1.7.2.tar.gz'
> Content type 'application/x-gzip' length 421716 bytes (411 KB)
> downloaded 411 KB
>
> Warning in install.packages :
>   installation of package ‘e1071’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘wk’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘tinytex’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘R.cache’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘DBI’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘units’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘jsonlite’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘classInt’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘s2’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘rmarkdown’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘R.rsp’ had non-zero exit status
> Warning in install.packages :
>   installation of package ‘sf’ had non-zero exit status
>
> The downloaded source packages are in
> ‘C:\Users\xiaoy\AppData\Local\Temp\RtmpoPSf4N\downloaded_packages’
>
> Session info:
> > sessionInfo()
> R version 4.1.1 (2021-08-10)
> Platform: x86_64-w64-mingw32/x64 (64-bit)
> Running under: Windows 10 x64 (build 18363)
>
> Matrix products: default
>
> locale:
> [1] LC_COLLATE=Chinese (Simplified)_China.936  LC_CTYPE=Chinese 
> (Simplified)_China.936
> [3] LC_MONETARY=Chinese (Simplified)_China.936 LC_NUMERIC=C
> [5] LC_TIME=Chinese (Simplified)_China.936
>
> attached base packages:
> [1] stats graphics  grDevices utils datasets  methods   base
>
> loaded via a namespace (and not attached):
> [1] compiler_4.1.1 tools_4.1.1
>
>
> Xiaoyu Zeng
>
> Graduate student, Social and Affective NeuroPharmacology (SANP) Lab
>
> State Key Laboratory of Cognitive Neuroscience and Learning
>
> Beijing Normal University
>

Re: [R] Help needed with getting a decent image of ggplot2 graph

2021-08-24 Thread bharat rawlley via R-help
I tried doing that. 
The real title of my graph is much longer than 'Men and Women' and isn't 
being fitted within that width. 
I think I'll have to settle for a smaller title.
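
[Editor's note: a possible alternative to shrinking the title is to wrap
it onto several lines so it fits the 440-pixel width. A hedged sketch;
the title text and plot below are made up for illustration.]

library(ggplot2)

long_title <- "Proportion of men and women meeting the criterion, by year"
wrapped    <- paste(strwrap(long_title, width = 30), collapse = "\n")

p <- ggplot(mtcars, aes(factor(cyl))) +
  geom_bar() +
  labs(title = wrapped) +
  theme(plot.title = element_text(size = 8))  # a smaller font also helps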

Sent from Yahoo Mail on Android 
 
On Tue, 24 Aug 2021 at 6:54 PM, Jim Lemon wrote:
Ah, an _upper_ limit. Why not let tiff() work out the resolution
(res=NA - the default) and see if that passes muster?

Jim

On Wed, Aug 25, 2021 at 8:42 AM bharat rawlley  wrote:
>
> I am able to change but the place where I have to submit a similar graph has 
> kept a fixed upper limit of 440 pixels for the width and an upper limit of 
> 300 for the dpi.
>
> On Tuesday, 24 August, 2021, 06:36:16 pm GMT-4, Jim Lemon 
>  wrote:
>
>
> Hi bharat,
> I think there is a conflict between your image size and resolution.
> You need a lot larger height and width in pixels to get 300 dpi
> resolution for the whole plot.
>
> tiff("test.tiff", units = "px", width = 2200, height = 1250, res = 300)
>
> would probably do it for you. How come you can't change the width and
> height in pixels?
>
> Jim
>
> On Wed, Aug 25, 2021 at 8:22 AM bharat rawlley via R-help
>  wrote:
> >
> > Hello, I made the following graph in R with the following code.
> > ggplot(aes(x=factor(year), y=percentage, color = Gender, fill=Gender), data 
> > = graph_text)+  geom_bar(position = 'dodge', stat='identity')+  
> > theme_classic()+  scale_y_continuous(limits=c(0, 1.4*ymax))+  labs(x= 
> > 'Year', y = 'Percentage', title = 'Men and Women')
> >
> >
> >
> > However, on using the following code - tiff("test.tiff", units = "px", 
> > width = 440, height = 250, res = 300)ggplot(aes(x=factor(year), 
> > y=percentage, color = Gender, fill=Gender), data = graph_text)+  
> > geom_bar(position = 'dodge', stat='identity')+  theme_classic()+  
> > scale_y_continuous(limits=c(0, 1.4*ymax))+  labs(x= 'Year', y = 
> > 'Percentage', title = 'Men and Women')dev.off()
> >
> >
> > I get the following image -
> >
> >
> >
> > I need to keep the DPI = 300 and Width = 440 fixed. I can only manipulate 
> > height. Any help would be appreciated
> > Thank you
>
> >  __
> > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
  


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help needed with getting a decent image of ggplot2 graph

2021-08-24 Thread Jim Lemon
Ah, an _upper_ limit. Why not let tiff() work out the resolution
(res=NA - the default) and see if that passes muster?

Jim
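
[Editor's note: Jim's suggestion, spelled out. With res = NA the device
uses its default resolution, so a 440 x 250 pixel canvas is not treated
as only 440/300 x 250/300 inches of 300-dpi text. A sketch; the plotting
code is the poster's own.]

tiff("test.tiff", units = "px", width = 440, height = 250, res = NA)
# ... plotting code ...
dev.off()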

On Wed, Aug 25, 2021 at 8:42 AM bharat rawlley  wrote:
>
> I am able to change but the place where I have to submit a similar graph has 
> kept a fixed upper limit of 440 pixels for the width and an upper limit of 
> 300 for the dpi.
>
> On Tuesday, 24 August, 2021, 06:36:16 pm GMT-4, Jim Lemon 
>  wrote:
>
>
> Hi bharat,
> I think there is a conflict between your image size and resolution.
> You need a lot larger height and width in pixels to get 300 dpi
> resolution for the whole plot.
>
> tiff("test.tiff", units = "px", width = 2200, height = 1250, res = 300)
>
> would probably do it for you. How come you can't change the width and
> height in pixels?
>
> Jim
>
> On Wed, Aug 25, 2021 at 8:22 AM bharat rawlley via R-help
>  wrote:
> >
> > Hello, I made the following graph in R with the following code.
> > ggplot(aes(x=factor(year), y=percentage, color = Gender, fill=Gender), data 
> > = graph_text)+  geom_bar(position = 'dodge', stat='identity')+  
> > theme_classic()+  scale_y_continuous(limits=c(0, 1.4*ymax))+  labs(x= 
> > 'Year', y = 'Percentage', title = 'Men and Women')
> >
> >
> >
> > However, on using the following code - tiff("test.tiff", units = "px", 
> > width = 440, height = 250, res = 300)ggplot(aes(x=factor(year), 
> > y=percentage, color = Gender, fill=Gender), data = graph_text)+  
> > geom_bar(position = 'dodge', stat='identity')+  theme_classic()+  
> > scale_y_continuous(limits=c(0, 1.4*ymax))+  labs(x= 'Year', y = 
> > 'Percentage', title = 'Men and Women')dev.off()
> >
> >
> > I get the following image -
> >
> >
> >
> > I need to keep the DPI = 300 and Width = 440 fixed. I can only manipulate 
> > height. Any help would be appreciated
> > Thank you
>
> >  __
> > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>



Re: [R] Query regarding stats/p.adjust package (base) - specifically 'Hochberg' function

2021-08-24 Thread Jim Lemon
This is beginning to sound like a stats taliban fatwa. I don't care if
you're using an abacus, you want to get the correct result. My guess
is that the different instantiations of the Hochberg adjustment are
using different algorithms to calculate the result. The Hochberg
adjustment is known to be sensitive to the distributions of the test
statistics. People who are more expert than I in this area have
different ideas about how to handle this problem. This probably
contributes to the hopefully small differences in the eventual
corrected p-values.

Jim
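
[Editor's note: the different instantiations can be compared directly in
R. p.adjust() implements Hochberg's step-up procedure; other software
may use a different algorithm, so small discrepancies are possible. The
p-values below are made up for illustration.]

p <- c(0.01, 0.02, 0.03, 0.04, 0.25)
p.adjust(p, method = "hochberg")
## 0.05 0.08 0.08 0.08 0.25
p.adjust(p, method = "BH")   # Benjamini-Hochberg FDR, for contrast
## 0.05 0.05 0.05 0.05 0.25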

On Wed, Aug 25, 2021 at 8:02 AM Rolf Turner  wrote:
>
>
> On Tue, 24 Aug 2021 14:44:55 +
> David Swanepoel  wrote:
>
> > Dear R Core Dev Team, I hope all is well your side!
> > My apologies if this is not the correct point of contact to use to
> > address this. If not, kindly advise or forward my request to the
> > relevant team/persons.
> >
> > I have a query regarding the 'Hochberg' method of the stats/p.adjust
> > R package and hope you can assist me please. I have attached the data
> > I used in Excel,
>
> 
>
> In addition to the good advice given to you earlier by Bert Gunter, you
> should consider the following advice:
>
> Don't use Excel!!!
>
> This is a corollary of a more general theorem:   Don't use Micro$oft!!!
>
> cheers,
>
> Rolf Turner
>
> --
> Honorary Research Fellow
> Department of Statistics
> University of Auckland
> Phone: +64-9-373-7599 ext. 88276
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



Re: [R] Help needed with getting a decent image of ggplot2 graph

2021-08-24 Thread bharat rawlley via R-help
 I am able to change but the place where I have to submit a similar graph has 
kept a fixed upper limit of 440 pixels for the width and an upper limit of 300 
for the dpi.
On Tuesday, 24 August, 2021, 06:36:16 pm GMT-4, Jim Lemon wrote:

Hi bharat,
I think there is a conflict between your image size and resolution.
You need a lot larger height and width in pixels to get 300 dpi
resolution for the whole plot.

 tiff("test.tiff", units = "px", width = 2200, height = 1250, res = 300)

would probably do it for you. How come you can't change the width and
height in pixels?

Jim

On Wed, Aug 25, 2021 at 8:22 AM bharat rawlley via R-help
 wrote:
>
> Hello, I made the following graph in R with the following code.
> ggplot(aes(x=factor(year), y=percentage, color = Gender, fill=Gender), data = 
> graph_text)+  geom_bar(position = 'dodge', stat='identity')+  
> theme_classic()+  scale_y_continuous(limits=c(0, 1.4*ymax))+  labs(x= 'Year', 
> y = 'Percentage', title = 'Men and Women')
>
>
>
> However, on using the following code - tiff("test.tiff", units = "px", width 
> = 440, height = 250, res = 300)ggplot(aes(x=factor(year), y=percentage, color 
> = Gender, fill=Gender), data = graph_text)+  geom_bar(position = 'dodge', 
> stat='identity')+  theme_classic()+  scale_y_continuous(limits=c(0, 
> 1.4*ymax))+  labs(x= 'Year', y = 'Percentage', title = 'Men and 
> Women')dev.off()
>
>
> I get the following image -
>
>
>
> I need to keep the DPI = 300 and Width = 440 fixed. I can only manipulate 
> height. Any help would be appreciated
> Thank you
>  __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  


Re: [R] Help needed with getting a decent image of ggplot2 graph

2021-08-24 Thread Jim Lemon
Hi bharat,
I think there is a conflict between your image size and resolution.
You need a lot larger height and width in pixels to get 300 dpi
resolution for the whole plot.

 tiff("test.tiff", units = "px", width = 2200, height = 1250, res = 300)

would probably do it for you. How come you can't change the width and
height in pixels?

Jim
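
[Editor's note: the arithmetic behind the size/resolution conflict. At
res = 300 dpi, a 440 x 250 pixel canvas is only about 1.5 x 0.8 inches,
far too small for default-sized text; Jim's 2200 x 1250 canvas gives a
usable plot at the same 300 dpi.]

c(width_in = 440 / 300,  height_in = 250 / 300)   # ~1.47 x 0.83 in
c(width_in = 2200 / 300, height_in = 1250 / 300)  # ~7.33 x 4.17 in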

On Wed, Aug 25, 2021 at 8:22 AM bharat rawlley via R-help
 wrote:
>
> Hello, I made the following graph in R with the following code.
> ggplot(aes(x=factor(year), y=percentage, color = Gender, fill=Gender), data = 
> graph_text)+  geom_bar(position = 'dodge', stat='identity')+  
> theme_classic()+  scale_y_continuous(limits=c(0, 1.4*ymax))+  labs(x= 'Year', 
> y = 'Percentage', title = 'Men and Women')
>
>
>
> However, on using the following code - tiff("test.tiff", units = "px", width 
> = 440, height = 250, res = 300)ggplot(aes(x=factor(year), y=percentage, color 
> = Gender, fill=Gender), data = graph_text)+  geom_bar(position = 'dodge', 
> stat='identity')+  theme_classic()+  scale_y_continuous(limits=c(0, 
> 1.4*ymax))+  labs(x= 'Year', y = 'Percentage', title = 'Men and 
> Women')dev.off()
>
>
> I get the following image -
>
>
>
> I need to keep the DPI = 300 and Width = 440 fixed. I can only manipulate 
> height. Any help would be appreciated
> Thank you
>  __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



Re: [R] unable to install source packages with Rtools installed

2021-08-24 Thread Bert Gunter
1. Why not install a pre-built binary for Windows? (No need to reply
... just saying.)

2. You should post the results of sessionInfo() to better enable
others to help you.
Also post any error or diagnostic messages you received.

Cheers,
Bert

Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
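
[Editor's note: one way to gather the information Bert asks for and to
check the toolchain. A sketch; the pkgbuild package is an optional
convenience, not part of base R.]

sessionInfo()                  # R version, platform, locale
Sys.which("make")              # should point into the Rtools toolchain
# install.packages("pkgbuild") # if not already installed
pkgbuild::check_build_tools(debug = TRUE)  # verifies the compiler setup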

On Tue, Aug 24, 2021 at 2:23 PM Xiaoyu Zeng  wrote:
>
> Hi all.
>
> I installed the latest R and Rtools, and I followed the guidelines to test
> the installation of Rtools (rtools40v2-x86_64.exe for Windows 10). I
> verified that *make* can be found, but I still could not install an R
> package from source.
>
> Any ideas?
>
> *More detailed info:*
> https://github.com/r-windows/rtools-packages/issues/222
>
> Best regards,
> Xiaoyu
>
> Xiaoyu Zeng
>
> Graduate student, Social and Affective NeuroPharmacology (SANP) Lab
>
> State Key Laboratory of Cognitive Neuroscience and Learning
>
> Beijing Normal University
>
> http://brain.bnu.edu.cn/home/yinama/
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



[R] Help needed with getting a decent image of ggplot2 graph

2021-08-24 Thread bharat rawlley via R-help
Hello, I made the following graph in R with the following code.

ggplot(aes(x = factor(year), y = percentage, color = Gender, fill = Gender),
       data = graph_text) +
  geom_bar(position = 'dodge', stat = 'identity') +
  theme_classic() +
  scale_y_continuous(limits = c(0, 1.4 * ymax)) +
  labs(x = 'Year', y = 'Percentage', title = 'Men and Women')

However, on using the following code -

tiff("test.tiff", units = "px", width = 440, height = 250, res = 300)
ggplot(aes(x = factor(year), y = percentage, color = Gender, fill = Gender),
       data = graph_text) +
  geom_bar(position = 'dodge', stat = 'identity') +
  theme_classic() +
  scale_y_continuous(limits = c(0, 1.4 * ymax)) +
  labs(x = 'Year', y = 'Percentage', title = 'Men and Women')
dev.off()


I get the following image - 



I need to keep the DPI = 300 and Width = 440 fixed. I can only manipulate 
height. Any help would be appreciated
Thank you
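
[Editor's note: an alternative sketch using ggplot2's ggsave(), which
handles the device setup and lets height vary while width and dpi stay
fixed. Assumes the plot object is stored in `p`; the height value here
is arbitrary and adjustable.]

library(ggplot2)
ggsave("test.tiff", plot = p, device = "tiff",
       width = 440 / 300,   # inches: 440 px wide at 300 dpi
       height = 600 / 300,  # adjust freely; only width/dpi are fixed
       dpi = 300)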


Re: [R] Specifying non-linear mixed effects models in R (non-linear DV)

2021-08-24 Thread Bert Gunter
Per the posting guide, statistics issues are generally off topic on this list:
"Questions about statistics: The R mailing lists are primarily
intended for questions and discussion about the R software. However,
questions about statistical methodology are sometimes posted. If the
question is well-asked and of interest to someone on the list, it may
elicit an informative up-to-date answer. See also the Usenet groups
sci.stat.consult (applied statistics and consulting) and sci.stat.math
(mathematical stat and probability)."

stats.stackexchange.com is also a possible venue for statistics questions.

Questions on mixed effects models -- including how to set them up
using nlme or lmer (in the lme4 package) -- are almost always better
posted on r-sig-mixed-models .

That said, the best advice may be to find expert local help, as the
answer to your query may depend on questions of design, interpretation,
and use that are best explored in direct dialogue.

Cheers,
Bert

Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )

On Tue, Aug 24, 2021 at 2:23 PM Patzelt, Edward  wrote:
>
> Hi R-Help,
>
> Data is below. I used a Kruskal Wallis to compare across 4 study groups for
> my DV (beta - this is highly non-normal). Now I want to add a covariate
> (cpz).
>
> 1) What package do I use and how do I specify the model? (I tried T.aov
> from fANCOVA but received a lot of simpleLoess errors)
>
> 2) Can I specify "subject" as a random effect like in lme?
>
> structure(list(subject = c("C5B1001", "C5B1002", "C5B1003", "C5B1004",
> "C5B1005", "C5B1007", "C5B1008", "C5B1009", "C5B1010", "C5B1011",
> "C5B1012", "C5B1013", "C5B1014", "C5B1015", "C5B1016", "C5B1017",
> "C5B1018", "C5B1019", "C5B1020", "C5B1021", "C5B1022", "C5B1023",
> "C5B1024", "C5B1025", "C5B1026", "C5B1027", "C5B1029", "C5B1030",
> "C5B1031", "C5B1032", "C5B1033", "C5B1034", "C5B1035", "C5B1036",
> "C5B1037", "C5B1038", "C5B1039", "C5B1040", "C5B1041", "C5B1042",
> "C5B1043", "C5B1044", "C5B1045", "C5B1046", "C5B1047", "C5B1048",
> "C5B1049", "C5D2002", "C5D2003", "C5D2005", "C5D2006", "C5D2007",
> "C5D2009", "C5D2010", "C5D2011", "C5D2012", "C5D2013", "C5D2014",
> "C5D2017", "C5D2021", "C5D2022", "C5D2023", "C5D2024", "C5D2025",
> "C5D2026", "C5D2027", "C5D2028", "C5D2029", "C5D2030", "C5D2031",
> "C5D2032", "C5D2035", "C5D2036", "C5D2037", "C5D2039", "C5D2040",
> "C5D2042", "C5D2043", "C5D2044", "C5D2045", "C5D2046", "C5D2047",
> "C5D2048", "C5D2049", "C5D2051", "C5D2052", "C5D2053", "C5D2054",
> "C5D2055", "C5M3001", "C5M3003", "C5M3004", "C5M3005", "C5M3006",
> "C5M3007", "C5M3008", "C5M3009", "C5M3010", "C5M3011", "C5M3013",
> "C5M3014", "C5M3015", "C5M3016", "C5M3017", "C5M3019", "C5M3020",
> "C5M3021", "C5M3022", "C5M3023", "C5M3024", "C5M3029", "C5M3030",
> "C5M3031", "C5M3032", "C5M3033", "C5M3034", "C5M3035", "C5M3036",
> "C5M3038", "C5M3039", "C5M3042", "C5M3043", "C5M3044", "C5M3046",
> "C5M3047", "C5M3048", "C5M3049", "C5M3050", "C5M3051", "C5M3054",
> "C5M3055", "C5M3056", "C5M3057", "C5M3058", "C5R4001", "C5R4004",
> "C5R4005", "C5R4008", "C5R4009", "C5R4010", "C5R4011", "C5R4012",
> "C5R4013", "C5R4014", "C5R4015", "C5R4016", "C5R4017", "C5R4019",
> "C5R4020", "C5R4021", "C5R4022", "C5R4024", "C5R4025", "C5R4026",
> "C5R4027", "C5R4028", "C5R4031", "C5R4032", "C5R4034", "C5R4037",
> "C5R4038", "C5R4040", "C5R4041", "C5R4043", "C5R4048", "C5R4050",
> "C5R4053", "C5R4056", "C5W5001", "C5W5002", "C5W5003", "C5W5004",
> "C5W5005", "C5W5006", "C5W5007", "C5W5008", "C5W5012", "C5W5013",
> "C5W5014", "C5W5015", "C5W5016", "C5W5017", "C5W5018", "C5W5019",
> "C5W5020", "C5W5021", "C5W5022", "C5W5023", "C5W5024", "C5W5025",
> "C5W5028", "C5W5029", "C5W5030", "C5W5031", "C5W5033", "C5W5035",
> "C5W5037", "C5W5038", "C5W5039", "C5W5042", "C5W5043", "C5W5044",
> "C5W5045", "C5W5046", "C5W5047", "C5W5048", "C5W5049", "C5W5050",
> "C5W5051", "C5W5053", "C5W5054", "C5W5055", "C5W5057", "C5W5058",
> "C5W5060"), beta = c(5, 5, 5, 4.84951578282477, 5, 1.75435411010482,
> 2.59653537897755, 4.58343041388045, 1.19813289503568, 5, 4.41030503473763,
> 3.48886522319213, 5, 3.69347465973804, 5, 3.61341511433856, 5,
> 5, 5, 5, 2.82540030433712, 5, 2.01269174411245, 5, 5, 5, 5,
> 3.66605514409922,
> 5, 5, 1.20492768779028, 5, 5, 5, 5, 4.71051510737403, 0.973607667104191,
> 2.13320899798223, 3.55527726960037, 5, 3.13840519694586, 5,
> 4.33164972914231,
> 3.2716034981509, 5, 3.59865983897491, 5, 5, 2.98982117172486,
> 3.15884653708899, 5, 1.21006283114433, 1.88594293315325, 2.37248899411035,
> 2.40289344741545, 0.262839947401338, 2.89061041570249, 2.98573373614306,
> 2.82385009686039, 1.78295361666595, 4.27268021897288, 5, 5, 5,
> 2.52131830224533, 5, 2.32463450150955, 5, 5, 2.18297518836912,
> 5, 2.53256388646574, 5, 5, 5, 1.11901989122708, 1.56266936421015,
> 5, 2.1480772866684, 1.03201411339444, 3.22476904165877, 5,
> 2.23963439946338,
> 5, 

Re: [R] Query regarding stats/p.adjust package (base) - specifically 'Hochberg' function

2021-08-24 Thread Rolf Turner


On Tue, 24 Aug 2021 14:44:55 +
David Swanepoel  wrote:

> Dear R Core Dev Team, I hope all is well your side!
> My apologies if this is not the correct point of contact to use to
> address this. If not, kindly advise or forward my request to the
> relevant team/persons.
> 
> I have a query regarding the 'Hochberg' method of the stats/p.adjust
> R package and hope you can assist me please. I have attached the data
> I used in Excel,



In addition to the good advice given to you earlier by Bert Gunter, you
should consider the following advice:

Don't use Excel!!!

This is a corollary of a more general theorem:   Don't use Micro$oft!!!

cheers,

Rolf Turner

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276



[R] unable to install source packages with Rtools installed

2021-08-24 Thread Xiaoyu Zeng
Hi all.

I installed the latest R and Rtools, and I followed the guidelines to test
the installation of Rtools (rtools40v2-x86_64.exe for Windows 10). I
verified that *make* can be found, but I still could not install an R
package from source.

Any ideas?
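
[Editor's note: the standard rtools40 check, for reference, following
the Rtools documentation. Writing ~/.Renviron is only needed if make is
not found; restart R afterwards.]

Sys.which("make")
## expected: "C:\\rtools40\\usr\\bin\\make.exe"

# If make is not found, add Rtools to the PATH via ~/.Renviron:
writeLines('PATH="${RTOOLS40_HOME}\\usr\\bin;${PATH}"', con = "~/.Renviron")

# Then, in a fresh R session, try a source install:
install.packages("jsonlite", type = "source")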

*More detailed info:*
https://github.com/r-windows/rtools-packages/issues/222

Best regards,
Xiaoyu

Xiaoyu Zeng

Graduate student, Social and Affective NeuroPharmacology (SANP) Lab

State Key Laboratory of Cognitive Neuroscience and Learning

Beijing Normal University

http://brain.bnu.edu.cn/home/yinama/



[R] Specifying non-linear mixed effects models in R (non-linear DV)

2021-08-24 Thread Patzelt, Edward
Hi R-Help,

Data is below. I used a Kruskal Wallis to compare across 4 study groups for
my DV (beta - this is highly non-normal). Now I want to add a covariate
(cpz).

1) What package do I use and how do I specify the model? (I tried T.aov
from fANCOVA but received a lot of simpleLoess errors)

2) Can I specify "subject" as a random effect like in lme?

structure(list(subject = c("C5B1001", "C5B1002", "C5B1003", "C5B1004",
"C5B1005", "C5B1007", "C5B1008", "C5B1009", "C5B1010", "C5B1011",
"C5B1012", "C5B1013", "C5B1014", "C5B1015", "C5B1016", "C5B1017",
"C5B1018", "C5B1019", "C5B1020", "C5B1021", "C5B1022", "C5B1023",
"C5B1024", "C5B1025", "C5B1026", "C5B1027", "C5B1029", "C5B1030",
"C5B1031", "C5B1032", "C5B1033", "C5B1034", "C5B1035", "C5B1036",
"C5B1037", "C5B1038", "C5B1039", "C5B1040", "C5B1041", "C5B1042",
"C5B1043", "C5B1044", "C5B1045", "C5B1046", "C5B1047", "C5B1048",
"C5B1049", "C5D2002", "C5D2003", "C5D2005", "C5D2006", "C5D2007",
"C5D2009", "C5D2010", "C5D2011", "C5D2012", "C5D2013", "C5D2014",
"C5D2017", "C5D2021", "C5D2022", "C5D2023", "C5D2024", "C5D2025",
"C5D2026", "C5D2027", "C5D2028", "C5D2029", "C5D2030", "C5D2031",
"C5D2032", "C5D2035", "C5D2036", "C5D2037", "C5D2039", "C5D2040",
"C5D2042", "C5D2043", "C5D2044", "C5D2045", "C5D2046", "C5D2047",
"C5D2048", "C5D2049", "C5D2051", "C5D2052", "C5D2053", "C5D2054",
"C5D2055", "C5M3001", "C5M3003", "C5M3004", "C5M3005", "C5M3006",
"C5M3007", "C5M3008", "C5M3009", "C5M3010", "C5M3011", "C5M3013",
"C5M3014", "C5M3015", "C5M3016", "C5M3017", "C5M3019", "C5M3020",
"C5M3021", "C5M3022", "C5M3023", "C5M3024", "C5M3029", "C5M3030",
"C5M3031", "C5M3032", "C5M3033", "C5M3034", "C5M3035", "C5M3036",
"C5M3038", "C5M3039", "C5M3042", "C5M3043", "C5M3044", "C5M3046",
"C5M3047", "C5M3048", "C5M3049", "C5M3050", "C5M3051", "C5M3054",
"C5M3055", "C5M3056", "C5M3057", "C5M3058", "C5R4001", "C5R4004",
"C5R4005", "C5R4008", "C5R4009", "C5R4010", "C5R4011", "C5R4012",
"C5R4013", "C5R4014", "C5R4015", "C5R4016", "C5R4017", "C5R4019",
"C5R4020", "C5R4021", "C5R4022", "C5R4024", "C5R4025", "C5R4026",
"C5R4027", "C5R4028", "C5R4031", "C5R4032", "C5R4034", "C5R4037",
"C5R4038", "C5R4040", "C5R4041", "C5R4043", "C5R4048", "C5R4050",
"C5R4053", "C5R4056", "C5W5001", "C5W5002", "C5W5003", "C5W5004",
"C5W5005", "C5W5006", "C5W5007", "C5W5008", "C5W5012", "C5W5013",
"C5W5014", "C5W5015", "C5W5016", "C5W5017", "C5W5018", "C5W5019",
"C5W5020", "C5W5021", "C5W5022", "C5W5023", "C5W5024", "C5W5025",
"C5W5028", "C5W5029", "C5W5030", "C5W5031", "C5W5033", "C5W5035",
"C5W5037", "C5W5038", "C5W5039", "C5W5042", "C5W5043", "C5W5044",
"C5W5045", "C5W5046", "C5W5047", "C5W5048", "C5W5049", "C5W5050",
"C5W5051", "C5W5053", "C5W5054", "C5W5055", "C5W5057", "C5W5058",
"C5W5060"), beta = c(5, 5, 5, 4.84951578282477, 5, 1.75435411010482,
2.59653537897755, 4.58343041388045, 1.19813289503568, 5, 4.41030503473763,
3.48886522319213, 5, 3.69347465973804, 5, 3.61341511433856, 5,
5, 5, 5, 2.82540030433712, 5, 2.01269174411245, 5, 5, 5, 5,
3.66605514409922,
5, 5, 1.20492768779028, 5, 5, 5, 5, 4.71051510737403, 0.973607667104191,
2.13320899798223, 3.55527726960037, 5, 3.13840519694586, 5,
4.33164972914231,
3.2716034981509, 5, 3.59865983897491, 5, 5, 2.98982117172486,
3.15884653708899, 5, 1.21006283114433, 1.88594293315325, 2.37248899411035,
2.40289344741545, 0.262839947401338, 2.89061041570249, 2.98573373614306,
2.82385009686039, 1.78295361666595, 4.27268021897288, 5, 5, 5,
2.52131830224533, 5, 2.32463450150955, 5, 5, 2.18297518836912,
5, 2.53256388646574, 5, 5, 5, 1.11901989122708, 1.56266936421015,
5, 2.1480772866684, 1.03201411339444, 3.22476904165877, 5,
2.23963439946338,
5, 3.85477002456212, 5, 5, 3.15602152904957, 4.81306354520538,
1.20566795082516, 5, 5, 5, 5, 3.04288106123443, 5, 4.06490230904187,
3.06547070051755, 5, 2.5258266208828, 3.52552152448218, 0.0968896467078101,
5, 5, 5, 5, 5, 5, 5, 4.99152057373263, 5, 5, 5, 1.1311501363613,
1.28951722667904, 0.001, 5, 4.58718394461838, 1.22231984982818,
5, 5, 3.35873683772968, 5, 3.87156907439221, 4.8859664986002,
5, 5, 0.976932521703834, 5, 4.50479324287729, 4.65093425894735,
4.22173593981599, 3.15590632469025, 4.86144574792365, 3.39926845337078,
1.24825519695535, 5, 3.27167737085564, 2.2107731064995, 0.187339326704238,
5, 5, 2.78773672362584, 0.977242332964066, 1.05162966383033,
4.24031503174416, 1.9558880208883, 4.01331863994726, 5, 5,
4.50553723427244,
4.03830955873134, 0.0731678404955063, 0.326005643499137, 1.48169477386196,
5, 5, 2.1217771592687, 1.55381162571676, 5, 0.388739153131157,
5, 5, 1.22549904356884, 4.30605773910623, 5, 5, 3.90617032103214,
0.884096418271427, 1.7166358084411, 4.26908188373059, 1.97226101693004,
5, 0.831616239014777, 0.001, 4.15065454327444, 5, 5, 2.6582186770924,
4.69752970800906, 4.50106281557844, 4.21353152726281, 5, 0.620184007188853,
5, 3.86897558241413, 3.63483407688021, 3.18900423687508, 1.24002620770954,
5, 5, 5, 5, 5, 1.20112016323594, 1.99534703415304, 5, 2.13269987318149,
3.76529884137316, 

Re: [R] Query regarding stats/p.adjust package (base) - specifically 'Hochberg' function

2021-08-24 Thread David Swanepoel
Dear Bert,

Thanks for your prompt response. I used R-help as it was listed as the contact 
point in the Description file of the stats package.

I'll attempt to find more information from stats.stackexchange.com as suggested 
and also see if I can figure out the sci.stat.consult and sci.stat.math Usenet 
groups that are mentioned in the guidance document too. Much appreciated.

Kind regards,
David

-Original Message-
From: Bert Gunter  
Sent: 24 August 2021 19:51
To: David Swanepoel 
Cc: r-help@r-project.org; luke-tier...@uiowa.edu; kurt.hor...@wu.ac.at; 
t.kalib...@kent.ac.uk
Subject: Re: [R] Query regarding stats/p.adjust package (base) - specifically 
'Hochberg' function

1. No Excel attachments made it through. Binary attachments are generally 
stripped by the list server for security reasons.

2. As you may have already learned, this is the wrong forum for statistics or 
package specific questions. Read *and follow* the posting guide linked below to 
post on r-help appropriately. In particular, for questions about specific 
non-standard packages, contact package maintainers (found through e.g. 
?maintainers)

3. Statistics issues generally don't belong here. Try stats.stackexchange.com 
instead perhaps.

4. We are not *R Core development,*  and you probably should not be contacting 
them either.  See here for general guidelines for R lists:
https://www.r-project.org/mail.html


Bert Gunter

"The trouble with having an open mind is that people keep coming along and 
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )

On Tue, Aug 24, 2021 at 10:39 AM David Swanepoel  
wrote:
>
> Dear R Core Dev Team, I hope all is well your side!
> My apologies if this is not the correct point of contact to use to address 
> this. If not, kindly advise or forward my request to the relevant 
> team/persons.
>
> I have a query regarding the 'Hochberg' method of the stats/p.adjust R 
> package and hope you can assist me please. I have attached the data I used in 
> Excel, which are lists of p-values for two different tests (Hardy Weinberg 
> Equilibrium and Linkage Disequilibrium) for four population groups.
>
> The basis of my concern is a discrepancy specifically between the Hochberg 
> correction applied by four different R packages and the results of the 
> Hochberg correction by the online tool, 
> MultipleTesting.com.
>
> Using the below R packages/functions, I ran multiple test correction (MTC) 
> adjustments for the p-values listed in my dataset. All R packages below 
> agreed with each other regarding the 'significance' of the p-values for the 
> Hochberg adjustment.
>
>
>   *   stats/p.adjust (method: Hochberg)
>   *   mutoss/hochberg
>   *   multtest/mt.rawp2adjp (procedure: Hochberg)
>   *   elitism/mtp (method: Hochberg)
>
> In checking the same values on the MultipleTesting.com, more p-values were 
> flagged as significant for both the HWE and LD results across all four 
> populations. I show these differences in the Excel sheet attached.
> Essentially, using the R packages, only the first HWE p-value of Pop2 is 
> significant at an alpha of 0.05. Using the MT.com tool, however, multiple 
> p-values are shown to be significant across both tests with the Hochberg 
> correction (the highlighted cells in the Excel sheet).
>
>
> I asked the authors of MT.com about this, and they gave the following 
> response:
>
> "we have checked the issue, and we believe the computation by our page is 
> correct (I cannot give opinion about the other packages).
> When we look on the original Hochberg paper, and we only use the very first 
> (smallest) p value, then m"=1, thus, according to the equation in the 
> Hochberg 1988 paper, in this case practically there is no further correction 
> necessary.
> In other words, in case the *smallest* p value is smaller than alpha, then 
> the *smallest* p value will remain significant irrespective of the other p 
> values when we make the Hochberg correction."
>
> I have attached the Hochberg paper here but, unfortunately, I don't 
> understand enough of the stats to verify this. I have applied their logic on 
> the same Excel sheet under the section "MT.com explanation", which shows why 
> they consider the highlighted values significant.
>
> I 

Re: [R] Query regarding stats/p.adjust package (base) - specifically 'Hochberg' function

2021-08-24 Thread Martin Maechler
> Bert Gunter 
> on Tue, 24 Aug 2021 10:50:50 -0700 writes:

> 1. No Excel attachments made it through. Binary
> attachments are generally stripped by the list server for
> security reasons.

> 2. As you may have already learned, this is the wrong
> forum for statistics or package specific questions. Read
> *and follow* the posting guide linked below to post on
> r-help appropriately. In particular, for questions about
> specific non-standard packages, contact package
> maintainers (found through e.g. ?maintainers)

> 3. Statistics issues generally don't belong here. Try
> stats.stackexchange.com instead perhaps.

> 4. We are not *R Core development,* and you probably
> should not be contacting them either.  See here for
> general guidelines for R lists:
> https://www.r-project.org/mail.html


> Bert Gunter

> "The trouble with having an open mind is that people keep
> coming along and sticking things into it."  -- Opus (aka
> Berkeley Breathed in his "Bloom County" comic strip )

Well, this was a bit harsh of an answer, Bert.

p.adjust() is a standard R function (package 'stats') -- as
David Swanepoel did mention.

I think he's okay asking here whether the algorithms used in such
standard R functions are "ok" and how/why they seemingly
differ from other implementations ...

Martin
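
[Editorial sketch, not part of the original thread: the discrepancy can be probed directly with toy p-values (not the poster's data) to see what stats::p.adjust computes for method = "hochberg".]

```r
# Toy p-values, already in ascending order; m = 4 tests
p <- c(0.010, 0.020, 0.030, 0.200)

# Hochberg step-up adjustment as implemented in stats::p.adjust:
# sort decreasing, multiply the i-th smallest p by (m - i + 1),
# take running minima, then map back to the input order
p.adjust(p, method = "hochberg")
# [1] 0.04 0.06 0.06 0.20
```

Note that the smallest raw p-value (0.010) is adjusted upward (to 0.04, a factor of up to m = 4), i.e. p.adjust does correct the smallest p-value; whether that agrees with MT.com's reading of Hochberg (1988) is exactly the question raised in the thread.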

> On Tue, Aug 24, 2021 at 10:39 AM David Swanepoel
>  wrote:
>> 
>> Dear R Core Dev Team, I hope all is well your side!  My
>> apologies if this is not the correct point of contact to
>> use to address this. If not, kindly advise or forward my
>> request to the relevant team/persons.
>> 
>> I have a query regarding the 'Hochberg' method of the
>> stats/p.adjust R package and hope you can assist me
>> please. I have attached the data I used in Excel, which
>> are lists of p-values for two different tests (Hardy
>> Weinberg Equilibrium and Linkage Disequilibrium) for four
>> population groups.
>> 
>> The basis of my concern is a discrepancy specifically
>> between the Hochberg correction applied by four different
>> R packages and the results of the Hochberg correction by
>> the online tool,
>> MultipleTesting.com.
>> 
>> Using the below R packages/functions, I ran multiple test
>> correction (MTC) adjustments for the p-values listed in
>> my dataset. All R packages below agreed with each other
>> regarding the 'significance' of the p-values for the
>> Hochberg adjustment.
>> 
>> 
>> * stats/p.adjust (method: Hochberg)
>> * mutoss/hochberg
>> * multtest/mt.rawp2adjp (procedure: Hochberg)
>> * elitism/mtp (method: Hochberg)
>> 
>> In checking the same values on the MultipleTesting.com,
>> more p-values were flagged as significant for both the
>> HWE and LD results across all four populations. I show
>> these differences in the Excel sheet attached.
>> Essentially, using the R packages, only the first HWE
>> p-value of Pop2 is significant at an alpha of 0.05. Using
>> the MT.com tool, however, multiple p-values are shown to
>> be significant across both tests with the Hochberg
>> correction (the highlighted cells in the Excel sheet).
>> 
>> 
>> I asked the authors of MT.com about this, and they gave
>> the following response:
>> 
>> "we have checked the issue, and we believe the
>> computation by our page is correct (I cannot give opinion
>> about the other packages).  When we look on the original
>> Hochberg paper, and we only use the very first (smallest)
>> p value, then m"=1, thus, according to the equation in
>> the Hochberg 1988 paper, in this case practically there
>> is no further correction necessary.  In other words, in
>> case the *smallest* p value is smaller than alpha, then
>> the *smallest* p value will remain significant
>> irrespective of the other p values when we make the
>> Hochberg correction."
>> 
>> I have attached the Hochberg paper here but,
>> unfortunately, I don't understand enough of the stats to
>> verify this. I have applied their logic on the same Excel
>> sheet under the section "MT.com explanation", which shows
>> why they consider the highlighted values significant.
>> 
>> I have also attached the 2 R files that I used to do the
>> MTC runs and they can be run as is. They are just quite
>> long as they contain many of the other MTC methods in the
>> different packages too.
>> 
>> Kindly provide your thoughts as to whether you agree with
>> this interpretation of the Hochberg paper or not? I would
>> like to see concordance between the MT.com tool and the
>> different R packages above (or understand why they are
>> different), so that I can be more confident in the
>> explanations 

Re: [R] Query regarding stats/p.adjust package (base) - specifically 'Hochberg' function

2021-08-24 Thread Bert Gunter
1. No Excel attachments made it through. Binary attachments are
generally stripped by the list server for security reasons.

2. As you may have already learned, this is the wrong forum for
statistics or package specific questions. Read *and follow* the
posting guide linked below to post on r-help appropriately. In
particular, for questions about specific non-standard packages,
contact package maintainers (found through e.g. ?maintainers)

3. Statistics issues generally don't belong here. Try
stats.stackexchange.com instead perhaps.

4. We are not *R Core development,*  and you probably should not be
contacting them either.  See here for general guidelines for R lists:
https://www.r-project.org/mail.html


Bert Gunter

"The trouble with having an open mind is that people keep coming along
and sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )

On Tue, Aug 24, 2021 at 10:39 AM David Swanepoel
 wrote:
>
> Dear R Core Dev Team, I hope all is well your side!
> My apologies if this is not the correct point of contact to use to address 
> this. If not, kindly advise or forward my request to the relevant 
> team/persons.
>
> I have a query regarding the 'Hochberg' method of the stats/p.adjust R 
> package and hope you can assist me please. I have attached the data I used in 
> Excel, which are lists of p-values for two different tests (Hardy Weinberg 
> Equilibrium and Linkage Disequilibrium) for four population groups.
>
> The basis of my concern is a discrepancy specifically between the Hochberg 
> correction applied by four different R packages and the results of the 
> Hochberg correction by the online tool, 
> MultipleTesting.com.
>
> Using the below R packages/functions, I ran multiple test correction (MTC) 
> adjustments for the p-values listed in my dataset. All R packages below 
> agreed with each other regarding the 'significance' of the p-values for the 
> Hochberg adjustment.
>
>
>   *   stats/p.adjust (method: Hochberg)
>   *   mutoss/hochberg
>   *   multtest/mt.rawp2adjp (procedure: Hochberg)
>   *   elitism/mtp (method: Hochberg)
>
> In checking the same values on the MultipleTesting.com, more p-values were 
> flagged as significant for both the HWE and LD results across all four 
> populations. I show these differences in the Excel sheet attached.
> Essentially, using the R packages, only the first HWE p-value of Pop2 is 
> significant at an alpha of 0.05. Using the MT.com tool, however, multiple 
> p-values are shown to be significant across both tests with the Hochberg 
> correction (the highlighted cells in the Excel sheet).
>
>
> I asked the authors of MT.com about this, and they gave the following 
> response:
>
> "we have checked the issue, and we believe the computation by our page is 
> correct (I cannot give opinion about the other packages).
> When we look on the original Hochberg paper, and we only use the very first 
> (smallest) p value, then m"=1, thus, according to the equation in the 
> Hochberg 1988 paper, in this case practically there is no further correction 
> necessary.
> In other words, in case the *smallest* p value is smaller than alpha, then 
> the *smallest* p value will remain significant irrespective of the other p 
> values when we make the Hochberg correction."
>
> I have attached the Hochberg paper here but, unfortunately, I don't 
> understand enough of the stats to verify this. I have applied their logic on 
> the same Excel sheet under the section "MT.com explanation", which shows why 
> they consider the highlighted values significant.
>
> I have also attached the 2 R files that I used to do the MTC runs and they 
> can be run as is. They are just quite long as they contain many of the other 
> MTC methods in the different packages too.
>
> Kindly provide your thoughts as to whether you agree with this interpretation 
> of the Hochberg paper or not? I would like to see concordance between the 
> MT.com tool and the different R packages above (or understand why they are 
> different), so that I can be more confident in the explanations of my own 
> results as a stats layman.
>
> I hope this makes sense. Please let me know if I need to clarify anything.
>
>
> Many thanks and kind regards,
> David
> __
> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.



Re: [R] Selecting elements

2021-08-24 Thread PIKAL Petr
Hi.

Now it is understandable. However, the solution is not clear to me.

table(Order$Var.1[1:10])
A B C D 
4 1 2 3 

should give you a hint as to which scheme could be acceptable, but how to do it 
programmatically I do not know.

Maybe start with a lower value in the table() call and gradually increase it to 
check which scheme turns out to be the chosen one:

> table(data.o$Var.1[1]) # scheme 2 is out
C 
1
...
> table(data.o$Var.1[1:5]) #scheme 3
A B C D 
1 1 2 1 

> table(data.o$Var.1[1:6]) #scheme 3

A B C D 
2 1 2 1 

> table(data.o$Var.1[1:7]) # scheme1
A B C D 
2 1 2 2 

> table(data.o$Var.1[1:8]) # no such scheme, so scheme 1 is the chosen one
A B C D 
2 1 2 3 

#Now you need to select values based on scheme 1.
# 3A - 3B - 2C - 2D

sss <- split(Order, Order$Var.1)
selection <- c(3,3,2,2)
result <- vector("list", 4)

# I would use a loop

for(i in 1:4) {
result[[i]] <- sss[[i]][1:selection[i],]
}
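
[Editorial sketch, not part of the original message: the loop above can also be written without indices, assuming the Order data frame from the thread; head(df, n) keeps the first n rows, and Map() pairs each group with its quota.]

```r
set.seed(123)                               # reproducible toy data, as in the thread
Order <- data.frame(Var.1 = rep(LETTERS[1:4], 10),
                    Var.2 = sample(1:40, replace = FALSE))
Order <- Order[order(Order$Var.2, decreasing = TRUE), ]

sss <- split(Order, Order$Var.1)            # one data frame per letter, still sorted
selection <- c(A = 3, B = 3, C = 2, D = 2)  # scheme 1 quotas

# head(df, n) keeps the first n rows; Map() pairs each group with its quota
picked <- do.call(rbind, Map(head, sss, selection))
nrow(picked)   # 10 rows: 3 A, 3 B, 2 C, 2 D
```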

Maybe someone will come up with another ingenious solution.

Cheers
Petr
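
[Editorial sketch, not from the thread, for the "how to choose the scheme programmatically" question: walk the letters in descending-value order, count occurrences, and drop any scheme whose quota for some letter is exceeded; the last surviving scheme is the one to use. The letter sequence below is a hypothetical ordering consistent with the first example's narrative, not the actual data.]

```r
schemes <- list(s1 = c(A = 3, B = 3, C = 2, D = 2),
                s2 = c(A = 2, B = 5, C = 0, D = 3),
                s3 = c(A = 3, B = 4, C = 2, D = 1))

pick_scheme <- function(letters_in_order, schemes) {
  counts <- c(A = 0, B = 0, C = 0, D = 0)
  alive  <- rep(TRUE, length(schemes))
  for (letter in letters_in_order) {
    counts[letter] <- counts[letter] + 1
    # a scheme stays alive only while no letter exceeds its quota
    alive <- alive & vapply(schemes, function(s) all(counts <= s), logical(1))
    if (sum(alive) <= 1) break
  }
  schemes[[which(alive)[1]]]   # first survivor (ties possible on short input)
}

# hypothetical descending-value sequence: the leading C kills s2 (0 C allowed),
# the second D at position 7 kills s3 (1 D allowed), leaving scheme 1
pick_scheme(c("C", "B", "A", "D", "C", "A", "D", "B", "D", "B"), schemes)
# A B C D
# 3 3 2 2
```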

From: Silvano Cesar da Costa  
Sent: Monday, August 23, 2021 7:54 PM
To: PIKAL Petr 
Cc: r-help@r-project.org
Subject: Re: [R] Selecting elements

Hi,

I apologize for the confusion. I will try to be clearer in my explanation; I 
believe the R script below makes it clearer.

I have 4 variables with 10 repetitions and each one receives a value, randomly. 
I order the dataset from largest to smallest value. I have to select 10 
elements in 
descending order of values, according to one of three schemes:

# 3A - 3B - 2C - 2D
# 2A - 5B - 0C - 3D
# 3A - 4B - 2C - 1D

If the first 3 elements (out of the 10 to be selected) are the letter D, the 
adopted scheme will automatically be the second one. So, following it, I then 
have to choose 2A, 5B and 0C.
How can I make the selection automatically?

I created two selection examples, with different schemes:



set.seed(123)

Var.1 = rep(LETTERS[1:4], 10)
Var.2 = sample(1:40, replace=FALSE)

data = data.frame(Var.1, Var.2)

(Order = data[order(data$Var.2, decreasing=TRUE), ])

# I must select the 10 highest values, 
# but they must follow a certain scheme:
#
#  3A - 3B - 2C - 2D or 
#  2A - 5B - 0C - 3D or
#  3A - 4B - 2C - 1D
#
# In this case, I started with the highest value that refers to the letter C. 
# Next comes only 1 of the letters B, A and D. All are selected once. 
# The fifth observation is the letter C, completing 2 C values. In this case, 
# following the 3 adopted schemes, note that the second scheme has 0C, 
# so this scheme is out.
# Therefore, it can be the first scheme (3A - 3B - 2C - 2D) or the 
# third scheme (3A - 4B - 2C - 1D).
# The next letter to be completed is the D (fourth and seventh elements), 
# among the 10 elements being selected. Therefore, the scheme adopted is the 
# first one (3A - 3B - 2C - 2D).
# Therefore, it is necessary to select 2 values with the letter B and 1 value 
# with the letter A.
#
# Manual Selection -
# The end result is:
(Selected.data = Order[c(1,2,3,4,5,6,7,9,13,16), ])

# Scheme: 3A - 3B - 2C - 2D
sort(Selected.data$Var.1)


#--
# Second example: -
#--
set.seed(4)

Var.1 = rep(LETTERS[1:4], 10)
Var.2 = sample(1:40, replace=FALSE)

data = data.frame(Var.1, Var.2)
(Order = data[order(data$Var.2, decreasing=TRUE), ])

# The end result is:
(Selected.data.2 = Order[c(1,2,3,4,5,6,7,8,9,11), ])

# Scheme: 3A - 4B - 2C - 1D
sort(Selected.data.2$Var.1)

How to make the selection of the 10 elements automatically?

Thank you very much.

Prof. Dr. Silvano Cesar da Costa
Universidade Estadual de Londrina
Centro de Ciências Exatas
Departamento de Estatística

Fone: (43) 3371-4346


Em seg., 23 de ago. de 2021 às 05:05, PIKAL Petr 
 escreveu:
Hi

Only I got your HTML-formatted mail; the rest of the world got a complete mess. 
Do not use HTML formatting.

If I got it right, I wonder why in your second example you did not follow
3A - 3B - 2C - 2D

as D was positioned 1st and 4th.

I hope that you could use something like

sss <- split(data$Var.2, data$Var.1)
lapply(sss, cumsum)
$A
 [1]  38  73 105 136 166 188 199 207 209 210

$B
 [1]  39  67  92 115 131 146 153 159 164 168

$C
 [1]  40  76 105 131 152 171 189 203 213 222

$D
 [1]  37  71 104 131 155 175 192 205 217 220

Now you need to evaluate this result according to your sets. Here the highest 
value (76) is in C, so the set with 2C is the one you should choose; select 
your values according to this set.

With
> set.seed(666)
> Var.1 = rep(LETTERS[1:4], 10)
> Var.2 = sample(1:40, replace=FALSE)
> data = data.frame(Var.1, Var.2)
> data <- data[order(data$Var.2, decreasing=TRUE), ]
> sss <- split(data$Var.2, data$Var.1)
> lapply(sss, cumsum)
$A
 [1]  36  70 102 133 163 182 200 207 212 213

$B
 [1]  35  57  78  95 108 120 131 140 148 150

$C
 [1]  40  73 102 130 156 180 196 211 221 225

$D
 [1]  39  77 114 141 166 189 209 223 229 232

Highest value is in D so either 3A - 3B - 2C - 2D  or 3A - 3B - 2C - 2D should 
be appropriate. And here I am again lost as both sets are