I recently had the chance to read a book explaining how to use
ChatGPT with a certain programming language.  (I'm not going
to describe the book any more than that because I don't want to
embarrass whoever wrote it.)

The book has appendix material showing three queries to ChatGPT
and the answers.  Paraphrased, the queries are "If I throw 2 (3, 4)
fair dice, what is the probability I get 7 or 11?  Show the reasoning."
I thought those questions would make a nice little example,
maybe something for Exercism or RosettaCode.  Here's the R version:

> faces <- 1:6
> sum(rowSums(expand.grid(faces, faces)) %in% c(7,11))/6^2
[1] 0.2222222
> sum(rowSums(expand.grid(faces, faces, faces)) %in% c(7,11))/6^3
[1] 0.1944444
> sum(rowSums(expand.grid(faces, faces, faces, faces)) %in% c(7,11))/6^4
[1] 0.09567901

Here's where it gets amusing.  ChatGPT explained its answers with
great thoroughness.  But its answer to the 3 dice problem, with what
was supposedly a list of success cases, was quite wrong.  ChatGPT
claimed the answer was 33/216 instead of 42/216.
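
If you want to see where the 42 comes from, the same expand.grid()
enumeration breaks it down (continuing the session above, so faces
is already defined):

> sums <- rowSums(expand.grid(faces, faces, faces))
> sum(sums == 7)
[1] 15
> sum(sums == 11)
[1] 27

That's 15 + 27 = 42 successes out of 216 equally likely outcomes.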

Here's where it gets bemusing.  Whoever wrote the book included
the interaction WITHOUT CHECKING the results, or at least
without commenting on the wrongness of one of them.

I actually wrote the program in 6 other programming languages,
and was startled at how simple and direct it was in base R.
Well done, R.
