Are you asking if there's a way to automate downloading the list of links from that page? You could write an R script that fetches the HTML, finds all the HTML <a> tags, and extracts the URLs from their link addresses; there are packages (such as rvest) for this kind of web scraping.
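The scripted route could be sketched roughly like this, using the rvest package. Note the assumptions here: the local folder name is mine, the ".per" filter and the use of relative link names are guesses about how the index page is laid out, so check a few of the extracted links before letting it loose:

```r
# A minimal sketch, assuming the CRU index page lists one <a> tag per file
# and that the hrefs are relative file names. Requires the 'rvest' package
# (install.packages("rvest")).
library(rvest)

base_url <- "https://crudata.uea.ac.uk/cru/data/hrg/cru_ts_4.06/crucy.2205251923.v4.06/countries/tmp/"

# Read the index page and pull the href attribute from every <a> tag
page  <- read_html(base_url)
links <- html_attr(html_elements(page, "a"), "href")

# Keep only the per-country data files (drops parent-directory and
# sort-order navigation links); the .per suffix is an assumption based
# on the file names visible on the page
files <- links[grepl("\\.per$", links)]

# Download each file into a local folder
dir.create("cru_tmp", showWarnings = FALSE)
for (f in files) {
  download.file(paste0(base_url, f), file.path("cru_tmp", f), mode = "wb")
}
```

With a couple of hundred files this takes a little while; you may want to wrap `download.file` in `try()` so one failed download doesn't abort the loop.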
But for this kind of thing it might be easier to use a web browser add-on. I have DownThemAll set up on Firefox, and with a click or two I can get a list of all the link URLs and hit a button that downloads everything to a single folder. Once done, I can use standard R functions to list all the downloaded files and read them. It took about 20 seconds for this page, and now I have a folder of 292 .tmp.per files.

Barry

On Tue, Jan 24, 2023 at 11:13 AM Miluji Sb <miluj...@gmail.com> wrote:
> Greetings everyone,
>
> I have a question on extracting country-level data from CRU (
> https://crudata.uea.ac.uk/cru/data/hrg/cru_ts_4.06/crucy.2205251923.v4.06/countries/tmp/
> ).
> The data for each variable are available for individual countries, and I am
> struggling to download all of them. Can I extract all the files in R and
> then merge them? Thanks so much.
>
> Best,
>
> Milu
>
> _______________________________________________
> R-sig-Geo mailing list
> R-sig-Geo@r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
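For the read-and-merge step from the original question, a rough sketch with standard R functions. Everything specific here is an assumption: the folder name "cru_tmp" is just wherever you saved the downloads, and the `skip` value in the read call is a placeholder for the header lines in the CRU .per format, so inspect one file and adjust before relying on it:

```r
# List the downloaded files and read each one, tagging rows with the
# country name taken from the file name. The skip = 4 below is an
# assumed number of header lines -- check a file first and adjust.
paths <- list.files("cru_tmp", pattern = "\\.per$", full.names = TRUE)

read_one <- function(p) {
  d <- read.table(p, skip = 4, header = FALSE, fill = TRUE)
  d$country <- basename(p)  # keep the source file name for later cleanup
  d
}

merged <- do.call(rbind, lapply(paths, read_one))
```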