Hi, in a package I am developing, some functions need to use external data. I have these data as a set of .csv files that I have placed in the inst/extdata folder.
At the moment I have a file "db-internal.r" where I load all the internal databases that could be used by the functions in my package, and assign them to package-global variables, all prefixed with db_. For example (I didn't come up with a better name, sorry):

db_italian_cities <- read.csv(system.file("extdata", "italian_cities.csv", package = "mypkg"))

This way I can use db_italian_cities in my functions. Some of these datasets are quite big and really slow down loading the package; moreover, for some of the tasks the package is meant to solve they might not even be required. I would like to lazy-load these datasets only when needed. How can I achieve this without creating special databases? Some of the datasets could change over time, and I intend to provide a function that downloads the most recent versions so that the package always uses up-to-date data, so I would really prefer to keep plain csv files.

Thanks a lot in advance for the help!

Cheers,
Luca

______________________________________________
R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help
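One possible sketch of the lazy-loading the question asks for, assuming the package is called "mypkg" (a placeholder name) and the csv files stay in inst/extdata: keep a package-internal cache environment and replace each db_ variable with an accessor function that reads the file only on first use.

```r
# Internal cache environment; created once when the package namespace loads.
.db_cache <- new.env(parent = emptyenv())

# Accessor: reads the csv the first time it is called, then returns the
# cached copy on every later call. "mypkg" is a placeholder package name.
db_italian_cities <- function() {
  if (!exists("italian_cities", envir = .db_cache)) {
    path <- system.file("extdata", "italian_cities.csv",
                        package = "mypkg", mustWork = TRUE)
    assign("italian_cities", read.csv(path), envir = .db_cache)
  }
  get("italian_cities", envir = .db_cache)
}
```

Package functions would then call db_italian_cities() instead of referencing a variable; only the tasks that actually need the dataset pay the read cost, and a refresh function could simply overwrite the csv and clear the cache entry with rm(..., envir = .db_cache). If variable-like syntax is preferred, base R's delayedAssign() inside .onLoad() achieves a similar deferred read.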