> > Docker R containers are north of 250 MB. I have checked experimentally
> > that you can trim R down to 16 MB (!) and you'll still be able to execute
> > it (though with warnings). That *is* quite a difference, especially when
> > deploying small applications.
>
> ... I would guesstimate the libraries required to run R with any useful set
> of libraries is quite a bit bigger than the cited 16M ...

Maybe. The minimal usable subset is about 37 MB; add a few custom libraries,
the code of your application, and so on. But it's *still* much less than
250 MB.

> > Sure, package dependencies would be great as well - at least you'd be
> > sure that users of, say, Debian-based distros will be able to run this
> > portable R, as long as they've installed the required libraries. But
> > notice that in your example package versions equal *or greater* than
> > those listed are required - so if someone has upgraded their system, they
> > will still be able to run that R. With a version built from source you
> > need *exactly* the same versions as on the machine where R was compiled.
> > Hence my question: how come the precompiled distribution of R has "less
> > strict" library requirements than manually compiled versions?
>
> Package managers don't usually cite 'less than' versions for packages -
> because how do you assert a version that won't work when it hasn't been
> released yet?
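For what it's worth, the version pinning is visible in the binary itself: a
dynamically linked executable records the sonames it was linked against. A
small sketch (using /bin/sh as a stand-in so it runs anywhere; in practice you
would point it at R's binary, typically something like /usr/lib/R/bin/exec/R
on Debian-ish systems - that path is an assumption, adjust to your install):

```shell
# The dynamic section of an ELF binary lists the sonames it needs at load
# time; a locally compiled R records whatever library versions were present
# at build time, which is where the strict requirements come from.
# /bin/sh stands in for R's binary so the sketch is runnable anywhere.
readelf -d /bin/sh | grep NEEDED
# e.g.  0x0000000000000001 (NEEDED)  Shared library: [libc.so.6]
```

The versioned soname (libc.so.6 rather than libc.so) is what the dynamic
loader resolves at run time, so "equal or greater" only works as long as the
distribution keeps that soname stable across upgrades.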
I meant that manually built versions of R (at least those compiled by me) are
fixed to specific versions of the dynamic libraries - the same ones installed
on the machine R was compiled on. You can't run such a build on an upgraded
configuration.

> You could go on a tear and build statically linked versions of
> R-with-everything-you-need, and maybe avoid the library madness... but this
> is sort of a fool's errand and a huge consumer of time. OS vendors and
> compiler developers have stopped doing things that way for reasons.... it's
> much simpler to reduce duplication and make everything work - while
> allowing for patching out security issues - when you are *just slightly*
> more flexible.

Why link the libraries statically? Most Linux distributions ship symlinks to
the dynamic libraries - so you have, for example, libicuuc.so pointing to
libicuuc.so.XX (where XX is the version number). Why not rely on these
generic names?

> Doing this stuff with a container is very much the easiest route, if you
> actually want it to be completely portable. You're certainly welcome to
> start with an Alpine Linux base and add R on top and then start paring...
> but I start to not understand the point, somewhere in there.... it's a lot
> of time spent on something that doesn't seem that beneficial when you've
> got (even fairly reasonably modern) hardware that can deal with a tiny bit
> of extra bloat. SD cards and USB sticks are pretty cheap everywhere, now,
> aren't they?
>
> I could say, maybe, putting time into this as some kind of retrocomputing
> project... but probably not otherwise.

Potential users who would have to download 250 megabytes beg to differ ;-)

Best,
-p-

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel