Hi,

> I'd say it depends on the data and how it is used inside and outside of a
> workflow. Some data could very well be stored in the store, and then
> distributed via standard channels (Zenodo, ...) after export by "guix pack".
> For big datasets, some other mechanism is required.
I am not sure I understand the point.  From my point of view, there are
two kinds of datasets:

 a. the ones which are part of the software, e.g., used to run the
    tests; they are usually (but not always) small;

 b. the ones the software is applied to, which are not in the source
    repository; they may or may not be big.

I do not know whether Guix has an established policy about case a., and
I am not sure one is even possible (e.g., should we include a
whole-genome FASTA file just to test an alignment tool?).

It does not appear to me a good idea to try to include datasets of
case b. in the store.  Is that not the job of data management tools,
e.g., databases?  I do not know much about it, but one idea would be to
write a workflow like this: you fetch the data, you clean them, and you
check by hashing that the result is the expected one.  Only the software
used to do that is in the store.  The input and output data are not, but
your workflow checks that they are what you expect.  However, it depends
on what we call 'cleaning', because some algorithms are not
deterministic.

Hmm, I do not know whether GWL has a mechanism to check the hash of the
`data-inputs' field.

> I think it's worth thinking carefully about how to exploit guix for
> reproducible computations. As Lispers know very well, code is data and data
> is code. Building a package is a computation like any other. Scientific
> workflows could be handled by a specific build system. In fact, as long as
> no big datasets or multiple processors are involved, we can do this right
> now, using standard package declarations.

These points appear to me to complement this thread (from which,
personally, I learnt a few things about the design of GWL):

https://lists.gnu.org/archive/html/guix-devel/2016-05/msg00380.html

> It would be nice if big datasets could conceptually be handled in the same
> way while being stored elsewhere - a bit like git-annex does for git. And
> for parallel computing, we could have special build daemons.
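As an aside, the fetch-then-verify idea above could look something like
the sketch below.  This is only an illustration, not an existing GWL
feature; the function names (`sha256_of`, `check_data_input`) are
hypothetical, and a real workflow would record the expected hashes
alongside its `data-inputs' declarations.

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def check_data_input(path, expected):
    """Fail loudly when a data input does not match its recorded hash.

    The workflow itself stays in the store; the data does not, but this
    check makes the run abort early on unexpected inputs.
    """
    actual = sha256_of(path)
    if actual != expected:
        raise ValueError(f"{path}: expected {expected}, got {actual}")
    return True
```

The same check could be run again on the cleaned output, provided the
cleaning step is deterministic; for non-deterministic algorithms a hash
comparison is of course not applicable.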
Hmm, is the point to add data management a la git-annex to GWL?

Have a nice week-end!

simon