[email protected] (Ludovic Courtès) writes:

> myglc2 <[email protected]> skribis:
>
>> [email protected] (Ludovic Courtès) writes:
>
> [...]
>
>>> Perhaps as a first step you could try and write a procedure and a CLI
>>> around it that simply runs a given pipeline:
>>>
>>>   $ guix biopipeline foo frog.dna human.dna
>>>   …
>>>   /gnu/store/…-freak.dna
>>>
>>> The procedure itself would be along the lines of:
>>>
>>>   (define (foo-pipeline input1 input2)
>>>     (gexp->derivation "result"
>>>                       #~(begin
>>>                           (setenv "PATH" "/foo/bar")
>>>                           (invoke-make-and-co #$input1 #$input2
>>>                                               #$output))))
>>
>> Sidebar:
>>
>> - What is "biopipeline" above? A new guix command?
>
> Right.  Basically I was suggesting implementing the pipeline as a Scheme
> procedure (‘foo-pipeline’ above), and adding a command-line interface on
> top of it (‘guix biopipeline’.)
>
> This means that all inputs and outputs would go through the store, so
> you would get caching and all that for free.
>
> But I now understand that I was slightly off-topic.  ;-)

Thanks. Having built bespoke analysis pipelines for the last five years,
I find your idea intriguing. So my response to the original post was
also slightly off-topic. ;-)
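For anyone following along, here is a rough sketch of how the pieces
might fit together. The `foo-pipeline` procedure is from Ludovic's
example above; everything else (the `guix-biopipeline` entry point, the
use of `local-file` for inputs, and `invoke-make-and-co` standing in for
the real tool invocations) is hypothetical:

```scheme
;; Hypothetical sketch only; adapt module names and
;; builder logic to the real pipeline.
(use-modules (ice-9 match)
             (guix gexp)
             (guix store)
             (guix derivations)
             (guix monads))

(define (foo-pipeline input1 input2)
  ;; Return a derivation whose builder sets up the environment
  ;; and runs the actual pipeline tools.
  (gexp->derivation "result"
                    #~(begin
                        (setenv "PATH" "/foo/bar")
                        (invoke-make-and-co #$input1 #$input2
                                            #$output))))

(define (guix-biopipeline . args)
  ;; Sketch of a CLI entry point: copy the two input files to the
  ;; store, build the derivation, and print the resulting store path.
  (match args
    ((input1 input2)
     (with-store store
       (let ((drv (run-with-store store
                    (foo-pipeline (local-file input1)
                                  (local-file input2)))))
         (build-derivations store (list drv))
         (display (derivation->output-path drv))
         (newline))))))
```

Because the inputs go through `local-file` and the build through
`gexp->derivation`, rerunning the command with unchanged inputs would
hit the store cache rather than recompute, which is the "caching for
free" point made above.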
