Hi Craig,
Thanks for your interest in the Galaxy API. For the parameters you're
uncertain about:
The 'workflow_id' is indeed the encoded workflow id. You can get this by
encoding it yourself, or by doing a GET on /api/workflows for a list of *all*
workflows and their encoded IDs (see example below).
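If it helps, that lookup boils down to something like the following sketch (the base URL and key are placeholders for your own instance, and I'm assuming the response is a JSON list of objects with 'id' and 'name' fields):

```python
# Sketch: list all workflows and their encoded IDs via GET /api/workflows.
import json
import urllib.request


def workflows_url(base_url, api_key):
    """Build the /api/workflows URL with the API key as a query parameter."""
    return "%s/api/workflows?key=%s" % (base_url.rstrip("/"), api_key)


def workflow_ids(response_text):
    """Map workflow names to encoded IDs, given the JSON response body."""
    return {w["name"]: w["id"] for w in json.loads(response_text)}


def list_workflows(base_url, api_key):
    """GET /api/workflows and return {name: encoded_id} (network call)."""
    with urllib.request.urlopen(workflows_url(base_url, api_key)) as resp:
        return workflow_ids(resp.read())
```

From the command line, scripts/api/display.py pointed at /api/workflows gets you the same listing.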
The 'history' parameter is either a name for a new history to be created, or a
string of the format "hist_id=<encoded_history_id>" if you have an existing
history that you'd like the results to appear in.
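In other words (a tiny hypothetical helper, just to make the two forms concrete):

```python
def history_param(existing_history_id=None, new_history_name="New History"):
    """Return the 'history' argument for workflow_execute.py.

    With an encoded history ID, results go into that existing history;
    otherwise the plain string names a new history to be created.
    """
    if existing_history_id:
        return "hist_id=%s" % existing_history_id
    return new_history_name
```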
And the last parameter is a three-part string. The first part is the step that
the input should be mapped to, the second part is the *type* of input it is,
and the third part is the actual encoded id. The type is going to be either
ldda for a library dataset, or hda for a history dataset.
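So the string looks like step=src=dataset_id; here's a hypothetical helper that assembles it (the IDs in the test are made up):

```python
def input_param(step_id, src, encoded_dataset_id):
    """Build the three-part step=src=dataset_id string.

    src is "ldda" for a library dataset or "hda" for a history dataset.
    """
    if src not in ("ldda", "hda"):
        raise ValueError("src must be 'ldda' or 'hda'")
    return "%s=%s=%s" % (step_id, src, encoded_dataset_id)
```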
All of these encoded IDs are discoverable through the API itself. Try using
scripts/api/display.py to view /api/workflows and
/api/workflows/<workflow_id> to get a feel for what's available to you.
Lastly, an example of how the run_workflow component of the API can be used
as part of an external pipeline can be found in
scripts/api/example_watch_folder.py. This script monitors a particular folder
for files, uploads them to Galaxy, and executes a workflow on them.
Hope this helps, and definitely let me know if I can answer any more questions
or if you have feedback about the API.
-Dannon
On Dec 1, 2011, at 7:32 PM, Craig Blackhart wrote:
> I am newish to Galaxy and trying to learn how I might integrate it with our
> workflows and LIMS for automated data handling. I am aware of the API and
> have looked up all the documentation that I could find. However, there are
> many things I cannot make sense of, and have not been able to find
> information to help me out. I think a good place to start asking questions
> is with how to run workflow_execute.py and ask what each of the parameters
> is and where to get the information for them.
>
> Arguments
> * API key – got this and understand
> * url – got this and understand
> * workflow_id – I have created workflows and have been able to
> find what looks to be a workflow_id by clicking on the workflow name and
> selecting “Download or Export”. It seems this may be correct, is it?
> * history – a named history to use? Should this already exist?
> I have no idea here.
> * step=src=dataset_id - ??? I have no idea ??? I have seen how
> to create data libraries manually at the command line; does this factor in?
>
> If anyone has information they can help me out with, it would be much
> appreciated.
>
> Thanks
>
> Craig Blackhart
> Computer Scientist
> Applied Engineering Technologies
> Los Alamos National Laboratory
> 505-665-6588
> This message contains no information that requires ADC review
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client. To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>
> http://lists.bx.psu.edu/