All right, then I guess I'll have to live with the internal dataflow id. 
But is it really impossible to get the myExperiment id from within the 
plugin? After all, the myExperiment plugin is right there in Taverna...

So, my goal is still to find a way to automatically store workflow 
inputs and outputs to a file server. My idea is now to have a "master 
workflow" that contains the activity plugin that does this work. I could 
then import an arbitrary workflow as a nested workflow into that master 
workflow and access all the input/output values of the subworkflow from 
inside the activity plugin. I have checked that the dataflow id of the 
subworkflow remains the same after importing. Is this scenario possible? 
I would have to get the dataflow id of the nested workflow, similar to 
the above case with the top-level workflow:

    StringBuffer result = new StringBuffer();
    String procID = callback.getParentProcessIdentifier();
    // facade0:Workflow1:Example_2:invocation2

    String topFacadeId = procID.substring(0, procID.indexOf(":"));
    // facade0

    // Look up in a map of started workflow runs
    WeakReference<WorkflowInstanceFacade> topFacadeRef =
            WorkflowInstanceFacade.workflowRunFacades.get(topFacadeId);
    WorkflowInstanceFacade topFacade = topFacadeRef.get();

    Dataflow topDataflow = topFacade.getDataflow();

only that here I need the first nested dataflow instead. In addition, I would 
need to retrieve the input/output values of that nested dataflow at runtime. 
The storing activity would have to be executed after the nested dataflow has 
finished, so that all the output values are available.
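
Roughly, I imagine walking the processors of the top-level dataflow until I 
find the one that holds the nested-workflow activity and taking the inner 
dataflow from there. The following is only a sketch of what I mean; the 
interface and method names around the nested-workflow activity (NestedDataflow, 
getNestedDataflow()) are from memory and may not be exactly right:

    // Assumption: the nested-workflow activity implements a NestedDataflow
    // interface that exposes the inner dataflow; exact names may differ.
    Dataflow nestedDataflow = null;
    for (Processor processor : topDataflow.getProcessors()) {
        for (Activity<?> activity : processor.getActivityList()) {
            if (activity instanceof NestedDataflow) {
                nestedDataflow = ((NestedDataflow) activity).getNestedDataflow();
                break;
            }
        }
        if (nestedDataflow != null) {
            break;
        }
    }

    if (nestedDataflow != null) {
        // The port names tell me which values need to be stored; the actual
        // values would still have to be captured at runtime, e.g. through a
        // result listener or the provenance layer.
        for (DataflowInputPort inPort : nestedDataflow.getInputPorts()) {
            result.append("input port: ").append(inPort.getName()).append("\n");
        }
        for (DataflowOutputPort outPort : nestedDataflow.getOutputPorts()) {
            result.append("output port: ").append(outPort.getName()).append("\n");
        }
    }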

I am aware that this functionality will be easy to implement with the 
upcoming Taverna Server, but I also need it in Taverna Workbench.


Regards
Dennis


On 20.07.2010 13:20, David R Newman wrote:
> Hi Dennis,
>
> Unfortunately, one of the much discussed topics around SPARQL endpoints is how
> it might be possible to authenticate users so that they can query over all the
> RDF they are permitted to access.  RDF data is commonly added to the
> triplestores backing SPARQL endpoints as graphs (individual RDF files).
> In the case of myExperiment, these graphs would be sufficiently atomic that
> each user would have permission to access an exact subset of them.
> SPARQL provides a facility to query only over specific graphs, but I don't
> think anyone has ever tested this with a subset of graphs as large as would
> be required here (i.e. in the thousands).  If this were possible, I would
> still be nervous that users might have access to RDF data they are not
> permitted to see, until I had performed significant testing.
>
> A simpler and more robust solution would be to provide users with a service
> that would allow them to download all the myExperiment RDF they are permitted
> to access (zipped, this would be quite small), so that they could then manage
> their own SPARQL endpoint.  This would have the disadvantage of requiring a
> script on a cron job to regularly update their triplestore.  It might be
> possible for myExperiment to provide a trial service that would allow users
> to request that their up-to-date RDF subset be put in a myExperiment-hosted
> triplestore, which they could then query through a SPARQL endpoint only they
> can access.  I will give this some further thought.
>
> Regards
>
> David Newman
>
>    
