Thanks to all of you who were at the workshop for letting me attend 
remotely.  I found it quite valuable, and hope you didn't mind me asking 
you all to repeat the questions too often.  And special thanks to 
Catherine for being the driving force behind getting it all set up!

I had a couple extra thoughts on some of the presentations, which I 
probably would have chatted with you about had I been there, but since I 
wasn't, here they are in email instead.

The OpenCell network visualization tool:  
     I agree that there is definitely a need both for automatically-
generated network visualization and for human-arranged network 
visualization.  One thing that might help here is a standard for 
exchanging, or at least storing, these layouts--I know SBML has a 'layout 
extension' that stores the visual representation of SBML models.  I don't 
know if a CellML extension is the right way to go or if that information 
is better suited to a separate file, but you could at least look at that 
extension for an example of the information you'd want to store.
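
Just as a strawman for the separate-file option: the sidecar file wouldn't 
need to be anything fancy.  Everything below (file names, key names) is 
made up, purely to show the kind of information you'd want to capture:

    import json

    # Hypothetical sidecar layout for a CellML model: positions are keyed
    # by component name, so the layout survives edits to the math itself.
    layout = {
        "model": "beeler_reuter_1977.cellml",
        "components": {
            "membrane":       {"x": 120, "y": 40},
            "sodium_current": {"x": 260, "y": 150},
        },
        "connections": [
            {"from": "membrane", "to": "sodium_current", "waypoints": []},
        ],
    }

    with open("beeler_reuter_1977.layout.json", "w") as f:
        json.dump(layout, f, indent=2)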

CellML 1.2:
    One thing I noticed was the claim that events could be stored in 
CellML as piecewise formulas.  I'm pretty sure that won't work for SBML 
events, which are 'fire once' events rather than 'true while' events.  
Maybe one could come up with a piecewise hack to store SBML events, but if 
a goal is to become more amenable to SBML-translated models, you might 
want to think about how best to translate SBML events.  Or maybe I'm wrong 
and there's already a way to do it?
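
The difference shows up as soon as the trigger condition goes false 
again.  Here's a toy sketch of the two semantics in plain Python (no SBML 
library, all names made up):

    def trigger(t):
        return 1.0 <= t <= 2.0      # condition true only during [1, 2]

    # Piecewise ('true while') semantics: x tracks the condition.
    def x_piecewise(t):
        return 5.0 if trigger(t) else 0.0

    # SBML ('fire once') semantics: the assignment happens on the
    # false-to-true transition of the trigger, and then persists.
    x_event = 0.0
    prev = False
    for step in range(50):
        t = step * 0.1
        curr = trigger(t)
        if curr and not prev:       # trigger edge: fire exactly once
            x_event = 5.0
        prev = curr

    print(x_piecewise(3.0))         # 0.0 -- back to the 'else' branch
    print(x_event)                  # 5.0 -- the event's effect remains

So a single piecewise can't express the persistence; you'd need something 
with memory of past trigger crossings.
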
    Another thing I noticed was a reluctance to add too many new features 
to the language, for fear that interpreters might not be able or willing 
to handle them.  One way to mitigate this would be to let a model declare 
somewhere in its header whether it requires a given feature--an 
interpreter could then cleanly report that it cannot correctly interpret 
that particular model, while still being able to interpret other 1.2 
models.
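
For what it's worth, SBML Level 3 does something along these lines: every 
package a model uses carries a 'required' attribute, so a tool can tell 
whether it's safe to ignore the parts it doesn't understand.  On the 
interpreter side it could be as simple as the sketch below (the 
<requiredFeature> header element is made up--nothing like it exists in 
CellML today):

    import xml.etree.ElementTree as ET

    # Features this (hypothetical) interpreter knows how to handle.
    SUPPORTED = {"events", "delay-differential-equations"}

    def can_interpret(model_file):
        """Check a made-up <requiredFeature name="..."/> header listing
        the features the model cannot be correctly run without."""
        root = ET.parse(model_file).getroot()
        required = {f.get("name") for f in root.iter("requiredFeature")}
        missing = required - SUPPORTED
        if missing:
            print("Cannot correctly interpret model;"
                  " unsupported features:", sorted(missing))
        return not missing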

Round-trip translators:  
    I think I'm coming around to the theory that we don't need non-lossy 
translators--that's probably an unattainable goal.  What might be more 
helpful instead is round-trip re-integrators.  What I mean by that is that 
nobody actually needs a way to round-trip a model that has never been 
modified, because you still have the original.  Instead, what you need is 
a way to translate a model to a new format, modify it in that new format, 
translate it back to the original format, and end up with a model that is 
identical to the original model except for those bits that have been 
changed.  I think the best hope for this is if you take the original model 
and the round-tripped-and-modified model, and re-integrate the two.

Obviously, the trickiest bit of this is if you've modified the model by 
deleting something--you need to know that the round-tripping was *able* to 
keep the information, so that if it's gone, that means it's been deleted 
on purpose.  But everything else can stay.  Similarly, you need to know if 
the format of the information will have changed, and if so, how to map the 
new format to the old format.  This may well be trickier still--I 
certainly don't know off the top of my head what a good algorithm would be 
to track CellML's modularity through modularless SBML, for example.  But 
perhaps with the Hierarchical Modeling extension up and running in SBML, 
this would become attainable?
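
To make the re-integration idea concrete, here's a minimal sketch in 
Python, treating a model as a flat set of identifiable elements.  The big 
assumption is that the translator can report which elements it was able 
to carry through the round trip--that 'preserved' set is what lets you 
tell a deliberate deletion apart from a translation loss:

    def reintegrate(original, roundtripped, preserved):
        """Merge a round-tripped-and-modified model into the original.

        original:     {element_id: content} for the original model
        roundtripped: {element_id: content} after translate, edit,
                      translate back
        preserved:    ids the translator was *able* to carry through
                      the round trip
        """
        merged = {}
        for eid, content in original.items():
            if eid in roundtripped:
                merged[eid] = roundtripped[eid]  # possibly modified
            elif eid not in preserved:
                merged[eid] = content            # lost in translation: keep
            # else: preserved but gone -> deleted on purpose, drop it
        for eid, content in roundtripped.items():
            if eid not in original:
                merged[eid] = content            # genuinely new element
        return merged

    # Toy example: 'annotation' can't survive the round trip, 'ode2' was
    # deliberately deleted in the other format, 'ode1' was modified.
    original     = {"ode1": "dx/dt = -x", "ode2": "dy/dt = y",
                    "annotation": "..."}
    roundtripped = {"ode1": "dx/dt = -2*x"}
    preserved    = {"ode1", "ode2"}
    print(reintegrate(original, roundtripped, preserved))
    # {'ode1': 'dx/dt = -2*x', 'annotation': '...'}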

At any rate, I'd be interested in others' thoughts on this.  I'm certainly 
not saying that translators should discard information they could have 
saved--the more information they save, the better.  I just think there 
will always be information they can't save (even if it's just some 
proprietary annotations), and re-integration seems like the way to go here.

-Lucian