OK - here is the way I do it. We are all Softimage, so this is a 
rig/animation/effects -> rendering caching flow using an in-house asset system 
that includes rig versions and render versions of assets.

As it is Softimage, everything is emdl - the company I last worked at did a 
workflow from Softimage -> Houdini that used bgeos - otherwise the caches 
themselves are MDDs.

The farm is used to generate the caches, all done with scripts through RR. 
When the caches are written, I write out a text file (it could just as easily 
be an XML) that includes the asset name, version and the path to the cache 
files for each asset written at that time. The loader script lets the user 
find these text files and load them - they are parsed and, using that info, 
the script loads the render model and applies PO ops with paths to the 
caches. All quite simple and pretty easy to set up. If you don't have an 
asset tracking system in place, you can do what I did for Zambezia: I 
detected which character I was caching and wrote the path to the render 
model into the text file, so the loader script could use that to load the 
render model. Now I don't need that, as our asset system knows where 
everything is.
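For illustration only, here is a minimal sketch of what that per-publish text file and its parsing might look like - the pipe-delimited format and field names here are hypothetical, not our actual manifest layout:

```python
# Hypothetical manifest format: one "asset|version|cache_path" line per
# asset cached in this publish. The loader parses this, looks up each
# asset's render model, and applies a PO op pointing at cache_path.

def write_manifest(path, entries):
    """Write one line per cached asset. entries is a list of
    (asset_name, version, cache_path) tuples."""
    with open(path, "w") as f:
        for asset, version, cache in entries:
            f.write("%s|%s|%s\n" % (asset, version, cache))

def read_manifest(path):
    """Parse the manifest back into (asset_name, version, cache_path)
    tuples, skipping blank lines."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            asset, version, cache = line.split("|")
            entries.append((asset, version, cache))
    return entries
```

An XML or JSON file would do the same job; the only real requirement is that the loader can recover the asset name, version and cache path for each asset written in that caching pass.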

If you like, I can send you my loader script - it is in VBS, as that is what 
I was using when we started Zambezia; when I have the time I will redo it in 
Python. I can also send you an example of the text file I write out when 
caching that is used by the loader script.

We used MDDs as the loader in Softimage has an option to use spline 
interpolation that works quite well for motion blur.

Another route that is well worth looking at is Alembic - the Exocortex guys 
have an awesome version for Softimage that I tested, and would have gone for 
it if we did not have the budgetary constraints we did at the time. I don't 
know what is available for Maya on that front, but I assume there will be 
something worthwhile.

Cheers

S.


Sandy Sutherland<mailto:[email protected]> | Technical 
Supervisor
________________________________
From: [email protected] 
[[email protected]] on behalf of Nick Angus 
[[email protected]]
Sent: 29 November 2012 04:51
To: [email protected]
Subject: Pointcache workflow

Hi folks,

I am after a bit of advice on a pointcache pipeline. As we seem to be able 
to regularly secure the services of only Maya animators and riggers, we have 
decided for now to keep that side of things Maya oriented.
I am thinking of building a basic publishing system that takes a model and 
exports it to FBX on completion; the FBX is then brought into Softimage for 
shading/fur/look dev, and also imported into Maya for rigging. The asset 
would then appear as one in our publishing system, but the animation team 
would be importing a Maya file to work with and the lighters would be 
referencing an emdl into Softimage.

This is just where my head is at right now anyway; my goal then would be to 
geo cache out of Maya. The problem I am striking is the workflow to then 
apply the cache - I am sure it is scriptable, but it seems that manually you 
must apply it to one object at a time.

I would be really keen to hear how other people (if any) are doing this. It 
seems the simplest way in my mind, as you are never transferring geo or 
shaders. Of course this workflow requires every object to be animated via 
bones or clusters so there is animation data getting to the points…

Sorry for the long winded post, but this is the best place I can think of to 
ask these questions…

Cheers, Nick
