Hey Gene,
no worries, I don't take it the wrong way. It is tough to compile a set
of arguments in such a short amount of time, but I'll give it a try.
So forgive me if I repeat myself or am not detailed enough.
In the end there is always a good reason, mostly driven by our own
experience, the production, or the studio toolset.
Like Alan said, I do believe you want to keep this simple and
lightweight, and it is with that in mind that I suggested
those additions. I am not arguing for your tool to become THE standard,
but for it to be generic and flexible enough for studios considering
its integration.
I will try to elaborate on some of my notes.
- units are usually set per show in production, whatever the software
defaults are; in other words, whatever goes through the
pipeline must be expressed in that unit. In real life, nothing is that
perfect, and often a third-party user (individual, dept, software)
tweaks the unit, which leads to data corruption and headaches for the
pipeline TDs if the camera gets exported without considering this. I
would say most of the time this data would be there for pure debugging
purposes, but since your format is intended to be cross-application,
and might even be cross-show, you cannot force/restrict/deduce the
metrics used. Your file might also get lost somewhere in the
middle of the jungle; without this critical piece of information, the
data is good for garbage.
It has happened more than once at work that the anim dept exported
data in a different unit than the one the pipeline was supposed to
support, fucking up the lighters' work, for example.
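To make that concrete, here is a rough Python sketch of what I mean by
carrying the unit in the file so a reader converts instead of guessing.
The "units" key, the conversion table, and the camera structure are all
my own invention, not your spec:

```python
# Hypothetical sketch: store the linear unit in the file and convert on
# read. Key names ("units", "camera", "translate") are invented.
TO_CM = {"mm": 0.1, "cm": 1.0, "m": 100.0, "in": 2.54}

def load_translate_cm(doc):
    """Return the camera translate converted to centimeters."""
    scale = TO_CM[doc["units"]]       # fail loudly on an unknown unit
    return [v * scale for v in doc["camera"]["translate"]]

doc = {"units": "m", "camera": {"translate": [1.0, 2.0, 0.5]}}
print(load_translate_cm(doc))         # -> [100.0, 200.0, 50.0]
```

The point is only that the reader can fail loudly, or convert, instead
of silently mixing meters with centimeters.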
- spec version: this information should be stored in the file itself,
even if the version you are working with is 1.0. Once the first camera
file is exported and published as an asset, the asset and file will
be sealed because of possible dependencies. With that
piece of information, you will be able to make additions/extensions to
the file format without breaking existing files.
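Something as small as this, sketched in Python, is enough (the
"specVersion" key name and the major/minor policy are assumptions on my
part):

```python
# Hypothetical sketch: reject files written by a newer spec than this
# reader understands, while accepting older minor revisions.
SUPPORTED_MAJOR = 1

def check_version(doc):
    """Parse an assumed "specVersion" key, default 1.0 for old files."""
    major, minor = (int(x) for x in doc.get("specVersion", "1.0").split("."))
    if major > SUPPORTED_MAJOR:
        raise ValueError("file written by a newer spec: %d.%d" % (major, minor))
    return major, minor

print(check_version({"specVersion": "1.2"}))   # -> (1, 2)
```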
- I mentioned the scale for pure flexibility: an artist/dept might be
working at a smaller scale than another, and this information is
required to scale your camera transforms accordingly. Actually this
should always be the case: when transforms are saved (not in matrix
form), the scale or globalScale should follow. To transpose this to a
similar issue, transforms are usually baked out from
global space, which pisses off shot modelers. The shot modeler does his
job fixing deformations, the geometry is baked out, but then the rig
is modified and so all the shapes must be done again. Again, pure
convenience and flexibility.
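For the scale point, a minimal Python sketch of what the importer would
do with a stored globalScale (the key name and the direction of the
conversion are my assumptions):

```python
# Hypothetical sketch: a globalScale saved next to decomposed transforms
# lets the importer adapt data to the working scale of the target scene.
def import_translate(doc, target_scale=1.0):
    """Rescale a stored translate from the exporter's working scale."""
    factor = target_scale / doc.get("globalScale", 1.0)
    return [v * factor for v in doc["translate"]]

# Exported by an artist working at 0.5 scale, imported at 1.0:
print(import_translate({"globalScale": 0.5, "translate": [1.0, 0.0, 2.0]}))
# -> [2.0, 0.0, 4.0]
```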
- concerning camera rigs, the idea is not to track down an entire
hierarchy but to be aware of that hierarchy. Cameras are more than
a null in a scene, and even if you're willing to simplify this, you
cannot assume everybody wants to. A side effect of being able to
describe a camera rig to your library, and of this library
understanding it, is that you don't even have to code specific
procedures to extract information, as the library will do it for you
according to the description you've given of the rig.
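What I mean by "the library does it for you": once the rig is described
in the file, generic queries replace per-studio extraction code. A toy
Python sketch, with node names, keys and roles all invented:

```python
# Hypothetical sketch: the file describes the rig hierarchy and which
# node plays which role; a generic reader pulls data out from that.
rig = {
    "nodes": {
        "crane":    {"parent": None},
        "pan_tilt": {"parent": "crane"},
        "camera1":  {"parent": "pan_tilt", "role": "camera"},
    }
}

def find_by_role(rig, role):
    """Names of nodes declared with the given role."""
    return [n for n, d in rig["nodes"].items() if d.get("role") == role]

def ancestry(rig, name):
    """Walk the declared hierarchy from a node up to its root."""
    chain = []
    while name is not None:
        chain.append(name)
        name = rig["nodes"][name]["parent"]
    return chain

print(find_by_role(rig, "camera"))   # -> ['camera1']
print(ancestry(rig, "camera1"))      # -> ['camera1', 'pan_tilt', 'crane']
```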
- high-precision doubles are not necessarily something you want to
output: they waste space, are expensive to compute (double semantics),
and conversions are slow (bytes to doubles). FYI, UltraJSON offers an
option for this.
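With the stdlib you can approximate the same idea by rounding before
serializing; a small Python sketch (the helper name and the default
precision are mine):

```python
# Hypothetical sketch: cap the serialized float precision instead of
# writing full doubles to disk.
import json

def dumps_rounded(obj, ndigits=6):
    """json.dumps with all floats rounded to ndigits first."""
    def walk(v):
        if isinstance(v, float):
            return round(v, ndigits)
        if isinstance(v, list):
            return [walk(x) for x in v]
        if isinstance(v, dict):
            return {k: walk(x) for k, x in v.items()}
        return v
    return json.dumps(walk(obj))

print(dumps_rounded({"fov": 54.432109876543215}, ndigits=4))
```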
- JSON is a huge standard for the web, and maybe for some small
studios, but I am not aware of many studios using it in their pipeline
(they usually prefer standards such as YAML for config files and XML
for whatever else, and when the need for speed comes, Google protobuf).
Actually, some of them ended up creating their own, like DreamWorks or
R+H. Anyway, here I am talking about your architecture: you want to
abstract the backend so that clients can add support for the format
they have chosen in a highly convenient way. My first thought is
support for a binary file format; they don't want to wait on you to
add it.
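By "abstract the backend" I mean something like this tiny Python
sketch, where the library only talks to a serializer interface and a
studio can register its own format (all class and key names are
invented, and pickle here is just a stand-in for a real binary format):

```python
# Hypothetical sketch of a pluggable serialization backend.
import json
import pickle

class Backend:
    def dumps(self, doc): raise NotImplementedError
    def loads(self, blob): raise NotImplementedError

class JsonBackend(Backend):
    def dumps(self, doc): return json.dumps(doc).encode("utf-8")
    def loads(self, blob): return json.loads(blob.decode("utf-8"))

class BinaryBackend(Backend):
    # pickle stands in for whatever binary format a studio chose
    def dumps(self, doc): return pickle.dumps(doc)
    def loads(self, blob): return pickle.loads(blob)

BACKENDS = {"json": JsonBackend(), "bin": BinaryBackend()}

def save(doc, fmt="json"):
    return BACKENDS[fmt].dumps(doc)

doc = {"camera": "SHOT_01_CAMERA_MAIN"}
assert BACKENDS["bin"].loads(save(doc, "bin")) == doc
```

The library never needs to know which backend a studio plugged in.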
- channels are extra properties attached to a scene object (Softimage
object parameters, Maya node attributes, ...).
A lot of information is stored in those, and other applications may
require them to properly rebuild a camera from the data.
Again, you don't know what their specifics will be (naming conventions,
which ones must be ignored, etc. -- it depends a lot on the
artist/dept/studio).
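The simplest shape I can think of for this is a free-form channels
block that each importer filters by its own rules; a Python sketch with
invented channel names:

```python
# Hypothetical sketch: application-specific attributes travel in a
# free-form "channels" block; the importer applies studio rules to it.
channels = {
    "focalLength": 35.0,
    "fStop": 5.6,
    "myStudio_shakeEnabled": True,   # studio-private, to be dropped
}

def import_channels(channels, ignore_prefixes=("myStudio_",)):
    """Drop channels the target pipeline says to ignore."""
    return {k: v for k, v in channels.items()
            if not k.startswith(tuple(ignore_prefixes))}

print(import_channels(channels))  # -> {'focalLength': 35.0, 'fStop': 5.6}
```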
- what you call a tiny file size may not be an acceptable size for
others. Exporting a camera with a frame range spanning 1000 frames
will result in 2 MB on disk just to store a frame range which could be
specified as simply as "1-1001x1". Time is also spent reading that
data. Size is just one argument, as convenience could be; this makes
the data dense, and editing might be as tedious as it is error-prone.
We once needed to export very dense geometries with their internal
structure. Because our pipeline was not designed to handle
so much density, the first try was a disaster, resulting in easily an
hour just to write the data to disk, and a file of something like
150 MB. When you're on a show and working on a fragile network, every
byte counts. JSON was one of the formats tested, and it turned out to
be one of the worst. Anyway, you never know at what scale what you do
will be used.
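Expanding a compact "start-endxstep" range like "1-1001x1" is trivial
on the reader side; a Python sketch (the exact spec syntax is an
assumption, this is just the convention I had in mind):

```python
# Hypothetical sketch: parse a compact "start-endxstep" frame range
# instead of storing one entry per frame.
import re

def expand_range(spec):
    """Expand e.g. "1-1001x1" into the explicit frame list."""
    m = re.fullmatch(r"(-?\d+)-(-?\d+)x(\d+)", spec)
    if not m:
        raise ValueError("bad range spec: %r" % spec)
    start, end, step = (int(g) for g in m.groups())
    return list(range(start, end + 1, step))

frames = expand_range("1-1001x1")
print(len(frames), frames[0], frames[-1])   # -> 1001 1 1001
```

A handful of bytes instead of megabytes, and still trivially editable
by hand.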
- namespaces and ids are kind of related.
Hmm, that's a very big topic, and one which would require more
production-oriented examples.
But the simplest example I could give is:
Jean (layout) exports and publishes camera1, named according to the
pipeline -> SHOT_01_CAMERA_MAIN
Pierre (layout) exports and publishes camera2, named according to the
pipeline -> SHOT_01_CAMERA_MAIN
Paul (light) imports both cameras to work on his scene -> conflict, a
camera gets renamed...
First, I am trying to keep in mind that some studios have no
well-rounded pipeline with a team of ## guys to maintain it.
There might not be a production tracking system nor an asset system
which would prevent this by organizing the tasks.
But anyway, you may have more than one person working on the same shot
in a dept but exporting cameras with generic names which need
to be identified. (Again, that's the simplest case I could think of
right now.)
As I said, it's a complex topic and it depends a lot on how a studio's
pipeline has been designed, though this is a must-have.
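In the Jean/Pierre example above, a stable id plus a namespace next to
the human-readable name is enough to keep both cameras apart; a Python
sketch (the field names and the dept:artist namespace convention are
mine):

```python
# Hypothetical sketch: a unique id and a namespace travel with the
# pipeline-facing name, so two SHOT_01_CAMERA_MAIN cameras can coexist.
import uuid

def publish(name, artist, dept):
    return {
        "name": name,                           # pipeline-facing name
        "namespace": "%s:%s" % (dept, artist),  # disambiguates on import
        "id": str(uuid.uuid4()),                # stable unique identity
    }

a = publish("SHOT_01_CAMERA_MAIN", "jean", "layout")
b = publish("SHOT_01_CAMERA_MAIN", "pierre", "layout")
assert a["name"] == b["name"] and a["id"] != b["id"]
print(a["namespace"], b["namespace"])   # -> layout:jean layout:pierre
```

Paul can then import both without a blind rename, because identity no
longer hangs on the display name alone.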
- metadata is intended to be a paragraph in your spec left blank in
favor of the client. This part is often not covered by a spec except to
define what the data structure in use would be.
As you pointed out, you don't want to add those keys (software,
department, role) to your spec, but let clients define whatever
they want in this open space you designed for them, at a standardized
location. You don't want to give away free control of your data
structure: the attitude of "JSON returns an object, so they can store
this wherever they want" just decreases the interest of having a spec,
as it makes what you will get from reading that file unpredictable.
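Concretely, the spec could reserve one top-level key as the client's
open space and validate only the location, never the contents; a Python
sketch (the key set is invented, not your spec):

```python
# Hypothetical sketch: "metadata" is the one standardized free-form
# location; anything else outside the spec'd keys is rejected.
SPEC_KEYS = {"specVersion", "units", "camera", "metadata"}

def validate(doc):
    """Allow only spec'd top-level keys; metadata contents stay free."""
    unknown = set(doc) - SPEC_KEYS
    if unknown:
        raise ValueError("keys outside the spec: %s" % sorted(unknown))
    return True

doc = {
    "specVersion": "1.0",
    "camera": {},
    "metadata": {"software": "maya-2013", "department": "layout"},
}
assert validate(doc)
```

Readers always know where to look, and writers still store whatever
they want there.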
I think I did understand your will to produce something simple and
generic. I support your initiative.
Unfortunately I don't have much time to contribute more than this, but
I hope I've made my point clear.
My experience comes from the last months I spent developing and
maintaining the interchange format and pipeline used for all our
particle-based systems, which come from and are sent to Houdini, Maya,
Massive, Nuke, etc. This is a fair amount of work, as it's not
just a file, but hundreds all packaged in a bundle, and a library of a
million lines. It also comes from past experience in which I've been in
charge of developing and maintaining layout formats used across entire
shows, from modeling to rendering.
This is a complex topic that requires a lot of work to reach simple
objectives, even for something as simple as a camera.
It would be too easy if this was just about pushing data into a Python
dict and JSONifying it. :)
cheers mate.
--jon
2013/7/10 Steven Caron <[email protected]>
> i hear ya... here is a plugin michelle sandroni wrote for this task...
> might help you work through the code faster for maya and max.
>
> http://www.threesixty3d.com/software/freeware/ms_CameraConverter_v2-2.zip
>
>
> On Wed, Jul 10, 2013 at 2:42 PM, Gene Crucean <
> [email protected]> wrote:
>
>> Yeah I think Alan summed up my thoughts. I just want something that's
>> stupidly simple to work with, incredibly flexible for pipelines of any
>> type... and zero additional bs.
>>
>>