hi Nitant,
i'm afraid you don't - to tell the honest truth.
that's what deep compositing is actually good for - when set up
correctly, it offers exactly the tools you need for stuff like that.
the way it is right now, though, there's room for improvement on the
comp side regarding the available tools, to put it diplomatically...
apart from that, there's no such thing as a cloud mesh - a point
cloud would be the equivalent. and yes, those files tend to be
fairly big, and Nuke doesn't deal with them in a satisfactory way yet,
if i'm not mistaken. that's why deep comping is basically the way
to go - minus the aforementioned sub-optimal situation...
cheers,
Holger
Nitant Karnik wrote:
Holger,
Thanks for the 'in depth' explanation!
I get what you're saying - that FBX, Alembic and OBJs are the way to go
for static hard surfaces - but how would you handle extracting mattes
from clouds, since exporting geo for a cloud mesh would be either too
dense (if accurate) or too inaccurate (if it's not dense)?
Cheers,
Nitant
On Sun, Feb 24, 2013 at 9:45 AM, Holger Hummel|Celluloid VFX
<[email protected]> wrote:
hey Nitant,
afaik, this is not possible.
to be very honest, right now i can't even think of any advantages
to your suggestion/idea.
if i understand you correctly, you're suggesting something
comparable to FBX or Alembic files for deep pixel data. i don't
think you'd end up with a substantially smaller file compared to a
frame sequence. in your example of a static object and a moving
camera, FBX and the like save a lot of space because an object is
stored once, plus the camera animation - the object is not saved
per frame.
now with deep pixel data, i highly doubt that much of this kind of
space saving is possible. so the only way to reduce size is
compression. to achieve a good compression ratio it should be done
across multiple (or even all) frames, which means every frame needs
to be rendered first - so this would be a post-conversion step.
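the compression point above can be checked with a quick sketch. nothing
here is real deep data - the frames are synthetic bytes - but it shows
why compressing near-identical frames together beats compressing them
one by one:

```python
# Toy check of cross-frame compression: 200 identical "frames"
# (a static object, so the data repeats) compressed together vs.
# each frame compressed on its own. The frame contents are made up.
import zlib

frame = bytes(range(256)) * 16      # 4 KB of synthetic frame data
frames = [frame] * 200              # 200 identical frames

per_frame = sum(len(zlib.compress(f)) for f in frames)
together = len(zlib.compress(b"".join(frames)))

print(together < per_frame)         # True: the joint stream is far smaller
```

the catch is exactly the one described above: the joint stream only
exists once every frame has been rendered, and touching one frame
means recompressing the lot.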
even if this were a useful approach, post-processing a lot of
rendered frames into one big file probably isn't very practical.
for one thing, re-rendering erroneous frames would be basically
impossible, or would mean rebuilding the all-frames-in-one file
from scratch.
what you're imagining is roughly comparable to a multilayer EXR or
stereo EXR. there is no speed-up in using those compared to
single-layer/single-view images. for example, to read layer 3, the
file reader has to work through the data of layers 1 and 2 before
it can read the requested layer 3 data. i admit i don't know 100%
how this works, but apparently a certain amount of time is wasted
this way. that's why i'm a fan of using one EXR file per pass
instead of multilayer EXRs. now imagine this issue spread across a
ton of frames instead of a few layers, and there's probably a lot
more file reading/seeking time wasted.
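the access pattern being guessed at here can be illustrated with a toy
model. this is not how OpenEXR files are actually laid out (real files
carry offset tables, and channels are grouped into compressed chunks) -
it only models the sequential-scan behaviour the email speculates
about, with invented layer names and sizes:

```python
# Toy model: layers stored back to back, so reaching layer 3 means
# traversing layers 1 and 2 first. All names and sizes are invented.
layers = [("diffuse", 4096), ("specular", 4096), ("depth", 4096)]

def bytes_scanned_multilayer(target):
    """Bytes traversed to reach `target` in one concatenated file."""
    scanned = 0
    for name, size in layers:
        scanned += size
        if name == target:
            return scanned
    raise KeyError(target)

def bytes_scanned_per_pass(target):
    """With one file per pass, only the requested pass is read."""
    return dict(layers)[target]

print(bytes_scanned_multilayer("depth"))  # 12288 - all three layers
print(bytes_scanned_per_pass("depth"))    # 4096 - just the one file
```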
i'm not a programmer, so what i wrote might not be 100% correct.
but my 'binary sense' somehow tells me i got the basics right ;)
so for what you'd like to achieve, i suggest going the FBX, OBJ
sequence or Alembic way if you need to extract masks or do similar
tasks - in case you don't already have those in the rendered frames
anyway.
cheers,
Holger
On 24.02.2013 10:22, Nitant Karnik wrote:
Hey!
I was wondering if there was a way to write out one deep image
file per object for the entire length of a shot rather than a
deep image sequence... almost like a kind of DeepGeo?
Let's say the element you were rendering deep data for was a
static object like a building, light post, or a static cloud -
couldn't you pre-calculate all angles along the path of the camera
for the duration of the shot in one file? I would imagine that, in
comp terms, it would almost be like 'max merging' every deep frame
together and rendering that out?
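the 'max merging' idea - taking the union of deep samples across
frames and dropping repeats - might look roughly like this sketch. the
data layout (a dict of pixel -> sample list) and the duplicate
tolerance are invented for illustration; this is not any actual Nuke
or renderer API:

```python
# Sketch of merging deep samples across frames: union the samples
# per pixel, skipping near-duplicates, so static samples that
# repeat in every frame are stored only once.

def merge_deep_frames(frames, tol=1e-6):
    """frames: list of dicts mapping (x, y) -> list of (depth, value).
    Returns one dict with the union of samples, duplicates removed."""
    merged = {}
    for frame in frames:
        for pixel, samples in frame.items():
            bucket = merged.setdefault(pixel, [])
            for depth, value in samples:
                if not any(abs(depth - d) < tol and value == v
                           for d, v in bucket):
                    bucket.append((depth, value))
    for bucket in merged.values():
        bucket.sort()                 # keep samples in depth order
    return merged

frame_a = {(0, 0): [(1.0, 0.5), (2.0, 0.25)]}
frame_b = {(0, 0): [(1.0, 0.5), (3.0, 0.1)]}   # first sample repeats
print(merge_deep_frames([frame_a, frame_b]))
# {(0, 0): [(1.0, 0.5), (2.0, 0.25), (3.0, 0.1)]}
```

for a truly static object the merged result holds each sample once,
which is the hoped-for saving; for anything that moves, the per-pixel
sample lists just keep growing.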
I think ultimately having 1 file of deep data would be smaller and
less of a network hit than having 200 files of deep data, with
almost 50% (or more) of it being redundant information.
I'm sure someone's thought of this already... any luck
implementing it?
Nitant
_______________________________________________
Nuke-users mailing list
[email protected], http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
--
Holger Hummel - [email protected]
Celluloid Visual Effects GmbH & Co. KG
Paul-Lincke-Ufer 39/40, 10999 Berlin
phone +49 (0)30 / 54 735 220 - [email protected]