Hey Nitant,

although your idea is nice, there are some points that make an implementation of
this rather complicated, if not impossible.
What you describe is basically a point cloud, and the problem is that a point
cloud is not an image. If you have ever tried rendering a point cloud with e.g.
Nuke's scanline renderer you will have seen that you get holes. To compensate for
those holes you have to use some kind of volumetric/voxel rendering.
The question is whether you really gain flexibility and turnaround time that
way. For the moment I would suggest still doing any camera angle changes on
the 3d side of things.
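To make the hole problem concrete, here's a toy sketch (plain Python, nothing to do with Nuke's actual renderer; the grid size and splat radius are made up for illustration). Projecting sparse points into a pixel grid leaves pixels no sample lands on, and a splat radius is the crudest way to close them:

```python
# Toy illustration only (NOT Nuke's renderer): rasterise a point-cloud
# sampling of a surface into a small pixel grid. Plain point projection
# leaves empty pixels ("holes"); splatting each point over its
# neighbours is the simplest volumetric-style fix.

GRID = 8  # 8x8 output "image", arbitrary size for the demo

def render_points(points, splat=0):
    """Rasterise (x, y) points in [0,1)^2 into a GRID x GRID coverage mask.
    splat > 0 also fills neighbouring pixels, like a crude splat/voxel fill."""
    img = [[0] * GRID for _ in range(GRID)]
    for x, y in points:
        px, py = int(x * GRID), int(y * GRID)
        for dy in range(-splat, splat + 1):
            for dx in range(-splat, splat + 1):
                qx, qy = px + dx, py + dy
                if 0 <= qx < GRID and 0 <= qy < GRID:
                    img[qy][qx] = 1
    return img

def holes(img):
    """Count pixels no sample covered."""
    return sum(row.count(0) for row in img)

# 16 samples of a surface that should cover the whole frame:
points = [(i / 4 + 0.1, j / 4 + 0.1) for i in range(4) for j in range(4)]

print(holes(render_points(points)))           # plain points: 48 of 64 pixels empty
print(holes(render_points(points, splat=1)))  # splat radius 1: no holes left
```

A real splat/voxel renderer would also weight and depth-sort the samples, but the coverage problem it has to solve is the same.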
Pointcloud compositing is still a distant wish ;) 

cheers
Patrick

----- Original Message -----
From: [email protected]
To: [email protected]
Date: 25.02.2013 17:18:58
Subject: Re: [Nuke-users] Deep Scene Image


> Holger,
> 
> Thanks for the 'in depth' explanation!
> 
> I get what you're saying, that FBX, Alembic and OBJs are the way to go for
> static hard surfaces, but how would you handle extracting mattes from
> clouds, since exporting geo for a cloud mesh would be either too dense (if
> accurate) or too inaccurate (if it's not dense)?
> 
> Cheers,
> Nitant
> 
> 
> 
> 
> On Sun, Feb 24, 2013 at 9:45 AM, Holger Hummel|Celluloid VFX <
> [email protected]> wrote:
> 
>>  hey Nitant,
>> 
>> afaik, this is not possible.
>> 
>> to be very honest, right now i cannot even think of any advantages in your
>> suggestion/idea.
>> 
>> if i understand you correctly, you're suggesting something comparable to
>> FBX or Alembic files
>> for deep pixel data. i don't think that you'd end up with a substantially
>> smaller file compared
>> to a frame sequence. in your example of a static object and a moving
>> camera, FBX and the like
>> save a lot of space because e.g. an object is saved once, plus the
>> camera animation; the
>> object is not saved per frame. now with deep pixel data, i highly doubt
>> that there's a lot of this
>> kind of space saving possible. so the only way to reduce size is
>> compression. to achieve a good
>> amount of compression it should be done across multiple (or even all)
>> frames. which means
>> every frame needs to be rendered first. so this would be a post-conversion.
>> even if this were a useful approach, post-processing a lot of rendered
>> frames into one
>> big file is probably not very reasonable.
>> next, re-rendering erroneous frames would be basically impossible or
>> result in the need to
>> re-build the all-frames-in-one-file from scratch.
>> 
>> what you're imagining is roughly comparable to a multilayer-exr or
>> stereo-exr. there is no
>> speed-up in using those compared to single layer/view images. for example,
>> to read layer 3,
>> the file reader needs to go through the file across layers 1 and 2 until
>> it can read the requested
>> layer 3 data. i admit, i don't know 100% how this works but apparently
>> there is a certain amount
>> of time wasted this way. that's why i'm a fan of using exr files per pass
>> instead of multilayer-exrs.
>> now imagine this issue being spread across a ton of frames instead of a
>> few layers and there's
>> probably a lot more file reading/seeking time wasted.
>> 
>> i'm not a programmer, so what i wrote might not be 100% correct. but my
>> 'binary sense' somehow
>> tells me that i got the basics right ;)
>> 
>> so for what you'd like to achieve, i suggest going the FBX, OBJ sequence
>> or Alembic way if you need
>> to extract masks or similar tasks - in case you don't have those in the
>> rendered frames already anyway.
>> 
>> cheers,
>> Holger
>> 
>> 
>> On 24.02.2013 10:22, Nitant Karnik wrote:
>> 
>> Hey!
>> 
>>  I was wondering if there was a way to write out one deep image file per
>> object for the entire length of a shot rather than a deep image sequence...
>> almost like a kind of DeepGeo?
>> 
>>  Let's say the element you were rendering deep data for was a static
>> object like a building, light post, or a static cloud, couldn't you
>> pre-calculate all angles along the path of the camera for the duration of
>> the shot in one file?  I would imagine it would almost be like in comp
>> terms, 'max merging' every deep frame together and rendering that out?
>> 
>>  I think ultimately having 1 file of deep data would be smaller and less
>> of a network hit than having 200 files of deep data with almost 50% (or
>> more) of it being redundant information.
>> 
>>  I'm sure someone's thought of this already... any luck implementing it?
>> 
>>  Nitant
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Nuke-users mailing list
>> [email protected], http://forums.thefoundry.co.uk/
>> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
>> 
>> 
>> 
