I'm thinking you could render one deep frame from 3D and then render out your left and right cameras in Nuke, 2.5D-to-stereo sort of. Then set the appropriate eye distance in Nuke for preview purposes.
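The per-eye shift is just screen-space disparity as a function of each sample's depth. A minimal sketch of that math in plain C++ (this is not NDK code; the names and the rig model are my own assumptions: a parallel rig converged by sensor shift, with the focal length already converted to pixels):

    // Hypothetical rig description; none of these names come from the NDK.
    struct StereoRig {
        double interaxial;    // eye separation, in the same world units as sample depth
        double focalPx;       // focal length in pixels (focal_mm / filmback_mm * image_width_px)
        double convergenceZ;  // depth at which left/right disparity is zero
    };

    // Horizontal offset, in pixels, to apply to a deep sample at depth z for
    // one eye (eye = -1 for left, +1 for right). For a parallel rig converged
    // by sensor shift, the total disparity is
    //   d(z) = interaxial * focalPx * (1/z - 1/convergenceZ),
    // split half-and-half between the two eyes in opposite directions.
    double sampleOffsetX(const StereoRig& rig, double z, int eye)
    {
        if (z <= 0.0)
            return 0.0;  // degenerate depth: leave the sample where it is
        const double disparity =
            rig.interaxial * rig.focalPx * (1.0 / z - 1.0 / rig.convergenceZ);
        return eye * 0.5 * disparity;
    }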
If you're doing a deep render for the whole shot, I would argue that you'll hurt yourself if storage space is a limiting factor. If the sole purpose of this is to play with the eye-distance setting, I would suggest you just ask for a playblast/flipbook instead of a render, and do your stereoscopic settings in "real" 3D.

I'm curious to know what path you choose!

Cheers,
Elias

On 30 Oct 2014 19:22, "Vincent Olivier" <[email protected]> wrote:

> Hi!
>
> Since 2K deep frames in the EXR format (32-bit) are between 50 MB and 3 GB
> in size and take a long time to render, I was wondering if it was possible
> to create an approximation of a planar stereo pair using a single deep frame
> by horizontally offsetting each sample according to a fake, simulated stereo
> camera rig and then collapsing each offset deep image to a pair of stereo
> images. Such a “deep” transform would take one deep input and either have 2
> deep outputs (that could then each be wired to a deep-to-planar image
> transform) or render the planar images to 2 planar outputs.
>
> Because I couldn’t find any reference to such a technique anywhere, I was
> curious to know if the NDK community was aware of any obvious gotchas
> before trying it. Or even better, if someone has already done it.
>
> Regards,
>
> Vincent
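For reference, the "collapse" step Vincent describes is the standard deep flatten: sort the (already offset) samples of each pixel front to back and accumulate them with the over operation. A rough self-contained sketch, again in plain C++ with hypothetical types, assuming premultiplied colour and samples that don't overlap in depth:

    #include <algorithm>
    #include <vector>

    // Hypothetical sample type: premultiplied RGB plus alpha at a front depth z.
    struct DeepSample {
        float r, g, b, a;
        float z;
    };

    // Collapse one deep pixel to flat RGBA: sort the samples front to back and
    // accumulate them with the over operation. Real deep data with overlapping
    // volumetric samples would need splitting/merging first.
    void flattenPixel(std::vector<DeepSample>& samples, float out[4])
    {
        std::sort(samples.begin(), samples.end(),
                  [](const DeepSample& s0, const DeepSample& s1) { return s0.z < s1.z; });

        out[0] = out[1] = out[2] = out[3] = 0.0f;
        for (const DeepSample& s : samples) {
            const float t = 1.0f - out[3];  // remaining transparency so far
            out[0] += t * s.r;
            out[1] += t * s.g;
            out[2] += t * s.b;
            out[3] += t * s.a;
            if (out[3] >= 1.0f)
                break;  // fully opaque; samples behind can't contribute
        }
    }

The obvious gotchas with the offset-then-collapse approach: shifted samples from several source pixels can land in the same output pixel, so they have to be re-binned per pixel before flattening, and depth edges will leave disocclusion holes where only one eye has data.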
