Keep in mind that if you’ve rendered out any sort of non-deep pass with
filtering, those filtered values will be wrong and likely introduce artifacts.
On Jul 24, 2014, at 6:30 AM, Johannes Hezer j.he...@studiorakete.de wrote:
Hi David,
As Deke said, deep compositing is something different in Nuke.
Although that’s not a bad idea, it’s also not correct: when you add a new
sample between red and black to accommodate B’s blue, you’re making the
assumption that there’s actually “data” in that space, and moreover that the
red and black samples are describing some sort of start and end of a
Deep images will get you a little closer, but probably not as much as you’d
hope.
As someone intimated previously, it’s better if you can apply the motion
blur and defocus at the same time; even with this solved you still have the
issue of fully occluded areas of objects. There’s a
I believe the VRay Z depth render element lets the user specify what depth is
black (near) and what depth is white (far) - so I’m guessing you’ll have to
chat with your CG department to find out what they’re delivering.
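If you then need scene-unit depth back in comp, the mapping inverts easily -
a hedged sketch for an Expression node, where near and far are placeholder
values you’d get from your CG team:
depth.Z: near + depth.Z * (far - near)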
On Dec 5, 2013, at 7:50 AM, Ron Ganbar ron...@gmail.com wrote:
Thanks
The biggest gotcha is avoiding the shower-door look (or grease-on-the-lens?),
i.e. as the camera moves you want the brush strokes to stay locked to the geo
rather than the lens.
If you can find a way to generate brush stroke positions from the CG world to
drive the watercolour look it’s going to feel a lot nicer.
Speak with Vlado @ Chaos Group.
On 2012-12-19, at 6:45 PM, mattdleonard nuke-users-re...@thefoundry.co.uk
wrote:
Hi,
I was wondering where you get the Nuke .vrst file reader from.
Many thanks,
Matt
Sphere VFX Ltd
3D . VFX . Training
www.spherevfx.com
We had the same problem, but once we disabled all but the single ethernet port
it was fine. Ctrl-click on the device in the network preferences and disable
the unused hardware. Probably useless if you're using both ethernet ports.
On 2012-12-27, at 11:23 AM, Howard Jones mrhowardjo...@yahoo.com wrote:
?
Subject: Re: [Nuke-users] Deep to points...
On 2012-11-30, at 1:56 PM, Colin Doncaster colin.doncas...@gmail.com wrote:
Have you tried starting Nuke with -m 16 and running the same test?
This tells Nuke how many threads to use rather than letting it default to the
number of cores your computer has.
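For example, from a shell (the script path is just a placeholder):
nuke -m 16 /path/to/comp.nk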
Cheers
On 2012-12-05, at 10:13 AM, Neil Scholes n...@uvfilms.co.uk wrote:
Hi Peter - thanks
Yes ext4
ok with your suggested setup -
Scene units? What are they using in Houdini vs. Nuke?
On 2012-11-30, at 1:34 PM, Henning Glabbart henni...@themill.com wrote:
Hi everyone,
this is my first post on the mailing list so please don't yell at me if I'm
not following the proper protocol :)
Anyway, I have a question about
You could use a DeepExpression to create a version of the image that turns all
samples with an alpha of 0.0 into samples with an alpha of 1.0 and black RGB,
with everything else set to 0.0 - then use that as a deep holdout with remove
empty pixels on.
Maybe. I haven't tried it with Nuke's tools - but a
DeepExpression {
 rgba.green ((deep.front - near) / (far - near)) * rgba.alpha
 rgba.blue ((deep.front - near) / (far - near)) * rgba.alpha
 name DeepExpression1
 selected true
 xpos 202
 ypos -32
}
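For the alpha-inversion holdout idea above, a rough sketch of the per-sample
expressions (untested, and the exact DeepExpression syntax here is an
assumption on my part):
rgba.red 0
rgba.green 0
rgba.blue 0
rgba.alpha rgba.alpha == 0 ? 1 : 0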
On 2012-09-25, at 10:24 PM, Colin Doncaster colin.doncas...@gmail.com wrote:
The Z returned will be ugly ol' regular
What are the permissions of the file init.py?
On 2012-09-26, at 12:03 PM, Nathan Rusch nathan_ru...@hotmail.com wrote:
What does the terminal say? That dialog doesn’t display the actual traceback
or error...
-Nathan
On Wednesday, September 26, 2012, at 6:46 AM, davidrobinsonGFX wrote:
The Z returned will be ugly ol' regular Zed.
You would want to copy each sample's depth into the RGB of the sample, then do
the DeepToImage.
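A quick sketch of that step in a DeepExpression (again untested, syntax
assumed), followed by a DeepToImage:
rgba.red deep.front
rgba.green deep.front
rgba.blue deep.front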
On 2012-09-25, at 9:23 PM, Frank Rueter fr...@beingfrank.info wrote:
Not sure what you mean. DeepToImage should give you an rgbaz image,
Hi -
Give it a constant bg at the resolution you want - I just created one at 2K
full aperture and it fixes the issue. You're only giving the scanline renderer
a 3D scene, so it's making assumptions about the output format. Giving it an
empty background avoids those incorrect assumptions.
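In Python, a minimal sketch of the same fix (the node name and the bg input
index are assumptions):
import nuke
# an empty 2K full-aperture constant to act as the background
bg = nuke.nodes.Constant(format='2K_Super_35(full-ap)')
sr = nuke.toNode('ScanlineRender1')  # hypothetical node name
sr.setInput(0, bg)  # assuming input 0 is the bg pipe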
Cheers
in use.
Ok, thanks for any further help and suggestions.
-Adam
On 07/11/2012 07:24 PM, Colin Doncaster wrote:
You should be able to avoid the double defocus if you copy the alpha to a
different channel first, do the holdout, do the defocus, and make sure your
pseudo alpha channel
Hi there -
It appears that the discrete/premultiply options in the deep image reader are
swapped, if the example source code is anything to go by:
bool raw = _dtexReaderFormat->_raw;
bool discrete = _dtexReaderFormat->_premult;
bool premult = _dtexReaderFormat->_discrete;
or is this
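If it really is a straight swap, the intended mapping would presumably be:
bool raw = _dtexReaderFormat->_raw;
bool discrete = _dtexReaderFormat->_discrete;
bool premult = _dtexReaderFormat->_premult;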
Have you seen
http://opensource.dashing.tv/python-dirtt/
On 2012-04-03, at 12:58 PM, Bill Gilman wrote:
The Idea behind CLUTCH
CLUTCH is a highly flexible descriptor for directory structures and
filenames, using an open source standard to . By allowing the user to
pre-describe where he
What was the benchmark?
On 2012-02-27, at 7:59 PM, Gavin Greenwalt wrote:
So I've been trying to improve our GigE performance rendering Nuke comps with
NAS footage, and I noticed that my network performance never peaked above
100 Mbps. Thinking there was surely something wrong with my
I think you will find that although they might have more resources to manage
the problems, SPI and any other large facility still have them.
Although we haven't completely integrated OCIO, what it offers is the choice of
not starting from scratch, and knowing that a studio with an intelligent
You're probably better off with an XBox Kinect than stereo images.
The paper you reference still requires a sequence of images to be captured,
it's pretty similar to photogrammetry but uses the stereo depth maps to help
resolve the depth.
How cheap do you want the geometry? You could
Thanks for the contribution Ivan - this is a great start and hopefully some of
us can contribute to help expand the support.
All the best for the New Year!
On 2011-12-31, at 4:01 AM, Ivan Busquets wrote:
Happy holidays, Nukers!
Sorry for the spam to both the users and dev lists, but I
Hi Micah,
Is the issue that the id pass isn't in the dtex file or is the issue that
you're not too sure how to use it in Nuke? Are you certain you're outputting id
values into the dtex file?
With all that said, looking at the dtex reader code it looks like Nuke only
supports RGBA in dtex
What's to say that's not already working?
Nuke 6.3 is still in beta though...
On 2011-07-13, at 3:23 PM, Jonathan Egstad wrote:
How soon can you get a version working for 6.3 which accepts deep data...?
On Jul 11, 2011, at 12:43 PM, Colin Doncaster wrote:
I can get you a trial license
Colin Doncaster wrote:
Hi Ari,
Any input channel can be used to control the scale of the effect.
Cheers
On 2011-07-13, at 2:35 PM, ari Rubenstein wrote:
Colin,
Can you confirm whether the tool can now scale the bokeh based on a greyscale
input image (such as a depth key)?
Thx
I can get you a trial license of http://peregrinelabs.com/bokeh if you want to
give it a go. I'm sure the tool can be extended to support what you're trying
to do.
cheers
On 2011-07-11, at 12:18 PM, a...@curvstudios.com wrote:
Anyone have a nice solution / tool for animated growing /
At the time we were a little surprised, as I didn't think DPX files *could* (or
should) be in sRGB.
--
Colin Doncaster
Co Founder / Head of VFX
Peregrine VFX
www.peregrinevfx.com
On 2011-05-10, at 2:14 PM, Scott Squires wrote:
We've gotten in some DPX files that came from an outside Flame source