Re: [Nuke-users] deep masking

2014-06-17 Thread Ben Dickson
It's more difficult than it initially seems..

The obvious thing is to use the DeepMerge set to holdout to punch a hole
in your A input, invert the matte and punch the inverse hole in the B
input.. but when you merge these you get the dark fringing where your
matte is semi-transparent, which is the same problem solved by adding
the two images together, or using the disjoint-over

..but, you inherently cannot do that with deep samples - when the deep
image is flattened, all the samples for a pixel are over'd.
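To make that concrete - a minimal sketch of the flatten step in plain
Python, not Nuke's deep API; the (depth, rgba) layout is just for
illustration:

# Flatten one pixel's deep samples by over'ing them front to back.
# samples: a list of (depth, (r, g, b, a)) tuples, premultiplied.
def flatten(samples):
    out = [0.0, 0.0, 0.0, 0.0]
    for _, (r, g, b, a) in sorted(samples):
        k = 1.0 - out[3]  # how much the layers in front let through
        out[0] += r * k
        out[1] += g * k
        out[2] += b * k
        out[3] += a * k
    return tuple(out)

Once the samples collapse into a single value like this, there is no
per-sample mix left to recover - hence the fringing.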


There is a DeepKeyMix gizmo on Nukepedia, but it is very destructive -
it flattens the image with DeepToImage, applies a regular KeyMix and
then uses the DeepRecolour.. which is probably okay if you are only
rendering deep-opacity, but bad if you are rendering deep-RGB.


I had a rough idea of how to write a plugin to mix between two deep
images, but haven't got around to implementing it.. so.. there might be
some other fundamental flaw in the approach, but..

For two inputs A and B:

If all the samples are at the same depth in both A and B, easy, the
output samples are a simple mix between each sample of A and B. This is
the case for, say, keymixing in a DeepColorCorrect (where only the
existing sample values will be changed)

If the samples are not aligned, things are more complicated (e.g. mixing
two separate renders). For each sample you need to make a corresponding
sample at the same depth in the other image, by interpolating between
the nearest two samples.


In other words, if you have two images like this:

A samples: empty empty red  empty black
B samples: empty empty blue blue  blue

1) For the first two empty samples, nothing is done
2) For the A:red and B:blue sample pair, output sample is a simple mix
3) For the A:empty and B:blue sample pair,
  insert a sample in A which is a mix between the red and black
  samples. Then mix between that and B's blue sample

I think the case which would cause artefacts is when your samples have
large distance-gaps between them: like keymixing between a foreground
tree and the sky - the process of creating the new samples will create
tree coloured samples at the sky depth and vice-versa
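
A rough sketch of that idea in plain Python (not the deep plugin API;
the (depth, rgba) sample layout and names are illustrative, and it
assumes both inputs are non-empty):

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# Interpolate a value at `depth` from a sorted (depth, value) list,
# clamping beyond the first/last sample.
def value_at(samples, depth):
    if depth <= samples[0][0]:
        return samples[0][1]
    if depth >= samples[-1][0]:
        return samples[-1][1]
    for (d0, v0), (d1, v1) in zip(samples, samples[1:]):
        if d0 <= depth <= d1:
            return lerp(v0, v1, (depth - d0) / (d1 - d0))

# Mix two pixels' deep samples; where one input has no sample at a
# given depth, a counterpart is interpolated from its neighbours.
def deep_keymix(a, b, mix):
    return [(d, lerp(value_at(a, d), value_at(b, d), mix))
            for d in sorted({d for d, _ in a} | {d for d, _ in b})]

# A: red at depth 2, black at depth 4 (nothing at 3).
# B: blue at depths 2, 3 and 4.
A = [(2.0, (1.0, 0.0, 0.0, 1.0)), (4.0, (0.0, 0.0, 0.0, 1.0))]
B = [(2.0, (0.0, 0.0, 1.0, 1.0)), (3.0, (0.0, 0.0, 1.0, 1.0)),
     (4.0, (0.0, 0.0, 1.0, 1.0))]
print(deep_keymix(A, B, 0.5))

The depth-3 output is case 3 above (A contributes an interpolated
red/black sample), and the clamping in value_at is exactly where the
tree/sky artefact would come from.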

- Ben

On 17/06/14 12:40, Frank Rueter|OHUfx wrote:
 Hi peeps,
 
 I'm just trying to figure out how to merge two deep images based on a
 deep mask channel, without getting fringing.
 Been playing with DeepExpression but don't know if I can reference
 samples in there (the documentation is rather sparse to say the least).
 Basically I need a true, volumetric DeepKeyMix.
 
 Any ideas?
 
 Cheers,
 frank
 
 -- 
 ohufx http://www.ohufx.com | vfx compositing
 http://ohufx.com/index.php/vfx-compositing | workflow customisation
 and consulting http://ohufx.com/index.php/vfx-customising
 
 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 

-- 
ben dickson
2D TD | ben.dick...@rsp.com.au
rising sun pictures | www.rsp.com.au
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


[Nuke-users] Nuke to After Effects exporter (script available)

2014-06-17 Thread Simon Björk
Hi all,

this comes up on the list every now and then, so I thought I'd share a
script I wrote a while ago.

The script will export 3D nodes (camera/card/axis) from Nuke and convert
them to camera/solid (3D)/null layers in After Effects.
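
(Not Simon's actual code, but the core of an exporter like this is
per-frame sampling of the transform knobs. A hedged sketch in Nuke
Python - the node name is assumed, purely for illustration:)

# Sample a Camera node's animation frame by frame; an exporter would
# then rewrite these values as After Effects keyframes in a .jsx file.
import nuke

node = nuke.toNode("Camera1")  # assumed node name
root = nuke.root()
first = int(root["first_frame"].value())
last = int(root["last_frame"].value())

keys = []
for f in range(first, last + 1):
    t = node["translate"].getValueAt(f)  # [x, y, z]
    r = node["rotate"].getValueAt(f)     # [rx, ry, rz], in degrees
    keys.append((f, t, r))

# Note: AE's Y axis points down and positions are in pixels, so a real
# exporter also has to flip/scale these values before writing the .jsx.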

Note that the script isn't production tested at all, and has mostly been
tested on Windows. If you encounter any bugs, let me know.

Find the script here (along with some other tools):
http://bjorkvisuals.com/tools/the-foundrys-nuke

Cheers!



---
Simon Björk
Compositor/TD

+46 (0)70-2859503
www.bjorkvisuals.com
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] glib crashes - anybody using Nuke 8 on (K)Ubuntu 12.10?

2014-06-17 Thread Matt Griffith
Curious if anyone has heard any update on this issue? (Nuke glibc 
crashing, not Skype ;) ) I'm still getting incessant crashing with the 
latest 8.0v5, to the point where simply opening a script and touching 
anything causes a crash.


On 14-05-03 07:21 PM, Frank Rueter|OHUfx wrote:

Skype is usually too busy breaking itself :-D
I have had skype installed on my linux box ever since I set it up and 
it never seemed to cause any troubles.


On 5/4/14, 12:04 AM, Howard Jones wrote:

I have no idea, unless it installs something conflicting with Nuke.
But our CentOS / Nuke 8.0v3 installs are working fine. Just hoping 
adding Skype won't break anything.


Howard

On 3 May 2014, at 04:14 am, Frank Rueter|OHUfx fr...@ohufx.com wrote:



Out of interest, do you have Skype installed? (We're about to.)
I do. Will that interfere with Nuke? :)

On 5/2/14, 8:09 PM, Howard Jones wrote:

We're running Nuke 8.03 on centos fine here.

Our systems are quite bareboned.

Out of interest, do you have Skype installed? (We're about to.)

Howard


On 2 May 2014, at 03:07 am, Matt Griffith mgriff...@mechnology.com wrote:

Frank,
Yeah, I've been getting crashes like that with all the 8.0 (including 
8.0v4) on CentOS (6.5).  Thinking about rolling back to Nuke 7 here too.

Cheers!
-Matt


On 14-05-01 06:20 PM, Patrick Heinen wrote:
Hey Frank,

I'm starting to get the same error over here, though with Nuke 7.0v10 and under CentOS. 
Did you hear back from support yet?

cheers,
Patrick

Frank Rueter|OHUfx wrote on 21.04.2014 18:45:

Hi all,

I am using Nuke 8.0v4 on Kubuntu 12.10 and keep getting the ol' glibc error 
every few mouse clicks:

*** glibc detected *** Nuke8.0: double free or corruption (fasttop): 
0x08db7ed0 ***


I used to use Nuke 7 just fine on the same machine, and the crashes are so 
frequent that Nuke 8 is practically unusable on my linux box; I am 
considering rolling back to 7.

I have contacted support and they are looking into it, but I was wondering if 
anybody else is seeing this issue?


Cheers,
frank


--


vfx compositing http://ohufx.com/index.php/vfx-compositing | workflow 
customisation and consulting http://ohufx.com/index.php/vfx-customising

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Importing SLOG 3 MXF files into Nuke

2014-06-17 Thread Rangi Sutton
Hullo JC...

Current show we're using slog2, which we implemented via OCIO, my favourite
new toy.

We now read/write everything as linear (i.e. no conversion) in Nuke, and use
OCIOColorSpace nodes for the space conversions before/after IO.

We created an .spi LUT that does the slog2 conversion, generated from a python
script provided by the ocio guys, who happen to be mostly Sony Imageworks..
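
For anyone wanting to roll their own: an .spi1d file is just a sampled
1D curve. A hedged sketch of baking one - the transfer function below is
a placeholder, NOT the real S-Log2 math; take that from the Sony/OCIO
script mentioned above:

# Bake a code-value -> linear 1D curve into OCIO's .spi1d format.
def slog2_to_linear(x):
    return x ** 2.2  # placeholder curve, substitute the real conversion

N = 4096
with open("slog2_to_linear.spi1d", "w") as f:
    f.write("Version 1\n")
    f.write("From 0.0 1.0\n")
    f.write("Length %d\n" % N)
    f.write("Components 1\n")
    f.write("{\n")
    for i in range(N):
        f.write("    %.6f\n" % slog2_to_linear(i / float(N - 1)))
    f.write("}\n")

The resulting LUT then gets referenced via a FileTransform in the
project's OCIO config.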

Config of OCIO takes a little playing, but once you have it, it's a very
handy tool. Now we can just tell compositors it's log and not which log, as
there's a different ocio.conf for each project.

Big plus.. oiiotool will use the same colour space conversions, so you can
convert between formats and colour states in pure open source land. ftw.

Oh, and having someone happy to hit the makefiles helps. But Nuke internally
supports OCIO, so there's no tedious linking to big closed-source libraries
involved.

Cheers,
r.


http://opencolorio.org/
https://sites.google.com/site/openimageio/home
http://cinematiccolor.com/



On 17 June 2014 07:10, John Coldrick john.coldr...@gmail.com wrote:

 In our never-ending quest to be forced to work with every file format and
 colourspace known to humanity, we need to pull these plates in.  We sort of
 have a path for the MXF file format (albeit a bit shaky and thus
 questionable), but I was curious if anyone has a path for colourspace
 conversion from SLOG3 to linear/some known space?  I know it's not
 currently available directly in Nuke, but I'm curious if anyone has a solution?

 Thanks!

 J.C.

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users




-- 
VFX Supervisor
Cutting Edge
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Importing SLOG 3 MXF files into Nuke

2014-06-17 Thread John Coldrick
Ah cool, thanks Deke, I just needed a clear path - no heavy lifting
required.  :)

Yeah, Rangi, OCIO is definitely on our list!

Cheers guys...

J.C.


On Tue, Jun 17, 2014 at 5:10 AM, Rangi Sutton rsut...@cuttingedge.com.au
wrote:

 [...]

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] deep masking

2014-06-17 Thread Michael Garrett

 If all the samples are at the same depth in both A and B, easy, the
 output samples are a simple mix between each sample of A and B. This is
 the case for, say, keymixing in a DeepColorCorrect (where only the
 existing sample values will be changed)


I managed to get a deep volumetric keymix working using the basic scenario
Ben is describing with deep holdouts - one with the mask and the other with
the inverse mask, then deep merging them together. Typically I used this
when creating a deep Pmatte and then using that as a deep keymix for a deep
colorcorrect/grade.

And yes, it was extremely useful. I'll have to revisit the specifics though,
because like Ben says I think there was some additional work required to
get rid of fringing issues, which, since I have yet to write a plug-in, was
achieved with deep expressions (hello, hackiness).

This was specific to Mantra. I have yet to extensively use the deep output
from other renderers, but I believe Mantra's deep output has its quirks and
what I did may not translate exactly to another renderer. Also, we were
working a fair bit with full deep RGB output, which can make things a lot
cleaner, although we did fall back to deep opacity/recolor in some cases as
deadlines approached and still got an acceptable result.

Cheers,
Michael


On 17 June 2014 02:09, Ben Dickson ben.dick...@rsp.com.au wrote:

 [...]

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] exr2 tool to copy channels?

2014-06-17 Thread Michael Garrett
Yes that would be really useful, as long as we are still limited to working
in a single multichannel stream for deep images in Nuke.


On 17 June 2014 01:54, Frank Rueter|OHUfx fr...@ohufx.com wrote:

  Hi all,

 since we still don't have a DeepCopy node in the default Nuke dist, does
 anybody know if there are open source tools to combine exr2 images/channels?

 Cheers,
 frank

 --
  ohufx http://www.ohufx.com | vfx compositing
 http://ohufx.com/index.php/vfx-compositing | workflow customisation and
 consulting http://ohufx.com/index.php/vfx-customising

 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke to After Effects exporter (script available)

2014-06-17 Thread Gary Jaeger
That looks very cool Simon, thanks!


On Tue, Jun 17, 2014 at 12:36 AM, Simon Björk si...@bjorkvisuals.com
wrote:

 [...]




-- 
Gary Jaeger // Core Studio
249 Princeton Avenue
Half Moon Bay, CA 94019
650 728 7060
http://corestudio.com
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] deep masking

2014-06-17 Thread Frank Rueter|OHUfx

Thanks Michael!
I think I got reasonably close with my expression hackiness yesterday, 
and with a little help from somebody else we got even closer (basically 
by doing the soft part of the mask in flat image space).

Hopefully we are good to go now.


On 18/06/14 3:25 am, Michael Garrett wrote:


[...]
Re: [Nuke-users] deep masking

2014-06-17 Thread matt estela
On 18 June 2014 01:25, Michael Garrett michaeld...@gmail.com wrote:


 This was specific to Mantra. I have yet to extensively use the deep output
 from other renderers but I believe Mantra's deep output has it's quirks and
 what I did may not translate exactly to another renderer. Also we were
 working a fair bit with full deep rgb output which can make things a lot
 cleaner, although we did fall back to deep opacity/recolor in some cases as
 deadlines approached and still got an acceptable result.


Could you go into any more depth on this (pun partially intended)? I'm about
to start looking into Mantra, deep and Nuke, and information is thin on the
ground. Are you reading the .rat files natively, or converting them to exr2?
Any tips appreciated!
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke 8.0v3 slowdown

2014-06-17 Thread Jed Smith
I have also been experiencing the viewer caching issues.

I often experience the behavior where an incorrect image gets stuck
in the viewer's cache: if you are looking through a certain node on a
certain frame and this happens, then even if you change knobs in the viewed
node, the changes won't update in the viewer. You wonder what's
happening, zoom out or go to the next frame, and then suddenly see the
changes you made take effect again. Then if you come back to the original
frame, the stuck cache will still be displayed.

Sometimes this happens with scanlines too: if you are zoomed out and the
view is calculated, then you zoom in and change some knob values, the
scanlines that were calculated while you were zoomed out won't update,
but the new scanlines calculated after you zoomed in will.

Needless to say this is very annoying; it seems to happen mostly in
heavier scripts, and perhaps more when working off of network storage over
an NFS mount.

I have also noticed less efficient memory usage and DAG slowdown in
heavier scripts. One recent example was a pretty big shot with a
ton of nodes. It was getting to the point in Nuke 8.0v4 where just
navigating the node graph and opening control panels would cause
beachballing, freezing, stalling, and pretty common crashes. The script
wasn't using any Nuke 8 specific nodes, so I went back to Nuke 7.0v10 for
this shot and things were fast, stable, and interactive again.

Just circumstantial experiences, but hopefully the situation improves soon!


On Mon, Jun 16, 2014 at 2:16 PM, Diogo Girondi diogogiro...@gmail.com
wrote:

 Hi Ari,

 Even though my problems weren't with the DAG but with stability and memory
 management in general, I have to say that I've noticed the very same thing
 when I went from 7 to 8.0v#. In 8.0v4 things got a little better, and in
 8.0v5 things seem to have returned to where v3 was.

 8.0v5, just like v3, constantly hangs and usually quits out of the blue when
 it hits the last frame on QuickTime renders. The problem is less noticeable
 when working with DPX, R3D or other formats. I also had huge issues when
 using AtomKraft in 8.0v3 compared to 7; it got better in 8.0v4 but I've yet
 to test it in 8.0v5.

 I'm also having some Viewer cache issues in v5 that I hadn't had in Nuke
 for quite a while. But I'm pretty sure my computer isn't helping either;
 tabula rasa time, I guess.


 cheers,
 Diogo


 On Mon, Jun 16, 2014 at 4:02 PM, Ari Rubenstein a...@curvstudios.com
 wrote:

 We are versioning up from Nuke 7.0v6 to 8.0v3 and are encountering a
 significant slowdown.

 There is a basic template we use to start shot production. In this
 template there are groups with some complexity within. It seems that
 deleting these groups speeds up the DAG (FYI, there are no external calls
 from the groups, i.e. python etc.). Additional complexity slows it down
 considerably in 8.0v3 as compared with 7.0v6.

 Has anyone experienced this, or does 8.0v4 or v5 fix it?

 Thx
 Ari
 Blue Sky

 Sent from my iPhone
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users



 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke to After Effects exporter (script available)

2014-06-17 Thread Richard Bobo
Thanks, Simon!

I’ll give it a try as soon as I get a chance…

Rich


Rich Bobo
Senior VFX Compositor
Armstrong White
Email:  rich.b...@armstrong-white.com
http://armstrong-white.com/

Email:  richb...@mac.com
Mobile:  (248) 840-2665
Web:  http://richbobo.com/

A musician must make music, an artist must paint, a poet must write, if he is 
to be ultimately at peace with himself. What a man can be, he must be.
- Abraham Maslow (1908-1970) American Psychologist





On Jun 17, 2014, at 3:36 AM, Simon Björk si...@bjorkvisuals.com wrote:

 [...]

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] deep masking

2014-06-17 Thread Michael Garrett
Sounds like you're sorted, but I found this note on the basic method I
used. So you could use a flat bezier mask, as you say, promoted to deep
and then used for the deep holdout.

The deep image is unpremultiplied by its original deep opacity alpha and
premultiplied by the new mask. Redundant deep samples are killed if they
have a zero alpha value; this is achieved by moving those deep samples to
behind the camera and then deep cropping them.

You would do the same for the inverse, then deep merge, and you should get
back to your original image; but you can insert a deep colour correct
upstream of that deep merge to get a typical keymix effect.
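
In per-sample terms (one reading of that recipe - plain Python rather
than a DeepExpression, and the alpha handling is my assumption):

# Re-mask one pixel's deep samples: unpremult by the original alpha,
# premult by the new (deep-promoted) mask value for that pixel.
def remask(samples, mask):
    out = []
    for z, (r, g, b, a) in samples:
        if a <= 0.0:
            # the "kill" case: in Nuke, such samples were pushed behind
            # the camera and then removed with a deep crop
            continue
        s = mask / a
        out.append((z, (r * s, g * s, b * s, mask)))
    return out

One branch gets remask(samples, m), the other remask(samples, 1 - m);
deep merging the two reconstructs the original, and a deep colour
correct on one branch before the merge gives the keymix.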


On 17 June 2014 17:38, Frank Rueter|OHUfx fr...@ohufx.com wrote:

 [...]