Re: [Nuke-users] goofy_titles

2017-03-16 Thread Jonathan Egstad
Many apologies to poor Sam...
Reading those goofy titles brings back some memories, eh Frank?


On Mar 16, 2017, at 3:44 PM, Matt Plec  wrote:

@adam - Bet you're thinking of the MindRead node? It's still there as of 10.0v6.

> On Sat, Mar 11, 2017 at 2:50 AM, adam jones  wrote:
> I liked the "autoComp" node that now seems to be gone, but this is better :-)
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] tiled openExr

2013-12-05 Thread Jonathan Egstad
Avoid using formats that force Nuke to buffer the entire image, like TIFF and JPEG.
Scanline-based formats work best for memory and speed, as only the scanlines
that are needed for a section of texture are loaded.  JPEG and TIFF basically
force Nuke to load the entire image even if only a tiny portion of it is
ever accessed.
Unfortunately the Nuke TIFF reader (last time I checked) is pretty dumb about
tile accesses and doesn't manage the tiles the way a texture support lib (e.g.
OpenImageIO) does.
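For what it's worth, a minimal sketch of batch-converting a buffered-format texture to a scanline EXR with Nuke's Python API before mapping it in the 3D scene. The paths are placeholders and the knob names ("file", "file_type") are the ones I remember from a stock install, so double-check on your version:

import nuke

# Hypothetical paths - swap in your own texture files.
read = nuke.nodes.Read(file="/path/to/texture.tif")        # buffered-format source (tiff/jpeg)
write = nuke.nodes.Write(file="/path/to/texture.exr",
                         file_type="exr")                  # Nuke's exr writer is scanline-based
write.setInput(0, read)
nuke.execute(write, 1, 1)                                  # convert a single frame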

-jonathan

On Dec 5, 2013, at 4:41 PM, Luca Fiorentini luca.fiorent...@gmail.com wrote:

 Hi,
 
 I know that I should use scanline OpenEXR while comping my shots, but I was 
 wondering what the best approach is when using large images in the 3D 
 workspace as mapped textures.
 Does Nuke benefit from using tiled EXRs? Can I use pyramidal images to load 
 only what I see from the camera? Should I just use JPEG?
 
 Thanks :)
 
 -- 
 Luca Fiorentini - 3D Lighting Artist
 My Showreel - My blog - My Flickr

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Ptex from Mari into nuke?

2013-10-13 Thread Jonathan Egstad
Yes, it would certainly make the Mari-Nuke bridge much cleaner.

I was a bit hasty in saying the existing shading system could do it - I doubt
the current shading calls can support ptex's per-face UVs without modification.
The ptex reader needs the face index and the barycentric coordinates, which are
missing from the current VertexContext structure (as of 7.0, that is).
But that info certainly exists at shading time, so the renderer can provide it
to the shading system in the future with minor changes to the VertexContext.
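Purely as illustration (plain Python, not the Ptex or NDK API), this is the shape of a per-face lookup and why the shading context has to carry both pieces of data:

def sample_per_face(face_textures, face_index, u, v):
    # Ptex-style data is addressed by (face index, local u, v) rather than one
    # global UV, so the renderer must hand the shader the face id and the
    # intra-face coordinates (derived from the barycentrics).
    tex = face_textures[face_index]            # each face carries its own small texture
    x = min(int(u * tex["w"]), tex["w"] - 1)   # clamp to the face-texture bounds
    y = min(int(v * tex["h"]), tex["h"] - 1)
    return tex["pixels"][y * tex["w"] + x]     # nearest-neighbour, for simplicity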

-jonathan

On Oct 12, 2013, at 5:00 PM, Darren Poe wrote:

 Hey Jonathan!  Long time indeed, hope you've been well!
 
 Thanks for all the info on ptex...  sounds a bit more complicated than I 
 hoped.  I actually tried loading in a Mari ptx file into nuke, and it seemed 
 to be pulling in the geo but not any image data.  I also noticed that the 
 original geo (2 mb in size) became a 60 mb ptx file with only 1 2k texture 
 channel. Curious what would happen on a large model...
 
 In any case I do hope Foundry is thinking about implementing this -- it is a 
 great feature of Mari to be able to paint on UV-less models, potentially a 
 huge time saver.  I will definitely ask!
 
 -Darren
 
 On 2013-10-11, at 4:23 PM, Jonathan Egstad wrote:
 
 Whoops, I mean Darren… see, it has been a long time...  :)
 
 On Oct 11, 2013, at 4:13 PM, Jonathan Egstad wrote:
 
 Hi Darin!
 Long time no hear.
 
 I doubt it - ptex files are not normal 2D textures so Nuke's current 2D 
 structure won't support them out of the box.  There's probably some wacky 
 way to shove the ptex data into the existing float buffers but none of 
 standard Nuke nodes would know how to interpret the data.
 It's impractical to unpack as well as you may need hundreds of different 
 layers all at different resolutions.
 
 Likely Nuke's 2D system will need to be extended like it was to support 
 Deep data in order to support the packed encoding of ptex.  Perhaps the 
 Foundry is already working on this - you should ask.
 
 -jonathan
 
 On Oct 11, 2013, at 11:19 AM, Darren Poe wrote:
 
 First off let me say I'm pretty new to ptex... But if say I paint on an 
 obj model with no UVs in mari, is it currently possible to bring that 
 texture into nuke and apply it directly to the obj?
 Thanks,
 
 -Darren
 
 Sent from my iPhone

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Ptex from Mari into nuke?

2013-10-11 Thread Jonathan Egstad
Hi Darin!
Long time no hear.

I doubt it - ptex files are not normal 2D textures, so Nuke's current 2D
structure won't support them out of the box.  There's probably some wacky way
to shove the ptex data into the existing float buffers, but none of the standard
Nuke nodes would know how to interpret the data.
It's impractical to unpack them as well, as you may need hundreds of different
layers, all at different resolutions.

Likely Nuke's 2D system will need to be extended like it was to support Deep 
data in order to support the packed encoding of ptex.  Perhaps the Foundry is 
already working on this - you should ask.

-jonathan

On Oct 11, 2013, at 11:19 AM, Darren Poe wrote:

 First off let me say I'm pretty new to ptex... But if say I paint on an obj 
 model with no UVs in mari, is it currently possible to bring that texture 
 into nuke and apply it directly to the obj?
 Thanks,
 
 -Darren
 
 Sent from my iPhone

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Ptex from Mari into nuke?

2013-10-11 Thread Jonathan Egstad
Whoops, I mean Darren… see, it has been a long time...  :)

On Oct 11, 2013, at 4:13 PM, Jonathan Egstad wrote:

 Hi Darin!
 Long time no hear.
 
 I doubt it - ptex files are not normal 2D textures so Nuke's current 2D 
 structure won't support them out of the box.  There's probably some wacky way 
 to shove the ptex data into the existing float buffers but none of standard 
 Nuke nodes would know how to interpret the data.
 It's impractical to unpack as well as you may need hundreds of different 
 layers all at different resolutions.
 
 Likely Nuke's 2D system will need to be extended like it was to support Deep 
 data in order to support the packed encoding of ptex.  Perhaps the Foundry is 
 already working on this - you should ask.
 
 -jonathan
 
 On Oct 11, 2013, at 11:19 AM, Darren Poe wrote:
 
 First off let me say I'm pretty new to ptex... But if say I paint on an obj 
 model with no UVs in mari, is it currently possible to bring that texture 
 into nuke and apply it directly to the obj?
 Thanks,
 
 -Darren
 
 Sent from my iPhone

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Ptex from Mari into nuke?

2013-10-11 Thread Jonathan Egstad
With the existing system you could write a custom PtexRead plugin that 
implemented the Iop::sample() calls and passed them to the ptex lib.  I've done 
this with .dshad files so that Nuke Lights could use pre-rendered deep shadow 
files.

If the Foundry extended the Iop::sample() calls to the Reader class then a 
ptexReader plugin could do the same thing.

-jonathan

On Oct 11, 2013, at 5:26 PM, Michael Garrett wrote:

 This is actually something you can do with Vray in Nuke, at least with a 
 recorded demo I saw online. It would be great if we could just render it 
 through the ScanlineRender node though.
 
 
 On 11 October 2013 19:23, Jonathan Egstad jegs...@earthlink.net wrote:
 Whoops, I mean Darren… see, it has been a long time...  :)
 
 On Oct 11, 2013, at 4:13 PM, Jonathan Egstad wrote:
 
  Hi Darin!
  Long time no hear.
 
  I doubt it - ptex files are not normal 2D textures so Nuke's current 2D 
  structure won't support them out of the box.  There's probably some wacky 
  way to shove the ptex data into the existing float buffers but none of 
  standard Nuke nodes would know how to interpret the data.
  It's impractical to unpack as well as you may need hundreds of different 
  layers all at different resolutions.
 
  Likely Nuke's 2D system will need to be extended like it was to support 
  Deep data in order to support the packed encoding of ptex.  Perhaps the 
  Foundry is already working on this - you should ask.
 
  -jonathan
 
  On Oct 11, 2013, at 11:19 AM, Darren Poe wrote:
 
  First off let me say I'm pretty new to ptex... But if say I paint on an 
  obj model with no UVs in mari, is it currently possible to bring that 
  texture into nuke and apply it directly to the obj?
  Thanks,
 
  -Darren
 
 Sent from my iPhone

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] nukepedia and reflectioncard

2013-06-18 Thread Jonathan Egstad
Yes, unfortunately it's impractical to provide built plugins for all the 
various Nuke versions and system architectures - that's why the source code is 
free!  :)

-jonathan

On Jun 17, 2013, at 4:59 PM, Frank Rueter wrote:

 Hi Paul,
 
 sorry Nukepedia is taking a bit longer to come back online than anticipated. 
 We have the bulk work done but still need to put in some more hours.
 
 I just checked the database and it looks like Jonathan uploaded the source 
 code to Nukepedia for those plugs but no compiles.
 
 
 
 
 Cheers,
 frank
 
 
 
 On 15/06/13 02:58, paulinventome wrote:
 I'm really sorry for another posting but after more research it might be 
 that the ReflectionCard / ReflectMat could do what i need. However all roads 
 lead to Nukepedia which has been down a while.
 
 Does anyone have a Win x64 version of the plug in or can tell me where else 
 i might be able to pick it up?
 
 It would be really useful!
 
 thanks!
 Paul
 
 

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] colour wheels

2013-04-16 Thread Jonathan Egstad
It's a good idea - put a feature request in to Foundry support.

-jonathan

On Apr 16, 2013, at 7:13 AM, Sam Cole wrote:

 One feature I miss from the Shake days was being able to hold down 't' for 
 temperature, 'l' for luminance, and 'h' for hue, and hold+click+drag to simply 
 tweak any colour swatch without opening the colour wheel pane. 
 I also miss this, it may not actually be faster but it felt like less of an 
 interruption when working.
 
 ./sam

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] reflections

2013-04-05 Thread Jonathan Egstad
Use the ReflectMat as the material on the objects you want to see the 
reflections in - the reflections should show up using a Phong material but they 
won't layer properly in Z if you have multiple ReflectionCards overlapping.

Then attach the ReflectionCard light to the same Scene as the objects, attach 
the texture you want to reflect to the ReflectionCard, and place it in the 3D 
viewer.  You should leave single-sided off until you get the 
placement/orientation correct.

Since you won't see the reflection in OpenGL, you'll have to place it roughly
and judge the result in the render.
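If it helps, a rough sketch of that graph built in Python. It assumes the Nukepedia plugins are installed and that their node class names match the plugin names ("ReflectionCard", "ReflectMat"); the card's texture input index is also an assumption, and nuke.createNode may auto-connect nodes in the GUI, so treat this as a sketch only:

import nuke

geo   = nuke.createNode("ReadGeo")          # the object that should show the reflection
mat   = nuke.createNode("ReflectMat")       # assumed class name from the plugin
apply = nuke.createNode("ApplyMaterial")    # binds the material to the geometry
tex   = nuke.createNode("Read")             # the image you want to see reflected
card  = nuke.createNode("ReflectionCard")   # assumed class name; behaves like a light
cam   = nuke.createNode("Camera")
scene = nuke.createNode("Scene")
ren   = nuke.createNode("ScanlineRender")

apply.setInput(0, geo)
apply.setInput(1, mat)       # ApplyMaterial's material input
card.setInput(0, tex)        # assumption: the card takes its texture on input 0
scene.setInput(0, apply)
scene.setInput(1, card)      # attach the card to the same Scene as the objects
ren.setInput(1, scene)       # ScanlineRender: input 1 is the obj/scene input
ren.setInput(2, cam)         # input 2 is the camera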

-jonathan 


On Apr 4, 2013, at 12:17 PM, Pat Wong wrote:

 hi Jonathan,
 
 
 I've just installed your ReflectMat and ReflectionCard plugins, but I'm a little unsure 
 of how to use them. Could you please give me a run-down, or a sample script, to 
 show how they work? Thanks,
 
 
 patrick
 
 
 
 
 On 25 March 2013 10:33, Jonathan Egstad jegs...@earthlink.net wrote:
 You can also try the ReflectionCard & ReflectMat plugins on nukepedia.
 
 -jonathan
 
 On Mar 25, 2013, at 9:45 AM, Deke Kincaid wrote:
 
 You can fake this by setup up a camera to render from the reflection point 
 and then re-project that render back onto the surface.  Or like old school 
 renderman reflections before they had raytracing you can generate a cube or 
 spherical map and then use it as a texture with the env light in nuke.
 
 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
 
 
 On Mon, Mar 25, 2013 at 8:01 AM, Pat Wong patwon...@gmail.com wrote:
 hi
 
 is it possible to make a real relection pass from a standard cached 3d scene 
 from a vanilla install of nuke. I can obviously get 3d to render some 
 additional aov's such as position passes. normals. etcetc. is it possibke 
 and are the results good enough from the scanline renderer. no prman 
 licences at the place im at too.
 thanks
 
 pat
 
 

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] reflections

2013-03-27 Thread Jonathan Egstad
 I want the sky to wizz past. the card is moving in the scene in z space. but 
 the reflections seem quite motionless.. would u achieve this by transforming 
 the inputting image or tweak with a spherical transform before it is plugged 
 into the envlight
 
That's the problem with environment lights - they're always at infinity so 
there's never any translation, only rotation, so they will never appear as if 
they're translating along the surface.

-jonathan

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Alexa Artifacts

2013-03-27 Thread Jonathan Egstad
Looks like a very  slight resize was done.

-jonathan

On Mar 27, 2013, at 4:56 PM, Igor Majdandzic subscripti...@badgerfx.com 
wrote:

 Hey guys,
 we got footage from a shoot with an Alexa as the camera. It was shot in 
 ProRes 444. The problem is: the picture has some artifacts, which confuses me 
 given the codec is 444. I attached some images which show some of the grain 
 patterns. Is this normal?
  
 thx,
 Igor
  
  
  
 --
 igor majdandzic
 compositor |
 i...@badgerfx.com
 BadgerFX | www.badgerfx.com
  
 Von: nuke-users-boun...@support.thefoundry.co.uk 
 [mailto:nuke-users-boun...@support.thefoundry.co.uk] Im Auftrag von Deke 
 Kincaid
 Gesendet: Mittwoch, 27. März 2013 23:47
 An: Nuke user discussion
 Betreff: Re: [Nuke-users] FusionI/O and Nuke
  
 Hi Michael
 
 I'm actually testing this right now as Fusionio just gave us a bunch of them. 
  Early tests reveal that with dpx it's awesome, but with openexr zip 
 compressed files it is spending more time on compression; not sure if it 
 is cpu bound or what (needs more study, but it's slower).  Openexr uncompressed 
 files though are considerably super fast, but of course the issue is that it is 
 18 meg a frame.  These are single layer rgba exr files.
 
 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
  
 
 On Wed, Mar 27, 2013 at 3:26 PM, Michael Garrett michaeld...@gmail.com 
 wrote:
 I'm evaluating one of these at the moment and am interested to know if others 
 have got it working with Nuke nicely, meaning, have you been able to really 
 utilise the insane bandwidth of this card to massively accelerate any part of 
 your day to day compositing?
  
 So far, I've found it has no benefit when localising all Reads in a somewhat 
 heavy comp, or even playing back a sequence of exr's or deep files, compared 
 to localised sequences on a 10K Raptor drive also in my workstation - 
 hopefully I'm missing something big though, this is day one after all.  
  
 There may be real tangible benefits to putting the Nuke cache on it though - 
 I'll see how it goes.
  
 I'm also guessing that as gpu processing becomes more prevalent in Nuke that 
 we will see a real speed advantage handing data from a card like this 
 straight to the gpu.
  
 Thanks,
 Michael
 
  
 crop-plate.jpg
 crop-plate_areas.jpg
 crop-plate_areas-edgeDetect.jpg
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: AW: [Nuke-users] Alexa Artifacts

2013-03-27 Thread Jonathan Egstad
No idea, but it looks an awful lot like filtering from a slight resize 
operation.

-jonathan

On Mar 27, 2013, at 5:29 PM, Igor Majdandzic subscripti...@badgerfx.com 
wrote:

 do you mean in camera? because that was from the original qt footage
  
 --
 igor majdandzic
 compositor |
 i...@badgerfx.com
 BadgerFX | www.badgerfx.com
  
 Von: nuke-users-boun...@support.thefoundry.co.uk 
 [mailto:nuke-users-boun...@support.thefoundry.co.uk] Im Auftrag von Jonathan 
 Egstad
 Gesendet: Donnerstag, 28. März 2013 01:10
 An: Nuke user discussion
 Cc: Nuke user discussion
 Betreff: Re: [Nuke-users] Alexa Artifacts
  
 Looks like a very  slight resize was done.
 
 -jonathan
 
 On Mar 27, 2013, at 4:56 PM, Igor Majdandzic subscripti...@badgerfx.com 
 wrote:
 
 Hey guys,
 we got footage from a shoot with Alexa being the camera. It was shot in 
 ProRess 444. The problem is: The picture has some artifacts which confuse me 
 the codec being 444. I attached some images which show some of the grain 
 patterns. Is this normal?
  
 thx,
 Igor
  
  
  
 --
 igor majdandzic
 compositor |
 i...@badgerfx.com
 BadgerFX | www.badgerfx.com
  
 Von: nuke-users-boun...@support.thefoundry.co.uk 
 [mailto:nuke-users-boun...@support.thefoundry.co.uk] Im Auftrag von Deke 
 Kincaid
 Gesendet: Mittwoch, 27. März 2013 23:47
 An: Nuke user discussion
 Betreff: Re: [Nuke-users] FusionI/O and Nuke
  
 Hi Michael
 
 I'm actually testing this right now as Fusionio just gave us a bunch of them. 
  Early tests reveal that with dpx it's awesome but with openexr zip 
 compressed file it it is spending more time with compression, not sure if it 
 is cpu bound or what(needs more study but its slower).  Openexr uncompressed 
 files though are considerably superfast but of course the issue is that it is 
 18 meg a frame.  These are single layer rgba exr files.
 
 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
  
 
 On Wed, Mar 27, 2013 at 3:26 PM, Michael Garrett michaeld...@gmail.com 
 wrote:
 I'm evaluating one of these at the moment and am interested to know if others 
 have got it working with Nuke nicely, meaning, have you been able to really 
 utilise the insane bandwidth of this card to massively accelerate any part of 
 your day to day compositing?
  
 So far, I've found it has no benefit when localising all Reads in a somewhat 
 heavy comp, or even playing back a sequence of exr's or deep files, compared 
 to localised sequences on a 10K Raptor drive also in my workstation - 
 hopefully I'm missing something big though, this is day one after all.  
  
 There may be real tangible benefits to putting the Nuke cache on it though - 
 I'll see how it goes.
  
 I'm also guessing that as gpu processing becomes more prevalent in Nuke that 
 we will see a real speed advantage handing data from a card like this 
 straight to the gpu.
  
 Thanks,
 Michael
 
  
 crop-plate.jpg
 crop-plate_areas.jpg
 crop-plate_areas-edgeDetect.jpg
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] reflections

2013-03-26 Thread Jonathan Egstad
You need the ReflectionCard plugin on Nukepedia - it will give you a localized 
reflection source and you can even layer multiple cards in depth and they will 
properly alpha blend in Z.

-jonathan


On Mar 26, 2013, at 8:22 PM, Pat Wong patwon...@gmail.com wrote:

 I've tried without too much success; it's also a bit of a hack...
 
 I'm looking for a true reflection solution - I thought Nuke would be able to 
 offer me it.
 
 
 I need to make a reflection of moving traffic lights onto a moving car 
 windscreen from an environment hdr or a cube..
 
 I suspect the env light would do what I want, but I just can't get the 
 precise look and action from the reflection off the environment light that I'm 
 after..
 
 
 
 
 On 26 March 2013 20:10, Marten Blumen mar...@gmail.com wrote:
 would this work? 
 http://www.nukepedia.com/3d/in-3dmirror/ 
 
 
 On 27 March 2013 15:59, Pat Wong patwon...@gmail.com wrote:
 well at the moment it just a standared nuke 3d scene. but im guessing it will 
 be a baked out alembic or obj..
 
 
 
 On 26 March 2013 18:07, Marten Blumen mar...@gmail.com wrote:
 Out of interest what format is your standard 3d cache in?
 
 
 On 26 March 2013 20:24, Pat Wong patwon...@gmail.com wrote:
 I've just found this online from somebody... but can't seem to get the tree 
 to work. Anybody tried this before?
 
 
 
 http://me-kipedia.blogspot.ca/2012/07/create-reflection-map-in-nuke.html
 
 
 create reflection map in Nuke.
 - add environmental light to a scene with the object.
 - add a new scene and locate a new camera where the object is.
 - connect the scene to the scanlinRenderer and set the projection type to 
 sphere.
 - connect the scanlineRenderer to map input of the environmental light.
 - composite the object with reflection over the same object with diffusion.
 
 
 
 
 
 
 
 On 25 March 2013 23:59, Pat Wong patwon...@gmail.com wrote:
 thanks guys ill try those ..
 
 Jonathan, your ReflectMat looks useful; I just need to find a way to compile 
 them at work...
 
 
 
 
 On 25 March 2013 10:33, Jonathan Egstad jegs...@earthlink.net wrote:
 You can also try the ReflectionCard & ReflectMat plugins on nukepedia.
 
 -jonathan
 
 On Mar 25, 2013, at 9:45 AM, Deke Kincaid wrote:
 
 You can fake this by setup up a camera to render from the reflection point 
 and then re-project that render back onto the surface.  Or like old school 
 renderman reflections before they had raytracing you can generate a cube or 
 spherical map and then use it as a texture with the env light in nuke.
 
 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
 
 
 On Mon, Mar 25, 2013 at 8:01 AM, Pat Wong patwon...@gmail.com wrote:
 hi
 
 is it possible to make a real relection pass from a standard cached 3d scene 
 from a vanilla install of nuke. I can obviously get 3d to render some 
 additional aov's such as position passes. normals. etcetc. is it possibke 
 and are the results good enough from the scanline renderer. no prman 
 licences at the place im at too.
 thanks
 
 pat
 
 
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk

Re: [Nuke-users] reflections

2013-03-25 Thread Jonathan Egstad
You can also try the ReflectionCard & ReflectMat plugins on nukepedia.

-jonathan

On Mar 25, 2013, at 9:45 AM, Deke Kincaid wrote:

 You can fake this by setting up a camera to render from the reflection point 
 and then re-projecting that render back onto the surface.  Or, like old school 
 renderman reflections before they had raytracing, you can generate a cube or 
 spherical map and then use it as a texture with the env light in nuke.
 
 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
 
 
 On Mon, Mar 25, 2013 at 8:01 AM, Pat Wong patwon...@gmail.com wrote:
 hi
 
 is it possible to make a real reflection pass from a standard cached 3d scene 
 from a vanilla install of nuke? I can obviously get the 3d to render some 
 additional aov's such as position passes, normals, etc. Is it possible, and 
 are the results good enough from the scanline renderer? No prman licences at 
 the place I'm at, either.
 thanks
 
 pat
 
 

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Z-depth and semi-transparent objects

2013-03-13 Thread Jonathan Egstad
 Cranking the alpha up is a workaround, but only in the case above where you 
 create one scanline for every card; otherwise transparent fg cards will 
 totally occlude bg cards.

afaik the no-Z output below an alpha threshold thing is pretty common in 
commercial renderers - Renderman does exactly that and calls the control 
'zthreshold'.
The problem is really that that value is hardcoded into Nuke's shading system 
and not available for user control…  Unfortunately many aspects of the 
ScanlineRender render/shading context are not exposed to users which is why 
AtomKraft is able to more deftly control the renderer's state via the rib 
interface.


 Jonathan, you shouldn't take credit for the neglect of the 3d engine for the 
 last five years if you haven't been working on it since then. I apologize if 
 you personally felt targeted, but it's easy to let some sarcasm slip out when 
 you have to spend your precious weekend wrestling with nuke features   ;)

Trust me, after 20 years in production my skin isn't *that* thin - but it is 
hard to not feel somewhat responsible….
My own solution to the limitations of the stock rendering system has been to 
write alternatives ala AtomKraft or full-bore replacement renderers.

-jonathan

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Z-depth and semi-transparent objects

2013-03-12 Thread Jonathan Egstad
Well I could, but the last time I piped up I got snapped at.

-jonathan

On Mar 12, 2013, at 6:15 PM, Gustaf Nilsson wrote:

 Hi
 
 I'm sure there is a fantastic reason for this, but does anyone dare to 
 speculate about the design decision to clip the scanline renderer's z depth to 
 zero if the alpha is quite low (< 0.0001)?
 
 Work around suggestions greatly appreciated
 
 Thanks 
 G
 

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] max number of motion blur samples in the scanline renderer

2013-03-02 Thread Jonathan Egstad
Well, I personally apologize for the lack of perfection in the pre-2008 portion 
of the software.  Perhaps if at the time DD had an actual 3D developer 
architecting it rather than a lone compositor moonlighting as a developer it 
would have turned out better.

However…

I do agree that five years after the DD-Foundry handoff that the 3D system 
should be more revamped than it currently is.
I'm sure they have a plan for a revamping and I recommend you submit 
suggestions and bug reports to them so it can be improved.

-jonathan

On Mar 2, 2013, at 3:13 AM, Gustaf Nilsson wrote:

 The weakest part is everything. My problems aren't with missing functionality, 
 they are with the quality, reliability and predictability of existing 
 functions. 
 
 As I'm sure you know, you don't want to run into unpredictable and arbitrary 
 limitations such as a max of 66 samples - IT HAS TO JUST WORK. 
 
 The 3d interface is clumsy and sluggish, the spotlight cone angle doesn't work as 
 expected on values lower than ~5, and penumbra works so-so. If you use a card 
 with an alpha to cast shadows, then the shadow map is actually smaller than 
 the full circle of the light source, creating weird artefacts. Can't aim 
 spotlights. Shutter offset in the scanline renderer is not really behaving in a 
 useful way. Random flickering and artefacts in 3d renders. These are things I 
 count on working but have cost me a lot of time to work around.
 
 G
 
 
 On Sat, Mar 2, 2013 at 2:47 AM, Marten Blumen mar...@gmail.com wrote:
 What's the weakest part of the 3d system? What are you trying to achieve that 
 you can't?
 
 You can extend it pretty well with plug-ins like PolyTools, Dynamic's, JOps 
 etc
 
 I do agree though. Nukes scanline renderer (everything 3d, really) is pretty 
 useless.
 
 
 
 
 
 
 -- 
 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] max number of motion blur samples in the scanline renderer

2013-03-01 Thread Jonathan Egstad
Because that's the maximum number of jitter samples in the hardcoded internal 
jitter array.  Ten years ago it didn't seem necessary to go beyond that, 
however now it's pretty limiting.

The real answer is to completely replace the sampling schema in the renderer 
with a more modern stochastic one.
Better yet, replace the whole renderer with a proper ray tracer...
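As a stopgap, there's essentially the "two renders with an offset, combined after" idea mentioned further down the thread. A hedged sketch; the shutter knob names ("shutteroffset", "shuttercustomoffset") are from memory and may differ between versions:

import nuke

scene = nuke.toNode("Scene1")       # hypothetical existing Scene
cam   = nuke.toNode("Camera1")      # hypothetical existing Camera
passes = []

# Split a 0.5-frame shutter into two 0.25-frame windows, render each with
# its own (sub-66) sample count, then average the results.
for offset in (-0.125, 0.125):
    ren = nuke.nodes.ScanlineRender()
    ren.setInput(1, scene)                         # obj/scene input
    ren.setInput(2, cam)                           # camera input
    ren["samples"].setValue(33)                    # stay under the internal cap
    ren["shutter"].setValue(0.25)                  # half of the full shutter per pass
    ren["shutteroffset"].setValue("custom")        # assumed knob/value names
    ren["shuttercustomoffset"].setValue(offset)
    passes.append(ren)

avg = nuke.nodes.Merge2(operation="average")       # combine the two passes
avg.setInput(0, passes[0])
avg.setInput(1, passes[1])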

-jonathan

On Mar 1, 2013, at 3:48 PM, Gustaf Nilsson wrote:

 Just want to add that the limit seems to be 66 samples. How does that make 
 sense??
 
 G
 
 On 1 Mar 2013 23:24, Gustaf Nilsson gus...@laserpanda.com wrote:
 Yeah that would be ace if it wasn't for the fact that i need the samples on 
 animated textures and i have multiple semi transparent objects
 
 Thanks, G
 
 On 1 Mar 2013 19:25, Marten Blumen mar...@gmail.com wrote:
 You can use VectorBlur after the Scanline to add more motion blur, this 
 smoothes the 60/70 samples 'limit'.
 
 Page 444 of the Nuke 7.04 User Guide explains it well - 'Adding Motion Blur 
 Using VectorBlur'
 
 
 On 2 March 2013 03:26, Gustaf Nilsson gus...@laserpanda.com wrote:
 Hi
 
 seems like the max amount of moblur samples is somewhere between 60 and 70, 
 is there a reason for that? Is there a way to hack beyond that?
 
 (other than doing two renders with an offset of the motionblur and combine 
 them after)
 
 G
 
 -- 
 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■
 

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] max number of motion blur samples in the scanline renderer

2013-03-01 Thread Jonathan Egstad
Well yes, that's exactly what happened...  Why is 66 any weirder than 63 or 60 
or 99?  The sample count has no relation to powers of 2.

-jonathan

On Mar 1, 2013, at 4:59 PM, Gustaf Nilsson gus...@laserpanda.com wrote:

 Yeah, but what discombobulates me is why 66? Someone 10 years ago decided to 
 write if samples > 66 then ...? And no one's bothered to change it?
 
 I do agree though. Nukes scanline renderer (everything 3d, really) is pretty 
 useless. Unfortunately (as suggested above) that is the renderer I am stuck 
 with at the moment, can not use atomkraft.
 
 G
 
 
 On Sat, Mar 2, 2013 at 12:08 AM, Jonathan Egstad jegs...@earthlink.net 
 wrote:
 Because that's the maximum number of jitter samples in the hardcoded internal 
 jitter array.  Ten years ago it didn't seem necessary to go beyond that, 
 however now it's pretty limiting.
 
 The real answer is to completely replace the sampling schema in the renderer 
 with a more modern stochastic one.
 Better yet, replace the whole renderer with a proper ray tracer...
 
 -jonathan
 
 On Mar 1, 2013, at 3:48 PM, Gustaf Nilsson wrote:
 
 Just want to add that the limit seems to be 66 samples. How does that make 
 sense??
 
 G
 
 On 1 Mar 2013 23:24, Gustaf Nilsson gus...@laserpanda.com wrote:
 Yeah that would be ace if it wasn't for the fact that i need the samples on 
 animated textures and i have multiple semi transparent objects
 
 Thanks, G
 
 On 1 Mar 2013 19:25, Marten Blumen mar...@gmail.com wrote:
 You can use VectorBlur after the Scanline to add more motion blur, this 
 smoothes the 60/70 samples 'limit'.
 
 Page 444 of the Nuke 7.04 User Guide explains it well - 'Adding Motion Blur 
 Using VectorBlur'
 
 
 On 2 March 2013 03:26, Gustaf Nilsson gus...@laserpanda.com wrote:
 Hi
 
 seems like the max amount of moblur samples is somewhere between 60 and 70, 
 is there a reason for that? Is there a way to hack beyond that?
 
 (other than doing two renders with an offset of the motionblur and combine 
 them after)
 
 G
 
 -- 
 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ 
 
 
 
 
 -- 
 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Tiff metadata / tags

2013-02-08 Thread Jonathan Egstad
I think the tiff writer's lack of EXIF support has more to do with libtiff not
supporting the writing of EXIF tags - it does support the reading of them,
however.

So adding EXIF output support is probably a pretty heavy lift without libtiff's 
help...
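For reference, a hedged sketch of the after-render-callback route suggested further down the thread: it assumes exiftool is installed on the render nodes' PATH, and uses the ImageDescription tag purely as an example of stamping the render host onto each frame.

import socket
import subprocess
import nuke

def stamp_render_host():
    # Runs after each frame of a Write render; tags the freshly written TIFF
    # with the host that rendered it, via exiftool.
    path = nuke.thisNode()["file"].evaluate()          # the frame just written
    subprocess.call(["exiftool", "-overwrite_original",
                     "-ImageDescription=rendered on %s" % socket.gethostname(),
                     path])

nuke.addAfterFrameRender(stamp_render_host)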

-jonathan

On Feb 1, 2013, at 12:33 AM, Ben Dickson wrote:

  This thread concerns accessing a basic function that I figured nuke
  was capable of: metadata/tagging in a tiff format.
 
 As you found out, the tiffWriter doesn't write any of Nuke's extra metadata.
 
 Based on
 http://stackoverflow.com/a/13455695
 ...I would guess the reason is: the TIFF format doesn't support arbitrary 
 metadata keys, you can only use the keys defined by the EXIF spec:
 
 http://www.sno.phy.queensu.ca/~phil/exiftool/TagNames/EXIF.html
 
 Whereas the OpenEXR metadata can store any arbitrary keys (e.g you can have a 
 string property named blahblahblah, and it'll work fine)
 
 So, when the tiffWriter was written there was a choice between either:
 
 1. Writing only valid metadata keys into the TIFF, and somehow dealing with 
 unsupported keys and non-string values (which would probably be confusing for 
 users, however it's done)
 2. Don't write extra TIFF metadata.
 
 Option 2 is far easier, and less chance for surprise.
 
 You could email The Foundry support and request the code for the tiffWriter, 
 and easily modify it to write the hostname attribute (the tiffReader.cpp is 
 included in the NDK examples, so the writer shouldn't be a problem)
 
 ..but that's more work to achieve almost the same thing as using the 
 afterRenderCallback to modify the EXIF headers.
 
 On 31/01/13 04:01, John RA Benson wrote:
 while I appreciate everyone's help, you pegged it: your response is sort 
 of inappropriate and semi-arrogant.
 
 This thread concerns accessing a basic function that I figured nuke was
 capable of: metadata/tagging in a tiff format.
 
 the reasons are really irrelevant, but I provided them to help people
 understand why I was looking for it. I don't know why this thread turned
 into a pipeline / file format / sys admin attack.
 
 #
 
 We aren't having sour frames, we are having an occasionally erratic farm
 issue. Any ability to help with forensics is welcome. Heck, this could
 have been caused by fx loading the network and nothing to do with comp
 renders. Our sys guys aren't having cocktails. Formats and render
 managers are not something that is willy nilly decided and changed on
 the fly.
 
 I personally don't like tiff either, but that's our delivery format
 until exr's can be delivered, so that's that. We used to render exr and
 convert, but why bother doubling (yeah, I know, not quite) storage
 requirements and recheck everything twice? It's nice to deliver the
 frames that are rendered and not have to re-check conversions.
 
 I'm curious now if there is any render time difference between tiff and
 exr and then converting like we used to. On the surface, it's an extra
 process and overhead on the net, but if Nuke has a harder time with tiff
 and is actually making renders slower, that could be a huge deal and
 reason to switch back. But that's a different thread.
 
 Cheers
 JRAB
 
 
 
 On 01/30/2013 01:24 PM, Julik Tarkhanov wrote:
 I know it will sound inappropriate and semi-arrogant (and of course it
 IS after all an issue in Nuke that it has no support for TIFF
 metadata, yes) but just to recap:
 
 - you are having sour frames in your render
 - the likely problem is some machine which has corrupt RAM sticks
 (we've had this) or network filesystem issue/congestion
 - you refuse to use a format which allows the tagging you are using
 natively (not going EXR and not going DPX)
 - you know that your renders rot due to one issue or another, it
 creates problems for the facility - and yet something as basic as
 logging frameranges per render packet is not possible. What is your
 sys then, an Exchange administrator having cocktails while your
 renders go south? If there is a render node that corrupts renders it's
 all hands on deck if management is at least semi-reasonable, since
 it's everyone's priority at that stage to isolate that node and
 reevaluate it's components
 - you resort to having to install some weird header baking CLI
 application that you will need to configure, compile, script and check
 it's security (you are shelling out in there...), and that either
 through your whole deployment pipe or per machine (depending on how
 good and lazy your sys is).
 
 That instead of first baking the EXRs and then simply converting them
 into the TIFFs you need (using an after render Nuke commandline call)?
 (whats the reason for TIF in the first place? are TIFFs needed for
 paint apps? are they your deliverable? sure you want to cut off
 headroom color if your frame takes so long)?
 
 Looks like you're painting yourself into a corner, meanwhile looking for 
 smarter and smarter ways to do it.
 
 On a recent project where we had an EXR pipeline we've made a gizmo
 

Re: [Nuke-users] deep file size exr 2.0

2013-01-10 Thread Jonathan Egstad
An exr 2.0 deep image has an extra two 32-bit Z values per deep sample in
addition to the color channels - are you taking that into consideration when
comparing it against the first, non-deep image you wrote?
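A rough, purely illustrative back-of-envelope (the channel count here is an assumption, not taken from your script):

# Uncompressed, 16-bit half channels, one sample per pixel after DeepFromImage.
half_bytes  = 2
depth_bytes = 4                 # each extra deep Z value is a 32-bit float
channels    = 8                 # assumed "all channels" count

flat_pixel  = channels * half_bytes                    # 16 bytes per pixel
deep_sample = channels * half_bytes + 2 * depth_bytes  # 16 + 8 = 24 bytes per sample

print(deep_sample / float(flat_pixel))                 # 1.5 -> in the ballpark of what you saw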


On Jan 10, 2013, at 9:48 AM, Patrick Heinen wrote:

 Hi everyone,
 
 I just stumbled across a little thing that got me wondering. What I'm doing 
 is taking a non-volumetric deep image and applying a DeepToImage to basically 
 flatten it. Now I use that output and render it with a standard write node as 
 uncompressed exr 16bit. Secondly I use the output of the DeepToImage and plug 
 it into a DeepFromImage to get a deep image with a single sample per pixel. I 
 render that with a DeepWrite as uncompressed exr 16bit. Both are set to 
 render all channels, so that they contain the exact same information. My 
 expectation was that they should have the same or at least approximately the 
 same file size. However the deep exr is roundabout 1.5 times bigger than the 
 normal exr.
 Is there anything I'm missing? Is there maybe some documentation on exr 2.0 
 somewhere, like the technical introduction paper but for exr 2.0?
 
 cheers Patrick

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Re: World Space units Maya to Nuke

2012-12-02 Thread Jonathan Egstad
Actually it's in meters since the camera lens is 'calibrated' in millimeters.


Sent from my iPhone

On Dec 2, 2012, at 5:02 PM, Deke Kincaid d...@thefoundry.co.uk wrote:

 Nuke units are always in centimeters.  FBX lets you pick the unit size in the 
 export dialog, obj though has no such option so you have to change the global 
 prefs in the 3d application before exporting.
 
 -
 Deke Kincaid
 Creative Specialist
 The Foundry
 Mobile: (310) 883 4313
 Tel: (310) 399 4555 - Fax: (310) 450 4516
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
 
 
 On Thu, Nov 29, 2012 at 3:38 AM, Mohamed Selim 
 nuke-users-re...@thefoundry.co.uk wrote:
 It's not a conversion issue. It's just maybe that the scene is in real-world 
 scale or simply just big.
 
 So you need to scale your stuff in nuke, nothing wrong with that.
 
 I guess for particles it would be easier if your 3D artists can group all of 
 the scene in Maya and scale it down for you.
 
 
 
 Mohamed Selim
 Nuke Compositor
 Cairo, Egypt
 www.mselim.com
 
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Re: How to prioritize cards layering in nuke?

2012-11-26 Thread Jonathan Egstad
Turn off z-testing in the renderer, and manage the layering order by the input 
numbers on the Scene node.

Sent from my iPhone

On Nov 26, 2012, at 6:34 AM, Julik Tarkhanov ju...@hecticelectric.nl wrote:

 Unless you work with a renderer which allows manual z-buffer splitting, like 
 in Flame.
 
 On 26 nov. 2012, at 01:34, Frank Rueter fr...@beingfrank.info wrote:
 
 Agreed. Having polygons live in exactly the same world space is never a good 
 idea in my experience.
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Normalize Viewer

2012-10-14 Thread Jonathan Egstad
It's not the calculation of the min/max that's expensive - it's pulling every
pixel of the source image through the pipe just to calculate the min/max.

For instance, Nuke skips scanlines if you're zoomed out - if it's forced to
calculate min/max then it must pull the invisible scanlines too.  Same if
you're zoomed into the image and the rest of the image is cropped offscreen.
You get no speedup from any cropping.

Not arguing against having a normalize feature - just say'in it ain't free...
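For reference, a hedged sketch of the Grade/CurveTool route Frank mentions below; the CurveTool result knob names ("minlumapixvalue"/"maxlumapixvalue") are from memory, so verify them on your version:

import nuke

src = nuke.toNode("Read1")                      # hypothetical source node
ct  = nuke.nodes.CurveTool()
ct["operation"].setValue("Max Luma Pixel")
ct.setInput(0, src)
nuke.execute(ct, nuke.frame(), nuke.frame())    # pre-compute min/max for the current frame

grade = nuke.nodes.Grade()
grade.setInput(0, src)
# Map [min, max] to [0, 1] by expression-linking to the CurveTool results.
grade["blackpoint"].setExpression("%s.minlumapixvalue" % ct.name())
grade["whitepoint"].setExpression("%s.maxlumapixvalue" % ct.name())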

-jonathan

Sent from my iPhone

On Oct 14, 2012, at 3:38 PM, Nathan Rusch nathan_ru...@hotmail.com wrote:

 Seems like this feature is a perfect candidate to be implemented directly on 
 the GPU, since it already has the display buffer, and absolute accuracy isn’t 
 paramount.
  
 -Nathan
 
  
 From: Jonathan Egstad
 Sent: Sunday, October 14, 2012 9:50 AM
 To: Nuke user discussion
 Cc: nuke-users@support.thefoundry.co.uk
 Subject: Re: [Nuke-users] Normalize Viewer
  
 Hi Frank,
  
 Not to be a devil's advocate or anything... but calculating the min/max of an 
 image means sampling the entire image before a single pixel can be drawn in 
 the Viewer.  Needless to say this will destroy Nuke's update speed.
  
 As long as that's understood as a side-effect of this feature, then soldier 
 on.
  
 -jonathan
 
 Sent from my iPhone
 
 On Oct 13, 2012, at 6:22 PM, Frank Rueter fr...@beingfrank.info wrote:
 
 None of those solutions actually produce what we're after though (some of 
 your solutions seem to invert the input).
 
 We need something that can compress the input to a 0-1 range by offsetting 
 and scaling based on the image's min and max values (so the resulting range 
 is 0-1). You can totally do this with a Grade or Expression node and a bit 
 of tcl or python (or the CurveTool if you want to pre-compute), but that's 
 not efficient.
 
 I reckon this should be a feature built into the viewer for ease-of-use and 
 speed.
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Any way to change dpi in Nuke?

2012-09-07 Thread Jonathan Egstad
I'd suggest making a feature request to the Foundry to expose this parameter in 
the tiffWriter plugin.

If you're feeling ambitious you can modify the tiffWriter code that comes with 
Nuke.

-jonathan

On Sep 7, 2012, at 11:24 AM, Rich Bobo wrote:

 Thanks, Chris.
 
 
 Rich
 
 Rich Bobo
 Senior VFX Compositor
 Armstrong-White
 http://armstrong-white.com/
 
 Mobile:  (248) 840-2665
 Web:  http://richbobo.com/
 
 On Sep 7, 2012, at 1:34 PM, chris wrote:
 
 another idea would be to use imagemagick as a batch script:
 http://www.imagemagick.org/Usage/basics/#density
 
 ++ chris
 
 
 
 
 On 9/8/12 at 5:34 PM, d...@thefoundry.co.uk (Deke Kincaid) wrote:
 
 You could do this with the Python Imaging Library (PIL):
 
 import Image
 im = Image.open("blahblah.tif")
 im.save("blahblah.tif", dpi=(600, 600))
 
 http://www.pythonware.com/library/pil/handbook/format-tiff.htm
 http://www.pythonware.com/library/pil/handbook/image.htm
 
 
 On Sat, Sep 8, 2012 at 12:12 AM, Rich Bobo
 richb...@mac.com wrote:
 
 Hi,
 
 If possible, I'd like to be able to write out TIFF images
 at a higher DPI than 72. I am taking images from Photoshop to Nuke and 
 writing out TIFFs. They start out at 424 dpi in Photoshop and Nuke changes 
 them to 72 dpi, the screen resolution. When the images are written out, 
 they are written as 72 dpi. It would be great if I could write the output 
 files with the original 424 dpi value. Is this possible? If not, I guess 
 I'll have to re-save them in Photoshop...
 
 

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] display calibration sRGB vs Gamma 2.2

2012-08-30 Thread Jonathan Egstad
Nuke doesn't force you to work in sRGB, that's just its default.
If your preference is to calibrate to gamma 2.2, then simply change Nuke's LUT
defaults from sRGB to Gamma2.2 and don't worry about converting anything.
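A hedged one-liner version of that (the knob and LUT names are as I remember them from the Root settings; check what your build lists):

import nuke

root = nuke.root()
root["monitorLut"].setValue("Gamma2.2")   # viewer/monitor transform instead of sRGB
root["int8Lut"].setValue("Gamma2.2")      # 8-bit file default, if you want it to match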

-jonathan

On Aug 30, 2012, at 10:30 AM, chris ze.m...@gmx.net wrote:

 hello everybody,
 
 i'm evaluating some display calibration options and got confused, so i hoped 
 somebody can shed some light on this:
 
 i always assumed that the Nuke sRGB viewer LUT is made to match a standard 
 sRGB monitor.
 now, the calibration software for the NEC monitor i'm using offers different 
 target gamma curves, with a default of 2.2 and a custom option of sRGB. since 
 nuke lists sRGB, i would expect that sRGB is the right option for nuke.
 
 however, 2.2 is the recommended default in pretty much all profiling software 
 (x-rite, datacolor and spectraview), so it would appear that for a normal 
 desktop photo/video workflow this is the desired setting which most people 
 use (which is confusing again as i thought any color dumb application, like 
 final cut pro would be designed to work on a sRGB monitor).
 
 so, what do other people with better understanding do?
 calibrate to a sRGB monitor response curve for Nuke and a Gamma 2.2 for all 
 other apps and switch between the two? or am i missing something here?
 
 would be grateful for any pointers
 ++ chris
 
 
 
 
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Projecting Alpha channel = making holes in card?

2012-08-18 Thread Jonathan Egstad
Wherever the alpha of the texture is almost zero (< 0.001, I think) a hole will
appear in the object (it doesn't produce a Z entry).

-jonathan

On Aug 18, 2012, at 12:59 PM, kafkaz wrote:

 Hi,
 I know this is possible in Fusion, but I haven't found out if it is possible 
 in Nuke. I just have regular scene projection, and now I would like to 
 project windows onto the wall, so I can see through. Is this possible?
 
 Thanks.
 JK

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Re: Zmerge, is this the best I can expect?

2012-07-31 Thread Jonathan Egstad
A solution is to render your images at a minimum of 4x resolution, do the ZMerge,
and resize the result.
After all, that's what the renderer is doing internally.
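A hedged sketch of that supersample-and-downres idea, assuming the A and B branches were rendered at 4x with a depth channel; the Reformat knob values are the stock ones as I remember them:

import nuke

bg = nuke.toNode("Read_bg_4x")    # hypothetical 4x-resolution renders with Z
fg = nuke.toNode("Read_fg_4x")

zm = nuke.nodes.ZMerge()          # depth-merge at the high resolution
zm.setInput(0, bg)
zm.setInput(1, fg)

down = nuke.nodes.Reformat()      # then scale the result back down to working res
down.setInput(0, zm)
down["type"].setValue("scale")
down["scale"].setValue(0.25)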

-jonathan


On Jul 31, 2012, at 9:26 AM, irwit nuke-users-re...@thefoundry.co.uk wrote:

 so is it not really worth pursuing? I was hoping this would tide me over, so to 
 say, until deep image support was more native in vray?
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] nuke viewer and gamma slider

2012-07-15 Thread Jonathan Egstad
If the Viewer's in 8-bit mode then they're correct that the gamma is applied to 
the quantized values rather than the floats (post-LUT vs. pre-LUT).  If the 
Viewer's in float mode using OpenGL it may not matter.

But Ron's right - the gamma slider is not intended for correcting the monitor's 
response, it's only a convenience for you to stretch the color to see into the 
blacks better.

So use the viewer input method or, more correctly, choose the correct luts in 
the Root settings panel.
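
If you do want the gamma baked into a proper viewer process rather than the 
slider, something like this in menu.py should work as a starting point (the 
name and knob string are made up; the ViewerProcess registration API is in the 
Nuke Python docs):

  import nuke

  def _gamma22_ip():
      # Built lazily whenever the Viewer asks for this process.
      return nuke.createNode("Gamma", "value 2.2", inpanel=False)

  # Shows up in the Viewer's viewer-process pulldown.
  nuke.ViewerProcess.register("Gamma 2.2 (IP)", _gamma22_ip, ())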

-jonathan

On Jul 15, 2012, at 7:59 AM, nookieita wrote:

 Recently I have been told not to use the gamma slider on the viewer to gamma 
 correct images for linear workflow (gamma 2.2) because that slider is not 
 32-bit, but instead to use a Gamma node and use it as an input process.
 I have never found any problem with it and don't really see the point. I would 
 love it if someone could confirm this or elaborate on it.
 
 Thank you very much
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Color Space Conversion to DCP-compliant XYZ using Nuke

2012-06-07 Thread Jonathan Egstad
Never mind what I just said - it does matter what order the transforms happen 
in, just like in a 3D transform.
This should probably be a feature request to add a transform order control to 
the Colorspace node much like an Axis node has.
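
Just to illustrate why an order control matters - two linear transforms only 
collapse into a single 3x3 matrix in a fixed order, and swapping them generally 
changes the result (the numbers below are made up for illustration, not real 
primaries):

  import numpy as np

  gamut_remap = np.array([[1.1, -0.1, 0.0],
                          [0.0,  1.0, 0.0],
                          [0.0, -0.2, 1.2]])
  to_xyz = np.array([[0.41, 0.36, 0.18],
                     [0.21, 0.72, 0.07],
                     [0.02, 0.12, 0.95]])

  rgb = np.array([0.5, 0.25, 0.75])
  print(to_xyz @ gamut_remap @ rgb)  # remap primaries first, then convert to XYZ
  print(gamut_remap @ to_xyz @ rgb)  # the other order - generally a different answer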

-jonathan

On Jun 7, 2012, at 9:26 AM, Jonathan Egstad wrote:

 That should not be necessary - you should be able to do it in one operation 
 using a single Colorspace node, as the gamut remap and sRGB-to-XYZ conversion 
 will concatenate into the same 3x3 linear matrix transform.
 
 If you're doing something non-linear in the colorspace node, like changing the 
 gamma response of the data, then the application order can be important.
 
 So this:
 Colorspace {
 primary_in sRGB  primary_out DCI-P3
 colorspace_in sRGB colorspace_out CIE-XYZ
 }
 
 Is equivalent to:
 Colorspace {
 primary_in sRGB  primary_out DCI-P3
 }
 Colorspace {
 colorspace_in sRGB colorspace_out CIE-XYZ
 }
 
 
 -jonathan
 
 On Jun 7, 2012, at 7:59 AM, John Mateer wrote:
 
 Thanks for this. As per the other thread, I think I've found the solution -- 
 use two Colorspace nodes, the first to change the gamut from sRGB to P3 and 
 the second to change the color space from Linear to CIE-XYZ. That seems to 
 do the trick.
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Color Space Conversion to DCP-compliant XYZ using Nuke

2012-06-01 Thread Jonathan Egstad
Also check that you have the correct primaries selected as desaturated content 
can also look like a gamma problem.
I think 'DCI-P3' is the one to use for a DCP.
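
As a rough script-side sketch of the whole conversion (node names are 
hypothetical and the knob values simply mirror the settings discussed in this 
thread - whether 'linear' or 'sRGB' is the right colorspace_in depends on what 
your comp is actually carrying, so treat this as a starting point only):

  import nuke

  comp = nuke.toNode("FinalComp")  # hypothetical upstream node

  cs = nuke.nodes.Colorspace(inputs=[comp],
                             colorspace_in="linear", colorspace_out="CIE-XYZ",
                             primary_in="sRGB", primary_out="DCI-P3")

  # Write with colorspace 'linear' so Nuke doesn't stack its own transform on
  # top of the XYZ data (per Matt's note below). The file path is a placeholder.
  wr = nuke.nodes.Write(inputs=[cs], file="comp_xyz.%04d.dpx",
                        file_type="dpx", colorspace="linear")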

-jonathan

On Jun 1, 2012, at 8:55 AM, Matt Plec wrote:

 Do you have the colorspace on the write node set to linear? If not it will 
 be adding a log conversion to the image that comes in on the assumption that 
 it's currently in linear RGB.
 
 Matt
 
 
 On Fri, Jun 1, 2012 at 10:07 AM, John Mateer 
 nuke-users-re...@thefoundry.co.uk wrote:
 I am working on a project shot on a Red Epic that involves a lot of green 
 screen work so we are using Nuke. We need to test our composites in our 
 screening theatre, which accepts DCP.
 
 If we create DPXs of our comps, convert them to XYZ color space using our 
 Nucoda Film Master and create the DCP in CineAsset the resulting DCP has the 
 correct color gamut. However, if we use the Colorspace node in Nuke to 
 convert to CIE XYZ and use those DPXs in CineAsset to create the file, the 
 resulting DCPs are washed out and do not have the correct color. I have tried 
 changing gamma using a number of settings (DCP requires 2.6) but with no 
 success.
 
 Any suggestions?
 
 Best,
 John
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 
 
 -- 
 Matt Plec
 Senior Product Designer
 The Foundry
 Web: www.thefoundry.co.uk
 
 The Foundry Visionmongers Ltd.
 Registered in England and Wales No: 4642027
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] RotoPaint Speed Concerns

2012-02-17 Thread Jonathan Egstad
 FWIW I had a script with about 4000 paint strokes (mostly clones) last week 
 and had no issues. Prerenders and a sensible number of strokes per node and it 
 was fine (still about 500+ in each node).
 
 Howard - FWIW - A sensible number for me is *a lot*. On Flame, I have not 
 felt the same kind of heaviness or slowness issues when dealing with many 
 rotosplines (in Flame, they're gmasks) that have hundreds of points. So, it 
 came as a bit of a surprise to have Nuke bog down with what would be a 
 typical load for me in the past…  ;^)

There are major architectural differences between Nuke and Flame that contribute 
to this - the primary one being Nuke's Op/Knob open architecture and the way 
knob events propagate through the tree.  Flame is buffer-based and heavily 
OpenGL accelerated, and while Nuke's Roto node has some acceleration it must 
operate within the same architecture that's fighting against it due to the 
low-level design.

They are completely different beasts under the hood - the same compromises that 
keep Nuke from being as interactive as Flame allow Nuke to spank Flame's butt 
in other areas.

I'm sure the Foundry will continue to improve the performance, just don't 
expect Flame-level interactivity at the end of the day...

-jonathan

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Video Range vs Full Range on Rec709 Files?

2012-01-06 Thread Jonathan Egstad
This may be due to the use of ffmpeg.  At a previous company it was necessary 
for me to modify the ffmpeg lib to use rec709 primaries instead of rec601 to 
get accurate color Quicktimes.  It was a real bear to find the responsible 
code...

Ignore me if ffmpeg is not involved in this case…

-jonathan


 Yes, this is a bit of a bummer. To be clear, when reading most
 Quicktimes, Nuke seems to use the Rec601 matrix instead of the Rec709
 one. This causes chroma and saturation shifts compared to FCP, Smoke
 or Baselight's reading of the same files.
 
 The legal range problem is a a separate thing. In the past I've been
 able to read the full range from uncompressed Quicktimes by ticking
 the raw data box on the Read, which causes the superwhites/blacks to
 come in as over 1/below 0. This can then be graded back into place as
 Howard said above. I'm not sure if this works in current versions or
 with ProRes though.
 
 I mostly avoid Quicktimes getting anywhere near Nuke :-/
 
 -- 
 Lewis Saunders
 8 bit .sgi all the way
 London
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Render issue with dpx

2011-12-31 Thread Jonathan Egstad
That behavior is defined on the 'LUT' tab of the root node panel (project 
settings). You'll see that 10-bit and 16-bit files default to Cineon and 
sRGB respectively.
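
If your 16-bit DPX material really is log, you can pin those defaults so this 
doesn't bite again - a minimal init.py sketch, with the Root knob names 
('logLut', 'int16Lut') assumed:

  import nuke

  nuke.knobDefault("Root.logLut", "Cineon")    # 10-bit log files
  nuke.knobDefault("Root.int16Lut", "Cineon")  # 16-bit files otherwise default to sRGB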

-jonathan


On Dec 31, 2011, at 2:27 AM, rahul kv metalra...@yahoo.co.in wrote:

 While changing from 10-bit to 16-bit, for some reason Nuke automatically 
 changes the color space from Cineon to sRGB. So you have to manually change 
 it back to Cineon and your color space issue will be solved.
 
 From: Mahesh Kumar maheshfromonl...@hotmail.com
 To: Nuke support nuke-users@support.thefoundry.co.uk 
 Sent: Wednesday, 28 December 2011 6:57 PM
 Subject: [Nuke-users] Render issue with dpx
 
 Dear friends,
 
  We are facing some issues regarding our rendered files from 
 Nuke, which were in .tga format. They were then taken into After Effects 
 for titling and rendered out as 16-bit .dpx files. The issue 
 is that our final DPX file has become much darker than it initially was.
 
 Please suggest a fix.
 
 thanks in advance 
 Mahesh 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] other channel in vray multichannel EXR

2011-10-23 Thread Jonathan Egstad
I'm not aware of a channel naming spec for OpenEXR. AFAIK an OpenEXR channel 
doesn't have to have a layer/chan split in its name - but Nuke does. Nuke's 
not so much trying to 'fix it' for you as it is attempting to fit the 
OpenEXR channel name into its layer/channel scheme.

Nuke doesn't support single channels without a containing layer, so if the 
channel name doesn't include some obvious layer/chan separator then the code 
attempts to guess the correct layer name based on the channel name, and that's 
where the 'unnamed' or 'other' comes from.

For the last two companies I've worked for I've modified the exrReader plugin 
to automatically handle unrecognized single channels by using the channel name 
as the layer and 'unnamed' as the channel…ie 'renderID.unnamed'.

And what's perhaps confusing about the code in the exrReader is that it's not 
obvious that there's *additional* name munging going on during the channel 
creation code, which automatically recognizes 'red', 'green', 'blue' and 'alpha' 
and assigns those to the 'rgba' layer.
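
For what it's worth, the modified rule I describe above boils down to something 
like this (just a Python sketch of the behaviour, not the actual exrReader code):

  def nuke_layer_channel(exr_channel_name):
      """Map an OpenEXR channel name to Nuke's (layer, channel) pair."""
      rgba_aliases = {"R": "red", "G": "green", "B": "blue", "A": "alpha",
                      "red": "red", "green": "green", "blue": "blue", "alpha": "alpha"}
      if "." in exr_channel_name:
          layer, chan = exr_channel_name.rsplit(".", 1)
          return layer, chan
      if exr_channel_name in rgba_aliases:
          return "rgba", rgba_aliases[exr_channel_name]
      # Unqualified single channel, e.g. V-Ray's 'renderID': use the channel
      # name as the layer and 'unnamed' as the channel.
      return exr_channel_name, "unnamed"

  print(nuke_layer_channel("renderID"))     # ('renderID', 'unnamed')
  print(nuke_layer_channel("diffuse.red"))  # ('diffuse', 'red')
  print(nuke_layer_channel("R"))            # ('rgba', 'red')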

-jonathan

 Does nuke really have a bad name handling bug if the layer names aren't 
 written to the exr spec? Seems like it would be a lot better to make the 
 fixes upstream than wait/have nuke do a fix that could end up creating other 
 issues. I'd be curious how other viewers read your layer? 
 
 I have to admit, I'd personally rather nuke didn't do any automatic fixing 
 for me, like setting colorspaces and calling a layer.Z a depth channel or 
 the UV's 'forward' and 'motion'. Why not leave it alone and call it what the 
 user wanted to call it? Maybe that's the fix because then it wouldn't matter?
 
 jrab
 
 Ryan O'Phelan wrote:
 
 Thanks. Will do.
 
 R
 
 On Thu, Oct 20, 2011 at 3:44 PM, Jonathan Egstad jegs...@earthlink.net 
 wrote:
 The exrReader is not doing the correct name munging when converting the 
 file's channel name into the Nuke channel name.
 
  The vray channel is likely missing a separator in its name ('.') that the 
 reader normally uses to split layer from channel.  Either modify the 
 exrReader.cpp code to handle it, or change the vray channel name to include 
 a '.' - like 'renderID.index' or something like that.
 
 (and submit a bug report to the Foundry describing the bad name handling)
 
 -jonathan
 
 On Oct 19, 2011, at 1:40 PM, Ryan O'Phelan wrote:
 
 let me revise a bit. 
 The UV pass comes into nuke as forward
 the renderID comes in as either other.red or other.green. 
 
 Strange...
 
 R
 
 On Wed, Oct 19, 2011 at 4:34 PM, Ryan O'Phelan designer...@gmail.com 
 wrote:
  I've noticed that our renderId and UV passes from maya vray 2.0 come into 
  nuke as a channel named other.
  The UV layer is missing, but its channels end up as separate channels in 
  other. 
 
 We haven't had this issue before mainly because we weren't using 
 multichannel files. Is there a naming convention that would stop this from 
 happening?
 
 
 Thanks,
 Ryan
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] Nuke + UV tiles

2011-10-19 Thread Jonathan Egstad
If you mean the ScanlineRenderer is in 'uv' mode and you're trying to get the 
unwrapped result of one of the tiles - you'll need the UVTile plugin from 
Nukepedia which specifically supports that.

-jonathan


On Oct 17, 2011, at 2:13 AM, Mason Doran wrote:

 What about baking projections to uv tiles outside 0-1?
 
 On 16/10/2011 23:20, Frank Rueter wrote:
 Yup, it does as Michael is saying.
 The way I usually use this is by utilising a ContactSheet node followed by a 
 Reformat to get my viewport in the lower left corner (first udim/uv tile) 
 and make sure to keep the bounding box.
 
 
 
 On Oct 17, 2011, at 5:19 AM, Michael Habenicht wrote:
 
 Yes, it does, tried it some weeks ago.
 
  You just have to place it outside of the viewport. So the format defines 
  your 0-1 range. If you want to put your image in the 1-2 range, transform it 
  to the right by your whole width.
 
 Best regards,
 Michael
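
In script form, the offset Michael describes is just a translate by one full 
format width per tile - a quick sketch (node name hypothetical):

  import nuke

  tex = nuke.toNode("TileTexture")   # texture to push into the 1-2 UV range
  fmt = tex.format()

  shift = nuke.nodes.Transform(inputs=[tex])
  # One full format width to the right = one UV tile over; the data outside
  # the format survives in the bounding box.
  shift["translate"].setValue([fmt.width(), 0])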
 
 - Original Message -
 From: masondo...@gmail.com
 To: nuke-users@support.thefoundry.co.uk
 Date: 16.10.2011 16:48:25
 Subject: [Nuke-users] Nuke + UV tiles
 
 
 
 Does Nuke support geometry with UV tiles outside of the 0-1 range?  I am
 using Mari for some projection work and want to take the geometry into
 Nuke without changing the UVs.
 
 
 
 cheers,
 
 m
 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 
 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Nuke + UV tiles

2011-10-16 Thread Jonathan Egstad
Nukepedia > Plugins > UVTile

Only drawback is that you need to compile it up.

-jonathan

On Oct 16, 2011, at 7:57 AM, Ron Ganbar ron...@gmail.com wrote:

 I believe it doesn't.
 
 
 Ron Ganbar
 email: ron...@gmail.com
 tel: +44 (0)7968 007 309 [UK]
  +972 (0)54 255 9765 [Israel]
 url: http://ronganbar.wordpress.com/
 
 
 
 On 16 October 2011 16:48, Mason Doran masondo...@gmail.com wrote:
 
 
 Does Nuke support geometry with UV tiles outside of the 0-1 range?  I am 
 using Mari for some projection work and want to take the geometry into Nuke 
 without changing the UVs.
 
 
 
 cheers,
 
 m
 
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

Re: [Nuke-users] LUT global location, variable(s)???

2011-10-07 Thread Jonathan Egstad
 Yeah, we use RV too and I've already implemented the LUTs pulldown.  Although 
 I can't figure out how to force No Conversion when RV opens.  The Alexa files 
 auto-default to DPX/Cineon.

You have to override the default colorspace handling code which is triggered on 
a new-source event - I believe it's in the 'source_setup.mu' file.
-jonathan

 Thoughts?!
 
 Yeah, I'm feeling the same as you on FC too Nathan, although having that as a 
 secondary is nice and it pisses me off to no end to not be able to figure 
 this out or find a workaround.
 
 Thanks everyone!
 
 
 
 On Thu, Oct 6, 2011 at 7:09 PM, Nathan Rusch nathan_ru...@hotmail.com wrote:
 Well, to be fair, FrameCycler is probably the black sheep in this 
 conversation... from what I can tell they just sort of do their own thing and 
 expect everyone to deal with it. Never once have I heard of them beta testing 
 a product or feature or asking for user feedback or input. Thus, my stance on 
 Framecycler has always been to steer away from it when things start to get 
 serious as far as maintaining any kind of a consistent, unified pipeline.
  
 Offhand, RV can actually do centralized configuration stuff even with local 
 installations (we use it that way). You can set up a default preferences file 
 in a central location and point all users to it to start off with, and 
 whichever config file is newer (between the central and the user’s) will take 
 precedence in cases where a preference is defined in both. And obviously 
 being able to roll your own extensions is a huge benefit for review tools, 
 LUT repositories, etc.
  
 -Nathan
 
  
 From: Dan Walker
 Sent: Thursday, October 06, 2011 10:52 AM
 To: Nuke user discussion
 Subject: Re: [Nuke-users] LUT global location, variable(s)???
  
 Not too sure what else you were looking for really. 
 
 
 Being able to control configuration settings on a piece of software (eg. 
 FrameCycler) without having to copy something to the software's native 
 location (per machine).  
 
 Nuke and RV can do it.  RV is installed on our server and there are env's you 
 can set for custom lut locations.  You can also modify the User_settings.xml 
 file that FC generates to hard code a path, but again, that file is local to 
 the machine FC is being run on.
 
 
 <LUTPath1>\\xxx\xxx_xxx\release\nuke\config\project\XXX\versions\v001\nuke_LUT\</LUTPath1>
 
 After doing quite a lot of research, I'm still stumped with FrameCycler.  Yes, 
 there are variables to control where FrameCycler puts its temp files (which 
 is where the FrameCycler User_settings.xml also resides) but it's ridiculous 
 to assume, when defining a LUT setting for everyone, that you'll need to do 
 it on a per-machine basis.
 
 Yeah, it's great that there are software deployment systems that can control 
 the push of a config and all the other features that come with it, but come 
 on! This isn't the 90's for cry'in out loud.
 been developed to assume that a facility would maybe want to have the 
 feature of setting a global config and not assume all facilities are 
 interconnected when it comes to Pipeline and IT departments.  What about 
 facilities that are global (meaning all around the world) that reference a 
 cloud server?  Should I have to push a config to all the users in London, 
 Hong Kong, etc
 
 My .02
 
 
 
 On Tue, Oct 4, 2011 at 3:28 PM, Dave Goodbourn dave.goodbo...@u-fx.co.uk 
 wrote:
 Now that all depends on your definition of quick! Once you have a software 
 distribution system setup, making any change to 100+ workstations and render 
 nodes can only take a matter of minutes!
  
 You can always do it with scripting to copy config files and setup variables 
 on each machine. Not too sure what else you were looking for really.
 
 
 
 On 4 Oct 2011, at 23:07, Dan Walker walkerd...@gmail.com wrote:
 
 Wow, no quick solutions? That just sucks now don't it! :-)
 
 On Oct 4, 2011 10:27 AM, Dave Goodbourn dave.goodbo...@u-fx.co.uk wrote:
  Have you looked at software deployment software like WPKG (http://wpkg.org/) at 
  all? You can very easily deploy all the settings and environment variables 
  to all your machines from the comfort of your own seat! Works well for us. 
  It's only Windows based but there are plenty of *nix solutions like 
  Puppet (http://puppetlabs.com/).
  
  D.
  
  On Tue, Oct 4, 2011 at 6:12 PM, Dan Walker walkerd...@gmail.com wrote:
  
  Hi Torax,
 
  Unfortunately software cannot be installed on a server. Even if it was, you 
  would still have to set up each individual artist with the manually defined 
  environment variables or paths to the LUTs location. I'm specifically 
  talking about FrameCycler when manually defining paths. All Nuke software 
  is installed locally on each machine and I'm looking for a way to point 
  FrameCycler and RV at LUTs installed globally. I want all of this done 
  without having to go to each artist machine and perform a manual setup. 
  Nuke
  

Re: [SPAM] Re: [Nuke-users] LogC again

2011-08-19 Thread Jonathan Egstad
 If you're talking about clipping at 0.0, this is a feature of the 
 DD::Image::LUT class; I assume the read/write nodes use this to do the 
 conversions.
 
 If there was a way to work around this it would be good, as quite a few 
 log-lin conversions map 0.0 linear to a negative value, and clipping 
 them makes the conversions non-reversible.

The LUT class supports negative output values (or values greater than one), but 
the input to the LUT is clamped by default.  So reading a log file should produce 
negative numbers, but writing one will have the negative values chopped off.

The alternative is to use a ColorLookup node which can access the same 
functions that are defined on the root node's lut page, but you can specify an 
input range for it to handle - and it will do extrapolation off the ends of the 
curves.  It's also accelerated so it should be faster than using a raw 
Animation lookup.

-jonathan

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] DOF Bokeh variations ?

2011-07-13 Thread Jonathan Egstad
How soon can you get a version working for 6.3 which accepts deep data...?


On Jul 11, 2011, at 12:43 PM, Colin Doncaster wrote:

 I can get you a trial license of http://peregrinelabs.com/bokeh if you want 
 to give it a go.  I'm sure the tool can be extended to support what you're 
 trying to do. 
 
 cheers
 
 On 2011-07-11, at 12:18 PM, a...@curvstudios.com wrote:
 
 Anyone have a nice solution / tool for animated growing / shrinking of DOF 
 bokeh on, say, particles moving away from camera?
 
 I'm trying to create flotsam for several underwater shots with particles. 
 I can cut the particles into different depth slices, then vary their DOF 
 bokeh, and re-combine... but it's in animating the proximity to camera that it 
 gets difficult to prevent the dissolve anomaly... effect.
 
 Has anyone used the openFX Frischluft plugin with success in this scenario? Or 
 does it exhibit the same problem?
 
 thx,
 Ari
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] zDepth glitch in lower resolutions? (scanline render)

2011-07-07 Thread Jonathan Egstad
 I currently need to get the depth output of the scanline 
 renderer. It works fine so far, but as soon as you lower the resolution in the 
 viewer, major glitches in the depth map occur. At first I thought they might 
 originate in the node that I'm developing, but then I found that when rendering a 
 standard sphere through the scanline renderer, glitches may show up - in 
 terms of false depth values. 
 
 I've attached screenshots that should outline the problem. The first 3 
 impressions are from a standard sphere rendered through the scanline renderer 
 at different viewer resolutions, where the contrast was optimized by my own 
 plugin. The latter two screenshots show the original depth output, one in a 
 standard view, one with optimized contrast in Photoshop - my plugin is not in 
 the stream.
 
 Can someone tell me what this is - or if this is a bug - or how I could work 
 around it.

They're cracks at the edges of polys caused by a numerical instability that was 
never tracked down.  It's a bug, though I'm not sure if there's an official bug 
report.
There's no workaround that I know of except to change the poly count, which 
won't really fix the problem, but it might minimize its effects.

-jonathan

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users


Re: [Nuke-users] Grade clamping

2011-04-21 Thread Jonathan Egstad
Yes of course there is - defaulting it on eliminated many, many color problems 
caused by negative numbers feeding into merge operations.  We (DD) identified 
the Grade node early on as a common cause of many of the negative value issues 
we were seeing once we started using floating-point color significantly in the 
late '90s.

If you don't agree with the default just change it to 'false' with a 
knobDefault.
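
For example, dropping this into init.py/menu.py gives you unclamped Grades by 
default (knob name assumed to be 'black_clamp'):

  import nuke

  # New Grade nodes come up with black clamp off.
  nuke.knobDefault("Grade.black_clamp", "false")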

-jonathan

On Apr 21, 2011, at 8:12 AM, Hugo Léveillé wrote:

 Is there a good reason why Nuke's Grade node has the black clamp checked? 
 More often than anything else, artists end up destroying super blacks on 
 dark plates.
 
 Am I missing something? 
 
 
 -- 
  Hugo Léveillé
  TD Compositing, Vision Globale
  hu...@fastmail.net
 
 ___
 Nuke-users mailing list
 Nuke-users@support.thefoundry.co.uk
 http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users

___
Nuke-users mailing list
Nuke-users@support.thefoundry.co.uk
http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users