Hi,
Comparing these numbers might give you a hint:
cat /proc/sys/fs/file-max        # the system-wide max-open-files setting
lsof                             # list all open files
lsof -p <nuke_pid> | wc -l       # count the Nuke process's open files
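If you want to watch the count during the render, something like this works
(just a sketch - <nuke_pid> is a placeholder you'd fill in yourself):

pgrep -f Nuke                               # find the Nuke process ID
watch -n 5 'lsof -p <nuke_pid> | wc -l'     # re-count its open files every 5s
grep 'open files' /proc/<nuke_pid>/limits   # the per-process limit in effect

If the count climbs steadily toward that limit frame after frame, readers
aren't being closed.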
On 28/11/11 11:36, John RA Benson wrote:
Seeing this on Linux, SUSE 11, Nuke 6.2v1. No frameblends
or holds, but there are frame offsets. The write node being
rendered has a total of 15 nodes in its tree: 3 reads, 3 time
offsets, 3 transforms, 3 reformats, 2 merges and a write.
There are about 20 other reads with offsets in the script,
but not in the tree being written. If those are being opened
too, that's probably not helping, but it's still a fraction
of the number of reads in a lot of scripts around here.
We see this error a lot with frameblends, but usually those
scripts are much bigger and have a lot more reads; in that
case the scripts die on open, so the number of frames
wouldn't matter. It just seemed extra strange to die randomly
in the middle of a render.
Sorta glad it's happening to others and not just here, if
only in the shared misery. Systems is going to increase my
file limit tomorrow; we'll see if it helps. I feel they
shouldn't have to - the limit is there for a reason, right?
Seems like Nathan is probably correct that Nuke is being lazy
about destroying FileReader instances.
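For reference, raising the per-user limit on SUSE usually goes through
/etc/security/limits.conf (the user name and values below are made-up
examples, not anything from this thread):

ulimit -Sn    # current soft limit for this shell
ulimit -Hn    # current hard limit
# then add lines like these to /etc/security/limits.conf and re-login:
#   renderuser  soft  nofile  8192
#   renderuser  hard  nofile  16384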
cheers
jrab
We had that with many EXR-type readers and
timeoffsets... Removing every unnecessary read
eliminated the error, but sometimes scrubbing the
timeline too much brought the error back.
Gabor