Mike,
I am looking into this, and I was able to figure out how we could potentially
archive (and eventually age off from the archive) a piece of data for which
there is still an open file handle. Specifically, I can only see how this
could happen when we have a problem reading a
I found something in the logs on the nodes where I had the problem. A
ContentNotFoundException begins occurring on these nodes, and after many
thousands of occurrences we eventually get "too many open files". Once I do
surgery on the content repository so that the ContentNotFoundException stops
happening,
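For anyone trying to reproduce the correlation above, this is roughly how I checked how often the exception repeats before the fd limit is hit. The log content here is a stand-in so the snippet runs on its own; the `nifi-app.log` name and the `org.apache.nifi` process pattern in the trailing comment are assumptions about a typical install, not something taken from this thread.

```shell
# Stand-in log file so the example is self-contained (a real node would use
# something like logs/nifi-app.log instead):
log=$(mktemp)
printf 'ERROR ContentNotFoundException: Could not find content\n%.0s' 1 2 3 4 5 > "$log"

# Count how many times the exception repeated:
grep -c 'ContentNotFoundException' "$log"   # prints 5 for this sample
rm -f "$log"

# On a live node you would also watch the open-fd count climb toward ulimit -n:
#   ls /proc/$(pgrep -f org.apache.nifi)/fd | wc -l
```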
Mike
Ok that is a good data point. In my case they are all in archive but
I do agree that isn't super meaningful because in reality nothing
should ever be open for writing in the archive.
If you can, and have enough logging enabled, try searching for the first
part of the filename in your logs.
Another data point ... we had archiving turned on at first, and most
(but not all) of the files that lsof reported were of the form
/content_repository/0/archive/123456789-123456 (deleted).
We turned archiving off, hoping that was related in some way, but it was
not.
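For anyone unfamiliar with the "(deleted)" suffix lsof prints: it means the path was unlinked while a process still held the file open, so the inode (and its disk space) survives until the last handle closes. A minimal self-contained sketch of that state, using a throwaway temp file rather than the real content_repository layout:

```shell
# Create a file, hold it open on fd 3, then unlink it.
f=$(mktemp)
exec 3<> "$f"                  # keep an open handle on fd 3
rm "$f"                        # unlink the path; space is NOT reclaimed yet

# The kernel marks the still-open target as deleted, which is exactly
# what lsof reports:
ls -l /proc/$$/fd/3 | grep -c 'deleted'   # prints 1 while fd 3 is open

exec 3>&-                      # closing the handle finally frees the space
# On a live node the equivalent check would be:  lsof -p <nifi-pid> | grep deleted
```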
-- Mike
On Wed, Apr 27, 2016 at 11:53
Mike,
Definitely does not sound familiar. However, I just looked up what you
describe and I do see it. In my case there are only three files, but
they are sitting there open for writing by the nifi process and yet
have been deleted. So I do believe there is an issue... will dig in a
bit but