Dale,

Where there is a FetchFile there is usually a ListFile. And while the
symptom of the memory issue is showing up in FetchFile, I am curious
whether the issue might actually be caused in ListFile. How many files
are in the directory being listed?

Mark,

Are we using a stream-friendly API to list files, and do we know
whether that API is really doing things in a stream-friendly way on
all platforms?
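
For illustration, here is a minimal sketch of the distinction being asked about, assuming plain JDK APIs (this is not NiFi's actual ListFile code): `File.listFiles()` materializes the entire entry array up front, which for a directory with millions of files means a large allocation in one shot, whereas `Files.newDirectoryStream()` iterates entries lazily, one at a time.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ListingDemo {
    public static void main(String[] args) throws IOException {
        // Throwaway directory with a few files, just to have something to list.
        Path dir = Files.createTempDirectory("listing-demo");
        for (int i = 0; i < 5; i++) {
            Files.createFile(dir.resolve("file-" + i + ".txt"));
        }

        // Non-streaming: dir.toFile().listFiles() would build the full
        // File[] before returning — the whole directory in memory at once.
        //
        // Streaming: DirectoryStream hands back one entry at a time, so
        // memory use stays flat regardless of how many files the
        // directory contains.
        int count = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                count++; // process each entry without holding all of them
            }
        }
        System.out.println("entries seen: " + count);
    }
}
```

Whether the lazy behavior actually holds on every platform depends on the underlying filesystem provider, which is exactly the question above.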

Thanks
Joe

On Wed, May 4, 2016 at 7:37 AM, dale.chang13 <dale.chan...@outlook.com> wrote:
> So I still haven't deciphered this problem, and I am assuming that this is an
> IOPS problem instead of a RAM issue.
>
> I have monitored the memory of the nodes in my cluster during the flow,
> before and after the "cannot allocate memory" exception occurs. However,
> there is no memory leak: according to jconsole, the memory used by the JVM
> remains steady between 50 and 100 MB. As a note, I have allocated 1 GB
> as a minimum and 4 GB as a maximum for the heap size on each node.
>
> There are also no changes to the number of active threads (35) in jconsole
> while the NiFi gui shows up to 20 active threads. Additionally the number of
> classes loaded and CPU usage remains the same throughout the whole NiFi
> operation.
>
> The only difference I have seen is disk activity on the drive that
> NiFi is configured to read from and write to.
>
> My question is: does it make sense that this is an IO issue, or a RAM/memory
> issue?
>
>
>
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/FetchFile-Cannot-Allocate-Enough-Memory-tp9720p9901.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
