Hi all,

I have been investigating an OutOfMemoryError that occurs when using the HDFS event sink. I have traced the problem to the WriterLinkedHashMap sfWriters field: depending on how you generate your file name/directory path, you can run out of memory fairly quickly. The map keeps references to BucketWriter objects around longer than they are needed, so you need to either set *idleTimeout* to a non-zero value or cap the number of open files with *maxOpenFiles*. I was able to reproduce this consistently and took a heap dump to verify that the objects were being retained. I will update this Jira to reflect my findings: https://issues.apache.org/jira/browse/FLUME-1326?jql=project%20%3D%20FLUME%20AND%20text%20~%20%22memory%20leak%22

dave
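For anyone hitting the same thing, here is a minimal sketch of the two mitigations in an agent config. The agent and sink names (a1, k1) and the path are made up for illustration; the two hdfs.* properties are the point:

```properties
# Hypothetical agent "a1" with an HDFS sink "k1" (names are examples)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d

# Close a BucketWriter after 60 seconds of inactivity so its
# entry can be dropped from sfWriters instead of lingering
a1.sinks.k1.hdfs.idleTimeout = 60

# Bound how many BucketWriters the sink keeps open at once;
# the eldest is evicted and closed when the limit is exceeded
a1.sinks.k1.hdfs.maxOpenFiles = 500
```

Either setting alone helps; using both bounds memory even when the escaped path produces many distinct buckets.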
