Tian,

Ok - and was this with the 512MB heap again?  Can you try with a 1GB
or 2GB heap so we can tell whether we're just looking at the minimum
memory the flow needs, or at what sounds like a leak?
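
For reference, the heap settings live in conf/bootstrap.conf; a 1GB heap
would look roughly like this (2GB would be -Xms2g/-Xmx2g):

java.arg.2=-Xms1g
java.arg.3=-Xmx1g

If it still dies, adding an extra line such as
java.arg.13=-XX:+HeapDumpOnOutOfMemoryError (any unused arg number works)
will leave a heap dump behind that we can look at.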

Thanks

On Mon, Sep 25, 2017 at 12:41 PM, Lou Tian <[email protected]> wrote:
> Hi Joe,
>
> I tested with a simple flow.
> Only 4 processors: HandleHttpRequest, RouteOnContent, HandleHttpResponse and
> DebugFlow.
> I ran the test 3 times (10 minutes per run, with at most 50 users).
> It worked fine for the first 2 runs, and on the third run I got the error.
>
> I copied part of the log file below. Please check whether it helps identify
> the error.
>
> 2017-09-25 18:21:45,673 INFO [Provenance Maintenance Thread-2]
> o.a.n.p.PersistentProvenanceRepository Created new Provenance Event Writers
> for events starting with ID 131158
>
> 2017-09-25 18:24:00,921 ERROR [FileSystemRepository Workers Thread-3]
> o.a.n.c.repository.FileSystemRepository Failed to handle destructable claims
> due to java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,921 ERROR [Flow Service Tasks Thread-1]
> org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Flow Service
> Tasks Thread-1,5,main]: java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,922 WARN [qtp574205748-107]
> o.e.jetty.util.thread.QueuedThreadPool Unexpected thread death:
> org.eclipse.jetty.util.thread.QueuedThreadPool$2@1e3a5886 in
> qtp574205748{STARTED,8<=13<=200,i=4,q=0}
> 2017-09-25 18:24:00,923 INFO [Provenance Repository Rollover Thread-1]
> o.a.n.p.lucene.SimpleIndexManager Index Writer for
> ./provenance_repository/index-1506354574000 has been returned to Index
> Manager and is no longer in use. Closing Index Writer
> 2017-09-25 18:24:00,925 ERROR [qtp574205748-107] org.apache.nifi.NiFi An
> Unknown Error Occurred in Thread Thread[qtp574205748-107,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,929 INFO [pool-10-thread-1]
> o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile
> Repository
> 2017-09-25 18:24:00,929 ERROR [Flow Service Tasks Thread-1]
> org.apache.nifi.NiFi
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,928 ERROR [Listen to Bootstrap]
> org.apache.nifi.BootstrapListener Failed to process request from Bootstrap
> due to java.lang.OutOfMemoryError: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,929 WARN [NiFi Web Server-215]
> org.eclipse.jetty.server.HttpChannel /nifi-api/flow/controller/bulletins
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,930 ERROR [pool-30-thread-1] org.apache.nifi.NiFi An
> Unknown Error Occurred in Thread Thread[pool-30-thread-1,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,929 ERROR [Event-Driven Process Thread-3]
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped
> abnormally
> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
> heap space
>    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>    at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,931 ERROR [Scheduler-1985086499] org.apache.nifi.NiFi An
> Unknown Error Occurred in Thread Thread[Scheduler-1985086499,5,main]:
> java.lang.OutOfMemoryError: Java heap space
> 2017-09-25 18:24:00,930 ERROR [Cleanup Archive for default]
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped
> abnormally
> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
> heap space
>    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>    at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>    at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.OutOfMemoryError: Java heap space
>
>
>
>
> Kind Regards,
> Tian
>
> On Mon, Sep 25, 2017 at 4:02 PM, Lou Tian <[email protected]> wrote:
>>
>> Hi Joe, thanks for your reply.
>> I will try those tests and update you with the results.
>>
>> On Mon, Sep 25, 2017 at 3:56 PM, Joe Witt <[email protected]> wrote:
>>>
>>> Tian
>>>
>>> The most common sources of memory leaks in custom processors are:
>>> 1) Loading large objects (the contents of the flowfile, for example)
>>> into memory through a byte[], or using libraries that do this without
>>> realizing it.  Doing this in parallel makes the problem even more
>>> obvious (see the sketch below).
>>> 2) Caching objects in memory without bounding the cache, or not sizing
>>> the JVM heap appropriately for your flow.
>>> 3) Pulling lots of flowfiles into a single session, or creating many in
>>> a single session.
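>>>
>>> For (1), the usual fix is to stream the content rather than buffer it.
>>> A rough sketch of the pattern (MyProcessor and REL_SUCCESS are just
>>> illustrative names, not your code):
>>>
>>> import java.io.IOException;
>>> import java.io.InputStream;
>>> import java.util.Collections;
>>> import java.util.Set;
>>> import org.apache.nifi.flowfile.FlowFile;
>>> import org.apache.nifi.processor.AbstractProcessor;
>>> import org.apache.nifi.processor.ProcessContext;
>>> import org.apache.nifi.processor.ProcessSession;
>>> import org.apache.nifi.processor.Relationship;
>>> import org.apache.nifi.processor.exception.ProcessException;
>>> import org.apache.nifi.processor.io.InputStreamCallback;
>>>
>>> public class MyProcessor extends AbstractProcessor {
>>>
>>>     static final Relationship REL_SUCCESS = new Relationship.Builder()
>>>             .name("success").build();
>>>
>>>     @Override
>>>     public Set<Relationship> getRelationships() {
>>>         return Collections.singleton(REL_SUCCESS);
>>>     }
>>>
>>>     @Override
>>>     public void onTrigger(ProcessContext context, ProcessSession session)
>>>             throws ProcessException {
>>>         FlowFile flowFile = session.get();
>>>         if (flowFile == null) {
>>>             return;
>>>         }
>>>         // Read the content incrementally instead of pulling it all
>>>         // into a byte[] (or letting a library do that for you).
>>>         session.read(flowFile, new InputStreamCallback() {
>>>             @Override
>>>             public void process(InputStream in) throws IOException {
>>>                 byte[] buffer = new byte[8192];
>>>                 int len;
>>>                 while ((len = in.read(buffer)) != -1) {
>>>                     // handle buffer[0..len) here
>>>                 }
>>>             }
>>>         });
>>>         session.transfer(flowFile, REL_SUCCESS);
>>>     }
>>> }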
>>>
>>> Try moving to a 1GB heap and see if the problem still happens.  Is it
>>> as fast?  Does the problem go away?  Try 2GB if needed.  If it still
>>> happens after that, suspect a leak.
>>>
>>> We don't have a benchmarking/unit-test sort of mechanism for that.
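>>> The closest you could get is to hand-roll something around the standard
>>> TestRunner -- purely a sketch (MyProcessor and the numbers are made up,
>>> and the heap ceiling is arbitrary):
>>>
>>> import java.nio.charset.StandardCharsets;
>>> import org.apache.nifi.util.TestRunner;
>>> import org.apache.nifi.util.TestRunners;
>>> import org.junit.Assert;
>>> import org.junit.Test;
>>>
>>> public class MyProcessorMemoryTest {
>>>
>>>     @Test
>>>     public void repeatedRunsShouldNotGrowTheHeap() {
>>>         TestRunner runner = TestRunners.newTestRunner(new MyProcessor());
>>>
>>>         for (int i = 0; i < 10_000; i++) {
>>>             runner.enqueue("test payload".getBytes(StandardCharsets.UTF_8));
>>>             runner.run();
>>>             runner.clearTransferState();
>>>         }
>>>
>>>         System.gc();
>>>         long used = Runtime.getRuntime().totalMemory()
>>>                 - Runtime.getRuntime().freeMemory();
>>>         // Arbitrary ceiling; steady growth across iterations is the real signal.
>>>         Assert.assertTrue("heap still holds " + used + " bytes",
>>>                 used < 256L * 1024 * 1024);
>>>     }
>>> }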
>>>
>>> Thanks
>>>
>>> On Mon, Sep 25, 2017 at 9:45 AM, Lou Tian <[email protected]> wrote:
>>> > Hi Joe,
>>> >
>>> > 1. I will build a simple flow without our customised processor and test
>>> > again.
>>> >     That is a good test idea. We saw the OOME under HandleHttpRequest,
>>> > so we never thought about the others.
>>> >
>>> > 2. About our customised processor: we use lots of these customised
>>> > processors.
>>> >     The properties are dynamic. We fetch the properties with a REST call
>>> > and cache them.
>>> >     Sorry, I cannot show you the code.
>>> >
>>> > 3. We have unit tests for the customised processors.
>>> >    Is there a way to test for memory leaks in a unit test using methods
>>> > provided by NiFi?
>>> >
>>> > Thanks.
>>> >
>>> > On Mon, Sep 25, 2017 at 3:28 PM, Joe Witt <[email protected]> wrote:
>>> >>
>>> >> Tian,
>>> >>
>>> >> Ok thanks.  I'd try removing your customized processor from the
>>> >> flow entirely and running your tests.  This will give you a sense of
>>> >> base nifi and the stock processors.  Once you're comfortable with that,
>>> >> then add your processor back in.
>>> >>
>>> >> I say this because if your custom processor is using up the heap we
>>> >> will see OOME in various places.  That it shows up in the core
>>> >> framework code, for example, does not mean that is the cause.
>>> >>
>>> >> Does your custom processor hold anything in class level variables?
>>> >> Does it open a session and keep accumulating flowfiles?  If you can
>>> >> talk more about what it is doing or show a link to the code we could
>>> >> quickly assess that.
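>>> >>
>>> >> If you do cache things in class-level fields, make sure the cache is
>>> >> bounded.  A minimal sketch of the idea (the class name and sizes are
>>> >> arbitrary, not a drop-in for your code):
>>> >>
>>> >> import java.util.Collections;
>>> >> import java.util.LinkedHashMap;
>>> >> import java.util.Map;
>>> >>
>>> >> public class BoundedPropertyCache {
>>> >>     private static final int MAX_ENTRIES = 1000; // arbitrary cap, tune to your flow
>>> >>
>>> >>     // LRU map that evicts its eldest entry once the cap is reached,
>>> >>     // so the class-level field can never grow without bound.
>>> >>     private final Map<String, String> cache = Collections.synchronizedMap(
>>> >>             new LinkedHashMap<String, String>(16, 0.75f, true) {
>>> >>                 @Override
>>> >>                 protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
>>> >>                     return size() > MAX_ENTRIES;
>>> >>                 }
>>> >>             });
>>> >>
>>> >>     public String get(String key) {
>>> >>         return cache.get(key);
>>> >>     }
>>> >>
>>> >>     public void put(String key, String value) {
>>> >>         cache.put(key, value);
>>> >>     }
>>> >> }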
>>> >>
>>> >> Thanks
>>> >>
>>> >> On Mon, Sep 25, 2017 at 9:24 AM, Lou Tian <[email protected]>
>>> >> wrote:
>>> >> > 1. The HandleHttpRequest processor gets the message.
>>> >> > 2. The message is routed to other processors based on its attributes.
>>> >> > 3. Our customised processor processes the message.
>>> >> > 4. The message is then redirected to HandleHttpResponse.
>>> >> >
>>> >> > On Mon, Sep 25, 2017 at 3:20 PM, Joe Witt <[email protected]>
>>> >> > wrote:
>>> >> >>
>>> >> >> What is the flow doing in between the request/response portion?
>>> >> >> Please share more details about the configuration overall.
>>> >> >>
>>> >> >> Thanks
>>> >> >>
>>> >> >> On Mon, Sep 25, 2017 at 9:16 AM, Lou Tian <[email protected]>
>>> >> >> wrote:
>>> >> >> > Hi Joe,
>>> >> >> >
>>> >> >> > java version: 1.8.0_121
>>> >> >> > heap size:
>>> >> >> > # JVM memory settings
>>> >> >> > java.arg.2=-Xms512m
>>> >> >> > java.arg.3=-Xmx512m
>>> >> >> > nifi version: 1.3.0
>>> >> >> >
>>> >> >> > Also, we run NiFi in Docker.
>>> >> >> >
>>> >> >> > Kind Regards,
>>> >> >> > Tian
>>> >> >> >
>>> >> >> > On Mon, Sep 25, 2017 at 2:39 PM, Joe Witt <[email protected]>
>>> >> >> > wrote:
>>> >> >> >>
>>> >> >> >> Tian,
>>> >> >> >>
>>> >> >> >> Please provide information on the JRE being used (java -version)
>>> >> >> >> and
>>> >> >> >> the environment configuration.  How large is your heap?  This
>>> >> >> >> can be
>>> >> >> >> found in conf/bootstrap.conf.  What version of nifi are you
>>> >> >> >> using?
>>> >> >> >>
>>> >> >> >> Thanks
>>> >> >> >>
>>> >> >> >> On Mon, Sep 25, 2017 at 8:29 AM, Lou Tian
>>> >> >> >> <[email protected]>
>>> >> >> >> wrote:
>>> >> >> >> > Hi,
>>> >> >> >> >
>>> >> >> >> > We are doing performance testing of our NiFi flow with Gatling, but
>>> >> >> >> > after several runs NiFi always hits an OutOfMemory error. I did not
>>> >> >> >> > find similar questions on the mailing list; if you have already
>>> >> >> >> > answered a similar question, please let me know.
>>> >> >> >> >
>>> >> >> >> > Problem description:
>>> >> >> >> > We have a NiFi flow, and the normal flow works fine. To evaluate
>>> >> >> >> > whether our flow can handle the load, we decided to do a performance
>>> >> >> >> > test with Gatling.
>>> >> >> >> >
>>> >> >> >> > 1) We added two processors, HandleHttpRequest at the start of the flow
>>> >> >> >> > and HandleHttpResponse at the end, so our NiFi behaves like a web
>>> >> >> >> > service and Gatling can evaluate the response time.
>>> >> >> >> > 2) Then we continuously push messages to the HandleHttpRequest
>>> >> >> >> > processor.
>>> >> >> >> >
>>> >> >> >> > Problem:
>>> >> >> >> > NiFi can only handle two runs. The third time it fails and we have to
>>> >> >> >> > restart NiFi. I copied some of the error log here.
>>> >> >> >> >
>>> >> >> >> >> o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
>>> >> >> >> >> HandleHttpRequest[id=**] failed to process session due to
>>> >> >> >> >> java.lang.OutOfMemoryError: Java heap space: {}
>>> >> >> >> >> o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
>>> >> >> >> >> HandleHttpRequest[id=**] failed to process session due to
>>> >> >> >> >> java.lang.OutOfMemoryError: Java heap space: {}
>>> >> >> >> >> java.lang.OutOfMemoryError: Java heap space
>>> >> >> >> >>     at java.util.HashMap.values(HashMap.java:958)
>>> >> >> >> >>     at org.apache.nifi.controller.repository.StandardProcessSession.resetWriteClaims(StandardProcessSession.java:2720)
>>> >> >> >> >>     at org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:213)
>>> >> >> >> >>     at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:318)
>>> >> >> >> >>     at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>>> >> >> >> >>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>>> >> >> >> >>     at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>>> >> >> >> >>     at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>>> >> >> >> >>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>>> >> >> >> >>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> >> >> >> >>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>>> >> >> >> >>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>> >> >> >> >>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>> >> >> >> >>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> >> >> >> >>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> >> >> >> >>     at java.lang.Thread.run(Thread.java:748)
>>> >> >> >> >
>>> >> >> >> >
>>> >> >> >> > So our final questions:
>>> >> >> >> > 1. Do you think this is a problem with the HandleHttpRequest
>>> >> >> >> > processor, or is something wrong in our configuration? Is there
>>> >> >> >> > anything we can do to avoid this problem?
>>> >> >> >> > 2. If it is the processor, do you plan to fix it in a coming version?
>>> >> >> >> >
>>> >> >> >> > Thank you so much for your reply.
>>> >> >> >> >
>>> >> >> >> > Kind Regards,
>>> >> >> >> > Tian
>>> >> >> >> >
>>> >> >> >
>>> >> >> >
>>> >> >> >
>>> >> >> >
>>> >> >> > --
>>> >> >> > Kind Regards,
>>> >> >> >
>>> >> >> > Tian Lou
>>> >> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> > Kind Regards,
>>> >> >
>>> >> > Tian Lou
>>> >> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Kind Regards,
>>> >
>>> > Tian Lou
>>> >
>>
>>
>>
>>
>> --
>> Kind Regards,
>>
>> Tian Lou
>>
>
>
>
> --
> Kind Regards,
>
> Tian Lou
>
