Tian,

Please provide information on the JRE being used (java -version) and your environment configuration. How large is your heap? The heap settings can be found in conf/bootstrap.conf. Also, what version of NiFi are you using?
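For reference, the JVM memory settings live in conf/bootstrap.conf as java.arg entries; the stock entries look roughly like the snippet below (512m is just the shipped default, so the values in your install may differ), and the -Xmx value is the number I'm after:

    # conf/bootstrap.conf -- initial and maximum JVM heap size (defaults shown)
    java.arg.2=-Xms512m
    java.arg.3=-Xmx512m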
Thanks

On Mon, Sep 25, 2017 at 8:29 AM, Lou Tian <[email protected]> wrote:
> Hi,
>
> We are doing a performance test of our NiFi flow with Gatling, but after
> several runs NiFi always hits an OutOfMemory error. I did not find
> similar questions in the mailing list; if you have already answered a
> similar question, please let me know.
>
> Problem description:
> We have a NiFi flow, and the normal flow works fine. To evaluate whether our
> flow can handle the load, we decided to do a performance test with
> Gatling.
>
> 1) We added two processors, HandleHttpRequest at the start of the flow and
> HandleHttpResponse at the end of the flow, so our NiFi behaves like a web
> service and Gatling can measure the response time.
> 2) Then we continuously push messages to the HandleHttpRequest processor.
>
> Problem:
> NiFi can only handle two runs. The third time it fails and we have to
> restart NiFi. I copied some of the error log here:
>
>> o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
>> HandleHttpRequest[id=**] failed to process session due to
>> java.lang.OutOfMemoryError: Java heap space: {}
>> o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=**]
>> HandleHttpRequest[id=**] failed to process session due to
>> java.lang.OutOfMemoryError: Java heap space: {}
>> java.lang.OutOfMemoryError: Java heap space
>>   at java.util.HashMap.values(HashMap.java:958)
>>   at org.apache.nifi.controller.repository.StandardProcessSession.resetWriteClaims(StandardProcessSession.java:2720)
>>   at org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:213)
>>   at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:318)
>>   at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28)
>>   at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
>>   at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
>>   at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>>   at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
>>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>   at java.lang.Thread.run(Thread.java:748)
>
> So our final questions:
> 1. Do you think this is a problem with the HandleHttpRequest processor, or is
> something wrong in our configuration? Is there anything we can do to avoid
> this problem?
> 2. If it is the processor, do you plan to fix it in an upcoming version?
>
> Thank you so much for your reply.
>
> Kind Regards,
> Tian
>
