[
https://issues.apache.org/jira/browse/NIFI-1732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Joseph Percivall updated NIFI-1732:
-----------------------------------
Description:
The following e-mail was received on the dev mailing list and points out an
issue in HandleHttpRequest: the user cannot configure the Jetty idle timeout.
As a result, requests that take longer than 30 seconds time out, causing
problems:
{quote}
I've been experimenting with the HandleHttpRequest/Response processors in NiFi
0.5.1 and have noticed an issue that I've not been able to resolve. I'm hoping
that I'm simply missing a configuration item, but I've been unable to find the
solution.
The scenario is this: HandleHttpRequest --> Long Processing (> 30 seconds) -->
HandleHttpResponse. It appears that the Jetty server backing HandleHttpRequest
has a built-in idle timeout of 30000 ms (see the _idleTimeout value in
jetty-server/src/main/java/org/eclipse/jetty/server/AbstractConnector.java).
In my test flow, 30 seconds after an HTTP request comes in, a second request
comes into the flow. It has the same information, except that the
http.context.identifier and the FlowFile UUID have changed, and the
http.dispatcher.type has changed from REQUEST to ERROR. From my online research
(http://stackoverflow.com/questions/30786939/jetty-replay-request-on-timeout?),
this re-request with a type of ERROR comes in after Jetty determines that a
request has timed out.
This would not normally be a big deal: I was able to use RouteOnAttribute to
capture all ERROR requests without responding. However, those requests are
never cleared from the StandardHttpContextMap. I've tested this by setting the
number of requests allowed by the StandardHttpContextMap to 4 and running 4 of
my long request/response tests. Each request is eventually responded to
correctly, but because each takes over 30 seconds, each also generates an ERROR
request that is stored in the StandardHttpContextMap. If I then leave the
system alone for much longer than the Request Timeout parameter of the
StandardHttpContextMap and attempt a request, I get a 503 response saying that
the queue is full and no requests are allowed. No requests are allowed at all
until I delete and recreate the map.
It seems unlikely to me that no one has attempted to use these processors in
this fashion. However, looking through the unit tests for this processor, it
seems that no timeout over 30 seconds was ever tested, so I thought it worth a
conversation.
So finally, is there a configuration item to extend the Jetty server's idle
timeout? Or is there a better way to ensure that these bogus requests don't get
stuck permanently in the StandardHttpContextMap? I appreciate any pointers you
can give.
{quote}
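For reference, Jetty does expose this per connector: ServerConnector (via AbstractConnector) has a setIdleTimeout(long millis) method, so a configurable HandleHttpRequest property could pass its value straight through. The behavior being described (a connection aborted after 30 s of no traffic, even though a response is still being computed) can be reproduced with plain sockets. A minimal sketch of the mechanism, not Jetty's actual implementation:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class IdleTimeoutDemo {

    /** Returns true if the server-side read aborted on an idle timeout. */
    public static boolean idleTimeoutFires(int timeoutMillis) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             // The client connects but never sends anything, like a request
             // whose response is still being computed past the timeout.
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            accepted.setSoTimeout(timeoutMillis); // stand-in for Jetty's 30000 ms
            try {
                accepted.getInputStream().read();
                return false; // bytes arrived (or stream closed) before the timeout
            } catch (SocketTimeoutException e) {
                return true;  // no traffic for timeoutMillis: the idle timeout fired
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("idle timeout fired: " + idleTimeoutFires(200));
    }
}
```

Raising the connector's idle timeout above the longest expected processing time would prevent the spurious ERROR re-dispatch in the first place.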
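On the StandardHttpContextMap side, the reporter's workaround (routing ERROR dispatches away without ever responding) leaks slots because nothing completes those contexts. The Request Timeout should reclaim them; the sketch below shows the eviction behavior one would expect, using hypothetical names, not NiFi's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a bounded context map that lazily evicts entries
// older than the request timeout, so contexts that never receive a
// response cannot fill the map forever.
public class ExpiringContextMap {
    private static class Entry {
        final Object context;
        final long insertedAt;
        Entry(Object context, long insertedAt) {
            this.context = context;
            this.insertedAt = insertedAt;
        }
    }

    private final Map<String, Entry> map = new ConcurrentHashMap<>();
    private final int maxSize;
    private final long timeoutMillis;

    public ExpiringContextMap(int maxSize, long timeoutMillis) {
        this.maxSize = maxSize;
        this.timeoutMillis = timeoutMillis;
    }

    /** Returns false (caller should send 503) only if the map is full of live entries. */
    public boolean register(String id, Object context, long now) {
        evictExpired(now); // reclaim slots held by timed-out requests first
        if (map.size() >= maxSize) {
            return false;  // genuinely full: reject with 503
        }
        map.put(id, new Entry(context, now));
        return true;
    }

    /** Called by the response side to complete a request and release its context. */
    public Object complete(String id) {
        Entry e = map.remove(id);
        return e == null ? null : e.context;
    }

    private void evictExpired(long now) {
        map.entrySet().removeIf(e -> now - e.getValue().insertedAt > timeoutMillis);
    }

    public int size() {
        return map.size();
    }
}
```

With eviction like this, the 503s would clear once the timed-out ERROR contexts aged past the Request Timeout, instead of persisting until the map is deleted and recreated.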
was:
The following e-mail was received on the dev mailing list and points out an
issue in HandleHttpRequest: the user cannot configure the Jetty idle timeout.
As a result, requests that take longer than 30 seconds time out, causing
problems:
{quote}
Hi NiFi Team,
I've been experimenting with the HandleHttpRequest/Response processors in NiFi
0.5.1 and have noticed an issue that I've not been able to resolve. I'm hoping
that I'm simply missing a configuration item, but I've been unable to find the
solution.
The scenario is this: HandleHttpRequest --> Long Processing (> 30 seconds) -->
HandleHttpResponse. It appears that the Jetty server backing HandleHttpRequest
has a built-in idle timeout of 30000 ms (see the _idleTimeout value in
jetty-server/src/main/java/org/eclipse/jetty/server/AbstractConnector.java).
In my test flow, 30 seconds after an HTTP request comes in, a second request
comes into the flow. It has the same information, except that the
http.context.identifier and the FlowFile UUID have changed, and the
http.dispatcher.type has changed from REQUEST to ERROR. From my online research
(http://stackoverflow.com/questions/30786939/jetty-replay-request-on-timeout?),
this re-request with a type of ERROR comes in after Jetty determines that a
request has timed out.
This would not normally be a big deal: I was able to use RouteOnAttribute to
capture all ERROR requests without responding. However, those requests are
never cleared from the StandardHttpContextMap. I've tested this by setting the
number of requests allowed by the StandardHttpContextMap to 4 and running 4 of
my long request/response tests. Each request is eventually responded to
correctly, but because each takes over 30 seconds, each also generates an ERROR
request that is stored in the StandardHttpContextMap. If I then leave the
system alone for much longer than the Request Timeout parameter of the
StandardHttpContextMap and attempt a request, I get a 503 response saying that
the queue is full and no requests are allowed. No requests are allowed at all
until I delete and recreate the map.
It seems unlikely to me that no one has attempted to use these processors in
this fashion. However, looking through the unit tests for this processor, it
seems that no timeout over 30 seconds was ever tested, so I thought it worth a
conversation.
So finally, is there a configuration item to extend the Jetty server's idle
timeout? Or is there a better way to ensure that these bogus requests don't get
stuck permanently in the StandardHttpContextMap? I appreciate any pointers you
can give.
Thanks,
Luke Coder
BIT Systems
CACI - NCS
941-907-8803 x705
6851 Professional Pkwy W
Sarasota, FL 34240
{quote}
> HandleHttpRequest should allow user to configure jetty timeout
> --------------------------------------------------------------
>
> Key: NIFI-1732
> URL: https://issues.apache.org/jira/browse/NIFI-1732
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Pierre Villard
> Labels: beginner, newbie
> Attachments: NIFI-1732.xml
>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)