Thanks for all the suggestions. A set of summary answers:

1) Responses from the service take less than 5 minutes.
2) Most requests do get through successfully, and I can access the service via curl from the host running the containers. In fact, I can successfully send the failed flowfiles to the service that way.
3) NiFi and the service are running in containers created by the same docker-compose stack, so there is nothing in between that could terminate the connection.
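For reference, the check in (2) amounts to replaying the content of a failed flowfile directly against the service. A rough Python equivalent of the curl test is below; the endpoint URL and file name are placeholders for illustration, not values from this thread:

import requests

# Replay the content of a flowfile that failed in InvokeHTTP directly against
# the service, bypassing NiFi. Endpoint and file name are assumed.
with open("failed-flowfile.bin", "rb") as payload:
    resp = requests.post(
        "http://localhost:8080/process",           # assumed service endpoint
        data=payload,
        headers={"Content-Type": "application/octet-stream"},
        timeout=600,                               # match InvokeHTTP's read timeout
    )
print(resp.status_code, resp.elapsed)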
I have a suspicion that it may relate to thread/worker settings for the service - perhaps Flask will close a request if the total allocated thread/worker resources are exceeded. I'm running an experiment with this at the moment (a rough sketch of the idea is below the quoted thread).

On Thu, Mar 16, 2023 at 1:15 AM Jeremy Pemberton-Pigott <[email protected]> wrote:

> You may also want to check if there's something in the middle terminating
> it for you: nginx, a proxy, an OpenShift route, ...
>
> Regards,
>
> Jeremy
>
> On 15 Mar 2023, at 08:07, Patrick Timmins <[email protected]> wrote:
>
> Hello Richard,
>
> Do you have *any* requests from the InvokeHTTP processor that are actually
> getting through to the service running in the Docker container? Can you
> access the service that's running in the Docker container via another
> method (e.g. browser or curl) to verify that you have the container
> configured properly for ingress from outside the container?
>
> Not knowing anything else, I would guess this is a Docker container
> networking ingress/egress issue.
>
> Pat
>
> On 3/14/2023 11:26 PM, Richard Beare wrote:
>
> I do. I didn't spot anything that looked related last time, but will check
> again.
>
> On Wed, Mar 15, 2023 at 2:00 PM Joe Witt <[email protected]> wrote:
>
>> Hello
>>
>> I believe this is the remote service killing the socket. Do you have
>> logs for that service to check?
>>
>> Thanks
>>
>> On Tue, Mar 14, 2023 at 7:54 PM Richard Beare <[email protected]>
>> wrote:
>>
>>> Hi Everyone,
>>> I have an InvokeHTTP processor experiencing the error below on a small
>>> proportion of flowfiles. The service it is accessing runs in another
>>> Docker container on the same host, and I've adjusted the request rate to
>>> be quite low - the processor is running single threaded. The settings
>>> are pretty standard: socket connect timeout 5 s, socket read timeout
>>> 600 s, socket idle timeout 5 min, socket idle connections 5. The
>>> processor is on a 1 s schedule (I don't want it to be that slow). My
>>> NiFi setup is not clustered.
>>>
>>> Googling suggests that the error could be caused by a lack of disk
>>> space, but that doesn't appear to be the case (all the NiFi storage is
>>> on a drive with plenty of space).
>>>
>>> What else should I be looking for? The operation does take a while to
>>> run, but nowhere near 10 minutes. I've configured the web service to
>>> support several workers with the intention of processing many flowfiles
>>> quickly, but this error is limiting what I can do. Sending the
>>> unsuccessful flowfiles to the service using curl does work, so it is
>>> not a problem with the data.
>>>
>>> Any ideas?
>>>
>>> 2023-03-14 01:48:46,334 ERROR [Timer-Driven Process Thread-36]
>>> o.a.nifi.processors.standard.InvokeHTTP
>>> InvokeHTTP[id=cb72d2e0-d5c0-36c1-19b6-13a542a56e60] Request Processing
>>> failed:
>>> StandardFlowFileRecord[uuid=eff900e7-81ce-4312-abe2-218cb78d3ca1,claim=StandardContentClaim
>>> [resourceClaim=StandardResourceClaim[id=1678758435916-5, container=default,
>>> section=5], offset=11133732,
>>> length=125544],offset=0,name=eff900e7-81ce-4312-abe2-218cb78d3ca1,size=125544]
>>> org.apache.nifi.processor.exception.ProcessException: IOException thrown
>>> from InvokeHTTP[id=cb72d2e0-d5c0-36c1-19b6-13a542a56e60]:
>>> java.net.SocketException: Broken pipe (Write failed)
>>> at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2716)
>>> at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2661)
>>> at org.apache.nifi.processors.standard.InvokeHTTP$1.writeTo(InvokeHTTP.java:1170)
>>> at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:59)
>>> at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
>>
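To make the worker/timeout suspicion above concrete: if the Flask service happens to be run under gunicorn (the thread does not say how it is served, so this is an assumption), a worker that has not finished a request within the configured timeout (30 seconds by default) is killed, and the client can see the connection drop while it is still writing the request body - i.e. a broken pipe. A hypothetical gunicorn.conf.py sketch of the kind of settings involved:

# gunicorn.conf.py - hypothetical settings for the Flask service; how the app
# is actually served is not stated in the thread, so this is an assumption.

# Number of worker processes and threads per worker; too few means concurrent
# requests queue up behind long-running ones.
workers = 4
worker_class = "gthread"
threads = 2

# gunicorn kills a worker that hasn't completed a request within this many
# seconds (default 30); the client then sees the connection closed, e.g. a
# broken pipe while still writing. Set it well above the slowest expected
# request (InvokeHTTP's read timeout here is 600 s).
timeout = 600

# Keep idle connections open longer than the client's idle timeout (5 min in
# the InvokeHTTP settings) so reused connections aren't closed underneath it.
keepalive = 330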
