James

that is good feedback though - we'd be happy to hear about
confusing/limited user experiences with provenance and queue viewing
that lead you to prefer other techniques like looking at
logs/dead-lettering/etc.

Thanks
Joe

On Wed, Feb 22, 2017 at 10:56 AM, James McMahon <[email protected]> wrote:
> The Provenance feature can sometimes present a very wide range of results,
> and I've found it difficult to see my content and attributes
> for the flowfile of interest. Queueing up the results after a processor
> limits the review set to just those results. I've found it to be easier to
> get at what I want to see during test and development. I will be the first
> to own up to my own ignorance though Mark <lol>, and will try harder to use
> the provenance features instead.
>
> Our flow is not actually that high. This threshold was reached after a few
> days of processing, and this is perhaps the biggest reason I asked about
> Request Expiration: I was operating under the
> impression that Request Expiration would force the removal of all those
> older than 10 minutes, regardless of connection state or anything. Could it
> be that these "split paths" / queued results I created put some sort of hold
> on the flowfile that prevents its removal from the context map, even though
> I have indeed sent the HTTP response?
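>
> (Rough arithmetic, as a sanity check on my own claim: to accumulate 10,000
> live entries under a 10 minute expiration we would need roughly
> 10,000 / 600 s, or about 17 unanswered requests per second, sustained. Our
> traffic is nowhere near that, which is why I suspect entries are being
> pinned rather than expiring.)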
>
> Thank you again for your insights. I am learning a great deal about what
> this is doing under the hood, and hope to learn why, exactly, Request
> Expiration isn't working as I anticipated. I must be doing something that
> negatively impacts its purpose.
>
> Jim
>
> On Wed, Feb 22, 2017 at 10:36 AM, Mark Payne <[email protected]> wrote:
>>
>> Jim,
>>
>> re: two Success paths - Yes, you should send only one of them to the
>> HandleHttpResponse. I'm curious though - why use
>> a disabled processor and queue data up instead of using the Data
>> Provenance feature?
>>
>> Yes, StandardHttpContextMap should, on its own, be removing any entries
>> that exceed the timeout. How many requests per
>> second are you seeing? I am assuming that you are receiving a pretty high
>> rate if it is at the point of containing over 10K entries
>> with a 10 minute timeout. If you are not seeing that many requests, then
>> there may be something else going on there.
>>
>> Thanks
>> -Mark
>>
>>
>>
>> On Feb 22, 2017, at 10:31 AM, James McMahon <[email protected]> wrote:
>>
>> Additional questions about this. Immediately following my
>> HandleHttpRequest processor, I have an ExecuteScript processor that then
>> sends flowfile copies out along two Success paths. One path eventually
>> culminates in a HandleHttpResponse that has the aforementioned
>> auto-termination of Success and Failure results. The second path is to a
>> MonitorActivity
>> processor that is disabled, to permit me to queue up and review incoming
>> flowfile results after ExecuteScript during dev and test. Does that second
>> path also have to send a response? Isn't it enough that the ContextMap is
>> cleared by the response from the first path?
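>>
>> For reference, the script body is roughly this shape (simplified here for
>> illustration; the attribute name is made up):
>>
>> # ExecuteScript (Jython) body -- illustrative sketch only
>> flowFile = session.get()
>> if flowFile is not None:
>>     # ... inspect/derive attributes from the incoming request here ...
>>     flowFile = session.putAttribute(flowFile, 'script.reviewed', 'true')
>>     # Every flowfile pulled from the session must be transferred somewhere,
>>     # or the framework fails the session with "transfer relationship not
>>     # specified".
>>     session.transfer(flowFile, REL_SUCCESS)
>>
>> Since the single Success relationship is drawn to two connections, NiFi
>> clones the flowfile: one copy heads toward HandleHttpResponse and the other
>> queues up at the disabled MonitorActivity.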
>>
>> Second question: how does this ever happen? Doesn't the Request Expiration
>> I set on the StandardHttpContextMap force the obliteration of all entries
>> that age beyond that point?
>>
>> Jim
>>
>> On Wed, Feb 22, 2017 at 10:13 AM, James McMahon <[email protected]>
>> wrote:
>>>
>>> I may well have that, Mark. I have a number of paths where I have
>>> HandleHttpResponse that auto-terminates Failures. That would cause such a
>>> problem, wouldn't it?
>>>
>>> How do people handle this situation: an app does a POST, and so we handle
>>> the request. The app closes or times out for whatever reason. The
>>> HandleHttpResponse is unable to reply. Should those not be auto-terminated?
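>>>
>>> For example, a caller along these lines (illustrative only; the host,
>>> port, path, and payload are made up) will give up and close the connection
>>> after a few seconds, leaving NiFi holding the request on its side:
>>>
>>> import requests
>>>
>>> try:
>>>     # Host, port, path, and payload below are made-up placeholders.
>>>     resp = requests.post('https://nifi-host:8011/contentListener',
>>>                          data='{"sample": "payload"}',
>>>                          timeout=5,  # client gives up after 5 seconds
>>>                          verify='/path/to/ca.pem')
>>>     print(resp.status_code)
>>> except requests.exceptions.Timeout:
>>>     # The caller has gone away, but the entry for this request is still
>>>     # sitting in the HttpContextMap waiting on a HandleHttpResponse.
>>>     print('request timed out on the client side')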
>>>
>>> In a situation like this then, Mark, are these the steps to recover?
>>> 1. HandleHttpResponse at the end of all paths
>>> 2. Do not auto-terminate failure conditions
>>> 3. DELETE the StandardHttpContextMap (to clear the log jam)
>>> 4. Recreate it fresh, which I presume creates it empty (I hope)
>>>
>>> What else must I do to recover? And how do I properly handle those
>>> "broken connection" situations?
>>>
>>> On Wed, Feb 22, 2017 at 10:06 AM, Mark Payne <[email protected]>
>>> wrote:
>>>>
>>>> Jim,
>>>>
>>>> You likely have a path through your flow where you are receiving an HTTP
>>>> Request via HandleHttpRequest
>>>> but you never respond via a HandleHttpResponse. When using these
>>>> processors, it's important that every
>>>> incoming FlowFile go to a HandleHttpResponse processor. Do you have some
>>>> path in your flow where you
>>>> are not responding to the request?
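>>>>
>>>> In other words, every branch should end up shaped roughly like this
>>>> (sketch only; success and failure can go to separate HandleHttpResponse
>>>> processors with different status codes):
>>>>
>>>> HandleHttpRequest --> ...processing... --success--> HandleHttpResponse (200)
>>>>                                        \--failure--> HandleHttpResponse (500)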
>>>>
>>>> Thanks
>>>> -Mark
>>>>
>>>>
>>>> > On Feb 22, 2017, at 9:58 AM, James McMahon <[email protected]>
>>>> > wrote:
>>>> >
>>>> > I am getting the following errors when my users attempt to use curl or
>>>> > python to post to my HandleHttpRequest processor (cannot export actual
>>>> > messages, must select pieces and retype here):
>>>> > WARNING
>>>> > Received request from [IP address is here] but could not process it
>>>> > because too many requests are already outstanding; responding with
>>>> > SERVICE_UNAVAILABLE
>>>> > ERROR
>>>> > ...claim=StandardContentClaim....
>>>> > transfer relationship not specified
>>>> >
>>>> > None of my apps can post to NiFi.
>>>> >
>>>> > I have a StandardSSLContextService and a StandardHttpContextMap, both
>>>> > of which are enabled. I suspect I may have inadvertently caused this 
>>>> > problem
>>>> > by setting my ContextMap parameters badly. Here are those params:
>>>> > Maximum Outstanding Requests: 10000
>>>> > Request Expiration: 10 min
>>>> >
>>>> > I've looked across my workflow and no flowfiles are queued up. So my
>>>> > expectation is that there should be ample space in my ContextMap. But 
>>>> > these
>>>> > errors indicate otherwise. How do I fix this?
>>>> > Thanks very much in advance for your help.
>>>> > Jim
>>>>
>>>
>>
>>
>
