James & Mark,
I added the logger entry to logback.xml
And it's logging like a champ!
Thank you both!
Carl
On Thu, Feb 9, 2017 at 11:37 AM, James Wing wrote:
> Carl,
>
> I think logging still works in ExecuteScript. But starting in NiFi 1.0,
> the default log level for processors was set
That said, I think we can improve our handling of the consumer (Kafka
client) and session (NiFi transactional logic) and solve the problem.
It is related to our back-pressure/consumer handling, so we can fix
that.
Thanks
Joe
On Thu, Feb 9, 2017 at 1:38 PM, Bryan Bende wrote:
> No
Good morning. How can we export and import our workflow backup snapshots
from one established NiFi server instance to a new instance of NiFi stood
up on a new server?
Also, how does one restore a previous version of a workflow backup snapshot
created from the administrative tool?
Thank you. -Jim
Hi Koji,
That looks like a very neat approach to solving the problem indeed. However,
I suspect keeping the files in the upstream connection would have some
challenges as the processor doesn't yet know about the files it has to
monitor, right?
Anyway, thanks for looking into this. Let me know if
Thank you guys, I will look to see what I can do to contribute.
On Thu, Feb 9, 2017 at 1:19 PM, Joe Witt wrote:
> That said I think we can improve our handling of the consumer (kafka
> client) and session (nifi transactional logic) and solve the problem.
> It is related to
Hi Bryan,
I am new to NiFi. I tried to create a custom processor with the Hive connection
pooling service as you suggested, but the same issue occurs in the custom
processor: it shows only the connection ID. I have no idea about the pom file
dependency. Here are the steps I followed to
Hi Bas,
It worked as expected (at least for me).
In a processor, it's possible to transfer an incoming FlowFile back to
itself, so the processor can inspect the FlowFile and is free to
decide whether to put it back or transfer it to another relationship.
I've created a JIRA, NIFI-3452, and submitted a Pull
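(As a rough illustration of that self-transfer idea, here is a minimal ExecuteScript-style Groovy sketch; it is not the NIFI-3452 patch, and the 'ready' attribute check is purely hypothetical. 'session' and REL_SUCCESS are bound by ExecuteScript.)

    def flowFile = session.get()
    if (flowFile != null) {
        // Hypothetical condition: release the FlowFile once some attribute is present.
        if (flowFile.getAttribute('ready') == 'true') {
            session.transfer(flowFile, REL_SUCCESS)
        } else {
            // transfer(FlowFile) without a relationship returns the FlowFile to the
            // incoming queue, so the processor can examine it again on a later run.
            session.transfer(flowFile)
        }
    }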
Hi
I sent multiple files, along with other information, to a remote NiFi
instance. The remote NiFi instance receives the HTTP request as follows:
-16842747222614868992046715641
Content-Disposition: form-data; name="script";
filename="Generate_GPM_JPDF_example.py"
Hello,
Moving flows between NiFi instances is definitely something the
community is working to improve [1].
Currently there are two main options: templates, or moving the whole
flow.xml.gz.
You can create templates of process groups and then import them into
another instance; some people have
Hello,
I'm not sure I fully understand the question...
You would need HandleHttpRequest -> some processors ->
HandleHttpResponse all in a running state in order for someone to
receive a response.
Can you elaborate on what you are trying to do?
Thanks,
Bryan
On Thu, Feb 9, 2017 at 4:32 AM,
Hello Nick,
First, I assume "had a queue back up" means having a queue that is
applying back-pressure. Sorry if that has a different meaning.
I tried to reproduce it with the following flow:
ConsumeKafka_0_10
-- success: Back Pressure Object Threshold = 10
-- UpdateAttribute (Stopped)
Then I used
Hello,
This usually means your custom NAR is not correctly linked to the NAR
where the service is.
Take a look at these resources for examples:
https://cwiki.apache.org/confluence/display/NIFI/Maven+Projects+for+Extensions#MavenProjectsforExtensions-LinkingProcessorsandControllerServices
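(For reference, the linkage described on that page usually boils down to two pom entries like the ones below. This is a hedged sketch using the generic DBCP service API; the exact artifactIds for a Hive-specific service API may differ in your NiFi version.)

    <!-- processors module pom.xml: compile against the service API only -->
    <dependency>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-dbcp-service-api</artifactId>
        <scope>provided</scope>
    </dependency>

    <!-- NAR module pom.xml: depend on the NAR that provides that API -->
    <dependency>
        <groupId>org.apache.nifi</groupId>
        <artifactId>nifi-standard-services-api-nar</artifactId>
        <type>nar</type>
    </dependency>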
I believe you are running into this issue:
https://issues.apache.org/jira/browse/NIFI-3189
When back-pressure happens on the queue coming out of ConsumeKafka,
this can last for longer than session.timeout.ms, and when the
processor resumes executing, it receives this error on the first
Yeah, this is probably a good case/cause for use of the pause concept
in Kafka consumers.
On Thu, Feb 9, 2017 at 9:49 AM, Bryan Bende wrote:
> I believe you are running into this issue:
>
> https://issues.apache.org/jira/browse/NIFI-3189
>
> When back-pressure happens on the
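(To illustrate the "pause" idea in plain Kafka consumer terms: a hedged sketch, not NiFi's actual ConsumeKafka code. 'backPressureEngaged' is a hypothetical flag standing in for the framework's back-pressure signal.)

    import org.apache.kafka.clients.consumer.KafkaConsumer

    // While back-pressure is engaged, pause the assigned partitions so poll()
    // keeps the consumer active in the group without fetching more records.
    def pollRespectingBackPressure(KafkaConsumer consumer, boolean backPressureEngaged) {
        if (backPressureEngaged) {
            consumer.pause(consumer.assignment())
        } else if (!consumer.paused().isEmpty()) {
            consumer.resume(consumer.paused())
        }
        // Polling while paused returns no records but keeps the consumer healthy in the group.
        return consumer.poll(100)
    }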
Hey Carl,
In version 1.0.0, the default log level for the 'org.apache.nifi.processors'
logger was changed from INFO to WARN. This was done because there is a
tremendous amount of information logged by most of the standard processors,
and many users were complaining that the logs
Carl,
I think logging still works in ExecuteScript. But starting in NiFi 1.0,
the default log level for processors was changed from 'INFO' to 'WARN' to
reduce log volumes. To receive INFO messages from ExecuteScript, you will
want to do one or both of two things:
1.) Set the bulletin level of
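(For anyone following along, the logback change amounts to a logger entry along these lines in conf/logback.xml. The exact logger name is an assumption here; 'org.apache.nifi.processors.script' covers the scripting bundle that ExecuteScript lives in.)

    <!-- Raise the scripting processors back to INFO; everything else keeps the 1.x default of WARN -->
    <logger name="org.apache.nifi.processors.script" level="INFO"/>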
Hi,
I recently upgraded from version 0.7 to 1.1.1.
This can be recreated on Windows and Linux (Amazon EC2).
log.info is not working in 1.1.1 from a Groovy script.
It did work in NiFi 0.7 with a Groovy script in the ExecuteScript processor.
I fed ExecuteScript with a GenerateFlowFile
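(A minimal ExecuteScript Groovy body along the lines of what is being tested here, as a guess rather than the actual script; 'session', 'log', and REL_SUCCESS are bound by the processor.)

    def flowFile = session.get()
    if (flowFile != null) {
        // With the 1.x default logback settings this INFO line is suppressed,
        // because processor loggers default to WARN.
        log.info("ExecuteScript saw flow file ${flowFile.getAttribute('uuid')}")
        session.transfer(flowFile, REL_SUCCESS)
    }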
No data loss, but you may process the same message twice in NiFi.
The ordering of operations is:
1) poll Kafka
2) write received data to flow file
3) commit NiFi session so data in flow file cannot be lost
4) commit offsets to Kafka
Doing it this way achieves at-least-once processing, which
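(A hedged Groovy-flavored sketch of that ordering, illustrative only and not the actual ConsumeKafka source; REL_SUCCESS stands for the processor's success relationship.)

    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.nifi.processor.ProcessSession
    import org.apache.nifi.processor.io.OutputStreamCallback

    def consumeOnce(KafkaConsumer consumer, ProcessSession session, def REL_SUCCESS) {
        def records = consumer.poll(100)                  // 1) poll Kafka
        records.each { record ->
            def flowFile = session.create()
            flowFile = session.write(flowFile, { out ->   // 2) write received data to a flow file
                out.write(record.value())
            } as OutputStreamCallback)
            session.transfer(flowFile, REL_SUCCESS)
        }
        session.commit()                                  // 3) commit the NiFi session; the data can no longer be lost
        consumer.commitSync()                             // 4) commit offsets to Kafka
        // A crash between 3) and 4) leaves the offsets uncommitted, so the same
        // records are polled again on restart: possible duplicates, but no loss.
    }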
Good morning. The first processor in my workflow is a HandleHttpRequest.
How do we set up to send a HandleHttpResponse if that processor is stopped
and so not in a running state?
Thank you. -Jim Mc.
That makes perfect sense. To be clear, is there any potential to lose
messages in this scenario?
On Thu, Feb 9, 2017 at 7:16 AM, Joe Witt wrote:
> yeah this is probably a good case/cause for use of the pause concept
> in kafka consumers.
>
> On Thu, Feb 9, 2017 at 9:49 AM,