Russ,
This sort of deviates from your original question, but would applying a
flowfile expiration time on the connection (during experimentation) work with
your flow? This would keep the queue more manageable.
> On Jan 10, 2017, at 4:35 PM, Russell Bateman wrote:
>
To update this thread, ...
1. Setting up a no-op processor to "drain" the queue doesn't seem to
present any speed advantage over right-clicking the queue and choosing
Empty queue.
2. Removing the flowfile and provenance repositories (cd
flowfile_repository ; rm -rf *) is instantaneous.
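The repository wipe in item 2 can be sketched as a small script. This is a sketch only: the repository locations come from nifi.properties (the names below are the common defaults, but verify yours), and NiFi must be stopped first or the repository will be corrupted.

```python
import shutil
from pathlib import Path

def clear_repository(repo_dir: str) -> int:
    """Delete everything inside repo_dir but keep the directory itself.

    Returns the number of top-level entries removed. NiFi must be
    stopped before running this.
    """
    repo = Path(repo_dir)
    removed = 0
    for entry in repo.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)   # partitions, journals, etc.
        else:
            entry.unlink()         # loose files at the top level
        removed += 1
    return removed

# Hypothetical default locations; check nifi.properties for the real ones.
# for repo in ("flowfile_repository", "provenance_repository"):
#     clear_repository(repo)
```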
These have been invaluable insights, Mark. Thank you very much for your
help. -Jim
On Tue, Jan 10, 2017 at 2:13 PM, Mark Payne wrote:
> Jim,
>
> Off the top of my head, I don't remember the reason for two dates,
> specifically. I think it may have had to do
> with ensuring
I'm trying your suggestion right now. It would certainly be an easy way
to avoid accumulation, but, in terms of voiding a queue with millions of
files saved up until I'm ready to lose them (at the end of a test run),
it doesn't seem to offer any improvement in speed over just right-clicking and
In my case, I'm experimenting with huge flows and huge numbers of files.
I wasn't thinking about how much work I'd create for myself by storing
up files in a queue at the end (or, in some cases, at intermediate
points) when I might want to clean house and start over.
So, I can just bring NiFi
If I want a sink node to get rid of flowfiles while I'm messing around, I add a
'dev/null' UpdateAttribute processor that does nothing. Set the output to
automatically terminate, then just connect the queue to it and start it up. If
you want to retain some output to look at, just stop it
Millions or gajillions will indeed take a while, as they have to swap in as
presently implemented. We could certainly optimize that if it is a common
need.
Blowing away the repos will certainly do the trick and be faster, though it
is clearly a blunt instrument.
Do you think we need an express queue
If I'm experimenting and have gajillions of flowfiles in a queue that
takes a very long time to empty from the UI, is there a quicker way? I
can certainly bounce NiFi, delete files, both, etc.
Jim,
Off the top of my head, I don't remember the reason for two dates,
specifically. I think it may have had to do
with ensuring that if we run at time X, we could potentially pick up a file
that also has a timestamp of X. Then,
we could potentially have 1+ files come in at time X also, after
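The race the quoted explanation describes can be shown with a toy listing filter. This is an illustration of the idea only, not NiFi's actual ListFile implementation: if you keep just one timestamp and filter strictly greater than it, a file that arrives with the same timestamp as the last listing gets missed.

```python
def naive_new_files(files, last_run_ts):
    """Keep only files strictly newer than the last listing time."""
    return [name for name, ts in files if ts > last_run_ts]

def safer_new_files(files, last_run_ts, already_listed):
    """Also re-check files whose timestamp equals the last listing time,
    skipping names already emitted in that listing."""
    return [name for name, ts in files
            if ts > last_run_ts
            or (ts == last_run_ts and name not in already_listed)]

# A listing ran at time X=100 and saw a.txt (ts=100).
# Afterwards b.txt also arrives, also with ts=100.
files = [("a.txt", 100), ("b.txt", 100)]

naive_new_files(files, 100)              # misses b.txt entirely
safer_new_files(files, 100, {"a.txt"})   # picks up b.txt
```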
Hello,
There is currently no direct conversion from XML to Avro. You would likely
have to go from XML to JSON, and then use ConvertJsonToAvro.
There also isn't a direct conversion from XML to JSON, but some people have
been able to do it with TransformXML using an XSLT stylesheet.
Your best bet may be to
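To illustrate the XML-to-JSON step mentioned above outside of NiFi, here is a minimal Python sketch. The element names and the nesting rules are made up for illustration; real documents usually also need attribute and namespace handling, which an XSLT stylesheet would cover.

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    """Tiny XML-to-dict conversion: leaf text becomes a string,
    child elements become nested keys, repeated children become lists."""
    children = list(elem)
    if not children:
        return elem.text
    result = {}
    for child in children:
        value = element_to_dict(child)
        if child.tag in result:
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        else:
            result[child.tag] = value
    return result

xml_doc = "<person><name>Ada</name><lang>en</lang><lang>fr</lang></person>"
root = ET.fromstring(xml_doc)
as_json = json.dumps({root.tag: element_to_dict(root)})
```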
The "Maximum File Count" controls how many files are allowed in the
destination directory that PutFile is writing to. So if you set it to 10
and then 100 flow files come in to PutFile, it will write 10 files to the
directory and transfer 90 of them to the failure relationship.
The properties on
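The 10/90 split described above can be modeled in a few lines. This is a toy model of the routing arithmetic only, not PutFile's code; the real processor also counts files already present in the destination directory, which the `existing` parameter stands in for.

```python
def route_put_file(incoming, max_file_count, existing=0):
    """Toy model: write files until the destination directory holds
    max_file_count files, route the remainder to 'failure'."""
    capacity = max(0, max_file_count - existing)
    written = min(len(incoming), capacity)
    return {"success": incoming[:written], "failure": incoming[written:]}

batch = [f"file-{i}" for i in range(100)]
routed = route_put_file(batch, max_file_count=10)
# len(routed["success"]) == 10, len(routed["failure"]) == 90
```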
The output of the LogAttribute processor goes to logs/nifi-app.log. Ensure you
configured the processor with the proper log threshold level (e.g., if you have
it set at ERROR, only errors will appear in the log), that you are logging flowfile
content if desired, and that you are logging the attributes you
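If the messages still do not show up, the logging configuration is also worth checking. A sketch of a conf/logback.xml fragment that would keep LogAttribute output at INFO level — the logger name is assumed from the processor's class, so verify it against your NiFi version's shipped logback.xml:

```xml
<!-- conf/logback.xml fragment (sketch; adjust to your existing config) -->
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
```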
Well now that is pretty cool. Thanks James.
On Mon, Jan 9, 2017 at 5:27 PM, James Wing wrote:
> Nick,
>
> You could use ExecuteScript to manipulate the JSON. The sample ECMAScript
> below assumes that you already have the transformed timestamp as an
> attribute "timestamp":
>
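James's actual ECMAScript sample is truncated in this archive. As a rough stand-in for the idea — merging a flowfile attribute into the flowfile's JSON content — here is a plain-Python sketch of just the transformation, outside the ExecuteScript API; the field name "timestamp" is invented for illustration.

```python
import json

def apply_timestamp(content: str, timestamp_attr: str) -> str:
    """Insert the flowfile's 'timestamp' attribute value into its JSON
    content and return the updated content."""
    doc = json.loads(content)
    doc["timestamp"] = timestamp_attr
    return json.dumps(doc)

apply_timestamp('{"event": "login"}', "2017-01-09T17:27:00Z")
```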
Hi,
I am trying to make use of the LogAttribute processor in NiFi, but the
result of the processor is neither creating a separate log file nor
appending logs to the nifi-app or nifi-user logs. Can you please guide
me on how to make use of this processor?
Also, is there any other processor in
Hello Gilman,
I found a way to get it working: I granted the 'view data' and 'modify data'
policies to my user as well.
NiFi 1.0.0 was working without this setting, but nifi-1.1.1 needs it.
What are the differences between 1.0.0 and 1.1.1 on this point?
From: Matt
Yes, the instance is clustered and secured (https), and it was running NiFi 1.0.0.
Before NiFi 1.1.1, everything was working correctly.
I authorized all the nodes with no success.
Perhaps I'm doing it the wrong way... I just added all the policies to the
nodes. But it still does not work.
I checked
Thank you very much Mark. This is very helpful. Can I ask you just a few
quick follow-up questions in an effort to better understand?
How does NiFi use those two dates? It seems that the timestamp of last
listing would be sufficient to permit NiFi to identify newly received
content. Why is it
If your instance is clustered, you'll need to authorize the nodes with the
data policies as well. Any 'data-plane' endpoint (where data or meta-data
is returned or modified) will require the nodes in the request chain to
also be approved for access as the data will traverse some of them when the
Hi,
We are using NiFi to get data from MongoDB and store it in the LFS.
Actually, we are trying to explore more features of NiFi.
While storing the data in the LFS, NiFi is generating one file per
document. If I am not specifying the Max File Count property in the
PutFile processor, it is
Hi,
I have 5k XML files after InvokeHTTP -> CompressContent -> UnpackContent -> ?.
Now I want to create a single Avro file which would have 5k rows, where each
row represents one XML document as a string.
Can you share your ideas?
Once converted to Avro, maybe I could use MergeContent to create a single
Avro file.
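One way to picture the intermediate step before a ConvertJsonToAvro-style conversion: wrap each XML document as one JSON record with a single string field. A sketch only — the field name "xml" is invented for illustration, and the matching Avro schema would declare one string column with the same name.

```python
import json

def xml_files_to_json_records(xml_strings):
    """One JSON record per XML document; each record holds the raw XML
    in a single string field that an Avro schema could map to."""
    return [json.dumps({"xml": s}) for s in xml_strings]

records = xml_files_to_json_records(["<a>1</a>", "<b>2</b>"])
```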