# Site-to-Site properties for NiFi1
# nifi.remote.input.host=
# nifi.remote.input.secure=false
# nifi.remote.input.socket.port=8443
# nifi.remote.input.http.enabled=true
# nifi.remote.input.http.transaction.ttl=30 sec
# nifi.remote.contents.cache.expiration=30 secs
https://stackoverflow.com/questions/54941738/getting-invalidclassexception-when-using-putignitecache-processor-in-nifi
I'm more than happy to help out if someone here who knows how to set up
Ignite can share a config file suitable for use w/ Docker.
Yeah, I will try both options and see which one suits better:
1. Split incoming file using SplitRecord and use PublishKafka
2. Take large file and use PublishKafkaRecord
Thanks,
Hemantha
From: Bryan Bende
Sent: Friday, March 1, 2019 11:37:00 PM
Thanks Bryan, I got your point. Yeah, we could try PublishKafkaRecord; in
some other cases we had already used PublishKafkaRecord (CSV data to Avro) to
send out records.
In the below-mentioned use case we thought of sending out a bunch of records (as
we are not doing anything with the data) at
You can call transfer for each segment while processing the incoming
stream; it's just that the real transfer won't actually happen until
commit is called.
Most processors extend AbstractProcessor, so commit is called for you
at the end, but you could choose to manage the session yourself and
call commit() yourself.
Bryan,
So the best practice when segmenting is to
- build your segments as a list while processing the incoming stream
- then afterwards send them all to the relationship
right?
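A minimal, self-contained sketch of that pattern. Note the `Session` class here is a hypothetical stand-in, not the real NiFi `ProcessSession` API; it only illustrates the semantics Bryan describes — transfer queues each segment as it is produced, and nothing actually moves downstream until commit:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for NiFi's ProcessSession (illustrative only):
// transfer() just queues a segment; commit() does the real hand-off.
class Session {
    private final List<String> pending = new ArrayList<>();
    final List<String> delivered = new ArrayList<>();

    void transfer(String segment) {
        pending.add(segment);
    }

    void commit() {
        delivered.addAll(pending);
        pending.clear();
    }
}

public class SegmentDemo {
    public static void main(String[] args) {
        Session session = new Session();
        // Call transfer for each segment while processing the incoming stream...
        for (String segment : new String[] {"seg-1", "seg-2", "seg-3"}) {
            session.transfer(segment);
        }
        System.out.println("delivered before commit: " + session.delivered.size());
        // ...the actual transfer happens only at commit. With AbstractProcessor,
        // commit is called for you at the end of onTrigger.
        session.commit();
        System.out.println("delivered after commit: " + session.delivered.size());
    }
}
```

This is also why the failure scenario below matters: if the session is not committed and the processor fails partway through, none of the queued segments have really gone anywhere yet.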
On March 1, 2019 at 09:21:46, Bryan Bende (bbe...@gmail.com) wrote:
Hello,
Flow files are not transferred until the session they came from is
committed. So imagine we periodically commit and some of the splits
are transferred, then halfway through a failure is encountered: the
entire original flow file will be reprocessed, producing some of the
same splits that
Hi All,
We have a use case where we receive huge JSON (file size might vary from 1GB to
50GB) via HTTP, convert it to XML (the XML format is not fixed; any other format
is fine), and send it out using Kafka. The restriction here is the CPU & RAM
usage requirement (once it is fixed, it should handle all
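For the bounded-RAM part of that requirement, the key idea is to stream records one at a time rather than load the whole file. The sketch below is a simplified illustration only: it assumes newline-delimited input records to avoid pulling in a JSON library, and the class and method names are made up. A real flow would use a streaming JSON parser (e.g. Jackson) and a proper XML writer, or NiFi's record readers/writers:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;

// Sketch: convert a record stream to XML with constant memory use,
// independent of total input size (1GB or 50GB). Only one record is
// held in memory at any moment.
public class StreamingConvert {

    static void convert(BufferedReader in, Writer out) throws IOException {
        out.write("<records>");
        String line;
        while ((line = in.readLine()) != null) {
            // Wrap each record and write it out immediately.
            out.write("<record>" + escape(line) + "</record>");
        }
        out.write("</records>");
        out.flush();
    }

    // Minimal XML text escaping for the payload.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in =
                new BufferedReader(new StringReader("{\"a\":1}\n{\"b\":2}"));
        StringWriter out = new StringWriter();
        convert(in, out);
        System.out.println(out);
    }
}
```

In a NiFi processor this would run inside a streaming read/write of the flow file content, so heap usage stays flat regardless of file size.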