d be on physical machines, and are restricted to a certain amount of
IOps that we can easily reach with disk-based Repositories; that is the main
reason why we tried the VolatileContentRepository. Since 1.13.1, we use a
version of NiFi that we build with our custom bundles and the fix that was
linked in NIFI-8760, and for now it is working fine for us. We had some
memory leaks on custom processors but after
Hi Matthieu,
Thanks for raising this question for discussion. Other maintainers may be
able to provide additional background, but part of the reason for removing
the VolatileContentRepository implementation was that it had some more
fundamental problems. Although
this ticket
> https://issues.apache.org/jira/browse/NIFI-8760 and the
> VolatileContentRepository... I understood that not many of us still use
> this Repository, but in our use case, with a very limited cloud environment
> with strict IOps regulations, it fitted perfectl
Hi everyone,
We wanted to talk about this ticket
https://issues.apache.org/jira/browse/NIFI-8760 and the
VolatileContentRepository... I understood that not many of us still use
this Repository, but in our use case, with a very limited cloud environment
with strict IOps regulations, it fitted
Thank you very much for your answers!
That's a surprise! The VolatileContentRepository seemed to answer our need
perfectly: processing a large amount of data with low resources and
especially low I/O on mounted disks, with non-critical data where potential
data loss is acceptable.
I just tried your
> I would highly recommend against using VolatileContentRepository. You’re
> the first one I’ve heard of using it in a few years. Typically, the
> FileSystemRepository is sufficient. If you truly want to run with the
> content in RAM I would recommend creating a RAM
Matthieu,
I would highly recommend against using VolatileContentRepository. You’re the
first one I’ve heard of using it in a few years. Typically, the
FileSystemRepository is sufficient. If you truly want to run with the content
in RAM I would recommend creating a RAM Disk and pointing
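Joe's RAM-disk suggestion amounts to keeping the standard repository implementation but pointing it at memory-backed storage. A minimal sketch, assuming a Linux host and a NiFi install under /opt/nifi (the mount point and the 8g size are hypothetical examples, not values from this thread):

```
# Create a tmpfs RAM disk for the content repository (requires root;
# size it against the node's available memory):
#   mount -t tmpfs -o size=8g tmpfs /opt/nifi/content_repository

# nifi.properties: keep the default FileSystemRepository, but point it at
# the RAM-backed mount so content I/O never reaches the physical disks.
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.repository.directory.default=/opt/nifi/content_repository
```

Like the VolatileContentRepository, this loses all content on reboot, so it only fits flows where data loss is acceptable.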
Hi all,
Currently using NiFi 1.11.4, we face a blocking issue trying to switch to
NiFi 1.13.1+ due to the VolatileContentRepository: some processors we use
(and probably others that we didn't try) were not able to process
flowfiles, such as MergeRecord, QueryRecord or SplitJson (logs
Our team has worked with NiFi for over one year. Our scenario is dealing with
3-5 billion records using NiFi; we found that WriteAheadFlowFileRepository
and FileSystemRepository cannot meet our demand, so we put the data which
needs to be consumed in tmpfs and chose VolatileFlowFileRepository and
VolatileContentRepository to reduce I/O costs and avoid the WAL, because in
our scenario the data can be thrown away when backpressure occurs or NiFi
restarts.
But we found three problems working with VolatileFlowFileRepository and
VolatileContentRepository.
1. VolatileContentRepository
when maxSize
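For readers trying to reproduce this setup, the all-volatile configuration described above maps onto nifi.properties roughly as follows. This is a sketch based on the property names documented in the NiFi 1.x Administration Guide; the 4 GB cap is an arbitrary example:

```
# nifi.properties: hold FlowFiles and content entirely in memory.
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.VolatileFlowFileRepository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.VolatileContentRepository
# Upper bound on in-memory content (the "maxSize" mentioned above).
nifi.volatile.content.repository.max.size=4 GB
```

Everything in both repositories is lost on restart, which matches the throw-away-on-restart semantics described above.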
Hi Brian,
The question is about the content volatile repository. If you are only
using the provenance volatile repository (not the content one), then only
the provenance data would be lost in case of NiFi shutdown, you're right.
Pierre
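The split Pierre describes corresponds to which implementation each repository line in nifi.properties names. A sketch of a provenance-only volatile setup (the buffer size is an arbitrary example):

```
# nifi.properties: only provenance is kept in memory; the FlowFile and
# content repositories keep their durable defaults, so a shutdown loses
# provenance events only.
nifi.provenance.repository.implementation=org.apache.nifi.provenance.VolatileProvenanceRepository
# Number of provenance events held in memory.
nifi.provenance.repository.buffer.size=100000
```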
2017-08-10 10:03 GMT+02:00 BD International
Pierre,
Just a clarification on what you said below: the potential data loss on NiFi
shutdown would only be on provenance information, right?
Or would the data loss affect other repos?
Thanks
Brian
On 10 Aug 2017 8:57 am, "Pierre Villard"
wrote:
Hi Margus,
I believe that your memory settings are enough. Giving more memory will
likely increase the duration of garbage collections and won't improve your
performance. Others on this mailing list will probably be able to give
better recommendations on this one. Also, keep in mind that volatile
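The heap settings Pierre refers to live in conf/bootstrap.conf; a minimal sketch (the 8g figure is an arbitrary illustration, not a recommendation from this thread):

```
# conf/bootstrap.conf: JVM heap for NiFi. Matching -Xms and -Xmx avoids
# heap resizing; raising them further tends to lengthen GC pauses more
# than it improves throughput.
java.arg.2=-Xms8g
java.arg.3=-Xmx8g
```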
Hi
I am playing with NiFi performance using one NiFi node.
At the moment I think the bottleneck in my flow is the SplitJson processor,
which can process 2,000,000 items per 5 minutes (downstream queues are
not full and the queue before SplitJson is constantly full).
I tried to change as many repos to