Peter,
All the details you can share on this would be good. First, we should be
resilient to any sort of repo corruption in the event of heap issues.
While the flow obviously isn't in a good state at that point, the saved
state should be reliable/recoverable. Second, how the repo/journals got
I find strace (or procmon for Windows) very handy to debug such resource
loading issues.
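To make that concrete (the PID, log paths, and startup command below are only illustrative, not from this thread), strace can log exactly which files a JVM touches, so a failed resource load shows up as an open attempt on an unexpected path:

```shell
# Attach to a running JVM (PID 12345 is hypothetical) and record
# file-open syscalls, following child threads as well:
strace -f -e trace=openat -o /tmp/jvm-open.log -p 12345

# Or trace the process from startup (command shown is illustrative):
strace -f -e trace=openat -o /tmp/jvm-open.log bin/nifi.sh run

# Then search the log for the resource that fails to load:
grep application-context.xml /tmp/jvm-open.log
```

On Windows, procmon's "Path contains" filter gives the same view of file accesses.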
On Thu, 15 Aug 2019, 19:02 Bryan Bende wrote:
> I was making sure you didn't have any code that was dependent on the
> internal structure of how the NARs are unpacked.
>
> I can't explain why it can't find
Hi,
What are the best practices, and the pros and cons, of using a single registry
shared across different environments (dev, test, prod) vs. a separate registry
for each environment?
Also, what is the recommended security setup for NiFi Registry?
Thanks,
Muazma
We were able to recover this morning. In the end we deleted the queues that
were causing trouble from the flow, and when the problem node came online it
deleted the FlowFiles all on its own, since the queue did not exist. Since
this is done during the FlowFile Repository load into memory, it
Joe,
How we managed to write a FlowFile repository that we later couldn't load back
into heap is confusing, but we did it somehow (and we even increased the heap from
300 GB, which is what it was set to when we created the repo, to 500 GB)... A user had
some large JSON blocks stored in an attribute. They
It's inside one of the jars within the NAR_INF folder. Does it need to be
somewhere else?
Also, I think we extended AbstractNiFiProcessor from the custom Kylo
processor while migrating it, as the Kylo processor was not working as-is in our
environment. Will check that and have it packaged in the
Muazma,
It is strongly recommended to have a single shared registry across the
environments if your policies allow it. This will give the best (by design)
experience of porting flows from one environment to another. The remaining
challenge you'll see with this is that you will have to enter things
I was making sure you didn't have any code that was dependent on the
internal structure of how the NARs are unpacked.
I can't explain why it can't find the application-context.xml since I
don't have access to your code, but I don't see why that would be
related to moving to NAR_INF from META_INF,
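Not NiFi-specific, but a quick way to narrow this down is to ask a classloader directly whether it can see a given resource. The sketch below uses `java/lang/String.class` as the default probe only because it is guaranteed to exist; inside the processor you would run the same call against `getClass().getClassLoader()` with the real resource name:

```java
public class ResourceCheck {
    public static void main(String[] args) {
        // Resource name to probe. The default is chosen only because it is
        // always present; pass e.g. "application-context.xml" to test a
        // resource your code expects to find on the classpath.
        String name = args.length > 0 ? args[0] : "java/lang/String.class";
        java.net.URL url = ResourceCheck.class.getClassLoader().getResource(name);
        System.out.println(name + " -> " + (url != null ? "found" : "NOT on classpath"));
    }
}
```

If the resource prints as not found, the next question is which classloader the lookup is going through, since a NAR's classloader hierarchy differs from the plain application classpath.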
I think in general it’s hard for us to know when a bad keystore is provided
until a connection tries to come in because a lot of that is delegated to
Jetty. There was talk previously about a “keystore checker” toolkit feature
which would look at the complete provided configuration for TLS and
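As a sketch of that fail-fast idea (the class and method names here are hypothetical, not the proposed toolkit), one can simply attempt to load the configured keystore with the configured password at startup, so a bad file or password surfaces immediately rather than on the first incoming TLS connection. The demo builds a throwaway keystore to exercise both outcomes:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;

public class KeystoreCheck {
    // Hypothetical helper: eagerly verify that a keystore file can be
    // opened with the given password, instead of deferring to Jetty.
    static boolean canLoad(Path path, char[] password) {
        try (InputStream in = Files.newInputStream(path)) {
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            ks.load(in, password); // throws IOException on a wrong password
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Create a throwaway empty keystore just to demonstrate the check.
        char[] pw = "changeit".toCharArray();
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, pw); // initialize an empty keystore
        Path f = Files.createTempFile("demo", ".p12");
        try (OutputStream out = Files.newOutputStream(f)) {
            ks.store(out, pw);
        }
        System.out.println("good password: " + canLoad(f, pw));
        System.out.println("bad password:  " + canLoad(f, "wrong".toCharArray()));
        Files.deleteIfExists(f);
    }
}
```

A real checker would go further (expired certificates, missing key entries, truststore contents), but even this much catches the common misconfiguration before any connection arrives.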