Re: Is a prompt for a user cert normal on startup?

2021-08-03 Thread David Handermann
Hi Mike,

With the default HTTPS configuration in 1.14.0, the browser will prompt
for a certificate if one is available. The NiFi Jetty server is configured
to request a client certificate, but not to require one. That is why the
browser prompts, and why canceling the prompt falls back to username and
password authentication.
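
For anyone curious about the mechanics, this maps to Jetty's distinction
between "wanting" and "needing" client authentication. A minimal
illustrative sketch (not NiFi's actual configuration code):

import org.eclipse.jetty.util.ssl.SslContextFactory;

public class WantVsNeedClientAuth {
    public static void main(String[] args) {
        SslContextFactory.Server ssl = new SslContextFactory.Server();
        // "want" = request a client certificate during the TLS handshake,
        // which is what triggers the browser's certificate prompt...
        ssl.setWantClientAuth(true);
        // ...but "need" stays false, so a connection without a certificate
        // is still accepted and can fall back to username/password login.
        ssl.setNeedClientAuth(false);
        System.out.println("want=" + ssl.getWantClientAuth()
                + " need=" + ssl.getNeedClientAuth());
    }
}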

Regards,
David Handermann

On Tue, Aug 3, 2021 at 1:59 PM Mike Thomsen  wrote:

> I built a fresh copy of 1.15.0-SNAPSHOT and got prompted for a cert
> when I hit the web console. It ultimately didn't block me from logging
> in with u/p. Is that normal behavior? I ask because my main laptop is
> a corporate one that does some funny things with our security
> settings.
>
> Thanks,
>
> Mike
>


Is a prompt for a user cert normal on startup?

2021-08-03 Thread Mike Thomsen
I built a fresh copy of 1.15.0-SNAPSHOT and got prompted for a cert
when I hit the web console. It ultimately didn't block me from logging
in with u/p. Is that normal behavior? I ask because my main laptop is
a corporate one that does some funny things with our security
settings.

Thanks,

Mike


Re: Execute Script Processor is not working some random times

2021-08-03 Thread Mark Payne
Vibhath,

In the thread dump included in the diagnostics report, we can see several
threads like the following:

"Timer-Driven Process Thread-6" Id=59 WAITING  on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6e35ff81
at 
java.base@11.0.9.1/jdk.internal.misc.Unsafe.park(Native
 Method)
at 
java.base@11.0.9.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11.0.9.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
org.apache.nifi.controller.repository.FileSystemRepository$ContainerState.waitForArchiveExpiration(FileSystemRepository.java:1698)
at 
org.apache.nifi.controller.repository.FileSystemRepository.create(FileSystemRepository.java:634)
at 
org.apache.nifi.controller.repository.claim.StandardContentClaimWriteCache.getContentClaim(StandardContentClaimWriteCache.java:63)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2886)
at 
edu.test_grp.test.processors.test_pack.ExtractDB.onTrigger(ExtractDB.java:192)
at 
app//org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1202)
at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:103)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at 
java.base@11.0.9.1/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at 
java.base@11.0.9.1/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at 
java.base@11.0.9.1/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at 
java.base@11.0.9.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base@11.0.9.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at 
java.base@11.0.9.1/java.lang.Thread.run(Thread.java:834)
Number of Locked Synchronizers: 1
- java.util.concurrent.ThreadPoolExecutor$Worker@ad7cc49

Note the line:

    at org.apache.nifi.controller.repository.FileSystemRepository$ContainerState.waitForArchiveExpiration(FileSystemRepository.java:1698)

We can also see from the properties in the diagnostics dump:

nifi.content.repository.archive.max.retention.period : 7 days
nifi.content.repository.archive.max.usage.percentage : 50%

So this tells us that the Content Repository wants to ensure that the disk is 
used no more than 50%. When the disk is 50% full, it will start applying 
backpressure in order to prevent processors from writing to the content 
repository until a background thread performs cleanup of the Content Repo 
archive.

And the content repository is configured as:

Content Repository Implementation: org.apache.nifi.controller.repository.FileSystemRepository
Content Repository Storage Capacity: 98.31 GB
Content Repository Usable Space: 37.47 GB

So here we can see that the amount of Usable Space (free space) on the Content 
Repo is 37.47 GB, or 38%. I.e., used space is 62%. So this confirms that the 
content repo should be applying backpressure. If you look in your logs, you’ll 
also see logs that say something like:


Unable to write to container default due to archive file size constraints; 
waiting for archive cleanup
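
If you want to sanity-check the numbers yourself, here is a rough
illustrative sketch (not NiFi's actual code; the repository path is a
placeholder) of that usage calculation, using the standard java.io.File
space methods:

import java.io.File;

public class ContentRepoUsageCheck {
    public static void main(String[] args) {
        // Placeholder path; point this at your content repository volume.
        File repo = new File("/opt/nifi/content_repository");
        // Mirrors nifi.content.repository.archive.max.usage.percentage=50%
        final double threshold = 0.50;
        double used = 1.0 - (double) repo.getUsableSpace() / repo.getTotalSpace();
        // With 37.47 GB usable of 98.31 GB total, used comes out around 62%,
        // which is above the 50% threshold, so backpressure would apply.
        System.out.printf("used=%.0f%%, applying backpressure=%b%n",
                used * 100, used >= threshold);
    }
}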

The reason the content repository does this is that, depending on your 
disk(s) and your operating system, the OS can often create smallish files (1 
MB or so) faster than it can delete them. As a result, when NiFi is under 
heavy load, it can find itself creating new data in the content repository 
faster than it can delete old data that is no longer needed, and that can 
cause you to run out of disk space. So this check is in place to give the 
background process (which deletes old data from the repository) a chance to 
catch up.



So. What to do about it?

The easiest thing to do is to change

nifi.content.repository.archive.max.usage.percentage=50%
to
nifi.content.repository.archive.max.usage.percentage=85%

in nifi.properties. That will allow the system to keep writing to the content 
repository until it reaches 85% full instead of 50% full.

Of course, that then has its own set of concerns. It allows the repo to grow 
much larger, which could result in running out of disk space.

Ideally, you’ll have each repository (flowfile, provenance, content) on its own 
physical partition, or at least its own logical volume. Your config shows that 
all are writing to the same location.
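
For example, something like the following in nifi.properties would separate 
them (the mount points here are illustrative):

nifi.flowfile.repository.directory=/mnt/flowfile_repo
nifi.content.repository.directory.default=/mnt/content_repo
nifi.provenance.repository.directory.default=/mnt/provenance_repo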

Hope this is helpful!

-Mark



Re: Re: Re: Re: No Load Balancing since 1.13.2

2021-08-03 Thread Axel Schwarz
Hey guys,

I think I found the "trick" for at least version 1.13.2 and of course I'll 
share it with you.
I now use the following load balancing properties:

# cluster load balancing properties #
nifi.cluster.load.balance.host=192.168.1.10
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec

So I use the host's IP address for balance.host instead of 0.0.0.0 or the FQDN, 
and have no balance.address property at all.
This led to only partial load balancing in my case, as already mentioned. It 
looked like I needed one more step to reach the goal, and that step seems to be 
deleting all state management files.

Through the state-management.xml config file I had changed the state management 
directory to be outside of the NiFi installation, because the config file says 
"it is important that the directory be copied over to the new version when 
upgrading NiFi". So every time I upgraded or reinstalled NiFi during my 
load balancing odyssey, the state management remained completely untouched.
As soon as I changed that, by deleting the entire state management directory 
before reinstalling NiFi with the above-mentioned properties, load balancing 
immediately started working throughout the whole cluster.
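
So concretely, each node's nifi.properties gets its own IP for that property 
(the addresses here are just the illustrative ones from earlier in this thread):

node1: nifi.cluster.load.balance.host=192.168.1.10
node2: nifi.cluster.load.balance.host=192.168.1.11
node3: nifi.cluster.load.balance.host=192.168.1.12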


I think for my flow it is not that bad to delete the state management, as I 
only use one stateful processor to increment a counter. And in the times I 
have tried this so far, I have not encountered any wrong behaviour 
whatsoever. But of course I can't test everything, so if any of you know some 
important facts about deleting the state management, please let me know :)
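
For reference, the directory I'm talking about is the one configured on the 
local provider in state-management.xml; the stock default looks roughly like 
this (mine pointed outside the install directory instead):

<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">./state/local</property>
</local-provider>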

Besides that, I now feel like this solved my problem. I'll have to keep an eye 
on it when updating to version 1.14.0 later on, but I think I can figure that 
out. So thanks for all your support! :)

--- Original Message ---
From: "Jens M. Kofoed" 
Date: 29.07.2021 11:08:28
To: users@nifi.apache.org, Axel Schwarz 
Subject: Re: Re: Re: No Load Balancing since 1.13.2

> Hmm... I can't remember :-( sorry
>
> My configuration for version 1.13.2 is like this:
> # cluster node properties (only configure for cluster nodes) #
> nifi.cluster.is.node=true
> nifi.cluster.node.address=nifi-node01.domaine.com
> nifi.cluster.node.protocol.port=9443
> nifi.cluster.node.protocol.threads=10
> nifi.cluster.node.protocol.max.threads=50
> nifi.cluster.node.event.history.size=25
> nifi.cluster.node.connection.timeout=5 sec
> nifi.cluster.node.read.timeout=5 sec
> nifi.cluster.node.max.concurrent.requests=100
> nifi.cluster.firewall.file=
> nifi.cluster.flow.election.max.wait.time=5 mins
> nifi.cluster.flow.election.max.candidates=3
>
> # cluster load balancing properties #
> nifi.cluster.load.balance.address=192.168.1.11
> nifi.cluster.load.balance.port=6111
> nifi.cluster.load.balance.connections.per.node=4
> nifi.cluster.load.balance.max.thread.count=8
> nifi.cluster.load.balance.comms.timeout=30 sec
>
> So I defined "nifi.cluster.node.address" with the hostname and not an IP
> address, and "nifi.cluster.load.balance.address" with the IP address of
> the server.
> And triple-check the configuration on all servers :-)
>
> Kind Regards
> Jens M. Kofoed
>
>
> On Thu, Jul 29, 2021 at 10:11 Axel Schwarz  wrote:
>
>
> > Hey Jens,
> >
> > in issue NIFI-8643 you wrote the last comment, describing exactly the same
> > behaviour as we're experiencing now. 2 of 3 nodes were load balancing.
> > How did you get the third node to participate in load balancing? An update
> > to 1.14.0 does not change anything for us.
> >
> >
> > https://issues.apache.org/jira/browse/NIFI-8643?focusedCommentId=17361418&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17361418
>
> >
> >
> > --- Original Message ---
> > From: "Jens M. Kofoed" 
> > Date: 28.07.2021 12:07:50
> > To: users@nifi.apache.org, Axel Schwarz 
> > Subject: Re: Re: No Load Balancing since 1.13.2
> >
> > > hi
> > >
> > > I can see that you have configured
> > > nifi.cluster.load.balance.address=0.0.0.0
> > >
> > > Have you tried to set the correct IP address?
> > > node1: nifi.cluster.load.balance.address=192.168.1.10
> > > node2: nifi.cluster.load.balance.address=192.168.1.11
> > > node3: nifi.cluster.load.balance.address=192.168.1.12
> > >
> > > regards
> > > Jens M. Kofoed
> > >
> > > On Wed, Jul 28, 2021 at 11:17 Axel Schwarz <axelkop...@emailn.de> wrote:
> > >
> > >
> > > > Just tried Java 11. But it still does not work. Nothing changed. :(
> > > >
> > > > --- Original Message ---
> > > > From: Jorge Machado 
> > > > Date: 27.07.2021 13:08:55
> > > > To: users@nifi.apache.org, Axel Schwarz 
> > > > Subject: Re: No Load Balancing since 1.13.2
> > > >
> > > > > Did you try Java 11? I have a client running a similar setup
> > > > > to yours but with a lower NiFi version and it works fine. Maybe
> > > > > it is worth a try
> > >