Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread Jeff
If you remove @RequiresInstanceClassLoading, the UserGroupInformation
class from Hadoop (hadoop-common, if I remember correctly) will be shared
across all instances that come from a particular NAR (such as PutHDFS,
ListHDFS, FetchHDFS, etc., from nifi-hadoop-nar-x.y.z.nar).  If you are
using Kerberos in those processors and have configured different principals
across the various processors, you could run into issues when the
processors attempt to acquire new TGTs, most likely the first time a
relogin is attempted.  UGI has some static state, and
@RequiresInstanceClassLoading ensures each instance of a processor with
that annotation has its own classloader, which keeps that kind of state from
being shared across instances.
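The static-state point above can be illustrated with plain JDK classloading. This is a sketch, not NiFi's actual instance classloader: loading the same class through two separate, non-delegating URLClassLoaders yields two distinct Class objects with independent static fields, which is the kind of isolation @RequiresInstanceClassLoading gives each processor instance. The class and field names below are illustrative stand-ins (StaticStateHolder plays the role of UGI).

```java
import java.lang.reflect.Field;
import java.net.URL;
import java.net.URLClassLoader;

// Sketch: the same class defined by two separate classloaders gets two
// independent copies of its static state (as UGI's static fields would).
public class InstanceClassloadingDemo {

    // Stand-in for a class with static state (like Hadoop's UserGroupInformation).
    public static class StaticStateHolder {
        public static int loginCount = 0;
    }

    public static boolean staticsAreIsolated() {
        try {
            URL codeSource = InstanceClassloadingDemo.class
                    .getProtectionDomain().getCodeSource().getLocation();
            // parent = null: do NOT delegate to the app classloader, so each
            // URLClassLoader defines its own copy of StaticStateHolder.
            try (URLClassLoader loaderA = new URLClassLoader(new URL[]{codeSource}, null);
                 URLClassLoader loaderB = new URLClassLoader(new URL[]{codeSource}, null)) {
                Class<?> a = loaderA.loadClass("InstanceClassloadingDemo$StaticStateHolder");
                Class<?> b = loaderB.loadClass("InstanceClassloadingDemo$StaticStateHolder");
                Field fa = a.getField("loginCount");
                Field fb = b.getField("loginCount");
                fa.setInt(null, 42);                   // mutate static state via loader A
                return a != b && fb.getInt(null) == 0; // loader B's copy is untouched
            }
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("statics isolated: " + staticsAreIsolated());
    }
}
```

Removing the annotation collapses both loaders into one shared loader, so the `loginCount` mutation (think: Kerberos login state) would be visible to every processor instance at once.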

On Mon, Nov 11, 2019 at 9:41 PM abellnotring  wrote:

> Hi Peter & All,
>  I’m using Kylo to manage the NiFi flow (called a feed in Kylo), and there
> are 4200 instances (600+ of them extending AbstractHadoopProcessor)
> on my NiFi canvas. The NiFi non-heap memory has increased by more than 6 GB
> after some days of running, which is extremely abnormal. I have analyzed the
> classes loaded into the Compressed Class Space and found that most of the CCS
> is used by classes related to AbstractHadoopProcessor.
>So I think removing RequiresInstanceClassLoading from
> AbstractHadoopProcessor may be a solution for reducing the CCS
> used.
>Do you have any ideas about this?
>
>   Thanks
>
>
> By Hai Luo
> On 11/12/2019 02:17,Shawn Weeks
>  wrote:
>
> I’m assuming you’re talking about the Snappy problem. If you use CompressContent
> prior to PutHDFS, you can compress with Snappy, as it uses the Java
> native Snappy lib. The HDFS processors are limited to the actual Hadoop
> libraries, so they’d have to change from native to get around this. I’m
> pretty sure we need instance loading to handle the other issues mentioned.
>
>
>
> Thanks
>
> Shawn
>
>
>
> *From: *Joe Witt 
> *Reply-To: *"users@nifi.apache.org" 
> *Date: *Monday, November 11, 2019 at 8:56 AM
> *To: *"users@nifi.apache.org" 
> *Subject: *Re: Influence about removing RequiresInstanceClassLoading from
> AbstractHadoopProcessor processor
>
>
>
> Peter
>
>
>
> The most common challenge is if two isolated instances both want to use a
> native lib.  No two native libs with the same name can be loaded in the same JVM.
> We need to solve that for sure.
>
>
>
> Thanks
>
>
>
> On Mon, Nov 11, 2019 at 9:53 AM Peter Turcsanyi 
> wrote:
>
> Hi Hai Luo,
>
>
>
> @RequiresInstanceClassLoading makes it possible to configure separate /
> isolated "Additional Classpath Resources" settings on your HDFS processors
> (e.g. an S3 storage driver on one of your PutHDFS processors and Azure Blob on the other).
>
>
> Is there any specific reason / use case why you are considering removing
> it?
>
>
>
> Regards,
>
> Peter Turcsanyi
>
>
>
> On Mon, Nov 11, 2019 at 3:30 PM abellnotring 
> wrote:
>
> Hi, all
>
>  I’m considering removing the RequiresInstanceClassLoading annotation
> from class AbstractHadoopProcessor,
>
>  Does anybody know the potential influence?
>
>
>
> Thanks
>
> By Hai Luo
>
>


Does NiFi support Async API + Reactive Java inside Processor?

2019-11-11 Thread Seokwon Yang

I am writing a query processor for Cosmos DB using an async API. My problem is
that when I run this processor outside NiFi (i.e., via the nifi-mock framework), the
processor retrieves the data from the backend as expected, but as soon as I run the
processor inside NiFi, the query processor does not execute the callback block
that handles the data from the DB.

My observation, in summary, is that a NiFi cluster seems to run scheduled
processors differently.
I am curious whether the NiFi Processor framework running inside a NiFi cluster
supports the async API + reactive programming model, and if so, whether anyone
knows of a processor that uses reactive Java and an async API.

Thanks in advance for any tips
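One common way to drive an async/callback client from a processor's synchronous onTrigger is to block the scheduled thread until the future completes (with a timeout), rather than relying on the callback to fire after onTrigger returns. The sketch below uses only java.util.concurrent and stand-in names (queryAsync, onTriggerLike) rather than the real NiFi or Cosmos DB APIs, so treat it as an illustration of the bridging pattern under those assumptions, not a confirmed fix.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: bridging an async client into a synchronous, scheduled call.
public class AsyncBridgeSketch {

    // Daemon thread pool standing in for the async driver's I/O threads.
    private static final ExecutorService CLIENT_THREADS =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "client-io");
                t.setDaemon(true); // don't keep the JVM alive
                return t;
            });

    // Pretend async client: completes the future on its own thread,
    // the way a reactive driver would invoke a callback.
    static CompletableFuture<String> queryAsync(String query) {
        return CompletableFuture.supplyAsync(() -> "result-for:" + query, CLIENT_THREADS);
    }

    // What a processor's onTrigger-style method would do: start the async
    // call, then block the scheduled thread until the result (or a timeout).
    static String onTriggerLike(String query) {
        CompletableFuture<String> future = queryAsync(query);
        try {
            return future.get(5, TimeUnit.SECONDS); // keep the call bounded
        } catch (Exception e) {
            future.cancel(true);
            throw new RuntimeException("async call did not complete", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(onTriggerLike("SELECT 1"));
    }
}
```

The key difference from a mock harness is that nothing in the framework waits for your callback after onTrigger returns, so any result handling scheduled on the driver's threads must be joined back before the method exits.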





Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread abellnotring

Hi Peter & All,
    I’m using Kylo to manage the NiFi flow (called a feed in Kylo), and there are 4200 instances (600+ of them extending AbstractHadoopProcessor) on my NiFi canvas. The NiFi non-heap memory has increased by more than 6 GB after some days of running, which is extremely abnormal. I have analyzed the classes loaded into the Compressed Class Space and found that most of the CCS is used by classes related to AbstractHadoopProcessor.
    So I think removing RequiresInstanceClassLoading from AbstractHadoopProcessor may be a solution for reducing the CCS used.
    Do you have any ideas about this?

    Thanks
    By Hai Luo

On 11/12/2019 02:17, Shawn Weeks wrote:

I’m assuming you’re talking about the Snappy problem. If you use CompressContent prior to PutHDFS, you can compress with Snappy, as it uses the Java native Snappy lib. The HDFS processors are limited to the actual Hadoop libraries, so they’d have to change from native to get around this. I’m pretty sure we need instance loading to handle the other issues mentioned.

Thanks
Shawn

From: Joe Witt
Reply-To: "users@nifi.apache.org"
Date: Monday, November 11, 2019 at 8:56 AM
To: "users@nifi.apache.org"
Subject: Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

Peter

The most common challenge is if two isolated instances both want to use a native lib. No two native libs with the same name can be loaded in the same JVM. We need to solve that for sure.

Thanks

On Mon, Nov 11, 2019 at 9:53 AM Peter Turcsanyi wrote:

Hi Hai Luo,

@RequiresInstanceClassLoading makes it possible to configure separate / isolated "Additional Classpath Resources" settings on your HDFS processors (e.g. an S3 storage driver on one of your PutHDFS processors and Azure Blob on the other).

Is there any specific reason / use case why you are considering removing it?

Regards,
Peter Turcsanyi

On Mon, Nov 11, 2019 at 3:30 PM abellnotring wrote:

Hi, all

    I’m considering removing the RequiresInstanceClassLoading annotation from class AbstractHadoopProcessor,

    Does anybody know the potential influence?

    Thanks
    By Hai Luo



Default Retry mechanism for NIFI puts3Object processor

2019-11-11 Thread sanjeet rath
Hi Team,

I am using the PutS3Object processor in NiFi to upload objects from
on-prem to an AWS S3 bucket. I believe we have 2 types of upload, single-part
upload and multipart upload, as per the threshold value defined for
multipart.

For multipart upload, 3 steps are followed:
1) s3.initiateMultipartUpload, 2) s3.uploadPart, 3) s3.completeMultipartUpload

While checking the code I found that in the s3.completeMultipartUpload method, if
there is any server-side exception (5xx), it retries 3 times (in the
CompleteMultipartUploadRetryCondition class of the AWS SDK, MAX_RETRY_ATTEMPTS
is a constant with value 3) using a do-while loop.

I have 3 questions:

a) Is this default retry mechanism (value 3) only used in the
s3.completeMultipartUpload method? I don't find any code for retry used
in the single-object upload.

b) If I change the MaxErrorRetry value in the AWS ClientConfiguration, will
that change the retry count on an S3 exception (5xx) to the value I have
set, given that MAX_RETRY_ATTEMPTS is a constant of 3? Please confirm.

c) If the answer to b) is yes, will
clientConfiguration.setMaxErrorRetry(myCustomValue) alone work, or
do I have to add the code below for the retry policy as well?

clientConfiguration.setRetryPolicy(new
RetryPolicy(config.getRetryPolicy().getRetryCondition(),
    config.getRetryPolicy().getBackoffStrategy(), myCustomValue, true));
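For reference, the bounded do-while retry described above can be sketched like this. The names and structure here are illustrative, not the SDK's actual CompleteMultipartUploadRetryCondition code; the point is that a hard-coded constant bounds the loop regardless of client-level retry settings:

```java
import java.util.function.Supplier;

// Sketch of a bounded do/while retry like the one applied inside
// completeMultipartUpload: retry on a retryable (5xx-style) failure
// up to a fixed number of extra attempts.
public class CompleteRetrySketch {
    static final int MAX_RETRY_ATTEMPTS = 3; // mirrors the SDK's constant

    // Runs the call, retrying up to MAX_RETRY_ATTEMPTS extra times;
    // returns the number of attempts actually used.
    static int callWithRetry(Supplier<Boolean> call) {
        int attempt = 0;
        boolean ok;
        do {
            attempt++;
            ok = call.get(); // false simulates a retryable 5xx failure
        } while (!ok && attempt <= MAX_RETRY_ATTEMPTS);
        if (!ok) {
            throw new IllegalStateException("failed after " + attempt + " attempts");
        }
        return attempt;
    }

    public static void main(String[] args) {
        int[] failuresLeft = {2}; // fail twice, then succeed
        int attempts = callWithRetry(() -> failuresLeft[0]-- <= 0);
        System.out.println("attempts=" + attempts); // prints attempts=3
    }
}
```

Because the bound is a compile-time constant inside this loop, changing a client-level setting such as MaxErrorRetry would not alter it; that setting governs the SDK's general request retry policy instead.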


Thanks ,

Sanjeet


Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread Matt Burgess
I can’t remember for sure but I think if you use CompressContent the compressed 
file has to fit in a single HDFS file block in order to work. IIRC 
Hadoop-Snappy is different from regular Snappy in the sense that it puts the 
compression header in each block so the file can be reassembled and 
decompressed correctly.



> On Nov 11, 2019, at 10:30 AM, Shawn Weeks  wrote:
> 
> 
> I’m assuming you’re talking about the Snappy problem. If you use CompressContent
> prior to PutHDFS, you can compress with Snappy, as it uses the Java
> native Snappy lib. The HDFS processors are limited to the actual Hadoop
> libraries, so they’d have to change from native to get around this. I’m pretty
> sure we need instance loading to handle the other issues mentioned.
>  
> Thanks
> Shawn
>  
> From: Joe Witt 
> Reply-To: "users@nifi.apache.org" 
> Date: Monday, November 11, 2019 at 8:56 AM
> To: "users@nifi.apache.org" 
> Subject: Re: Influence about removing RequiresInstanceClassLoading from 
> AbstractHadoopProcessor processor
>  
> Peter
>  
> The most common challenge is if two isolated instances both want to use a 
> native lib.  No two native libs with the same name can be in the same jvm.  
> We need to solve that for sure.
>  
> Thanks
>  
> On Mon, Nov 11, 2019 at 9:53 AM Peter Turcsanyi  wrote:
> Hi Hai Luo,
>  
> @RequiresInstanceClassLoading makes it possible to configure separate / isolated
> "Additional Classpath Resources" settings on your HDFS processors (e.g. an S3
> storage driver on one of your PutHDFS processors and Azure Blob on the other).
>
> Is there any specific reason / use case why you are considering removing it?
>  
> Regards,
> Peter Turcsanyi
>  
> On Mon, Nov 11, 2019 at 3:30 PM abellnotring  wrote:
> Hi,all
>  I’m considering removing the RequiresInstanceClassLoading annotation 
> from class AbstractHadoopProcessor,
>  Does anybody know the potential Influence?
>  
> Thanks
> By Hai Luo


Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread Shawn Weeks
I’m assuming you’re talking about the Snappy problem. If you use CompressContent
prior to PutHDFS, you can compress with Snappy, as it uses the Java native Snappy
lib. The HDFS processors are limited to the actual Hadoop libraries, so they’d
have to change from native to get around this. I’m pretty sure we need instance
loading to handle the other issues mentioned.

Thanks
Shawn

From: Joe Witt 
Reply-To: "users@nifi.apache.org" 
Date: Monday, November 11, 2019 at 8:56 AM
To: "users@nifi.apache.org" 
Subject: Re: Influence about removing RequiresInstanceClassLoading from 
AbstractHadoopProcessor processor

Peter

The most common challenge is if two isolated instances both want to use a
native lib. No two native libs with the same name can be loaded in the same JVM. We
need to solve that for sure.

Thanks

On Mon, Nov 11, 2019 at 9:53 AM Peter Turcsanyi wrote:
Hi Hai Luo,

@RequiresInstanceClassLoading makes it possible to configure separate / isolated
"Additional Classpath Resources" settings on your HDFS processors (e.g. an S3
storage driver on one of your PutHDFS processors and Azure Blob on the other).

Is there any specific reason / use case why you are considering removing it?

Regards,
Peter Turcsanyi

On Mon, Nov 11, 2019 at 3:30 PM abellnotring wrote:
Hi, all
 I’m considering removing the RequiresInstanceClassLoading annotation from
class AbstractHadoopProcessor,
 Does anybody know the potential influence?

Thanks
By Hai Luo


Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread Joe Witt
Peter

The most common challenge is if two isolated instances both want to use a
native lib.  No two native libs with the same name can be loaded in the same JVM.
We need to solve that for sure.

Thanks

On Mon, Nov 11, 2019 at 9:53 AM Peter Turcsanyi 
wrote:

> Hi Hai Luo,
>
> @RequiresInstanceClassLoading makes it possible to configure separate /
> isolated "Additional Classpath Resources" settings on your HDFS processors
> (e.g. an S3 storage driver on one of your PutHDFS processors and Azure Blob on the other).
>
> Is there any specific reason / use case why you are considering removing
> it?
>
> Regards,
> Peter Turcsanyi
>
> On Mon, Nov 11, 2019 at 3:30 PM abellnotring 
> wrote:
>
>> Hi, all
>>  I’m considering removing the RequiresInstanceClassLoading annotation
>> from class AbstractHadoopProcessor,
>>  Does anybody know the potential influence?
>>
>> Thanks
>> By Hai Luo
>>
>


Re: Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread Peter Turcsanyi
Hi Hai Luo,

@RequiresInstanceClassLoading makes it possible to configure separate /
isolated "Additional Classpath Resources" settings on your HDFS processors
(e.g. an S3 storage driver on one of your PutHDFS processors and Azure Blob on the other).

Is there any specific reason / use case why you are considering removing
it?

Regards,
Peter Turcsanyi

On Mon, Nov 11, 2019 at 3:30 PM abellnotring  wrote:

> Hi, all
>  I’m considering removing the RequiresInstanceClassLoading annotation
> from class AbstractHadoopProcessor,
>  Does anybody know the potential influence?
>
> Thanks
> By Hai Luo
>


Influence about removing RequiresInstanceClassLoading from AbstractHadoopProcessor processor

2019-11-11 Thread abellnotring

Hi, all
    I’m considering removing the RequiresInstanceClassLoading annotation from class AbstractHadoopProcessor,
    Does anybody know the potential influence?

    Thanks
    By Hai Luo




Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-11 Thread Josef.Zahner1
And additionally, below is the output of the tcpdump captured on the NiFi node
during startup of NiFi 1.10.0. We use the standard LDAP port (389). And you
were right, I see in the dump that NiFi tries to authenticate with “simple”
authentication over START_TLS…

[inline image: tcpdump capture of the LDAP traffic]


From: "Zahner Josef, GSB-LR-TRW-LI" 
Date: Monday, 11 November 2019 at 11:06
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Andy,

I’ve just opened a Jira bug report:
https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6860

We changed nothing on the LDAP side. The whole setup still works for our production
nodes with NiFi 1.9.2; we have multiple clusters and single NiFis running. As we
use Ansible, I removed NiFi 1.10.0 from the test node and installed NiFi 1.9.2
again, and it worked without any issues. The only difference between the
NiFi 1.9.2 and 1.10.0 deployments is the new config parameters.

As you can see in the bug report, I’ve now switched to LDAPS and this is
working… Users are visible in the “Users” window and I can log in with an LDAP
user. I just switched to LDAPS instead of START_TLS and added an “S” to the URL
of the LDAP server.

Cheers Josef



From: Andy LoPresto 
Reply to: "users@nifi.apache.org" 
Date: Monday, 11 November 2019 at 10:46
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Josef,

My inclination is that somehow the password NiFi is trying to send to the LDAP 
service is no longer sufficiently protected? The only other change I am aware 
of that could influence this is the Spring Security upgrade from 4.2.8 to 
4.2.13 (NiFi-6412) [1]; the new version of Spring Security might enforce a new 
restriction on how the password is sent that LDAP doesn’t like. The LDAP error 
code 13 refers to the password being sent in plaintext [2]. As you are using 
StartTLS, I am assuming the LDAP port you’re connecting to is still 389? Did 
anything change on the LDAP server? Can you verify a simple lookup using 
ldapsearch still works? If you get the same error code, you may need to add -Z 
to the command to initialize a secure TLS channel.

[1] https://issues.apache.org/jira/browse/NIFI-6412
[2] 
https://ldap.com/ldap-result-code-reference-core-ldapv3-result-codes/#rc-confidentialityRequired


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69



On Nov 11, 2019, at 4:59 PM, 
josef.zahn...@swisscom.com wrote:

Hi guys

We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with LDAP 
(START_TLS) authentication successfully enabled on 1.9.2. Now after upgrading,  
we have an issue which prevents nifi from startup:


2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'jwtAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.AuthenticationNotSupportedException: 
[LDAP: error code 13 - confidentiality required]; nested exception is 
javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
confidentiality required]
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at 
org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean

Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-11 Thread Josef.Zahner1
Hi Andy,

I’ve just opened a Jira bug report:
https://issues.apache.org/jira/projects/NIFI/issues/NIFI-6860

We changed nothing on the LDAP side. The whole setup still works for our production
nodes with NiFi 1.9.2; we have multiple clusters and single NiFis running. As we
use Ansible, I removed NiFi 1.10.0 from the test node and installed NiFi 1.9.2
again, and it worked without any issues. The only difference between the
NiFi 1.9.2 and 1.10.0 deployments is the new config parameters.

As you can see in the bug report, I’ve now switched to LDAPS and this is
working… Users are visible in the “Users” window and I can log in with an LDAP
user. I just switched to LDAPS instead of START_TLS and added an “S” to the URL
of the LDAP server.

Cheers Josef



From: Andy LoPresto 
Reply to: "users@nifi.apache.org" 
Date: Monday, 11 November 2019 at 10:46
To: "users@nifi.apache.org" 
Subject: Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

Hi Josef,

My inclination is that somehow the password NiFi is trying to send to the LDAP 
service is no longer sufficiently protected? The only other change I am aware 
of that could influence this is the Spring Security upgrade from 4.2.8 to 
4.2.13 (NiFi-6412) [1]; the new version of Spring Security might enforce a new 
restriction on how the password is sent that LDAP doesn’t like. The LDAP error 
code 13 refers to the password being sent in plaintext [2]. As you are using 
StartTLS, I am assuming the LDAP port you’re connecting to is still 389? Did 
anything change on the LDAP server? Can you verify a simple lookup using 
ldapsearch still works? If you get the same error code, you may need to add -Z 
to the command to initialize a secure TLS channel.

[1] https://issues.apache.org/jira/browse/NIFI-6412
[2] 
https://ldap.com/ldap-result-code-reference-core-ldapv3-result-codes/#rc-confidentialityRequired


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69


On Nov 11, 2019, at 4:59 PM, 
josef.zahn...@swisscom.com wrote:

Hi guys

We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with LDAP 
(START_TLS) authentication successfully enabled on 1.9.2. Now after upgrading,  
we have an issue which prevents nifi from startup:


2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 
'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
 Unsatisfied dependency expressed through method 
'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
org.springframework.beans.factory.BeanExpressionException: Expression parsing 
failed; nested exception is 
org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': 
Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' 
parameter 0; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'jwtAuthenticationProvider' defined in class path resource 
[nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' 
while setting constructor argument; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'authorizer': FactoryBean threw exception on object creation; nested 
exception is org.springframework.ldap.AuthenticationNotSupportedException: 
[LDAP: error code 13 - confidentiality required]; nested exception is 
javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
confidentiality required]
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
at 
org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at 
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
at 
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at 
org.springfram

Re: NiFi Upgrade 1.9.2 to 1.10.0 - LDAP Failure

2019-11-11 Thread Andy LoPresto
Hi Josef,

My inclination is that somehow the password NiFi is trying to send to the LDAP 
service is no longer sufficiently protected? The only other change I am aware 
of that could influence this is the Spring Security upgrade from 4.2.8 to 
4.2.13 (NiFi-6412) [1]; the new version of Spring Security might enforce a new 
restriction on how the password is sent that LDAP doesn’t like. The LDAP error 
code 13 refers to the password being sent in plaintext [2]. As you are using 
StartTLS, I am assuming the LDAP port you’re connecting to is still 389? Did 
anything change on the LDAP server? Can you verify a simple lookup using 
ldapsearch still works? If you get the same error code, you may need to add -Z 
to the command to initialize a secure TLS channel. 

[1] https://issues.apache.org/jira/browse/NIFI-6412 

[2] 
https://ldap.com/ldap-result-code-reference-core-ldapv3-result-codes/#rc-confidentialityRequired
 



Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Nov 11, 2019, at 4:59 PM, josef.zahn...@swisscom.com wrote:
> 
> Hi guys
>  
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading,  we have an issue which prevents nifi from startup:
>  
>  
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> …
>  
> In authorizers.xml we added the line “false”, but besides that the
> authorizers.xml is the same. Anybody an ide