[jira] [Comment Edited] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820253#comment-17820253
 ] 

Lehel Boér edited comment on NIFI-12836 at 2/24/24 1:42 AM:


It appears that the issue encountered with the local setup stemmed from a 
conflict between the database and the IntelliJ debugger using the same 
port. I could not replicate it in a cloud environment.

Could you kindly add additional details? Specifically, does the issue persist, 
and if so, under what circumstances? Additionally, is it associated with a 
version upgrade?


was (Author: lehel44):
Looks like the issue with the local setup was due to the DB and the IntelliJ 
debugger running on the same port. I could not reproduce the issue in a cloud 
environment. Could you please provide some more details? Does it still occur, 
and when does it occur? Was it due to a version upgrade?

> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]
> https://lists.apache.org/thread/5fbtwk68yr4bcxpp2h2mtzwy0566rfqz
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820253#comment-17820253
 ] 

Lehel Boér edited comment on NIFI-12836 at 2/24/24 1:42 AM:


It appears that the issue encountered with the local setup stemmed from a 
conflict between the database and the IntelliJ debugger using the same 
port. I could not replicate it in a cloud environment.

Could you kindly add more details? Specifically, does the issue persist, and if 
so, under what circumstances? Additionally, is it associated with a version 
upgrade?


was (Author: lehel44):
It appears that the issue encountered with the local setup stemmed from a 
conflict between the database and the IntelliJ debugger using the same 
port. I could not replicate it in a cloud environment.

Could you kindly add additional details? Specifically, does the issue persist, 
and if so, under what circumstances? Additionally, is it associated with a 
version upgrade?

> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]
> https://lists.apache.org/thread/5fbtwk68yr4bcxpp2h2mtzwy0566rfqz
>  
>  
>  





[jira] [Commented] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820253#comment-17820253
 ] 

Lehel Boér commented on NIFI-12836:
---

Looks like the issue with the local setup was due to the DB and the IntelliJ 
debugger running on the same port. I could not reproduce the issue in a cloud 
environment. Could you please provide some more details? Does it still occur, 
and when does it occur? Was it due to a version upgrade?

> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]
> https://lists.apache.org/thread/5fbtwk68yr4bcxpp2h2mtzwy0566rfqz
>  
>  
>  





[jira] [Updated] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lehel Boér updated NIFI-12836:
--
Description: 
Reported encountering "Connection pool shut down" errors for the 
PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
NiFi cluster on version 2.0.0-M1.
 * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]

https://lists.apache.org/thread/5fbtwk68yr4bcxpp2h2mtzwy0566rfqz

 

 

 

  was:
Reported encountering "Connection pool shut down" errors for the 
PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
NiFi cluster on version 2.0.0-M1.
 * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]

 

 

 

 


> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]
> https://lists.apache.org/thread/5fbtwk68yr4bcxpp2h2mtzwy0566rfqz
>  
>  
>  





[jira] [Updated] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lehel Boér updated NIFI-12836:
--
Description: 
Reported encountering "Connection pool shut down" errors for the 
PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
NiFi cluster on version 2.0.0-M1.
 * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]

 

 

 

 

  was:
Reported encountering "Connection pool shut down" errors for the 
PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
NiFi cluster on version 2.0.0-M1.
 * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]

I was able to reproduce a similar issue with NiFi 2.0.0-M1 when using local 
DynamoDB; it produced "Connection reset" errors during runtime. When using an 
AWS instance, the issue did not occur.
 * [Stack Trace - connection reset|https://codefile.io/f/XF51uNKg3g]

 

 

 

 


> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]
>  
>  
>  
>  





[jira] [Resolved] (NIFI-12832) Cleanup nifi-mock dependencies

2024-02-23 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-12832.
-
Resolution: Fixed

> Cleanup nifi-mock dependencies
> --
>
> Key: NIFI-12832
> URL: https://issues.apache.org/jira/browse/NIFI-12832
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have allowed quite a few dependencies to creep into the nifi-mock module. 
> It now has dependencies on nifi-utils, nifi-framework-api, and nifi-parameter. 
> These are not modules that the mock framework should depend on. We should 
> ensure that we keep this module lean and clean.
> I suspect removing these dependencies from the mock framework will have a 
> trickle-down effect, as most modules depend on this module, and removing 
> these dependencies will likely require updates to modules that use these 
> things as transitive dependencies.
> It appears that nifi-parameter is not even used, even though it's a 
> dependency. There are two classes in nifi-utils that are in use: 
> CoreAttributes and StandardValidators. But I argue these really should move 
> to nifi-api, as they are widely used APIs for which we will guarantee 
> backward compatibility.
> Additionally, StandardValidators depends on FormatUtils. While we don't want 
> to bring FormatUtils into nifi-api, we should introduce a new TimeFormat 
> class in nifi-api that is responsible for parsing things like the durations 
> our extensions use ("5 mins", etc.). This makes it simpler to build 
> "framework-level extensions" and allows for a cleaner implementation of 
> NiFiProperties in the future. FormatUtils should then make use of this class.
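As an aside, the duration parsing described above can be sketched in a few lines. This is a hypothetical, self-contained illustration, not the actual NiFi TimeFormat/DurationFormat API; the class and method names here are invented for the example.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of a duration parser of the kind proposed for nifi-api:
// turns strings such as "5 mins" or "30 sec" into milliseconds.
public class DurationSketch {
    private static final Pattern DURATION = Pattern.compile("(\\d+)\\s*([a-zA-Z]+)");
    private static final Map<String, Long> UNIT_MILLIS = Map.of(
            "ms", 1L, "millis", 1L,
            "sec", 1_000L, "secs", 1_000L,
            "min", 60_000L, "mins", 60_000L,
            "hour", 3_600_000L, "hours", 3_600_000L);

    public static long parseMillis(String text) {
        Matcher m = DURATION.matcher(text.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Not a duration: " + text);
        }
        Long unit = UNIT_MILLIS.get(m.group(2).toLowerCase());
        if (unit == null) {
            throw new IllegalArgumentException("Unknown unit: " + m.group(2));
        }
        return Long.parseLong(m.group(1)) * unit;
    }

    public static void main(String[] args) {
        System.out.println(parseMillis("5 mins"));  // 300000
    }
}
```

The real implementation would need to cover more units and validation; the point is that such a parser has no dependency on FormatUtils or any other nifi-utils code.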





[jira] [Commented] (NIFI-12832) Cleanup nifi-mock dependencies

2024-02-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820208#comment-17820208
 ] 

ASF subversion and git services commented on NIFI-12832:


Commit ae423fc6ba564119456fbe2797d1371cbf3d71ab in nifi's branch 
refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ae423fc6ba ]

NIFI-12832 Removed unnecessary dependencies from nifi-mock

- Moved StandardValidators to nifi-api
- Moved URL creation method from UriUtils to URLValidator
- Separated FormatUtils into FormatUtils and DurationFormat classes
- Added DurationFormat to nifi-api

This closes #8442

Signed-off-by: David Handermann 


> Cleanup nifi-mock dependencies
> --
>
> Key: NIFI-12832
> URL: https://issues.apache.org/jira/browse/NIFI-12832
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
> Fix For: 2.0.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have allowed quite a few dependencies to creep into the nifi-mock module. 
> It now has dependencies on nifi-utils, nifi-framework-api, and nifi-parameter. 
> These are not modules that the mock framework should depend on. We should 
> ensure that we keep this module lean and clean.
> I suspect removing these dependencies from the mock framework will have a 
> trickle-down effect, as most modules depend on this module, and removing 
> these dependencies will likely require updates to modules that use these 
> things as transitive dependencies.
> It appears that nifi-parameter is not even used, even though it's a 
> dependency. There are two classes in nifi-utils that are in use: 
> CoreAttributes and StandardValidators. But I argue these really should move 
> to nifi-api, as they are widely used APIs for which we will guarantee 
> backward compatibility.
> Additionally, StandardValidators depends on FormatUtils. While we don't want 
> to bring FormatUtils into nifi-api, we should introduce a new TimeFormat 
> class in nifi-api that is responsible for parsing things like the durations 
> our extensions use ("5 mins", etc.). This makes it simpler to build 
> "framework-level extensions" and allows for a cleaner implementation of 
> NiFiProperties in the future. FormatUtils should then make use of this class.





Re: [PR] NIFI-12832: Eliminated unnecessary dependencies from nifi-mock; moved… [nifi]

2024-02-23 Thread via GitHub


exceptionfactory closed pull request #8442: NIFI-12832: Eliminated unnecessary 
dependencies from nifi-mock; moved…
URL: https://github.com/apache/nifi/pull/8442


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (NIFI-6959) JMS Durable subscription does not support filters

2024-02-23 Thread Michael W Moser (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael W Moser resolved NIFI-6959.
---
Resolution: Fixed

> JMS Durable subscription does not support filters
> -
>
> Key: NIFI-6959
> URL: https://issues.apache.org/jira/browse/NIFI-6959
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.9.1, 1.9.2
> Environment: All
>Reporter: john uiterwyk
>Priority: Major
>
> The ConsumeJMS processor does not support passing filters when creating a 
> topic subscription.
> This can be seen on line 72 of JMSConsumer.java, where null is passed for 
> the filter:
> {code:java}
> return session.createDurableConsumer((Topic) destination, subscriberName,
>     null, JMSConsumer.this.jmsTemplate.isPubSubDomain());
> {code}
> Some JMS implementations require a filter when creating a durable 
> subscription, for performance reasons.
> This can be resolved by adding a string 'filter' property to the ConsumeJMS 
> processor and passing that filter string through JMSConsumer's 
> createMessageConsumer method to the createDurableConsumer call.
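The proposed change amounts to threading a caller-supplied selector string through instead of hard-coding null. The sketch below uses a stub interface rather than the real JMS Session/Topic/MessageConsumer types, so it stays self-contained; the stub merely mirrors the parameter shape of JMS 2.0's createDurableConsumer.

```java
// Stub mirroring the parameter shape of JMS Session#createDurableConsumer
// (topic, subscription name, message selector, noLocal); the real API takes
// a Topic and returns a MessageConsumer.
interface DurableSessionStub {
    String createDurableConsumer(String topic, String subscriberName,
                                 String messageSelector, boolean noLocal);
}

public class JmsSelectorSketch {
    // Before: the selector is hard-coded to null, so broker-side filtering
    // is impossible.
    static String subscribeWithoutFilter(DurableSessionStub session,
                                         String topic, String name) {
        return session.createDurableConsumer(topic, name, null, false);
    }

    // After: a caller-supplied selector (e.g. "JMSType = 'order'") reaches
    // the broker, which can then filter before delivery.
    static String subscribeWithFilter(DurableSessionStub session,
                                      String topic, String name, String selector) {
        return session.createDurableConsumer(topic, name, selector, false);
    }

    public static void main(String[] args) {
        // Lambda stub records what the broker would receive.
        DurableSessionStub session = (t, n, sel, local) -> t + "/" + n + "[" + sel + "]";
        System.out.println(subscribeWithFilter(session, "orders", "sub1", "JMSType = 'order'"));
    }
}
```

In the real processor, the selector would come from the new 'filter' property and flow through createMessageConsumer unchanged.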





[jira] [Updated] (NIFI-12839) Maven archetype for processor bundle incorrectly sets NiFi dependency version

2024-02-23 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-12839:

Status: Patch Available  (was: In Progress)

> Maven archetype for processor bundle incorrectly sets NiFi dependency version
> -
>
> Key: NIFI-12839
> URL: https://issues.apache.org/jira/browse/NIFI-12839
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
> Fix For: 2.0.0, 1.26.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The processor bundle archetype does not explicitly set the dependency version 
> for nifi-standard-services-api-nar, leading the generated POM to set it to 
> the version of the extension. It should explicitly set the version to 
> "nifiVersion".





[PR] NIFI-12839: Explicitly set nifiVersion for processor bundle archetype dependencies [nifi]

2024-02-23 Thread via GitHub


mattyb149 opened a new pull request, #8447:
URL: https://github.com/apache/nifi/pull/8447

   # Summary
   
   [NIFI-12839](https://issues.apache.org/jira/browse/NIFI-12839) This PR 
explicitly sets the dependency version for nifi-standard-services-api-nar, 
avoiding the issue when the extension version does not match the NiFi version.
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue 
created
   
   ### Pull Request Tracking
   
   - [x] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [x] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [x] Pull Request based on current revision of the `main` branch
   - [x] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [x] JDK 21
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [x] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-12839) Maven archetype for processor bundle incorrectly sets NiFi dependency version

2024-02-23 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-12839:
---

 Summary: Maven archetype for processor bundle incorrectly sets 
NiFi dependency version
 Key: NIFI-12839
 URL: https://issues.apache.org/jira/browse/NIFI-12839
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Reporter: Matt Burgess
Assignee: Matt Burgess
 Fix For: 2.0.0, 1.26.0


The processor bundle archetype does not explicitly set the dependency version 
for nifi-standard-services-api-nar, leading the generated POM to set it to the 
version of the extension. It should explicitly set the version to "nifiVersion".
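For illustration (not taken from the ticket), the fix amounts to pinning the NAR dependency in the archetype's generated POM to the NiFi version property rather than letting it inherit the extension bundle's version. A sketch of what the generated dependency could look like, assuming the archetype's `nifiVersion` parameter is exposed as a property of the same name:

```xml
<!-- Hypothetical sketch of the generated dependency, pinned to the NiFi
     version instead of inheriting the extension bundle's version -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-standard-services-api-nar</artifactId>
    <version>${nifiVersion}</version>
    <type>nar</type>
</dependency>
```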





[jira] [Updated] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lehel Boér updated NIFI-12836:
--
Description: 
Reported encountering "Connection pool shut down" errors for the 
PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
NiFi cluster on version 2.0.0-M1.
 * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]

I was able to reproduce a similar issue with NiFi 2.0.0-M1 when using local 
DynamoDB; it produced "Connection reset" errors during runtime. When using an 
AWS instance, the issue did not occur.
 * [Stack Trace - connection reset|https://codefile.io/f/XF51uNKg3g]

 

 

 

 

  was:
Reported encountering "Connection pool shut down" errors for the 
PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
NiFi cluster on version 2.0.0-M1.

 

Steps to Reproduce:

Set up NiFi 2.0.0-M1.
Use PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors.
Observe "Connection pool shut down" or "Connection reset" errors during runtime.
Related Stack Trace:
 * [Stack Trace 1|https://codefile.io/f/XF51uNKg3g] I encountered a slightly 
different stack trace, but the root cause and symptoms align with those reported
 * [Stack Trace 2|https://codefile.io/f/ZMXYzHt89X]

 

After attempting to reproduce the issue with a local DynamoDB setup and 
debugging, I found that the AbstractAWSProcessor::onStopped method runs 
concurrently with the RecordHandlerResult::handle method, initiated by the 
PutDynamoDBRecord::onTrigger. The problematic line 
{code:java}
clientCache.asMap().values().forEach(SdkClient::close){code}
 in AbstractAWSProcessor::onStopped leads to premature shutdown of connections, 
triggering the observed errors. Although removing this line resolves the issue, 
the root cause lies in the early execution of AbstractAWSProcessor::onStopped.
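The race described above can be modeled in a few lines. This is a simplified, self-contained sketch, not the actual NiFi code: the class, a fake client, and the busy-wait guard are all invented for the example. It shows why closing cached clients in the stop callback while a trigger thread is still mid-flight produces "Connection pool shut down" style errors, and one way (an in-flight counter) to avoid closing too early.

```java
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of the race: the processor caches clients per region and a
// stop callback closes them, possibly while a trigger thread still uses one.
public class ClientCacheSketch {
    public static class FakeClient {
        volatile boolean closed;
        void close() { closed = true; }
        void send() {
            // Mirrors the symptom: using a closed client fails.
            if (closed) throw new IllegalStateException("Connection pool shut down");
        }
    }

    private final ConcurrentHashMap<String, FakeClient> cache = new ConcurrentHashMap<>();
    private final java.util.concurrent.atomic.AtomicInteger inFlight =
            new java.util.concurrent.atomic.AtomicInteger();

    public FakeClient client(String region) {
        return cache.computeIfAbsent(region, r -> new FakeClient());
    }

    // Trigger path: mark the client as in use while work is in progress.
    public void onTrigger(String region) {
        inFlight.incrementAndGet();
        try {
            client(region).send();
        } finally {
            inFlight.decrementAndGet();
        }
    }

    // Stop path: only close once no trigger is mid-flight
    // (busy-wait kept deliberately simple for the sketch).
    public void onStopped() {
        while (inFlight.get() > 0) Thread.onSpinWait();
        cache.values().forEach(FakeClient::close);
        cache.clear();
    }
}
```

In the real processor the fix would need proper coordination between the framework's stop lifecycle and in-flight onTrigger invocations rather than a spin loop; the sketch only captures the ordering constraint.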

 

 

 

 


> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  * [Stack Trace reported on the mailing list|https://codefile.io/f/ZMXYzHt89X]
> I was able to reproduce a similar issue with NiFi 2.0.0-M1 when using local 
> DynamoDB; it produced "Connection reset" errors during runtime. When using an 
> AWS instance, the issue did not occur.
>  * [Stack Trace - connection reset|https://codefile.io/f/XF51uNKg3g]
>  
>  
>  
>  





Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500838899


##
extensions/python/ExecutePythonProcessor.h:
##
@@ -97,10 +99,8 @@ class ExecutePythonProcessor : public core::Processor {
 python_dynamic_ = true;
   }
 
-  void addProperty(const std::string& name, const std::string& description,
-      const std::string& defaultvalue, bool required, bool el) {
-    python_properties_.emplace_back(
-        core::PropertyDefinitionBuilder<>::createProperty(name).withDefaultValue(defaultvalue).withDescription(description).isRequired(required).supportsExpressionLanguage(el).build());
-  }
+  void addProperty(const std::string& name, const std::string& description,
+      const std::optional<std::string>& defaultvalue, bool required, bool el,
+      bool sensitive, const std::optional<std::string>& property_type_code);

   const std::vector<core::Property>& getPythonProperties() const {
     return python_properties_;

Review Comment:
   Good point, updated in d3f47c57d21829d1715bd794f8c08a462f4fa361






Re: [PR] MINIFICPP-2298 Make RocksDB options configurable [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on PR #1731:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1731#issuecomment-1961546852

   > Do you think we could use rocksdb ini files to make these settings 
user-overridable? https://github.com/facebook/rocksdb/wiki/RocksDB-Options-File
   
   Unfortunately, loading the config from a file with `LoadOptionsFromFile` 
returns a whole `DBOptions` object, overwriting all the previously read config 
values that we retrieved using `LoadLatestOptions` and patched with our custom 
values.
   
   Fortunately, I found that `rocksdb::GetDBOptionsFromMap` can update a 
previously read `DBOptions` object from an unordered map. This way we can 
update any RocksDB option from the minifi.properties file. I updated the PR in 
84441ba21b1cf0850e25807356a730ee30040cd2 to allow overriding any RocksDB 
option defined in the properties file with the `nifi.global.rocksdb.options.` 
prefix.
   





Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


fgerlits commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500803117


##
extensions/python/ExecutePythonProcessor.h:
##
@@ -97,10 +99,8 @@ class ExecutePythonProcessor : public core::Processor {
 python_dynamic_ = true;
   }
 
-  void addProperty(const std::string& name, const std::string& description,
-      const std::string& defaultvalue, bool required, bool el) {
-    python_properties_.emplace_back(
-        core::PropertyDefinitionBuilder<>::createProperty(name).withDefaultValue(defaultvalue).withDescription(description).isRequired(required).supportsExpressionLanguage(el).build());
-  }
+  void addProperty(const std::string& name, const std::string& description,
+      const std::optional<std::string>& defaultvalue, bool required, bool el,
+      bool sensitive, const std::optional<std::string>& property_type_code);

   const std::vector<core::Property>& getPythonProperties() const {
     return python_properties_;

Review Comment:
   I think we should acquire the mutex here, too, and return a copy.  The copy 
will happen anyway at the calling site in 
`PythonCreator::registerScriptDescription`; locking the mutex adds some 
overhead, but it is necessary.






[jira] [Created] (MINIFICPP-2305) C2VerifyHeartbeatAndStopSecure test transiently fails in CI

2024-02-23 Thread Jira
Gábor Gyimesi created MINIFICPP-2305:


 Summary: C2VerifyHeartbeatAndStopSecure test transiently fails in 
CI
 Key: MINIFICPP-2305
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2305
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Gábor Gyimesi
 Attachments: C2VerifyHeartbeatAndStopSecure_failure.log

C2VerifyHeartbeatAndStopSecure sometimes fails in CI:
{code:java}
Error: 2-23 14:39:39.239] 
[org::apache::nifi::minifi::core::flow::AdaptiveConfiguration] [error] Error 
while processing configuration file: Unable to parse configuration file as none 
of the possible required fields [Flow Controller] is available [in '' section 
of configuration file]
terminate called after throwing an instance of 'std::invalid_argument'
  what():  Unable to parse configuration file as none of the possible required 
fields [Flow Controller] is available [in '' section of configuration file] 
{code}
More info in the attached logs.





[jira] [Updated] (NIFI-12836) Connection pool shut down and SocketException for many AWS processors

2024-02-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lehel Boér updated NIFI-12836:
--
Priority: Major  (was: Critical)

> Connection pool shut down and SocketException for many AWS processors
> -
>
> Key: NIFI-12836
> URL: https://issues.apache.org/jira/browse/NIFI-12836
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Priority: Major
>
> Reported encountering "Connection pool shut down" errors for the 
> PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors while running in a 
> NiFi cluster on version 2.0.0-M1.
>  
> Steps to Reproduce:
> Set up NiFi 2.0.0-M1.
> Use PutDynamoDBRecord, DeleteDynamoDB, and PutSQS processors.
> Observe "Connection pool shut down" or "Connection reset" errors during 
> runtime.
> Related Stack Trace:
>  * [Stack Trace 1|https://codefile.io/f/XF51uNKg3g] I encountered a slightly 
> different stack trace, but the root cause and symptoms align with those 
> reported
>  * [Stack Trace 2|https://codefile.io/f/ZMXYzHt89X]
>  
> After attempting to reproduce the issue with a local DynamoDB setup and 
> debugging, I found that the AbstractAWSProcessor::onStopped method runs 
> concurrently with the RecordHandlerResult::handle method, initiated by the 
> PutDynamoDBRecord::onTrigger. The problematic line 
> {code:java}
> clientCache.asMap().values().forEach(SdkClient::close){code}
>  in AbstractAWSProcessor::onStopped leads to premature shutdown of 
> connections, triggering the observed errors. Although removing this line 
> resolves the issue, the root cause lies in the early execution of 
> AbstractAWSProcessor::onStopped.
>  
>  
>  
>  





Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500753007


##
extensions/python/ExecutePythonProcessor.cpp:
##
@@ -44,8 +45,7 @@ void ExecutePythonProcessor::initialize() {
 
   try {
     loadScript();
-  } catch(const std::runtime_error&) {
-    logger_->log_warn("Could not load python script while initializing. In case of non-native python processor this is normal and will be done in the schedule phase.");

Review Comment:
   Sure, added jira ticket: https://issues.apache.org/jira/browse/MINIFICPP-2304






[jira] [Created] (MINIFICPP-2304) Clean up Python processor initialization

2024-02-23 Thread Jira
Gábor Gyimesi created MINIFICPP-2304:


 Summary: Clean up Python processor initialization
 Key: MINIFICPP-2304
 URL: https://issues.apache.org/jira/browse/MINIFICPP-2304
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Gábor Gyimesi


Python processor initialization should be refactored to be cleaner. We 
instantiate the Python processors twice:

We instantiate the Python processors that are used in the configured MiNiFi 
flow. This is straightforward and not problematic.

The problem is that, before that, we also instantiate all Python processors in 
the PythonCreator::registerScriptDescription method to get the class 
description of every available Python processor for the agent manifest.
 * In this scenario we call each Python processor's initialize method twice:
 ** First, the PythonObjectFactory::create method calls it to initialize the 
supported properties and set the ScriptFile property to the path of the Python 
processor
 ** After that, PythonCreator::registerScriptDescription also calls it 
explicitly to load the Python processor from the set path
 ** This should be reworked so that double initialization is unnecessary and 
ExecutePythonProcessor::initialize() can emit a telling warning message if the 
loadScript method fails
 * We should also find a way to avoid initializing all the Python processors 
and retrieve the processor data without it. For NiFi Python processors, one 
option is to use the "ast" Python module to retrieve the processor details, 
which does not require loading the Python module





[jira] [Created] (NIFI-12838) NiFi toolkit command pg-import won't output JSON even when -ot json specified

2024-02-23 Thread Ryan (Jira)
Ryan created NIFI-12838:
---

 Summary: NiFi toolkit command pg-import won't output JSON even when 
-ot json specified
 Key: NIFI-12838
 URL: https://issues.apache.org/jira/browse/NIFI-12838
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Affects Versions: 1.23.2
Reporter: Ryan


When executing a `nifi pg-import` toolkit command, it doesn't output the JSON 
for the imported process group even when `-ot json` is specified in the 
command.





Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


fgerlits commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500714476


##
extensions/python/ExecutePythonProcessor.cpp:
##
@@ -44,8 +45,7 @@ void ExecutePythonProcessor::initialize() {
 
   try {
 loadScript();
-  } catch(const std::runtime_error&) {
-logger_->log_warn("Could not load python script while initializing. In 
case of non-native python processor this is normal and will be done in the 
schedule phase.");

Review Comment:
   I see, thanks! Can you create a Jira to clear this up later?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500709305


##
extensions/python/types/Types.h:
##
@@ -160,7 +160,18 @@ class Long : public ReferenceHolder {
   }
 
   int64_t asInt64() {
-return static_cast<int64_t>(PyLong_AsLongLong(this->ref_.get()));
+auto long_value = PyLong_AsLongLong(this->ref_.get());
+if (long_value == -1 && PyErr_Occurred()) {
+  throw PyException();
+}
+return static_cast<int64_t>(long_value);

Review Comment:
   Updated in 6593d9b63514dd2a3b2060b747680f37691abecd



##
libminifi/src/core/ConfigurableComponent.cpp:
##
@@ -36,19 +36,28 @@ ConfigurableComponent::ConfigurableComponent()
 
 ConfigurableComponent::~ConfigurableComponent() = default;
 
+Property* ConfigurableComponent::findProperty(const std::string& name) const {

Review Comment:
   Updated in 6593d9b63514dd2a3b2060b747680f37691abecd



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500708936


##
extensions/python/ExecutePythonProcessor.cpp:
##
@@ -146,11 +151,38 @@ std::unique_ptr<PythonScriptEngine> 
ExecutePythonProcessor::createScriptEngine()
   auto engine = std::make_unique<PythonScriptEngine>();
 
   python_logger_ = 
core::logging::LoggerFactory::getAliasedLogger(getName());
-  engine->initialize(Success, Failure, python_logger_);
+  engine->initialize(Success, Failure, Original, python_logger_);
 
   return engine;
 }
 
+core::Property* ExecutePythonProcessor::findProperty(const std::string& name) 
const {
+  if (auto prop_ptr = core::ConfigurableComponent::findProperty(name)) {
+return prop_ptr;
+  }
+
+  auto it = ranges::find_if(python_properties_, [&name](const auto& item){
+return item.getName() == name;
+  });
+  if (it != python_properties_.end()) {
+return const_cast<core::Property*>(&*it);
+  }
+
+  return nullptr;
+}
+
+std::map<std::string, core::Property> ExecutePythonProcessor::getProperties() 
const {
+  auto result = ConfigurableComponent::getProperties();
+
+  std::lock_guard lock(configuration_mutex_);

Review Comment:
   You are right, it should be a separate mutex for the `python_properties_`, 
it didn't really make sense this way. Updated in 
6593d9b63514dd2a3b2060b747680f37691abecd



##
extensions/python/PYTHON.md:
##
@@ -106,20 +127,39 @@ class VaderSentiment(object):
 To enable python Processor capabilities, the following options need to be 
provided in minifi.properties. The directory specified
 can contain processors. Note that the processor name will be the reference in 
your flow. Directories are treated like package names.
 Therefore if the nifi.python.processor.dir is /tmp/ and you have a 
subdirectory named packagedir with the file name file.py, it will
-produce a processor with the name 
org.apache.nifi.minifi.processors.packagedir.file. Note that each subdirectory 
will append a package 
-to the reference class name. 
+produce a processor with the name 
org.apache.nifi.minifi.processors.packagedir.file. Note that each subdirectory 
will append a package
+to the reference class name.
 
 in minifi.properties
#directory where processors exist
nifi.python.processor.dir=
-   
-   
+
+
 ## Processors
 The python directory (extensions/pythonprocessors) contains implementations 
that will be available for flows if the required dependencies
 exist.
-   
-## Sentiment Analysis
+
+### Sentiment Analysis
 
 The SentimentAnalysis processor will perform a Vader Sentiment Analysis. This 
requires that you install nltk and VaderSentiment
pip install nltk
pip install VaderSentiment
+
+## Using NiFi Python Processors
+
MiNiFi C++ supports the use of NiFi Python processors that inherit from the 
FlowFileTransform base class. To use these processors, copy the Python 
processor module into the nifi_python_processors directory located under the 
python directory (by default 
${minifi_root}/minifi-python/nifi_python_processors). To learn how to write 
NiFi Python processors, refer to the Python Developer Guide in the [Apache 
NiFi documentation](https://nifi.apache.org/documentation/v2/).
+
+In the flow configuration these Python processors can be referenced by their 
fully qualified class name, which looks like this: 
org.apache.nifi.minifi.processors.nifi_python_processors...
 For example, the fully qualified class name of the PromptChatGPT processor 
implemented in the file nifi_python_processors/PromptChatGPT.py is 
org.apache.nifi.minifi.processors.nifi_python_processors.PromptChatGPT. If a 
processor is copied under a subdirectory, because it is part of a python 
submodule, the submodule name will be appended to the fully qualified class 
name. For example, if the QueryPinecone processor is implemented in the 
QueryPinecone.py file that is copied to 
nifi_python_processors/vectorstores/QueryPinecone.py, the fully qualified class 
name will be 
org.apache.nifi.minifi.processors.nifi_python_processors.vectorstores.QueryPinecone
 in the configuration file.
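The path-to-class-name mapping described above can be sketched as follows (the prefix string comes from the text; the helper itself is illustrative, not MiNiFi code):

```python
from pathlib import PurePosixPath

# Package prefix used for MiNiFi Python processors, per the text above.
PREFIX = "org.apache.nifi.minifi.processors"

def qualified_class_name(relative_path):
    """Map a processor file path (relative to the python directory) to its
    fully qualified class name: strip the .py suffix and join each path
    component with dots under the fixed prefix."""
    parts = PurePosixPath(relative_path).with_suffix("").parts
    return ".".join((PREFIX, *parts))

print(qualified_class_name("nifi_python_processors/vectorstores/QueryPinecone.py"))
# org.apache.nifi.minifi.processors.nifi_python_processors.vectorstores.QueryPinecone
```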
+
+**NOTE:** The name of the NiFi Python processor file should match the class 
name in the file, otherwise the processor will not be found.
+
+Due to some differences between the NiFi and MiNiFi C++ processor 
implementations, there are some limitations when using the NiFi Python 
processors:
+- Record-based processors are not yet supported in MiNiFi C++, so NiFi 
Python processors inherited from RecordTransform are not supported.
+- Virtualenv support is not yet available in MiNiFi C++, so all required 
packages must be installed on the system.

Review Comment:
   Updated in 6593d9b63514dd2a3b2060b747680f37691abecd



##
extensions/python/PYTHON.md:
##
@@ -106,20 +127,39 @@ class VaderSentiment(object):
 To enable python Processor capabilities, the following options need to be 
provided in minifi.properties. The 

Re: [PR] NIFI-12831: Add PutOpenSearchVector and QueryOpenSearchVector processors [nifi]

2024-02-23 Thread via GitHub


krisztina-zsihovszki commented on code in PR #8441:
URL: https://github.com/apache/nifi/pull/8441#discussion_r1500550037


##
nifi-python-extensions/nifi-text-embeddings-module/src/main/python/vectorstores/PutOpenSearchVector.py:
##
@@ -0,0 +1,251 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from langchain.vectorstores import OpenSearchVectorSearch
+from nifiapi.flowfiletransform import FlowFileTransform, 
FlowFileTransformResult
+from nifiapi.properties import PropertyDescriptor, StandardValidators, 
ExpressionLanguageScope, PropertyDependency
+from OpenSearchVectorUtils import (OPENAI_API_KEY, OPENAI_API_MODEL, 
HUGGING_FACE_API_KEY, HUGGING_FACE_MODEL,
+   HTTP_HOST,
+   USERNAME, PASSWORD, VERIFY_CERTIFICATES, 
INDEX_NAME, VECTOR_FIELD, TEXT_FIELD,
+   create_authentication_params, 
parse_documents)
+from EmbeddingUtils import EMBEDDING_MODEL, create_embedding_service
+from nifiapi.documentation import use_case, multi_processor_use_case, 
ProcessorConfiguration
+
+
+@use_case(description="Create vectors/embeddings that represent text content 
and send the vectors to OpenSearch",
+  notes="This use case assumes that the data has already been 
formatted in JSONL format with the text to store in OpenSearch provided in the 
'text' field.",
+  keywords=["opensearch", "embedding", "vector", "text", 
"vectorstore", "insert"],
+  configuration="""
+Configure the 'HTTP Host' to an appropriate URL where 
OpenSearch is accessible.
+Configure 'Embedding Model' to indicate whether OpenAI 
embeddings should be used or a HuggingFace embedding model should be used: 
'Hugging Face Model' or 'OpenAI Model'
+Configure the 'OpenAI API Key' or 'HuggingFace API Key', 
depending on the chosen Embedding Model.
+Set 'Index Name' to the name of your OpenSearch Index.
+Set 'Vector Field Name' to the name of the field in the 
Document which will store the vector data.
+Set 'Text Field Name' to the name of the field in the Document 
which will store the text data.
+
+If the documents to send to OpenSearch contain a unique 
identifier, set the 'Document ID Field Name' property to the name of the field 
that contains the document ID.
+This property can be left blank, in which case a unique ID 
will be generated based on the FlowFile's filename.
+
+If the provided index does not exist in OpenSearch, the 
processor is capable of creating it. The 'New Index Strategy' property defines 
+whether the index should be created from the default template 
or configured with custom values.
+""")
+@use_case(description="Update vectors/embeddings in OpenSearch",
+  notes="This use case assumes that the data has already been 
formatted in JSONL format with the text to store in OpenSearch provided in the 
'text' field.",
+  keywords=["opensearch", "embedding", "vector", "text", 
"vectorstore", "update", "upsert"],
+  configuration="""
+Configure the 'HTTP Host' to an appropriate URL where 
OpenSearch is accessible.
+Configure 'Embedding Model' to indicate whether OpenAI 
embeddings should be used or a HuggingFace embedding model should be used: 
'Hugging Face Model' or 'OpenAI Model'
+Configure the 'OpenAI API Key' or 'HuggingFace API Key', 
depending on the chosen Embedding Model.
+Set 'Index Name' to the name of your OpenSearch Index.
+Set 'Vector Field Name' to the name of the field in the 
Document which will store the vector data.
+Set 'Text Field Name' to the name of the field in the Document 
which will store the text data.
+Set the 'Document ID Field Name' property to the name of the 
field that contains the identifier of the document in OpenSearch to update.
+""")
+class PutOpenSearchVector(FlowFileTransform):
+class Java:
+implements = ['org.apache.nifi.python.processor.FlowFileTransform']
+
+class 

Re: [PR] NIFI-12672 Added Azure specific versions of FileResourceService [nifi]

2024-02-23 Thread via GitHub


turcsanyip commented on code in PR #8359:
URL: https://github.com/apache/nifi/pull/8359#discussion_r1500389209


##
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureDataLakeStorageFileResourceService.java:
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.services.azure.storage;
+
+import com.azure.storage.file.datalake.DataLakeDirectoryClient;
+import com.azure.storage.file.datalake.DataLakeFileClient;
+import com.azure.storage.file.datalake.DataLakeFileSystemClient;
+import com.azure.storage.file.datalake.DataLakeServiceClient;
+import com.azure.storage.file.datalake.models.DataLakeStorageException;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.documentation.UseCase;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.fileresource.service.api.FileResource;
+import org.apache.nifi.fileresource.service.api.FileResourceService;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processors.azure.storage.FetchAzureDataLakeStorage;
+import 
org.apache.nifi.processors.azure.storage.utils.DataLakeServiceClientFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.ADLS_CREDENTIALS_SERVICE;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.DIRECTORY;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.FILE;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.FILESYSTEM;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.getProxyOptions;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.validateDirectoryProperty;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.validateFileProperty;
+import static 
org.apache.nifi.processors.azure.storage.utils.AzureStorageUtils.validateFileSystemProperty;
+
+@Tags({"azure", "microsoft", "cloud", "storage", "adlsgen2", "file", 
"resource", "datalake"})
+@SeeAlso({FetchAzureDataLakeStorage.class})
+@CapabilityDescription("Provides an Azure Data Lake Storage (ADLS) file 
resource for other components.")
+@UseCase(
+description = "Fetch the specified file from Azure Data Lake Storage." 
+
+" The service provides higher performance compared to fetch 
processors when the data should be moved between different storages without any 
transformation.",
+configuration = """
+"Filesystem Name" = "${azure.filesystem}"
+"Directory Name" = "${azure.directory}"
+"File Name" = "${azure.filename}"
+
+The "ADLS Credentials" property should specify an instance of 
the ADLSCredentialsService in order to provide credentials for accessing the 
filesystem.
+"""
+)
+public class AzureDataLakeStorageFileResourceService extends 
AbstractControllerService implements FileResourceService {
+
+private static final List<PropertyDescriptor> PROPERTIES = List.of(
+ADLS_CREDENTIALS_SERVICE,
+FILESYSTEM,
+DIRECTORY,

Review Comment:
   Please add default values for these properties (`${azure.filesystem}` and 
`${azure.directory}`, respectively). The defaults should be the attributes 
emitted by `ListAzureDataLakeStorage` (as in case of `ListAzureBlobStorage_v12` 
and `AzureBlobStorageFileResourceService`).



##
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/services/azure/storage/AzureBlobStorageFileResourceService.java:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) 

Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500664052


##
extensions/python/types/PyProcessContext.cpp:
##
@@ -65,12 +66,31 @@ PyObject* PyProcessContext::getProperty(PyProcessContext* 
self, PyObject* args)
 return nullptr;
   }
 
-  const char* property;
-  if (!PyArg_ParseTuple(args, "s", &property)) {
+  const char* property_name = nullptr;
+  PyObject* script_flow_file = nullptr;
+  if (!PyArg_ParseTuple(args, "s|O", &property_name, &script_flow_file)) {
 throw PyException();
   }
+
   std::string value;
-  context->getProperty(property, value);
+  if (!script_flow_file) {
+if (!context->getProperty(property_name, value)) {
+  Py_RETURN_NONE;
+}
+  } else {
+auto py_flow = reinterpret_cast<PyScriptFlowFile*>(script_flow_file);
+const auto flow_file = py_flow->script_flow_file_.lock();
+if (!flow_file) {
+  PyErr_SetString(PyExc_AttributeError, "tried reading FlowFile outside 
'on_trigger'");
+  return nullptr;

Review Comment:
   Py_RETURN_NONE returns a Python object holding the Python `None` value, 
which can be a valid return value in some cases. Returning `nullptr`, on the 
other hand, indicates an error: Python raises an exception when `nullptr` is 
returned from the C API, using the error message set via `PyErr_SetString`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500637729


##
extensions/python/PythonScriptEngine.cpp:
##
@@ -180,4 +198,66 @@ void PythonScriptEngine::evaluateModuleImports() {
   }
 }
 
+void PythonScriptEngine::initializeProcessorObject(const std::string& 
python_class_name) {
+  GlobalInterpreterLock gil;
+  if (auto python_class = bindings_[python_class_name]) {
+auto num_args = [&]() -> size_t {
+  auto class_init = 
OwnedObject(PyObject_GetAttrString(python_class->get(), "__init__"));
+  if (!class_init.get()) {
+return 0;
+  }
+
+  auto inspect_module = OwnedObject(PyImport_ImportModule("inspect"));
+  if (!inspect_module.get()) {
+return 0;
+  }
+
+  auto inspect_args = 
OwnedObject(PyObject_CallMethod(inspect_module.get(), "getfullargspec", "O", 
class_init.get()));
+  if (!inspect_args.get()) {
+return 0;
+  }
+
+  auto arg_list = OwnedObject(PyObject_GetAttrString(inspect_args.get(), 
"args"));
+  if (!arg_list.get()) {
+return 0;
+  }
+
+  return PyList_Size(arg_list.get());
+}();
+
+if (num_args > 1) {
+  auto kwargs = OwnedDict::create();
+  auto value = OwnedObject(Py_None);
+  kwargs.put("jvm", value);
+  auto args = OwnedObject(PyTuple_New(0));

Review Comment:
   In NiFi there are scenarios where the `jvm` object is passed to the 
constructor, either as a positional arg or as a named kwarg, and we expect 
only this one optional argument. We only want to pass it as part of the kwargs 
argument of the Python processor object's constructor call, but before that we 
need to pass the positional args as well. As we do not want to pass any 
positional args, the Python C API expects us to pass a zero-length tuple (a 
`PyTuple_New(0)` object) to the `PyObject_Call` function in this case.
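In pure Python, the argument-count check and the conditional `jvm` kwarg described above correspond roughly to the following (an illustrative sketch with made-up classes, not the MiNiFi code):

```python
import inspect

class WithJvm:
    """Processor whose __init__ accepts the optional jvm argument."""
    def __init__(self, jvm=None):
        self.jvm = jvm

class WithoutJvm:
    """Processor whose __init__ takes no extra arguments."""
    def __init__(self):
        pass

def instantiate(cls):
    """Pass jvm=None only when __init__ accepts more than just 'self',
    mirroring the C-side getfullargspec check; calling cls(**kwargs) with
    no positional args is what PyObject_Call(cls, PyTuple_New(0), kwargs)
    does at the C level."""
    num_args = len(inspect.getfullargspec(cls.__init__).args)
    if num_args > 1:
        return cls(**{"jvm": None})
    return cls()

print(type(instantiate(WithJvm)).__name__)
print(type(instantiate(WithoutJvm)).__name__)
```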



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINIFICPP-2276 Support FlowFileTransform NiFi Python processors [nifi-minifi-cpp]

2024-02-23 Thread via GitHub


lordgamez commented on code in PR #1712:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1712#discussion_r1500602870


##
extensions/python/ExecutePythonProcessor.cpp:
##
@@ -44,8 +45,7 @@ void ExecutePythonProcessor::initialize() {
 
   try {
 loadScript();
-  } catch(const std::runtime_error&) {
-logger_->log_warn("Could not load python script while initializing. In 
case of non-native python processor this is normal and will be done in the 
schedule phase.");

Review Comment:
   Unfortunately the Python processors are instantiated twice. We have to 
instantiate the processors defined in the flow, but we also instantiate all 
Python processors when building the manifest, to get the description and the 
processor properties. In the latter scenario initialize is called twice: once 
before setting the script file in `PythonObjectFactory.h` (to initialize the 
supported properties), and in that case loadScript() will always fail, because 
the ScriptFile property value is not set yet. As this always happens, the log 
is useless here. This double initialization is a bit wonky and should be 
worked out sometime in the future.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-11859) Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager

2024-02-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-11859:
--
Component/s: Extensions
 (was: Configuration Management)

> Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager 
> 
>
> Key: NIFI-11859
> URL: https://issues.apache.org/jira/browse/NIFI-11859
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.22.0
>Reporter: Jeetendra G Vasisht
>Priority: Major
> Fix For: 2.0.0, 1.26.0
>
> Attachments: embeddedHazelcastNifiControllerservice.PNG, nifi--app.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The EmbeddedHazelcastCacheManager controller service gets enabled in cluster 
> mode with the "All Nodes" clustering strategy, but fails when run in 
> standalone mode with the "None" clustering strategy. This was observed in a 
> Kubernetes environment; Hazelcast comes from the internal NiFi packaging, and 
> no external Hazelcast dependency or code is being used.
> Controller gets stuck in Enabling state:
> !embeddedHazelcastNifiControllerservice.PNG|width=662,height=131!
> Respective Logs have been attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-11859) Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager

2024-02-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-11859:
--
Priority: Major  (was: Blocker)

> Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager 
> 
>
> Key: NIFI-11859
> URL: https://issues.apache.org/jira/browse/NIFI-11859
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.22.0
>Reporter: Jeetendra G Vasisht
>Priority: Major
> Fix For: 2.0.0, 1.26.0
>
> Attachments: embeddedHazelcastNifiControllerservice.PNG, nifi--app.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The EmbeddedHazelcastCacheManager controller service gets enabled in cluster 
> mode with the "All Nodes" clustering strategy, but fails when run in 
> standalone mode with the "None" clustering strategy. This was observed in a 
> Kubernetes environment; Hazelcast comes from the internal NiFi packaging, and 
> no external Hazelcast dependency or code is being used.
> Controller gets stuck in Enabling state:
> !embeddedHazelcastNifiControllerservice.PNG|width=662,height=131!
> Respective Logs have been attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-11859) Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager

2024-02-23 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard resolved NIFI-11859.
---
Fix Version/s: 2.0.0
   1.26.0
   Resolution: Fixed

> Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager 
> 
>
> Key: NIFI-11859
> URL: https://issues.apache.org/jira/browse/NIFI-11859
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.22.0
>Reporter: Jeetendra G Vasisht
>Priority: Blocker
> Fix For: 2.0.0, 1.26.0
>
> Attachments: embeddedHazelcastNifiControllerservice.PNG, nifi--app.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The EmbeddedHazelcastCacheManager controller service gets enabled in cluster 
> mode with the "All Nodes" clustering strategy, but fails when run in 
> standalone mode with the "None" clustering strategy. This was observed in a 
> Kubernetes environment; Hazelcast comes from the internal NiFi packaging, and 
> no external Hazelcast dependency or code is being used.
> Controller gets stuck in Enabling state:
> !embeddedHazelcastNifiControllerservice.PNG|width=662,height=131!
> Respective Logs have been attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11859) Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager

2024-02-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820040#comment-17820040
 ] 

ASF subversion and git services commented on NIFI-11859:


Commit 4d097fbfe8668a901317c2ef7981d16e814da9f6 in nifi's branch 
refs/heads/support/nifi-1.x from Bob Paulin
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4d097fbfe8 ]

NIFI-11859: Ensure Hazelcast can not join a network when Cluster is NONE

Signed-off-by: Pierre Villard 

This closes #8440.


> Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager 
> 
>
> Key: NIFI-11859
> URL: https://issues.apache.org/jira/browse/NIFI-11859
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.22.0
>Reporter: Jeetendra G Vasisht
>Priority: Blocker
> Attachments: embeddedHazelcastNifiControllerservice.PNG, nifi--app.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The EmbeddedHazelcastCacheManager controller service gets enabled in cluster 
> mode with the "All Nodes" clustering strategy, but fails when run in 
> standalone mode with the "None" clustering strategy. This was observed in a 
> Kubernetes environment; Hazelcast comes from the internal NiFi packaging, and 
> no external Hazelcast dependency or code is being used.
> Controller gets stuck in Enabling state:
> !embeddedHazelcastNifiControllerservice.PNG|width=662,height=131!
> Respective Logs have been attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-11859) Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager

2024-02-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820037#comment-17820037
 ] 

ASF subversion and git services commented on NIFI-11859:


Commit 3c74aa460e5c913bcf493bb1eb85a73b60dc7c63 in nifi's branch 
refs/heads/main from Bob Paulin
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3c74aa460e ]

NIFI-11859: Ensure Hazelcast can not join a network when Cluster is NONE

Signed-off-by: Pierre Villard 

This closes #8440.


> Nifi in standalone mode is not able to enable EmbeddedHazelcastCacheManager 
> 
>
> Key: NIFI-11859
> URL: https://issues.apache.org/jira/browse/NIFI-11859
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration Management
>Affects Versions: 1.22.0
>Reporter: Jeetendra G Vasisht
>Priority: Blocker
> Attachments: embeddedHazelcastNifiControllerservice.PNG, nifi--app.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The EmbeddedHazelcastCacheManager controller service gets enabled in cluster 
> mode with the "All Nodes" clustering strategy, but fails when run in 
> standalone mode with the "None" clustering strategy. This was observed in a 
> Kubernetes environment; Hazelcast comes from the internal NiFi packaging, and 
> no external Hazelcast dependency or code is being used.
> Controller gets stuck in Enabling state:
> !embeddedHazelcastNifiControllerservice.PNG|width=662,height=131!
> Respective Logs have been attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] NIFI-11859: Ensure Hazelcast can not join a network when Cluster is NONE [nifi]

2024-02-23 Thread via GitHub


asfgit closed pull request #8440: NIFI-11859: Ensure Hazelcast can not join a 
network when Cluster is NONE
URL: https://github.com/apache/nifi/pull/8440


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-12837) Make hierynomus/smbj DFS setting available in SMB processors

2024-02-23 Thread Anders (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anders updated NIFI-12837:
--
Issue Type: Improvement  (was: New Feature)

> Make hierynomus/smbj DFS setting available in SMB processors
> 
>
> Key: NIFI-12837
> URL: https://issues.apache.org/jira/browse/NIFI-12837
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.25.0
>Reporter: Anders
>Priority: Major
>
> The hierynomus/smbj library has a setting for enabling DFS which is disabled 
> by default:
> https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39
> This appears to cause problems in some SMB configurations.
> Patched 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  to test in my environment with:
> {code}
> $ git diff 
> nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> diff --git 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  
> b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> index 0895abfae0..eac765 100644
> --- 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> +++ 
> b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> @@ -46,6 +46,8 @@ public final class SmbUtils {
>  }
>  }
> +configBuilder.withDfsEnabled(true);
> +
>  if (context.getProperty(USE_ENCRYPTION).isSet()) {
>  
> configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
>  }
> {code}
> This appeared to resolve the issue.
> It would be very useful if this setting was available to toggle in the UI for 
> all SMB processors.
> Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 
> Somewhat related hierynomus/smbj github issues:
> https://github.com/hierynomus/smbj/issues/152
> https://github.com/hierynomus/smbj/issues/419
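
The request above boils down to gating smbj's `withDfsEnabled(...)` call behind a processor property instead of hard-coding it. The sketch below models that wiring in plain Java: `SmbConfigModel` and its builder are stand-ins written for this example (the real class is `com.hierynomus.smbj.SmbConfig`), and the `enableDfsProperty` parameter plays the role of a hypothetical `ENABLE_DFS` processor property; neither name comes from the NiFi codebase.

```java
// Self-contained model of the smbj-style config builder, showing how a
// UI-exposed boolean property could gate the DFS flag. SmbConfigModel is a
// stand-in for com.hierynomus.smbj.SmbConfig, written only for this sketch.
public class DfsToggleSketch {

    static final class SmbConfigModel {
        final boolean dfsEnabled;
        final boolean encryptData;

        private SmbConfigModel(Builder b) {
            this.dfsEnabled = b.dfs;
            this.encryptData = b.encrypt;
        }

        static Builder builder() {
            return new Builder();
        }

        static final class Builder {
            private boolean dfs = false;      // smbj's documented default: DFS disabled
            private boolean encrypt = false;

            Builder withDfsEnabled(boolean v) { this.dfs = v; return this; }
            Builder withEncryptData(boolean v) { this.encrypt = v; return this; }
            SmbConfigModel build() { return new SmbConfigModel(this); }
        }
    }

    // Stand-in for building the config from processor properties; a null
    // value models a property the user left unset, mirroring the
    // context.getProperty(...).isSet() checks in SmbUtils.
    static SmbConfigModel buildConfig(Boolean enableDfsProperty, Boolean useEncryptionProperty) {
        SmbConfigModel.Builder configBuilder = SmbConfigModel.builder();
        if (enableDfsProperty != null) {
            configBuilder.withDfsEnabled(enableDfsProperty);
        }
        if (useEncryptionProperty != null) {
            configBuilder.withEncryptData(useEncryptionProperty);
        }
        return configBuilder.build();
    }

    public static void main(String[] args) {
        System.out.println(buildConfig(true, null).dfsEnabled);
        System.out.println(buildConfig(null, null).dfsEnabled);
    }
}
```

Compared with the hard-coded `configBuilder.withDfsEnabled(true)` in the patch above, this shape preserves the library default for users whose shares do not need DFS, which is presumably why the reporter asks for a toggle rather than an unconditional enable.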



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12837) Make hierynomus/smbj DFS setting available in SMB processors

2024-02-23 Thread Anders (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anders updated NIFI-12837:
--
Description: 
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Somewhat related hierynomus/smbj github issues:
https://github.com/hierynomus/smbj/issues/152
https://github.com/hierynomus/smbj/issues/419


  was:
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Some related hierynomus/smbj github issues:
https://github.com/hierynomus/smbj/issues/152
https://github.com/hierynomus/smbj/issues/419



> Make hierynomus/smbj DFS setting available in SMB processors
> 
>
> Key: NIFI-12837
> URL: https://issues.apache.org/jira/browse/NIFI-12837
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.25.0
>Reporter: Anders
>Priority: Major
>
> The hierynomus/smbj library has a setting for enabling DFS which is disabled 
> by default:
> https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39
> This appears to cause problems in some SMB configurations.
> Patched 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  to test in my environment with:
> {code}
> $ git diff 
> nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> diff --git 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  
> b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> index 0895abfae0..eac765 100644
> --- 
> 

[jira] [Updated] (NIFI-12837) Make hierynomus/smbj DFS setting available in SMB processors

2024-02-23 Thread Anders (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anders updated NIFI-12837:
--
Description: 
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Some related hierynomus/smbj github issues:
https://github.com/hierynomus/smbj/issues/152
https://github.com/hierynomus/smbj/issues/419


  was:
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. It would be usefull 
if this setting was exposed in the SMB-processors UI.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Some related hierynomus/smbj github issues:
https://github.com/hierynomus/smbj/issues/152
https://github.com/hierynomus/smbj/issues/419



> Make hierynomus/smbj DFS setting available in SMB processors
> 
>
> Key: NIFI-12837
> URL: https://issues.apache.org/jira/browse/NIFI-12837
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.25.0
>Reporter: Anders
>Priority: Major
>
> The hierynomus/smbj library has a setting for enabling DFS which is disabled 
> by default:
> https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39
> This appears to cause problems in some SMB configurations.
> Patched 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  to test in my environment with:
> {code}
> $ git diff 
> nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> diff --git 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  
> b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> index 

[jira] [Updated] (NIFI-12837) Make hierynomus/smbj DFS setting available in SMB processors

2024-02-23 Thread Anders (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anders updated NIFI-12837:
--
Description: 
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. It would be useful 
if this setting were exposed in the SMB processors' UI.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Some related hierynomus/smbj github issues:
https://github.com/hierynomus/smbj/issues/152
https://github.com/hierynomus/smbj/issues/419


  was:
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. It would be useful 
if this setting were exposed in the SMB processors' UI.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Some related 


> Make hierynomus/smbj DFS setting available in SMB processors
> 
>
> Key: NIFI-12837
> URL: https://issues.apache.org/jira/browse/NIFI-12837
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.25.0
>Reporter: Anders
>Priority: Major
>
> The hierynomus/smbj library has a setting for enabling DFS which is disabled 
> by default:
> https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39
> This appears to cause problems in some SMB configurations. It would be 
> useful if this setting were exposed in the SMB processors' UI.
> Patched 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  to test in my environment with:
> {code}
> $ git diff 
> nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> diff --git 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  
> b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java

[jira] [Updated] (NIFI-12837) Make hierynomus/smbj DFS setting available in SMB processors

2024-02-23 Thread Anders (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anders updated NIFI-12837:
--
Description: 
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. It would be useful 
if this setting were exposed in the SMB processors' UI.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.

Without this setting, I get a *STATUS_PATH_NOT_COVERED* error. 

Some related 

  was:
The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. It would be useful 
if this setting were exposed in the SMB processors' UI.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.



> Make hierynomus/smbj DFS setting available in SMB processors
> 
>
> Key: NIFI-12837
> URL: https://issues.apache.org/jira/browse/NIFI-12837
> Project: Apache NiFi
>  Issue Type: New Feature
>Affects Versions: 1.25.0
>Reporter: Anders
>Priority: Major
>
> The hierynomus/smbj library has a setting for enabling DFS which is disabled 
> by default:
> https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39
> This appears to cause problems in some SMB configurations. It would be 
> useful if this setting were exposed in the SMB processors' UI.
> Patched 
> https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  to test in my environment with:
> {code}
> $ git diff 
> nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> diff --git 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
>  
> b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> index 0895abfae0..eac765 100644
> --- 
> a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
> +++ 
> 

[jira] [Created] (NIFI-12837) Make hierynomus/smbj DFS setting available in SMB processors

2024-02-23 Thread Anders (Jira)
Anders created NIFI-12837:
-

 Summary: Make hierynomus/smbj DFS setting available in SMB 
processors
 Key: NIFI-12837
 URL: https://issues.apache.org/jira/browse/NIFI-12837
 Project: Apache NiFi
  Issue Type: New Feature
Affects Versions: 1.25.0
Reporter: Anders


The hierynomus/smbj library has a setting for enabling DFS which is disabled by 
default:

https://github.com/hierynomus/smbj/blob/f25d5c5478a5b73e9ba4202dcfb365974e15367e/src/main/java/com/hierynomus/smbj/SmbConfig.java#L106C17-L106C39

This appears to cause problems in some SMB configurations. It would be useful 
if this setting were exposed in the SMB processors' UI.

Patched 
https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 to test in my environment with:

{code}
$ git diff 
nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
diff --git 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
index 0895abfae0..eac765 100644
--- 
a/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
+++ 
b/nifi-nar-bundles/nifi-smb-bundle/nifi-smb-smbj-common/src/main/java/org/apache/nifi/smb/common/SmbUtils.java
@@ -46,6 +46,8 @@ public final class SmbUtils {
 }
 }

+configBuilder.withDfsEnabled(true);
+
 if (context.getProperty(USE_ENCRYPTION).isSet()) {
 
configBuilder.withEncryptData(context.getProperty(USE_ENCRYPTION).asBoolean());
 }
{code}

This appeared to resolve the issue.

It would be very useful if this setting was available to toggle in the UI for 
all SMB processors.



