[jira] [Commented] (NIFI-7882) Null as default value in avro generates exception

2020-10-06 Thread Dominik Przybysz (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17209301#comment-17209301
 ] 

Dominik Przybysz commented on NIFI-7882:


I have provided a quick fix in https://github.com/apache/nifi/pull/4575

> Null as default value in avro generates exception
> -
>
> Key: NIFI-7882
> URL: https://issues.apache.org/jira/browse/NIFI-7882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.9.2, 1.11.4, 1.12.1
>Reporter: Dominik Przybysz
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When I have a record with missing values for some fields, the missing fields 
> are populated with default values from the schema. This behavior was added in 
> https://issues.apache.org/jira/browse/NIFI-4030
> But when null is the default value, Avro 1.8.1 returns 
> org.apache.avro.JsonProperties$Null, which cannot be converted to a null value 
> (the Avro issue is described in 
> https://issues.apache.org/jira/browse/AVRO-1954), and processors fail with the 
> error:
> {code}
> ERROR o.a.n.p.standard.ConvertRecord - ConvertRecord[id=37460cbe-17f1-4456-a6b9-f4ed1baa4c45] Failed to process FlowFile[0,436030859382835.mockFlowFile,161B]; will route to failure: org.apache.avro.file.DataFileWriter$AppendWriteException: org.apache.avro.AvroRuntimeException: Unknown datum type org.apache.avro.JsonProperties$Null: org.apache.avro.JsonProperties$Null@1723f29f
> {code}
> Apache NiFi should probably upgrade to a newer Avro version or backport the 
> fix from Avro 1.9.x into AvroTypeUtil.
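
A hedged sketch of the normalization such a fix would perform (this is NOT NiFi's actual AvroTypeUtil code): Avro 1.8 represents a JSON null default as a sentinel object, `org.apache.avro.JsonProperties$Null`, rather than a Java null, so the default must be mapped to a real null before a missing field is populated. The sentinel is simulated with a local class here so the example has no Avro dependency; all names are illustrative.

```java
// Hypothetical sketch of the NIFI-7882 normalization. NullSentinel stands in
// for org.apache.avro.JsonProperties.Null (assumption: a stateless singleton
// marker with no useful state of its own).
class NullDefaultNormalizer {
    static final class NullSentinel {
        static final NullSentinel INSTANCE = new NullSentinel();
        private NullSentinel() {}
    }

    // Convert a raw schema default into a usable field value: the sentinel
    // becomes an ordinary null, anything else passes through unchanged.
    static Object normalizeDefault(Object rawDefault) {
        if (rawDefault instanceof NullSentinel) {
            return null;
        }
        return rawDefault;
    }

    public static void main(String[] args) {
        System.out.println(normalizeDefault(NullSentinel.INSTANCE)); // prints "null"
        System.out.println(normalizeDefault("fallback"));            // prints "fallback"
    }
}
```

The same check-and-replace would have to run anywhere a schema default is consulted, which is why upgrading Avro (where 1.9.x does this internally) is the cleaner long-term option.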



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alien11689 commented on pull request #4575: [NIFI-7882] Handle JsonProperties as avro field default

2020-10-06 Thread GitBox


alien11689 commented on pull request #4575:
URL: https://github.com/apache/nifi/pull/4575#issuecomment-704706942


   Bumping Avro to 1.9 or 1.10 is one of the available solutions, but it would 
also require bumping the NiFi minor version (1.12 -> 1.13), and that should of 
course be done.
   
   For fixing the problem as soon as possible, however, this change is good 
enough: the NiFi version could be bumped only at the patch level, and versions 
1.11.5 and 1.12.2 could be released.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mtien-apache opened a new pull request #4579: NIFI-7892 Created a Logout page to inform users of a complete logout …

2020-10-06 Thread GitBox


mtien-apache opened a new pull request #4579:
URL: https://github.com/apache/nifi/pull/4579


   …when OIDC is configured.
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Created a new Logout page that informs users of a complete logout when NiFi 
is configured to use OIDC for authentication._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] alopresto commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


alopresto commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500695287



##########
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/UnpackContent.java
##########
@@ -145,6 +150,15 @@
             .addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
             .build();
 
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("Password")
+            .displayName("Password")
+            .description("Password used for decrypting archive entries. Supports Zip files encrypted with ZipCrypto or AES")
+            .required(false)
+            .sensitive(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)

Review comment:
   Variables (the original approach) wouldn't solve this problem, and full 
EL support would only solve it if the password was in a predictable flowfile 
attribute, which would need to be populated by some repeatable process (likely 
manual). Sensitive parameters solve this at the same level of intervention as 
variables, and if there is a repeatable process to determine the password, it 
can be persisted to a parameter via an API call. 
   
   For a high-volume approach, I think you would need some correlation process 
between a specific flowfile `filename` attribute and the corresponding password 
via some lookup, and there is no mechanism to support this currently. I am 
designing some more advanced key management & sensitive property management 
functionality (likely via controller services) for other ongoing efforts, and 
these may provide a referenceable model for this requirement as well. 
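
   A purely hypothetical sketch of the correlation process described above: resolving a per-archive password from the flowfile's `filename` attribute via a lookup table. No such mechanism exists in NiFi today; every class and method name here is illustrative only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative only: a registry correlating archive filenames to passwords.
// In a real design this might be backed by a controller service, as hinted
// at above, rather than an in-memory map.
class ZipPasswordLookup {
    private final Map<String, char[]> passwordsByFilename = new HashMap<>();

    // Record the password for a specific archive name.
    void register(String filename, char[] password) {
        passwordsByFilename.put(filename, password.clone());
    }

    // Resolve the password for a flowfile's "filename" attribute; an empty
    // result would mean the flowfile gets routed to failure.
    Optional<char[]> resolve(String filenameAttribute) {
        return Optional.ofNullable(passwordsByFilename.get(filenameAttribute));
    }
}
```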









[GitHub] [nifi] exceptionfactory commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


exceptionfactory commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500695248



##########
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestUnpackContent.java
##########
@@ -221,6 +226,32 @@ public void testInvalidZip() throws IOException {
         }
     }
 
+    @Test
+    public void testZipEncryptionZipStandard() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.ZIP_STANDARD);
+    }
+
+    @Test
+    public void testZipEncryptionAes() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.AES);
+    }
+
+    @Test
+    public void testZipEncryptionNoPasswordConfigured() throws IOException {
+        final TestRunner runner = TestRunners.newTestRunner(new UnpackContent());
+        runner.setProperty(UnpackContent.PACKAGING_FORMAT, UnpackContent.PackageFormat.ZIP_FORMAT.toString());
+
+        final String password = String.class.getSimpleName();
+        final char[] streamPassword = password.toCharArray();
+        final String contents = TestRunner.class.getCanonicalName();
+
+        final byte[] zipEncrypted = createZipEncrypted(EncryptionMethod.AES, streamPassword, contents);

Review comment:
   Thanks for the confirmation.









[GitHub] [nifi] MikeThomsen commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


MikeThomsen commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500691965



##########
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestUnpackContent.java
##########
@@ -221,6 +226,32 @@ public void testInvalidZip() throws IOException {
         }
     }
 
+    @Test
+    public void testZipEncryptionZipStandard() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.ZIP_STANDARD);
+    }
+
+    @Test
+    public void testZipEncryptionAes() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.AES);
+    }
+
+    @Test
+    public void testZipEncryptionNoPasswordConfigured() throws IOException {
+        final TestRunner runner = TestRunners.newTestRunner(new UnpackContent());
+        runner.setProperty(UnpackContent.PACKAGING_FORMAT, UnpackContent.PackageFormat.ZIP_FORMAT.toString());
+
+        final String password = String.class.getSimpleName();
+        final char[] streamPassword = password.toCharArray();
+        final String contents = TestRunner.class.getCanonicalName();
+
+        final byte[] zipEncrypted = createZipEncrypted(EncryptionMethod.AES, streamPassword, contents);

Review comment:
   I think we can skip it









[GitHub] [nifi] exceptionfactory commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


exceptionfactory commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500690543



##########
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestUnpackContent.java
##########
@@ -221,6 +226,32 @@ public void testInvalidZip() throws IOException {
         }
     }
 
+    @Test
+    public void testZipEncryptionZipStandard() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.ZIP_STANDARD);
+    }
+
+    @Test
+    public void testZipEncryptionAes() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.AES);
+    }
+
+    @Test
+    public void testZipEncryptionNoPasswordConfigured() throws IOException {
+        final TestRunner runner = TestRunners.newTestRunner(new UnpackContent());
+        runner.setProperty(UnpackContent.PACKAGING_FORMAT, UnpackContent.PackageFormat.ZIP_FORMAT.toString());
+
+        final String password = String.class.getSimpleName();
+        final char[] streamPassword = password.toCharArray();
+        final String contents = TestRunner.class.getCanonicalName();
+
+        final byte[] zipEncrypted = createZipEncrypted(EncryptionMethod.AES, streamPassword, contents);

Review comment:
   Further review of Zip4j indicates that the only notable improvement for 
processing unsupported algorithms would be better exception messages from that 
library. Since there are a number of possible unsupported algorithms that would 
all result in some kind of ZipException or other exception from Zip4j, there 
doesn't seem to be much value in exercising the same code path with unsupported 
binary inputs.









[GitHub] [nifi] MikeThomsen commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


MikeThomsen commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500664063



##########
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/UnpackContent.java
##########
@@ -145,6 +150,15 @@
             .addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
             .build();
 
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("Password")
+            .displayName("Password")
+            .description("Password used for decrypting archive entries. Supports Zip files encrypted with ZipCrypto or AES")
+            .required(false)
+            .sensitive(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)

Review comment:
   @alopresto I read through your notes, and they make sense regarding the EL.
   
   One thing we often see with government customers is that a small shop will 
have little to no IT budget and will simply transfer data in a 
password-protected zip file, sometimes pushed over SFTP. In cases where the 
password is not reused, can you think of any good way we could adapt this sort 
of change to work with that?
   
   (FWIW, this is not a current blocker, just something I've seen in the past.)













[jira] [Updated] (NIFI-7893) Provenance is intermittently blank when `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Description: 
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in 
the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer for Provenance Event Store Partition[directory=./provenance_repository] due to MAX_TIME_REACHED
2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov on rollover
java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or directory)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
	at org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748){code}
Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

I'd also like to note that my `/opt/nifi/nifi-current/provenance_repository` 
directory had a `0.prov.gz` file that stayed around 10-20 kB and never grew. 
It contained a few other files as well, but never a `0.prov` file.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:
{code:java}
# Provenance Repository Properties
 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
 nifi.provenance.repository.debug.frequency=1_000_000
 nifi.provenance.repository.encryption.key.provider.implementation=
 nifi.provenance.repository.encryption.key.provider.location=
 nifi.provenance.repository.encryption.key.id=
 nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
 nifi.provenance.repository.directory.default=./provenance_repository
 nifi.provenance.repository.max.storage.time=24 hours
 nifi.provenance.repository.max.storage.size=1 GB
 nifi.provenance.repository.rollover.time=30 secs
 nifi.provenance.repository.rollover.size=100 MB
 nifi.provenance.repository.query.threads=2
 nifi.provenance.repository.index.threads=2
 nifi.provenance.repository.compress.on.rollover=false
 nifi.provenance.repository.always.sync=false
 # Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
 # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
 nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
 # FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
 nifi.provenance.repository.indexed.attributes=
 # Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
 # but should provide better performance
 nifi.provenance.repository.index.shard.size=500 MB
 # Indicates the maximum length that a FlowFile attribute can be when 
retrieving a Provenance Event from
 # the repository. If the length of any attribute exceeds this value, it will 
be truncated when the event is retrieved.
 nifi.provenance.repository.max.attribute.length=65536
 nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Respository Properties
 nifi.provenance.repository.buffer.size=10
{code}


[GitHub] [nifi] exceptionfactory commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


exceptionfactory commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500657362



##########
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestUnpackContent.java
##########
@@ -221,6 +226,32 @@ public void testInvalidZip() throws IOException {
         }
     }
 
+    @Test
+    public void testZipEncryptionZipStandard() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.ZIP_STANDARD);
+    }
+
+    @Test
+    public void testZipEncryptionAes() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.AES);
+    }
+
+    @Test
+    public void testZipEncryptionNoPasswordConfigured() throws IOException {
+        final TestRunner runner = TestRunners.newTestRunner(new UnpackContent());
+        runner.setProperty(UnpackContent.PACKAGING_FORMAT, UnpackContent.PackageFormat.ZIP_FORMAT.toString());
+
+        final String password = String.class.getSimpleName();
+        final char[] streamPassword = password.toCharArray();
+        final String contents = TestRunner.class.getCanonicalName();
+
+        final byte[] zipEncrypted = createZipEncrypted(EncryptionMethod.AES, streamPassword, contents);

Review comment:
   Thanks for the additional feedback.  The EncryptionMethod enum lists 
only supported algorithms, so testing a different type of encrypted zip file 
would require creating an example from another program.  Creating an encrypted 
Zip file with a trial version of PKWARE SecureZIP using the 3DES option results 
in the ZipInputStream throwing a ZipException with the following message:
   
   "Reached end of entry, but crc verification failed for testing"
   
   Unfortunately this doesn't provide any particular indication that the Zip 
entry was encrypted using some other unsupported algorithm.  The end result is 
that the file is routed to failure, just as with any other ZipException.
   
   Implementing this unit test would require checking in an encrypted binary 
created with PKWARE SecureZIP.  Since the exception handling follows the same 
path as a missing or incorrect password, do you still want this test case 
included?
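
   A simplified sketch of the behavior described above (assumption: this is not UnpackContent's real code, just an illustration): a wrong password and an unsupported algorithm both surface from the library as a ZipException, so both take the same failure route.

```java
// Illustrative routing model: any ZipException during unpacking, whatever
// its cause (wrong password, 3DES-encrypted entry, corrupt data), collapses
// into a single "failure" outcome. Names here are hypothetical.
class UnpackOutcome {
    enum Route { SUCCESS, FAILURE }

    interface Unpacker {
        void unpack() throws java.util.zip.ZipException;
    }

    static Route route(Unpacker unpacker) {
        try {
            unpacker.unpack();
            return Route.SUCCESS;
        } catch (java.util.zip.ZipException e) {
            // No way to distinguish an unsupported algorithm from a bad
            // password here, which is the point made in the review comment.
            return Route.FAILURE;
        }
    }
}
```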









[jira] [Updated] (NIFI-7893) Provenance is intermittently blank when `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Description: 
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in 
the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer for Provenance Event Store Partition[directory=./provenance_repository] due to MAX_TIME_REACHED
2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov on rollover
java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or directory)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
	at org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748){code}
Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

I'd also like to note that my `/opt/nifi/nifi-current/provenance_repository` 
directory had a `0.prov.gz` file that never grew. It contained a few other 
files as well, but never a `0.prov` file.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:
{code:java}
# Provenance Repository Properties
 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
 nifi.provenance.repository.debug.frequency=1_000_000
 nifi.provenance.repository.encryption.key.provider.implementation=
 nifi.provenance.repository.encryption.key.provider.location=
 nifi.provenance.repository.encryption.key.id=
 nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
 nifi.provenance.repository.directory.default=./provenance_repository
 nifi.provenance.repository.max.storage.time=24 hours
 nifi.provenance.repository.max.storage.size=1 GB
 nifi.provenance.repository.rollover.time=30 secs
 nifi.provenance.repository.rollover.size=100 MB
 nifi.provenance.repository.query.threads=2
 nifi.provenance.repository.index.threads=2
 nifi.provenance.repository.compress.on.rollover=false
 nifi.provenance.repository.always.sync=false
 # Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
 # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
 nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
 # FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
 nifi.provenance.repository.indexed.attributes=
 # Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
 # but should provide better performance
 nifi.provenance.repository.index.shard.size=500 MB
 # Indicates the maximum length that a FlowFile attribute can be when 
retrieving a Provenance Event from
 # the repository. If the length of any attribute exceeds this value, it will 
be truncated when the event is retrieved.
 nifi.provenance.repository.max.attribute.length=65536
 nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Respository Properties
 nifi.provenance.repository.buffer.size=10
{code}


[jira] [Updated] (NIFI-7893) Provenance is intermittently blank when `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Description: 
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in 
the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer for Provenance Event Store Partition[directory=./provenance_repository] due to MAX_TIME_REACHED
2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov on rollover
java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or directory)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
	at org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748){code}
Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:
{code:java}
# Provenance Repository Properties
 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
 nifi.provenance.repository.debug.frequency=1_000_000
 nifi.provenance.repository.encryption.key.provider.implementation=
 nifi.provenance.repository.encryption.key.provider.location=
 nifi.provenance.repository.encryption.key.id=
 nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
 nifi.provenance.repository.directory.default=./provenance_repository
 nifi.provenance.repository.max.storage.time=24 hours
 nifi.provenance.repository.max.storage.size=1 GB
 nifi.provenance.repository.rollover.time=30 secs
 nifi.provenance.repository.rollover.size=100 MB
 nifi.provenance.repository.query.threads=2
 nifi.provenance.repository.index.threads=2
 nifi.provenance.repository.compress.on.rollover=false
 nifi.provenance.repository.always.sync=false
 # Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
 # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
 nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
 # FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
 nifi.provenance.repository.indexed.attributes=
 # Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
 # but should provide better performance
 nifi.provenance.repository.index.shard.size=500 MB
 # Indicates the maximum length that a FlowFile attribute can be when 
retrieving a Provenance Event from
 # the repository. If the length of any attribute exceeds this value, it will 
be truncated when the event is retrieved.
 nifi.provenance.repository.max.attribute.length=65536
 nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
 nifi.provenance.repository.buffer.size=10
{code}
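As an aside, the stray leading spaces on many of the pasted property lines are harmless even if they exist in the file itself: `java.util.Properties` skips leading whitespace on each line when parsing. A minimal stdlib check (illustrative keys taken from the paste above):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class LeadingSpaceCheck {
    public static void main(String[] args) throws IOException {
        // A fragment shaped like the paste above: one key flush-left, one indented.
        String fragment =
                "nifi.provenance.repository.compress.on.rollover=false\n" +
                " nifi.provenance.repository.always.sync=false\n";

        Properties props = new Properties();
        props.load(new StringReader(fragment));

        // Properties.load() strips leading whitespace from each natural line,
        // so both keys parse identically.
        System.out.println(props.getProperty("nifi.provenance.repository.always.sync")); // prints "false"
    }
}
```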

I'd also like to note that my `/opt/nifi/nifi-current/provenance_repository` directory had a `0.prov.gz` file that never grew. It also contained a few other files, but it never contained a `0.prov` file.
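The `FileNotFoundException` in the trace above is simply what the JDK throws when the `.prov` file has already been moved aside (for example, already replaced by its `.gz` counterpart) by the time the compressor tries to open it. A minimal sketch of that failure mode, using hypothetical temp-file names rather than NiFi's actual rollover code:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;

public class RolloverRace {
    public static void main(String[] args) throws IOException {
        // Hypothetical stand-ins for ./provenance_repository/0.prov and 0.prov.gz.
        File dir = Files.createTempDirectory("prov").toFile();
        File event = new File(dir, "0.prov");
        File compressed = new File(dir, "0.prov.gz");

        Files.createFile(event.toPath());
        // Simulate the event file being moved aside before the compressor opens it.
        if (!event.renameTo(compressed)) {
            throw new IOException("rename failed");
        }

        try (FileInputStream in = new FileInputStream(event)) {
            in.read();
        } catch (FileNotFoundException e) {
            // Same class of error as the EventFileCompressor stack trace above.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```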

  was:
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
for Provenance Event Store Partition[directory=./provenance_repository] due to 
MAX_TIME_REACHED
2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov 
on rollover
java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or 
directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at 

[jira] [Updated] (NIFI-7893) Provenance is intermittently blank when `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Description: 
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
for Provenance Event Store Partition[directory=./provenance_repository] due to 
MAX_TIME_REACHED
2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov 
on rollover
java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or 
directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at 
org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
at 
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748){code}
Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:
{code:java}
# Provenance Repository Properties
 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
 nifi.provenance.repository.debug.frequency=1_000_000
 nifi.provenance.repository.encryption.key.provider.implementation=
 nifi.provenance.repository.encryption.key.provider.location=
 nifi.provenance.repository.encryption.key.id=
 nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
 nifi.provenance.repository.directory.default=./provenance_repository
 nifi.provenance.repository.max.storage.time=24 hours
 nifi.provenance.repository.max.storage.size=1 GB
 nifi.provenance.repository.rollover.time=30 secs
 nifi.provenance.repository.rollover.size=100 MB
 nifi.provenance.repository.query.threads=2
 nifi.provenance.repository.index.threads=2
 nifi.provenance.repository.compress.on.rollover=false
 nifi.provenance.repository.always.sync=false
 # Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
 # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
 nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
 # FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
 nifi.provenance.repository.indexed.attributes=
 # Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
 # but should provide better performance
 nifi.provenance.repository.index.shard.size=500 MB
 # Indicates the maximum length that a FlowFile attribute can be when 
retrieving a Provenance Event from
 # the repository. If the length of any attribute exceeds this value, it will 
be truncated when the event is retrieved.
 nifi.provenance.repository.max.attribute.length=65536
 nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
 nifi.provenance.repository.buffer.size=10
{code}

  was:
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
for Provenance Event Store Partition[directory=./provenance_repository] due to 
MAX_TIME_REACHED
 2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov 
on rollover
 java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or 
directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
 at 

[jira] [Updated] (NIFI-7893) Provenance is intermittently blank when `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Summary: Provenance is intermittently blank when 
`nifi.provenance.repository.compress.on.rollover` is set to `true`  (was: 
Provenance is blank is `nifi.provenance.repository.compress.on.rollover` is set 
to `true`)

> Provenance is intermittently blank when 
> `nifi.provenance.repository.compress.on.rollover` is set to `true`
> --
>
> Key: NIFI-7893
> URL: https://issues.apache.org/jira/browse/NIFI-7893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.11.4, 1.12.1
> Environment: Docker 19.03.13
> Image: apache/nifi:1.12.1
> Python: nipyapi: 0.14.1
>Reporter: Daniel Khodabakhsh
>Priority: Major
>
> On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.
> Checking the logs, I get the following message:
> {code:java}
> 2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
> o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
> for Provenance Event Store Partition[directory=./provenance_repository] due 
> to MAX_TIME_REACHED
>  2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
> o.a.n.p.s.EventFileCompressor Failed to compress 
> ./provenance_repository/0.prov on rollover
>  java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file 
> or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
>  at 
> org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.
> Here's a copy of all my `nifi.provenance.repository` settings from 
> `nifi.properties`:
> {code:java}
> # Provenance Repository Properties
>  
> nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
>  nifi.provenance.repository.debug.frequency=1_000_000
>  nifi.provenance.repository.encryption.key.provider.implementation=
>  nifi.provenance.repository.encryption.key.provider.location=
>  nifi.provenance.repository.encryption.key.id=
>  nifi.provenance.repository.encryption.key=
> # Persistent Provenance Repository Properties
>  nifi.provenance.repository.directory.default=./provenance_repository
>  nifi.provenance.repository.max.storage.time=24 hours
>  nifi.provenance.repository.max.storage.size=1 GB
>  nifi.provenance.repository.rollover.time=30 secs
>  nifi.provenance.repository.rollover.size=100 MB
>  nifi.provenance.repository.query.threads=2
>  nifi.provenance.repository.index.threads=2
>  nifi.provenance.repository.compress.on.rollover=false
>  nifi.provenance.repository.always.sync=false
>  # Comma-separated list of fields. Fields that are not indexed will not be 
> searchable. Valid fields are:
>  # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
> AlternateIdentifierURI, Relationship, Details
>  nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
> ProcessorID, Relationship
>  # FlowFile Attributes that should be indexed and made searchable.  Some 
> examples to consider are filename, uuid, mime.type
>  nifi.provenance.repository.indexed.attributes=
>  # Large values for the shard size will result in more Java heap usage when 
> searching the Provenance Repository
>  # but should provide better performance
>  nifi.provenance.repository.index.shard.size=500 MB
>  # Indicates the maximum length that a FlowFile attribute can be when 
> retrieving a Provenance Event from
>  # the repository. If the length of any attribute exceeds this value, it will 
> be truncated when the event is retrieved.
>  nifi.provenance.repository.max.attribute.length=65536
>  nifi.provenance.repository.concurrent.merge.threads=2
> # Volatile Provenance Repository Properties
>  nifi.provenance.repository.buffer.size=10
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7893) Provenance is blank is `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Description: 
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
for Provenance Event Store Partition[directory=./provenance_repository] due to 
MAX_TIME_REACHED
 2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov 
on rollover
 java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or 
directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748){code}
Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:
{code:java}
# Provenance Repository Properties
 
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
 nifi.provenance.repository.debug.frequency=1_000_000
 nifi.provenance.repository.encryption.key.provider.implementation=
 nifi.provenance.repository.encryption.key.provider.location=
 nifi.provenance.repository.encryption.key.id=
 nifi.provenance.repository.encryption.key=
# Persistent Provenance Repository Properties
 nifi.provenance.repository.directory.default=./provenance_repository
 nifi.provenance.repository.max.storage.time=24 hours
 nifi.provenance.repository.max.storage.size=1 GB
 nifi.provenance.repository.rollover.time=30 secs
 nifi.provenance.repository.rollover.size=100 MB
 nifi.provenance.repository.query.threads=2
 nifi.provenance.repository.index.threads=2
 nifi.provenance.repository.compress.on.rollover=false
 nifi.provenance.repository.always.sync=false
 # Comma-separated list of fields. Fields that are not indexed will not be 
searchable. Valid fields are:
 # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, 
AlternateIdentifierURI, Relationship, Details
 nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, 
ProcessorID, Relationship
 # FlowFile Attributes that should be indexed and made searchable.  Some 
examples to consider are filename, uuid, mime.type
 nifi.provenance.repository.indexed.attributes=
 # Large values for the shard size will result in more Java heap usage when 
searching the Provenance Repository
 # but should provide better performance
 nifi.provenance.repository.index.shard.size=500 MB
 # Indicates the maximum length that a FlowFile attribute can be when 
retrieving a Provenance Event from
 # the repository. If the length of any attribute exceeds this value, it will 
be truncated when the event is retrieved.
 nifi.provenance.repository.max.attribute.length=65536
 nifi.provenance.repository.concurrent.merge.threads=2
# Volatile Provenance Repository Properties
 nifi.provenance.repository.buffer.size=10
{code}

  was:
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:

```
 2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
for Provenance Event Store Partition[directory=./provenance_repository] due to 
MAX_TIME_REACHED
 2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov 
on rollover
 java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or 
directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
 at 

[jira] [Updated] (NIFI-7893) Provenance is blank is `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Khodabakhsh updated NIFI-7893:
-
Description: 
On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:

```
 2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] 
o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer 
for Provenance Event Store Partition[directory=./provenance_repository] due to 
MAX_TIME_REACHED
 2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] 
o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov 
on rollover
 java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or 
directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
 at 
org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

```


 Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:

```
 # Provenance Repository Properties
 nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
 nifi.provenance.repository.debug.frequency=1_000_000
 nifi.provenance.repository.encryption.key.provider.implementation=
 nifi.provenance.repository.encryption.key.provider.location=
 nifi.provenance.repository.encryption.key.id=
 nifi.provenance.repository.encryption.key=

 # Persistent Provenance Repository Properties
 nifi.provenance.repository.directory.default=./provenance_repository
 nifi.provenance.repository.max.storage.time=24 hours
 nifi.provenance.repository.max.storage.size=1 GB
 nifi.provenance.repository.rollover.time=30 secs
 nifi.provenance.repository.rollover.size=100 MB
 nifi.provenance.repository.query.threads=2
 nifi.provenance.repository.index.threads=2
 nifi.provenance.repository.compress.on.rollover=false
 nifi.provenance.repository.always.sync=false
 # Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
 # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
 nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
 # FlowFile Attributes that should be indexed and made searchable.  Some examples to consider are filename, uuid, mime.type
 nifi.provenance.repository.indexed.attributes=
 # Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
 # but should provide better performance
 nifi.provenance.repository.index.shard.size=500 MB
 # Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
 # the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
 nifi.provenance.repository.max.attribute.length=65536
 nifi.provenance.repository.concurrent.merge.threads=2

 # Volatile Provenance Repository Properties

[jira] [Created] (NIFI-7893) Provenance is blank is `nifi.provenance.repository.compress.on.rollover` is set to `true`

2020-10-06 Thread Daniel Khodabakhsh (Jira)
Daniel Khodabakhsh created NIFI-7893:


 Summary: Provenance is blank is 
`nifi.provenance.repository.compress.on.rollover` is set to `true`
 Key: NIFI-7893
 URL: https://issues.apache.org/jira/browse/NIFI-7893
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.12.1, 1.11.4
 Environment: Docker 19.03.13
Image: apache/nifi:1.12.1
Python: nipyapi: 0.14.1
Reporter: Daniel Khodabakhsh


On NiFi 1.11.4 and 1.12.1, I am intermittently unable to view provenance in the UI or through the `nipyapi` Python library.

Checking the logs, I get the following message:
{code:java}
2020-10-06 22:12:39,789 INFO [Timer-Driven Process Thread-10] o.a.n.p.store.WriteAheadStorePartition Successfully rolled over Event Writer for Provenance Event Store Partition[directory=./provenance_repository] due to MAX_TIME_REACHED
2020-10-06 22:12:39,789 ERROR [Compress Provenance Logs-1-thread-2] o.a.n.p.s.EventFileCompressor Failed to compress ./provenance_repository/0.prov on rollover
java.io.FileNotFoundException: ./provenance_repository/0.prov (No such file or directory)
  at java.io.FileInputStream.open0(Native Method)
  at java.io.FileInputStream.open(FileInputStream.java:195)
  at java.io.FileInputStream.<init>(FileInputStream.java:138)
  at org.apache.nifi.provenance.serialization.EventFileCompressor.compress(EventFileCompressor.java:164)
  at org.apache.nifi.provenance.serialization.EventFileCompressor.run(EventFileCompressor.java:115)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}
Setting `nifi.provenance.repository.compress.on.rollover=false` fixes this.

Here's a copy of all my `nifi.provenance.repository` settings from 
`nifi.properties`:
{code:java}
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.debug.frequency=1_000_000
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=

# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=false
nifi.provenance.repository.always.sync=false
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable.  Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
{code}

[jira] [Created] (NIFI-7892) Create a Logout page for OIDC

2020-10-06 Thread M Tien (Jira)
M Tien created NIFI-7892:


 Summary: Create a Logout page for OIDC
 Key: NIFI-7892
 URL: https://issues.apache.org/jira/browse/NIFI-7892
 Project: Apache NiFi
  Issue Type: Sub-task
  Components: Core UI
Affects Versions: 1.12.1
Reporter: M Tien
Assignee: M Tien


Upon logging out of NiFi using an Identity Provider (such as OpenID Connect), the user will be directed to a new Logout page. This informs the user that they have successfully logged out of NiFi.





[jira] [Created] (NIFI-7891) Allow for Defaults to be set by Avro writers

2020-10-06 Thread Alasdair Brown (Jira)
Alasdair Brown created NIFI-7891:


 Summary: Allow for Defaults to be set by Avro writers
 Key: NIFI-7891
 URL: https://issues.apache.org/jira/browse/NIFI-7891
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Alasdair Brown


This would be an improvement to the Avro writer services, allowing a set of default values to be supplied per data type when inferring the schema.

 

For example, you could have an additional property per type (e.g. Int, String, etc.) where the value of the property would be used as the default value. However, this would add a lot of properties. Alternatively, you could provide a JSON mapping of types -> defaults, e.g.


{code:java}
[
   { "type": "string", "default": "test" },
   { "type": "int", "default": 1 }
]{code}
Thus, for any field inferred as type String, its embedded Avro schema would contain a default value entry for that field.

This would then provide flexibility where NULL values are present: the current behaviour only supports adding NULL to the allowable types, so this would be a very useful alternative.
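A sketch of how such a type -> default mapping could be applied while emitting field schemas. The `TYPE_DEFAULTS` map and `fieldFor` helper are illustrative names only, and the JSON is assembled by hand rather than through Avro's real schema API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DefaultsSketch {
    // Hypothetical mapping from inferred Avro type to configured default value.
    static final Map<String, Object> TYPE_DEFAULTS = new LinkedHashMap<>();
    static {
        TYPE_DEFAULTS.put("string", "test");
        TYPE_DEFAULTS.put("int", 1);
    }

    // Render one Avro field definition, attaching a default when the type has one.
    static String fieldFor(String name, String type) {
        Object def = TYPE_DEFAULTS.get(type);
        if (def == null) {
            return String.format("{\"name\":\"%s\",\"type\":\"%s\"}", name, type);
        }
        String defJson = (def instanceof String) ? "\"" + def + "\"" : def.toString();
        return String.format("{\"name\":\"%s\",\"type\":\"%s\",\"default\":%s}", name, type, defJson);
    }

    public static void main(String[] args) {
        System.out.println(fieldFor("city", "string"));   // gets the configured "test" default
        System.out.println(fieldFor("age", "int"));       // gets the configured 1 default
        System.out.println(fieldFor("score", "double"));  // no default configured for this type
    }
}
```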

 





[GitHub] [nifi] alopresto commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


alopresto commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500603132



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/UnpackContent.java
##
@@ -145,6 +150,15 @@
 .addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
 .build();
 
+public static final PropertyDescriptor PASSWORD = new 
PropertyDescriptor.Builder()
+.name("Password")
+.displayName("Password")
+.description("Password used for decrypting archive entries. 
Supports Zip files encrypted with ZipCrypto or AES")
+.required(false)
+.sensitive(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)

Review comment:
   David and I discussed this above; he originally had included Expression 
Language support here and removed it at my request as across the project we do 
_not_ want EL parsing of sensitive properties as described above. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] alopresto commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


alopresto commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500603235



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/UnpackContent.java
##
@@ -200,7 +215,13 @@ public void onScheduled(ProcessContext context) throws 
ProcessException {
 if (fileFilter == null) {
 fileFilter = 
Pattern.compile(context.getProperty(FILE_FILTER).getValue());
 tarUnpacker = new TarUnpacker(fileFilter);
-zipUnpacker = new ZipUnpacker(fileFilter);
+
+char[] password = null;
+final PropertyValue passwordProperty = 
context.getProperty(PASSWORD);

Review comment:
   Again, please see the previous conversation. 









[jira] [Updated] (NIFI-7889) ConsumeMQTT - use offer instead of add

2020-10-06 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-7889:
-
Assignee: Pierre Villard
  Status: Patch Available  (was: Open)

> ConsumeMQTT - use offer instead of add
> --
>
> Key: NIFI-7889
> URL: https://issues.apache.org/jira/browse/NIFI-7889
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ConsumeMQTT, we are filling the internal queue of the processor using this 
> code:
> {code:java}
> if (mqttQueue.size() >= maxQueueSize) {
>     throw new IllegalStateException("The subscriber queue is full, cannot receive another message until the processor is scheduled to run.");
> } else {
>     mqttQueue.add(new MQTTQueueMessage(topic, message));
> }
> {code}
> Instead of throwing an exception when the internal queue is full, we could 
> have a blocking call with {{offer()}} to give some time for the queue to be 
> drained and then add the message to the queue. If the queue is still full, 
> we'd throw the exception which would cause data loss in case the QoS is 
> configured to 0.
> Documentation should also be improved around the implications of such 
> configuration.
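A minimal sketch of the change described above, using `BlockingQueue.offer` with a timeout instead of `add`. The queue type, timeout value, and `String` message stand-in are illustrative assumptions, not the processor's actual code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class MqttQueueSketch {
    // Hypothetical stand-in for the processor's internal queue; the real
    // ConsumeMQTT queue holds MQTTQueueMessage objects, not Strings.
    private final BlockingQueue<String> mqttQueue;

    public MqttQueueSketch(int maxQueueSize) {
        this.mqttQueue = new LinkedBlockingQueue<>(maxQueueSize);
    }

    public void enqueue(String message) throws InterruptedException {
        // offer(...) blocks up to the timeout, giving the processor a chance
        // to drain the queue, instead of failing immediately like add(...).
        if (!mqttQueue.offer(message, 100, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("The subscriber queue is full, "
                    + "cannot receive another message until the processor is scheduled to run.");
        }
    }

    public String drainOne() {
        return mqttQueue.poll();
    }
}
```

With QoS 0 the broker does not redeliver, so the exception after the timeout still means data loss; the timeout only narrows the window.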



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] pvillard31 opened a new pull request #4578: NIFI-7889 - ConsumeMQTT - use offer instead of add

2020-10-06 Thread GitBox


pvillard31 opened a new pull request #4578:
URL: https://github.com/apache/nifi/pull/4578


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] MikeThomsen commented on a change in pull request #4572: NIFI-7777 Added Password property to UnpackContent for decrypting Zip archives

2020-10-06 Thread GitBox


MikeThomsen commented on a change in pull request #4572:
URL: https://github.com/apache/nifi/pull/4572#discussion_r500573830



##
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/UnpackContent.java
##
@@ -145,6 +150,15 @@
             .addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
             .build();
 
+    public static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
+            .name("Password")
+            .displayName("Password")
+            .description("Password used for decrypting archive entries. Supports Zip files encrypted with ZipCrypto or AES")
+            .required(false)
+            .sensitive(true)
+            .addValidator(StandardValidators.NON_BLANK_VALIDATOR)

Review comment:
   Should have expression language support. Preferably on the 
flowfile-level.

##
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/UnpackContent.java
##
@@ -200,7 +215,13 @@ public void onScheduled(ProcessContext context) throws ProcessException {
         if (fileFilter == null) {
             fileFilter = Pattern.compile(context.getProperty(FILE_FILTER).getValue());
             tarUnpacker = new TarUnpacker(fileFilter);
-            zipUnpacker = new ZipUnpacker(fileFilter);
+
+            char[] password = null;
+            final PropertyValue passwordProperty = context.getProperty(PASSWORD);

Review comment:
   If/when you add EL support, you'd need to add 
`evaluateExpressionLanguage` here.

##
File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestUnpackContent.java
##
@@ -221,6 +226,32 @@ public void testInvalidZip() throws IOException {
         }
     }
 
+    @Test
+    public void testZipEncryptionZipStandard() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.ZIP_STANDARD);
+    }
+
+    @Test
+    public void testZipEncryptionAes() throws IOException {
+        runZipEncryptionMethod(EncryptionMethod.AES);
+    }
+
+    @Test
+    public void testZipEncryptionNoPasswordConfigured() throws IOException {
+        final TestRunner runner = TestRunners.newTestRunner(new UnpackContent());
+        runner.setProperty(UnpackContent.PACKAGING_FORMAT, UnpackContent.PackageFormat.ZIP_FORMAT.toString());
+
+        final String password = String.class.getSimpleName();
+        final char[] streamPassword = password.toCharArray();
+        final String contents = TestRunner.class.getCanonicalName();
+
+        final byte[] zipEncrypted = createZipEncrypted(EncryptionMethod.AES, streamPassword, contents);

Review comment:
   Would be good to have a test method where you use something like 
DES/Triple DES to prove the error handling on unsupported algorithms.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (NIFI-7890) ConsumeMQTT - add record reader/writer

2020-10-06 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-7890:


 Summary: ConsumeMQTT - add record reader/writer
 Key: NIFI-7890
 URL: https://issues.apache.org/jira/browse/NIFI-7890
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard


At the moment the ConsumeMQTT processor is processing events one by one. 
Performances could be greatly improved by adding an optional record reader and 
writer so that many records can be written into a single flow file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7889) ConsumeMQTT - use offer instead of add

2020-10-06 Thread Pierre Villard (Jira)
Pierre Villard created NIFI-7889:


 Summary: ConsumeMQTT - use offer instead of add
 Key: NIFI-7889
 URL: https://issues.apache.org/jira/browse/NIFI-7889
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard


In ConsumeMQTT, we are filling the internal queue of the processor using this 
code:
{code:java}
if (mqttQueue.size() >= maxQueueSize) {
    throw new IllegalStateException("The subscriber queue is full, cannot receive another message until the processor is scheduled to run.");
} else {
    mqttQueue.add(new MQTTQueueMessage(topic, message));
}
{code}
Instead of throwing an exception when the internal queue is full, we could have 
a blocking call with {{offer()}} to give some time for the queue to be drained 
and then add the message to the queue. If the queue is still full, we'd throw 
the exception which would cause data loss in case the QoS is configured to 0.

Documentation should also be improved around the implications of such 
configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7886) FetchAzureBlobStorage, FetchS3Object, and FetchGCSObject processors should be able to fetch ranges

2020-10-06 Thread Paul Kelly (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Kelly updated NIFI-7886:
-
Status: Patch Available  (was: In Progress)

> FetchAzureBlobStorage, FetchS3Object, and FetchGCSObject processors should be 
> able to fetch ranges
> --
>
> Key: NIFI-7886
> URL: https://issues.apache.org/jira/browse/NIFI-7886
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.12.0
>Reporter: Paul Kelly
>Assignee: Paul Kelly
>Priority: Minor
>  Labels: azureblob, gcs, s3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Azure Blob Storage, AWS S3, and Google Cloud Storage all support retrieving 
> byte ranges of stored objects.  Current versions of NiFi processors for these 
> services do not support fetching by byte range.
> Allowing to fetch by range would allow multiple enhancements:
>  * Parallelized downloads
>  ** Faster speeds if the bandwidth delay product of the connection is lower 
> than the available bandwidth
>  ** Load distribution over a cluster
>  * Cost savings
>  ** If the file is large and only part of the file is needed, the desired 
> part of the file can be downloaded, saving bandwidth costs by not retrieving 
> unnecessary bytes
>  ** Download failures would only need to retry the failed segment, rather 
> than the full file
>  * Download extremely large files
>  ** Ability to download files that are larger than the available content repo 
> by downloading a segment and moving it off to a system with more capacity 
> before downloading another segment
>  
> Some of these enhancements would require an upstream processor to generate 
> multiple flow files, each covering a different part of the overall range.  
> Something like this:
> ListS3 -> ExecuteGroovyScript (to split into multiple flow files with 
> different range attributes) -> FetchS3Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] anaylor commented on pull request #4221: NIFI-6394 - frontend queue/connection size limit

2020-10-06 Thread GitBox


anaylor commented on pull request #4221:
URL: https://github.com/apache/nifi/pull/4221#issuecomment-704430785


   Hey @taftster I made the changes and cherry-picked them to your branch, but 
I'm not authorized to push them. Can you either give me access, or if you have 
another suggestion, that would be fine with me.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-5481) Add new providers of protected sensitive configuration values

2020-10-06 Thread Andy LoPresto (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208885#comment-17208885
 ] 

Andy LoPresto commented on NIFI-5481:
-

Hi Ruben. Unfortunately other priorities have taken my attention and I have not 
been able to revisit this right now. The existing patches are not in a state to 
be merged into the project. 

> Add new providers of protected sensitive configuration values
> -
>
> Key: NIFI-5481
> URL: https://issues.apache.org/jira/browse/NIFI-5481
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Configuration, Configuration Management, Core Framework, 
> Security
>Affects Versions: 1.7.1
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Major
>  Labels: configuration, encryption, kubernetes, security, 
> toolkit, vault
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> In order to make NiFi more dynamically scalable in conjunction with tools 
> like Docker and Kubernetes, the "encrypted config" handling should be 
> enhanced to integrate with other secure configuration providers. The original 
> design encompassed this idea, and the {{SensitivePropertyProvider}} interface 
> is designed to be extended by various provider implementations. A provider 
> which integrates with the [Hashicorp Vault|https://www.vaultproject.io] is a 
> good next step. 
> Vault is free and open source, widely adopted, and provides a 
> [CLI|https://www.vaultproject.io/docs/commands/index.html], [HTTP 
> API|https://www.vaultproject.io/api/index.html], and community-supported Java 
> client library [vault-java-driver - MIT 
> License|https://github.com/BetterCloud/vault-java-driver] and [Spring Vault - 
> Apache 2.0 License|https://github.com/spring-projects/spring-vault]. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] tpalfy opened a new pull request #4577: NIFI-6752 Create ASN.1 RecordReader

2020-10-06 Thread GitBox


tpalfy opened a new pull request #4577:
URL: https://github.com/apache/nifi/pull/4577


   This is a collaborative work with @ijokarumawak and the continuation of
   https://github.com/apache/nifi/pull/3796
   
   Thank you for submitting a contribution to Apache NiFi.
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] pkelly-nifi opened a new pull request #4576: NIFI-7886: FetchAzureBlobStorage, FetchS3Object, and FetchGCSObject processors should be able to fetch ranges

2020-10-06 Thread GitBox


pkelly-nifi opened a new pull request #4576:
URL: https://github.com/apache/nifi/pull/4576


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Enables pulling objects and blobs by byte ranges for FetchAzureBlobStorage, 
FetchS3Object, and FetchGCSObject processors as described in NIFI-7886.  Adds 
RANGE_START and RANGE_LENGTH parameters to all three processors and adjusts the 
API calls as necessary.
   
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [X] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [X] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] MikeThomsen commented on pull request #4575: [NIFI-7882] Handle JsonProperties as avro field default

2020-10-06 Thread GitBox


MikeThomsen commented on pull request #4575:
URL: https://github.com/apache/nifi/pull/4575#issuecomment-704369485


   Avro 1.10.0 is using Jackson 2 now, so it would be worthwhile to check a new 
Avro build to see if the behavior is the same.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] MikeThomsen commented on pull request #4570: NIFI-7879 Created record path function for UUID v5

2020-10-06 Thread GitBox


MikeThomsen commented on pull request #4570:
URL: https://github.com/apache/nifi/pull/4570#issuecomment-704366359


   @jfrazee Yes, definitely. Worst case scenario would be a tiny little jar 
file shared between the two. I'll get working on that this afternoon if I have 
time. Thanks for finding that.
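For reference, a name-based (version 5) UUID is just the SHA-1 of a namespace UUID concatenated with the name, with the version and variant bits forced. A minimal standalone sketch of that algorithm, not the PR's actual record path implementation:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.UUID;

public class UuidV5 {
    /** RFC 4122 DNS namespace UUID. */
    public static final UUID NAMESPACE_DNS =
            UUID.fromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8");

    public static UUID uuidV5(UUID namespace, String name) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            // Hash the namespace UUID's raw bytes followed by the name bytes.
            ByteBuffer ns = ByteBuffer.allocate(16);
            ns.putLong(namespace.getMostSignificantBits());
            ns.putLong(namespace.getLeastSignificantBits());
            sha1.update(ns.array());
            sha1.update(name.getBytes(StandardCharsets.UTF_8));
            byte[] hash = sha1.digest();
            hash[6] = (byte) ((hash[6] & 0x0F) | 0x50); // version 5
            hash[8] = (byte) ((hash[8] & 0x3F) | 0x80); // RFC 4122 variant
            ByteBuffer bb = ByteBuffer.wrap(hash, 0, 16);
            return new UUID(bb.getLong(), bb.getLong());
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the digest covers namespace plus name, the same inputs always yield the same UUID, which is what makes a shared helper jar between record path and expression language practical.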



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7875) nifi.variable.registry.properties value cleared in nifi.properties at startup

2020-10-06 Thread Douglas Cooper (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208797#comment-17208797
 ] 

Douglas Cooper commented on NIFI-7875:
--

I've only tried it with docker. 

> nifi.variable.registry.properties value cleared in nifi.properties at startup
> -
>
> Key: NIFI-7875
> URL: https://issues.apache.org/jira/browse/NIFI-7875
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Variable Registry
>Affects Versions: 1.12.1
> Environment: docker container
>Reporter: Douglas Cooper
>Priority: Major
>  Labels: workaround
>
> The nifi.variable.registry.properties variable in nifi.properties gets 
> cleared at startup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7888) Support authentication via SAML

2020-10-06 Thread Bryan Bende (Jira)
Bryan Bende created NIFI-7888:
-

 Summary: Support authentication via SAML
 Key: NIFI-7888
 URL: https://issues.apache.org/jira/browse/NIFI-7888
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Bryan Bende
Assignee: Bryan Bende


We should support configuring NiFi to authenticate against a SAML identity 
provider, similar to the current OIDC integration.

Ideally we should also be able to obtain group information from the SAML 
assertions and make these groups available later during the authorization 
process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7796) Add Prometheus metrics for total bytes received and bytes sent for components

2020-10-06 Thread Yolanda M. Davis (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yolanda M. Davis updated NIFI-7796:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

PR Merged 10/06/2020

> Add Prometheus metrics for total bytes received and bytes sent for components
> -
>
> Key: NIFI-7796
> URL: https://issues.apache.org/jira/browse/NIFI-7796
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Yolanda M. Davis
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, metrics available via the Prometheus metrics endpoint or reporting 
> tasks include gauges for nifi_amount_bytes_received and nifi_amount_bytes_sent. 
> However in order to perform rate calculations with prometheus it would be 
> valuable to also expose counters for bytes_received and bytes_sent metrics 
> that are comparable to the existing values for nifi_total_bytes_read and 
> nifi_total_bytes_written.
> Expected values would be nifi_total_bytes_received and nifi_total_bytes_sent.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7796) Add Prometheus metrics for total bytes received and bytes sent for components

2020-10-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208746#comment-17208746
 ] 

ASF subversion and git services commented on NIFI-7796:
---

Commit 7cc37133898b365e81eda884cf386a4f7e3a7022 in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7cc3713 ]

NIFI-7796: Add Prometheus counters for total bytes sent/received (#4522)

* NIFI-7796: Add Prometheus metrics for total bytes sent/received, fixed 
read/written metrics

* NIFI-7796: Incorporated review comments

> Add Prometheus metrics for total bytes received and bytes sent for components
> -
>
> Key: NIFI-7796
> URL: https://issues.apache.org/jira/browse/NIFI-7796
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Yolanda M. Davis
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, metrics available via the Prometheus metrics endpoint or reporting 
> tasks include gauges for nifi_amount_bytes_received and nifi_amount_bytes_sent. 
> However in order to perform rate calculations with prometheus it would be 
> valuable to also expose counters for bytes_received and bytes_sent metrics 
> that are comparable to the existing values for nifi_total_bytes_read and 
> nifi_total_bytes_written.
> Expected values would be nifi_total_bytes_received and nifi_total_bytes_sent.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7796) Add Prometheus metrics for total bytes received and bytes sent for components

2020-10-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208747#comment-17208747
 ] 

ASF subversion and git services commented on NIFI-7796:
---

Commit 7cc37133898b365e81eda884cf386a4f7e3a7022 in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7cc3713 ]

NIFI-7796: Add Prometheus counters for total bytes sent/received (#4522)

* NIFI-7796: Add Prometheus metrics for total bytes sent/received, fixed 
read/written metrics

* NIFI-7796: Incorporated review comments

> Add Prometheus metrics for total bytes received and bytes sent for components
> -
>
> Key: NIFI-7796
> URL: https://issues.apache.org/jira/browse/NIFI-7796
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Yolanda M. Davis
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, metrics available via the Prometheus metrics endpoint or reporting 
> tasks include gauges for nifi_amount_bytes_received and nifi_amount_bytes_sent. 
> However in order to perform rate calculations with prometheus it would be 
> valuable to also expose counters for bytes_received and bytes_sent metrics 
> that are comparable to the existing values for nifi_total_bytes_read and 
> nifi_total_bytes_written.
> Expected values would be nifi_total_bytes_received and nifi_total_bytes_sent.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7796) Add Prometheus metrics for total bytes received and bytes sent for components

2020-10-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208745#comment-17208745
 ] 

ASF subversion and git services commented on NIFI-7796:
---

Commit 7cc37133898b365e81eda884cf386a4f7e3a7022 in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7cc3713 ]

NIFI-7796: Add Prometheus counters for total bytes sent/received (#4522)

* NIFI-7796: Add Prometheus metrics for total bytes sent/received, fixed 
read/written metrics

* NIFI-7796: Incorporated review comments

> Add Prometheus metrics for total bytes received and bytes sent for components
> -
>
> Key: NIFI-7796
> URL: https://issues.apache.org/jira/browse/NIFI-7796
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Reporter: Yolanda M. Davis
>Assignee: Matt Burgess
>Priority: Major
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, metrics available via the Prometheus metrics endpoint or reporting 
> tasks include gauges for nifi_amount_bytes_received and nifi_amount_bytes_sent. 
> However in order to perform rate calculations with prometheus it would be 
> valuable to also expose counters for bytes_received and bytes_sent metrics 
> that are comparable to the existing values for nifi_total_bytes_read and 
> nifi_total_bytes_written.
> Expected values would be nifi_total_bytes_received and nifi_total_bytes_sent.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] YolandaMDavis merged pull request #4522: NIFI-7796: Add Prometheus counters for total bytes sent/received

2020-10-06 Thread GitBox


YolandaMDavis merged pull request #4522:
URL: https://github.com/apache/nifi/pull/4522


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] YolandaMDavis commented on pull request #4522: NIFI-7796: Add Prometheus counters for total bytes sent/received

2020-10-06 Thread GitBox


YolandaMDavis commented on pull request #4522:
URL: https://github.com/apache/nifi/pull/4522#issuecomment-704266658


   @mattyb149 I was able to confirm the total sent and received values worked 
as expected.  LGTM +1
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7875) nifi.variable.registry.properties value cleared in nifi.properties at startup

2020-10-06 Thread Bryan Bende (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208729#comment-17208729
 ] 

Bryan Bende commented on NIFI-7875:
---

Just to be clear, this issue is specific to running a Docker container correct?

I don't believe running standalone NiFi has any problem with the file-based 
variable registry.

> nifi.variable.registry.properties value cleared in nifi.properties at startup
> -
>
> Key: NIFI-7875
> URL: https://issues.apache.org/jira/browse/NIFI-7875
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Variable Registry
>Affects Versions: 1.12.1
> Environment: docker container
>Reporter: Douglas Cooper
>Priority: Major
>  Labels: workaround
>
> The nifi.variable.registry.properties variable in nifi.properties gets 
> cleared at startup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7885) Add global property for LFS access from HDFS processors

2020-10-06 Thread Bryan Bende (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208728#comment-17208728
 ] 

Bryan Bende commented on NIFI-7885:
---

I'd suggest making this an environment variable like we have in nifi-env.sh for 
keytab control:
{code:java}
 export NIFI_ALLOW_EXPLICIT_KEYTAB=true{code}
This is because processors generally don't access NiFiProperties directly, but 
can easily call System.getEnv.
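A processor could then gate local file system access with a simple environment check. The variable name below follows the NIFI_ALLOW_EXPLICIT_KEYTAB pattern but is a hypothetical placeholder, and the helper is illustrative, not existing NiFi code:

```java
public class LfsAccessGate {
    // Hypothetical variable name, mirroring the NIFI_ALLOW_EXPLICIT_KEYTAB pattern.
    static final String ENV_VAR = "NIFI_ALLOW_HDFS_LOCAL_FS_ACCESS";

    /** Defaults to true for backward compatibility when the variable is unset. */
    public static boolean localFileSystemAccessAllowed(String rawValue) {
        return rawValue == null || Boolean.parseBoolean(rawValue);
    }

    public static boolean localFileSystemAccessAllowed() {
        return localFileSystemAccessAllowed(System.getenv(ENV_VAR));
    }
}
```

Validation logic in the abstract HDFS processor could call this check and reject `file:///` paths when the flag is false.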

> Add global property for LFS access from HDFS processors
> ---
>
> Key: NIFI-7885
> URL: https://issues.apache.org/jira/browse/NIFI-7885
> Project: Apache NiFi
>  Issue Type: Sub-task
>  Components: Configuration, Core Framework, Extensions
>Affects Versions: 1.12.1
>Reporter: Andy LoPresto
>Priority: Major
>  Labels: file-system, permission, properties, security, validation
>
> From https://issues.apache.org/jira/browse/NIFI-7884: 
> {quote}
> This will also require introducing a global setting in {{nifi.properties}} 
> that an admin can set to allow local file system access via the HDFS 
> processors (default {{true}} for backward compatibility), and additional 
> validation logic in the HDFS processors (ideally the abstract shared logic) 
> to ensure that if this setting is disabled, the HDFS processors are not 
> accessing the local file system via the {{file:///}} protocol in their 
> configuration. 
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7875) nifi.variable.registry.properties value cleared in nifi.properties at startup

2020-10-06 Thread Douglas Cooper (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Douglas Cooper updated NIFI-7875:
-
Labels: workaround  (was: )

> nifi.variable.registry.properties value cleared in nifi.properties at startup
> -
>
> Key: NIFI-7875
> URL: https://issues.apache.org/jira/browse/NIFI-7875
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Variable Registry
>Affects Versions: 1.12.1
> Environment: docker container
>Reporter: Douglas Cooper
>Priority: Major
>  Labels: workaround
>
> The nifi.variable.registry.properties variable in nifi.properties gets 
> cleared at startup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7875) nifi.variable.registry.properties value cleared in nifi.properties at startup

2020-10-06 Thread Douglas Cooper (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208703#comment-17208703
 ] 

Douglas Cooper commented on NIFI-7875:
--

just noticed there is a {{NIFI_VARIABLE_REGISTRY_PROPERTIES}} environment 
variable that can be set when creating the container, which sets the variable in 
nifi.properties correctly. I'm guessing the logic simply does a text replace 
with whatever the environment variable is set to. I think this is still 
unexpected behavior, because the logic should check that the value is not empty 
before performing the replace.
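A minimal sketch of the missing check, assuming the startup logic copies environment overrides into the loaded properties (the method and key here are illustrative, not the actual docker script logic):

```java
import java.util.Properties;

public class GuardedPropReplace {
    /** Applies an environment override only when it is set and non-empty,
     *  so an unset variable no longer blanks out an existing value.
     *  Sketch of the guard the startup logic arguably needs. */
    static void guardedReplace(Properties props, String key, String envValue) {
        if (envValue != null && !envValue.isEmpty()) {
            props.setProperty(key, envValue);
        }
    }
}
```

With this guard, an unset {{NIFI_VARIABLE_REGISTRY_PROPERTIES}} would leave the existing nifi.variable.registry.properties value untouched instead of clearing it.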

> nifi.variable.registry.properties value cleared in nifi.properties at startup
> -
>
> Key: NIFI-7875
> URL: https://issues.apache.org/jira/browse/NIFI-7875
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration, Variable Registry
>Affects Versions: 1.12.1
> Environment: docker container
>Reporter: Douglas Cooper
>Priority: Major
>
> The nifi.variable.registry.properties variable in nifi.properties gets 
> cleared at startup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alien11689 opened a new pull request #4575: [NIFI-7882] Handle JsonProperties as avro field default

2020-10-06 Thread GitBox


alien11689 opened a new pull request #4575:
URL: https://github.com/apache/nifi/pull/4575


   Fixes NIFI-7882
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-7887) Request for Hacktoberfest Github topic label

2020-10-06 Thread Dan Kim (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208638#comment-17208638
 ] 

Dan Kim commented on NIFI-7887:
---

[https://github.com/apache/nifi/commit/f4473afad615dbbd3605d99696e8f6261f15916f]

 

Thanks [~mattyb149]!

> Request for Hacktoberfest Github topic label
> 
>
> Key: NIFI-7887
> URL: https://issues.apache.org/jira/browse/NIFI-7887
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Dan Kim
>Priority: Trivial
>
> Hello :)
> I'm participating in [Hacktoberfest|https://hacktoberfest.digitalocean.com/] 
> this year and was hoping that you would be willing to add the 
> {{hacktoberfest}} topic label to the 
> [apache/nifi|https://github.com/apache/nifi] project?
> {quote}Hacktoberfest® is open to everyone in our global community. Whether 
> you’re a developer, student learning to code, event host, or company of any 
> size, you can help drive growth of open source and make positive 
> contributions to an ever-growing community. All backgrounds and skill levels 
> are encouraged to complete the challenge.
> {quote}
> [Here|https://hacktoberfest.digitalocean.com/hacktoberfest-update?utm_medium=email_source=hacktoberfest_campaign=main__content=response]
>  are more details on how to do that, if that helps.
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7887) Request for Hacktoberfest Github topic label

2020-10-06 Thread Dan Kim (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208635#comment-17208635
 ] 

Dan Kim commented on NIFI-7887:
---

Actually, I just noticed it's there now... I checked last night and it was not 
:cheers:

Sorry for the duplicate issue: this can be closed

> Request for Hacktoberfest Github topic label
> 
>
> Key: NIFI-7887
> URL: https://issues.apache.org/jira/browse/NIFI-7887
> Project: Apache NiFi
>  Issue Type: Wish
>Reporter: Dan Kim
>Priority: Trivial
>
> Hello :)
> I'm participating in [Hacktoberfest|https://hacktoberfest.digitalocean.com/] 
> this year and was hoping that you would be willing to add the 
> {{hacktoberfest}} topic label to the 
> [apache/nifi|https://github.com/apache/nifi] project?
> {quote}Hacktoberfest® is open to everyone in our global community. Whether 
> you’re a developer, student learning to code, event host, or company of any 
> size, you can help drive growth of open source and make positive 
> contributions to an ever-growing community. All backgrounds and skill levels 
> are encouraged to complete the challenge.
> {quote}
> [Here|https://hacktoberfest.digitalocean.com/hacktoberfest-update?utm_medium=email_source=hacktoberfest_campaign=main__content=response]
>  are more details on how to do that, if that helps.
> Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7887) Request for Hacktoberfest Github topic label

2020-10-06 Thread Dan Kim (Jira)
Dan Kim created NIFI-7887:
-

 Summary: Request for Hacktoberfest Github topic label
 Key: NIFI-7887
 URL: https://issues.apache.org/jira/browse/NIFI-7887
 Project: Apache NiFi
  Issue Type: Wish
Reporter: Dan Kim


Hello :)

I'm participating in [Hacktoberfest|https://hacktoberfest.digitalocean.com/] 
this year and was hoping that you would be willing to add the {{hacktoberfest}} 
topic label to the [apache/nifi|https://github.com/apache/nifi] project?
{quote}Hacktoberfest® is open to everyone in our global community. Whether 
you’re a developer, student learning to code, event host, or company of any 
size, you can help drive growth of open source and make positive contributions 
to an ever-growing community. All backgrounds and skill levels are encouraged 
to complete the challenge.
{quote}
[Here|https://hacktoberfest.digitalocean.com/hacktoberfest-update?utm_medium=email_source=hacktoberfest_campaign=main__content=response]
 are more details on how to do that, if that helps.

Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7886) FetchAzureBlobStorage, FetchS3Object, and FetchGCSObject processors should be able to fetch ranges

2020-10-06 Thread Paul Kelly (Jira)
Paul Kelly created NIFI-7886:


 Summary: FetchAzureBlobStorage, FetchS3Object, and FetchGCSObject 
processors should be able to fetch ranges
 Key: NIFI-7886
 URL: https://issues.apache.org/jira/browse/NIFI-7886
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.12.0
Reporter: Paul Kelly
Assignee: Paul Kelly


Azure Blob Storage, AWS S3, and Google Cloud Storage all support retrieving 
byte ranges of stored objects.  Current versions of NiFi processors for these 
services do not support fetching by byte range.

Fetching by range would enable several enhancements:
 * Parallelized downloads
 ** Faster transfers when a single connection cannot fill the available 
bandwidth (e.g. because of the connection's bandwidth-delay product)
 ** Load distribution over a cluster
 * Cost savings
 ** If the file is large and only part of the file is needed, the desired part 
of the file can be downloaded, saving bandwidth costs by not retrieving 
unnecessary bytes
 ** Download failures would only need to retry the failed segment, rather than 
the full file
 * Download extremely large files
 ** Ability to download files that are larger than the available content repo 
by downloading a segment and moving it off to a system with more capacity 
before downloading another segment

 

Some of these enhancements would require an upstream processor to generate 
multiple flow files, each covering a different part of the overall range.  
Something like this:
ListS3 -> ExecuteGroovyScript (to split into multiple flow files with different 
range attributes) -> FetchS3Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] alien11689 closed pull request #4574: NIFI-7882 Fix

2020-10-06 Thread GitBox


alien11689 closed pull request #4574:
URL: https://github.com/apache/nifi/pull/4574


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] alien11689 opened a new pull request #4574: NIFI-7882 Fix

2020-10-06 Thread GitBox


alien11689 opened a new pull request #4574:
URL: https://github.com/apache/nifi/pull/4574


   Fix bug NIFI-7882
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-2677) Allow restarting NiFi from System Diagnostics Screen

2020-10-06 Thread Jose Luis Pedrosa (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208590#comment-17208590
 ] 

Jose Luis Pedrosa commented on NIFI-2677:
-

Hi [~joewitt], [~pwicks]
 We are implementing a NiFi Kubernetes operator and will support scale in/out. 
For this functionality we need to be able to shut down a node (after 
disconnecting and offloading it). We will also need to shut down nodes (without 
disconnecting) after a configuration change that has to be applied.
 I think there are a few fair uses for this functionality; being able to shut 
down over a secured REST API is useful for many operational tasks.

Thanks!
 JL

> Allow restarting NiFi from System Diagnostics Screen
> 
>
> Key: NIFI-2677
> URL: https://issues.apache.org/jira/browse/NIFI-2677
> Project: Apache NiFi
>  Issue Type: Wish
>  Components: Core Framework
>Reporter: Peter Wicks
>Priority: Major
>
> Allow users with appropriate permissions to restart NiFi from the System 
> Diagnostics screen.
> I'm not sure what permissions should be required to take this action.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7882) Null as default value in avro generates exception

2020-10-06 Thread Dominik Przybysz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominik Przybysz updated NIFI-7882:
---
Affects Version/s: 1.10.0
   1.9.2
   1.11.4

> Null as default value in avro generates exception
> -
>
> Key: NIFI-7882
> URL: https://issues.apache.org/jira/browse/NIFI-7882
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.9.2, 1.11.4, 1.12.1
>Reporter: Dominik Przybysz
>Priority: Major
>
> When a record is missing values for some fields, the missing fields are 
> populated with default values from the schema. This behavior was added in 
> https://issues.apache.org/jira/browse/NIFI-4030
> But when null is the default value, Avro 1.8.1 returns 
> org.apache.avro.JsonProperties$Null, which cannot be converted to a null 
> value (the Avro issue is described in 
> https://issues.apache.org/jira/browse/AVRO-1954), and processors fail with 
> the error:
> {code}
> ERROR o.a.n.p.standard.ConvertRecord - 
> ConvertRecord[id=37460cbe-17f1-4456-a6b9-f4ed1baa4c45] Failed to process 
> FlowFile[0,436030859382835.mockFlowFile,161B]; will route to failure: 
> org.apache.avro.file.DataFileWriter$AppendWriteException: 
> org.apache.avro.AvroRuntimeException: Unknown datum type 
> org.apache.avro.JsonProperties$Null: 
> org.apache.avro.JsonProperties$Null@1723f29f
> {code}
> Apache NiFi should probably upgrade to a newer Avro version or backport the 
> fix from Avro 1.9.x into AvroTypeUtil
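For reference, a minimal, self-contained sketch of the normalization such a fix could apply when resolving a field default; the nested Null class below stands in for org.apache.avro.JsonProperties.Null so the example runs without Avro on the classpath:

```java
// Sketch only: Avro 1.8 represents a JSON null default as a singleton marker
// object (JsonProperties.Null) rather than a real Java null, so code that
// applies schema defaults must map the marker back to null before use.
public class DefaultValueNormalizer {
    static final class Null {} // stand-in for org.apache.avro.JsonProperties.Null

    /** Maps Avro's JSON-null marker object to a genuine Java null;
     *  all other defaults pass through unchanged. */
    static Object normalizeDefault(Object avroDefault) {
        return (avroDefault instanceof Null) ? null : avroDefault;
    }

    public static void main(String[] args) {
        System.out.println(normalizeDefault(new Null()));  // null
        System.out.println(normalizeDefault("fallback"));  // fallback
    }
}
```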



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7881) Coercion problem with ConvertAvroSchema processor

2020-10-06 Thread Dominik Przybysz (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominik Przybysz updated NIFI-7881:
---
Affects Version/s: 1.10.0
   1.9.2
   1.11.4

> Coercion problem with ConvertAvroSchema processor
> -
>
> Key: NIFI-7881
> URL: https://issues.apache.org/jira/browse/NIFI-7881
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0, 1.9.2, 1.11.4, 1.12.1
>Reporter: Dominik Przybysz
>Priority: Major
>
> I have two schemas - the input one has a Map<String, Map<String, Double>> 
> field and the output schema has the same field with type 
> Map<String, Map<String, String>>.
> In this example Double should be coerced to String, but it stays a Double 
> and the processor fails to write the Avro object because of the 
> incompatible field type.
> I created two similar tests:
> - for AvroRecordConverter (used by ConvertAvroSchema) which fails
> - for AvroTypeUtil (used in some other components which write avro objects) 
> and here the test passes
> The tests are here: 
> https://github.com/alien11689/nifi/commit/87f890b33efd232297b79bbe19ba6cd05d36e614
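As an illustration of the coercion the converter is expected to perform, here is a self-contained sketch that stringifies every leaf value of a nested map; this is the intended behavior, not the actual AvroRecordConverter code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapCoercion {
    /** Recursively coerces every leaf value of a nested map to its String
     *  form, as an output schema of Map<String, Map<String, String>>
     *  would require. */
    @SuppressWarnings("unchecked")
    static Map<String, Object> coerceValuesToString(Map<String, ?> in) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, ?> e : in.entrySet()) {
            Object v = e.getValue();
            out.put(e.getKey(), v instanceof Map
                    ? coerceValuesToString((Map<String, ?>) v)
                    : String.valueOf(v));
        }
        return out;
    }
}
```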



--
This message was sent by Atlassian Jira
(v8.3.4#803005)