[GitHub] [nifi] gresockj commented on pull request #4988: NIFI-7870 - Fix anonymous access control for advanced UI resources

2021-04-29 Thread GitBox


gresockj commented on pull request #4988:
URL: https://github.com/apache/nifi/pull/4988#issuecomment-829590724


   I verified the expected behavior before and after the patch. I'll take a 
final look at the CSRF aspect tomorrow.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Harjit Singh (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335796#comment-17335796
 ] 

Harjit Singh commented on NIFI-8486:


They are optional for Parquet. Let me see if it works with what I have and I 
will add it. 

> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Assignee: Harjit Singh
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * -Firestore exports-
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.
> https://cloud.google.com/bigquery/docs/reference/rest/v2/Job 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8503) Create BigQuery Extract Processor

2021-04-29 Thread Reese Ann (Jira)
Reese Ann created NIFI-8503:
---

 Summary: Create BigQuery Extract Processor
 Key: NIFI-8503
 URL: https://issues.apache.org/jira/browse/NIFI-8503
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Reese Ann


Create a processor to start an Extract Job using the BigQuery API.

This is faster and cheaper (if using on-demand GBQ pricing) than using a 
BigQuery SQL source and a Google Cloud Storage sink.

See the JobConfigurationExtract resource:

https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationextract
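For reference, the shape of a minimal extract-job request body under the JobConfigurationExtract resource can be sketched as a Python dict. This is a hedged illustration only: the project, dataset, table, and bucket identifiers are hypothetical placeholders, not values from this ticket.

```python
# Sketch of a BigQuery extract-job configuration, following the REST
# resource linked above. All identifiers below are hypothetical.
extract_job = {
    "configuration": {
        "extract": {
            "sourceTable": {
                "projectId": "my-project",
                "datasetId": "my_dataset",
                "tableId": "my_table",
            },
            # Extract jobs write straight to Google Cloud Storage, which is
            # why this is cheaper than a SQL read followed by a GCS write.
            "destinationUris": ["gs://my-bucket/export/part-*.avro"],
            "destinationFormat": "AVRO",
        }
    }
}
```

A processor implementing this would submit the body to the Jobs.insert endpoint and poll the job until it reaches the DONE state.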





[jira] [Updated] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Reese Ann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reese Ann updated NIFI-8486:

Description: 
The BigQuery API supports the following file type formats:
 * Avro
 * CSV
 * JSON
 * ORC
 * Parquet
 * Datastore exports
 * -Firestore exports-

The GCP PutBigQueryBatch processor only supports the first three. 

It would be great to support others, particularly Parquet.

https://cloud.google.com/bigquery/docs/reference/rest/v2/Job 

  was:
The BigQuery API supports the following file type formats:
 * Avro
 * CSV
 * JSON
 * ORC
 * Parquet
 * Datastore exports
 * -Firestore exports-

The GCP PutBigQueryBatch processor only supports the first three. 

It would be great to support others, particularly Parquet.

 


> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Assignee: Harjit Singh
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * -Firestore exports-
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.
> https://cloud.google.com/bigquery/docs/reference/rest/v2/Job 





[jira] [Updated] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Reese Ann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reese Ann updated NIFI-8486:

Description: 
The BigQuery API supports the following file type formats:
 * Avro
 * CSV
 * JSON
 * ORC
 * Parquet
 * Datastore exports
 * -Firestore exports-

The GCP PutBigQueryBatch processor only supports the first three. 

It would be great to support others, particularly Parquet.

 

  was:
The BigQuery API supports the following file type formats:
 * Avro
 * CSV
 * JSON
 * ORC
 * Parquet
 * Datastore exports
 * Firestore exports

The GCP PutBigQueryBatch processor only supports the first three. 

It would be great to support others, particularly Parquet.


> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Assignee: Harjit Singh
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * -Firestore exports-
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.
>  





[jira] [Commented] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Reese Ann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335774#comment-17335774
 ] 

Reese Ann commented on NIFI-8486:
-

[~harjitdotsingh] Here's the documentation for the [Job 
resource|https://cloud.google.com/bigquery/docs/reference/rest/v2/Job] in the 
BQ API.
 * Parquet has a group of options job.parquetOptions. Resource type is 
ParquetOptions and is documented 
[here|https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#ParquetOptions].
 * Datastore Backup has a property job.projectionFields[]. 
 * Not seeing any additional options for ORC
 * Turns out Firestore Backup is the same as Datastore Backup. I'll cross it 
out in the description.
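Putting the option groups above together, a load-job request body for Parquet might be shaped like the following. This is a sketch of the REST resource structure, not NiFi code; the table and URI identifiers are hypothetical placeholders.

```python
# Sketch of job.configuration.load with the Parquet option group
# (parquetOptions) described above. Identifiers are hypothetical.
load_job = {
    "configuration": {
        "load": {
            "destinationTable": {
                "projectId": "my-project",
                "datasetId": "my_dataset",
                "tableId": "my_table",
            },
            "sourceUris": ["gs://my-bucket/data/*.parquet"],
            "sourceFormat": "PARQUET",
            "parquetOptions": {
                "enumAsString": True,
                "enableListInference": True,
            },
            # For Datastore/Firestore backups the analogous knob is
            # projectionFields, a list of entity properties to load.
        }
    }
}
```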

 

> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Assignee: Harjit Singh
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * Firestore exports
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.





[jira] [Commented] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Harjit Singh (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335764#comment-17335764
 ] 

Harjit Singh commented on NIFI-8486:


Do Parquet and ORC have any other attributes to be considered, like we have 
for Avro or CSV?

 

 

> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Assignee: Harjit Singh
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * Firestore exports
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.





[GitHub] [nifi] joewitt commented on pull request #5039: NIFI-8477: If interrupted while waiting for Node Status Update to be …

2021-04-29 Thread GitBox


joewitt commented on pull request #5039:
URL: https://github.com/apache/nifi/pull/5039#issuecomment-829502290


   based on feedback/testing +1






[GitHub] [nifi] cgmckeever commented on pull request #5039: NIFI-8477: If interrupted while waiting for Node Status Update to be …

2021-04-29 Thread GitBox


cgmckeever commented on pull request #5039:
URL: https://github.com/apache/nifi/pull/5039#issuecomment-829497130


   This branch/PR was tested in a Docker build deployed to ECS, where the 
indicated behaviour of not being able to remove a disconnected (and 
zombie/gone) node was repeatable 99.99% of the time. After deploying this 
branch, we were able to successfully remove a zombie/disconnected node via the 
UI, as well as via the toolkit API calls, 100% of the time during our test runs.






[GitHub] [nifi] sjyang18 commented on pull request #4754: NIFI-7417: GetAzureCosmosDBRecord processor

2021-04-29 Thread GitBox


sjyang18 commented on pull request #4754:
URL: https://github.com/apache/nifi/pull/4754#issuecomment-829490194


   @jfrazee Would you take a look at this PR for me?






[GitHub] [nifi-registry] mtien-apache edited a comment on pull request #319: NIFIREG-395 - Implemented the ability to import and export versioned flows through the UI

2021-04-29 Thread GitBox


mtien-apache edited a comment on pull request #319:
URL: https://github.com/apache/nifi-registry/pull/319#issuecomment-828895145


   Pushed 2 commits: 
   
   - Handled the scenario when an invalid snapshot file is uploaded during 
Import New Flow @andrewmlim 
   - Refactored client side API import methods
- In the upload file methods, I removed `formData` and now passing the 
file as an API param. Also using HTTP headers to pass additional data
- In `uploadFlow`, I'm now making multiple calls to the server, re-using 
the existing `BucketFlowResource.createFlow` method
   - Refactored server side import methods
- Removed `BucketFlowResource.importFlow`
- Refactored logic from `BucketFlowResource.importVersionedFlow` to the 
service facade
  
   UI changes: 
   - Fixed remaining UI style issues
   - In the 'Import New Flow' dialog, the initial value for the Bucket dropdown 
menu is set to the Bucket the user is currently viewing. Otherwise, no value is 
set when viewing all buckets.
   - Added messages to inform the user why the 'Import New Flow' button is 
displayed
- One displays a message in the center of the page when there are no 
existing buckets 
- Another message is displayed under the button when buckets exist but 
the user does not have any write permissions






[jira] [Created] (NIFI-8502) Upgrade Spring Framework to latest 5.x version

2021-04-29 Thread Joseph Gresock (Jira)
Joseph Gresock created NIFI-8502:


 Summary: Upgrade Spring Framework to latest 5.x version
 Key: NIFI-8502
 URL: https://issues.apache.org/jira/browse/NIFI-8502
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Joseph Gresock


In order to take advantage of more recent Spring projects like Spring Vault 
(which relies on Spring 5.3.5), we should consider upgrading spring-core and 
other Spring dependencies to the latest 5.x release.  Note that this would be 
a major version upgrade for most Spring dependencies.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] exceptionfactory commented on a change in pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-04-29 Thread GitBox


exceptionfactory commented on a change in pull request #5034:
URL: https://github.com/apache/nifi/pull/5034#discussion_r623272293



##
File path: nifi-commons/nifi-vault-utils/pom.xml
##
@@ -0,0 +1,60 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-commons</artifactId>
+        <version>1.14.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-vault-utils</artifactId>
+    <dependencies>
+        <dependency>
+            <groupId>org.springframework.vault</groupId>
+            <artifactId>spring-vault-core</artifactId>
+            <version>2.3.2</version>
+        </dependency>
+        <dependency>
+            <groupId>org.springframework</groupId>
+            <artifactId>spring-core</artifactId>
+            <version>5.3.5</version>

Review comment:
   I agree, that sounds like a good way forward.








[jira] [Updated] (NIFI-8493) PutDatabaseRecord incorrect type resolution for auto increment columns

2021-04-29 Thread Nadeem (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nadeem updated NIFI-8493:
-
Fix Version/s: (was: 1.13.3)
   Status: Patch Available  (was: Open)

> PutDatabaseRecord incorrect type resolution for auto increment columns
> --
>
> Key: NIFI-8493
> URL: https://issues.apache.org/jira/browse/NIFI-8493
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.2
> Environment: Microsoft SQL Server 2019 (RTM-GDR) (KB4583458) - 
> 15.0.2080.9 (X64) 
>   Nov  6 2020 16:50:01 
>   Copyright (C) 2019 Microsoft Corporation
>   Standard Edition (64-bit) on Windows Server 2019 Datacenter 10.0  
> (Build 17763: ) (Hypervisor)
>Reporter: Julian
>Assignee: Nadeem
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When there is a column where the value is "auto increment/identity", the type 
> resolution fails.
> This is similar to NIFI-8244 and was not resolved by NIFI-8237.
> When we INSERT new data we get this error message: _error: 
> Operandtypecollision: datetime2 is incompatible with smallint._
> The problem is that the method uses indexes and not names. In the record the 
> first value is '_a_' but the method uses '_id_' as the first value, so 
> everything is off by one.
> Our table looks like this:
> create table t
> (
>   id bigint identity (0, 1)
>     constraint t_PK
>     primary key,
>   a int not null,
>   b bigint not null,
>   c float not null,
>   d datetime not null,
>   e smallint not null,
>   f float,
>   g float,
>   h datetime default getdate()
> )
>
> Record:
> {
>   "d": 1619503081000,
>   "c": 0,
>   "a": 34,
>   "b": 34,
>   "e": 0,
>   "f": "1.1",
>   "g": "1.2",
>   "h": 1619503095159
> }
>
> *What worked for us was changing this line in PutDatabaseRecord.executeDML():*
> Before:
> final ColumnDescription column = columns.get(currentFieldIndex);
> After:
> final ColumnDescription column = tableSchema.getColumns().get(normalizeColumnName(recordReader.getSchema().getField(i).getFieldName(), settings.translateFieldNames));
>
> This change also has another benefit: the order of the fields in the 
> RecordReader doesn't matter any more.
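The off-by-one failure mode and the name-based fix quoted above can be illustrated with a small standalone sketch. This is plain Python for illustration only; `normalize`, `table_columns`, and friends are hypothetical stand-ins for NiFi's ColumnDescription, normalizeColumnName, and related classes.

```python
# Standalone illustration of the bug and fix described in NIFI-8493.
# All names here are hypothetical, not NiFi's real API.

def normalize(name: str) -> str:
    # Stand-in for NiFi's normalizeColumnName + translateFieldNames setting.
    return name.strip().lower()

# Table schema as read from the database: note the leading identity column.
table_columns = ["id", "a", "b", "c", "d", "e", "f", "g", "h"]
column_types = {"id": "bigint", "a": "int", "b": "bigint", "c": "float",
                "d": "datetime", "e": "smallint", "f": "float",
                "g": "float", "h": "datetime"}

# Incoming record: no "id" field, and fields in arbitrary order.
record = {"d": 1619503081000, "c": 0, "a": 34, "b": 34,
          "e": 0, "f": "1.1", "g": "1.2", "h": 1619503095159}

# Buggy approach: pair record fields with columns by position. The identity
# column shifts everything by one, so "d" (a datetime value) is resolved
# against "id" (a bigint column) -- the reported type clash.
buggy_pairs = list(zip(record, table_columns))
assert buggy_pairs[0] == ("d", "id")

# Fixed approach: look each field's column up by normalized name instead.
fixed_types = {field: column_types[normalize(field)] for field in record}
assert fixed_types["d"] == "datetime"
# Bonus noted in the ticket: field order in the record no longer matters.
```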





[GitHub] [nifi] markap14 commented on pull request #5042: NIFI-8469: Updated ProcessSession to have commitAsync methods and deprecated commit; updated stateless to make use of this improvement

2021-04-29 Thread GitBox


markap14 commented on pull request #5042:
URL: https://github.com/apache/nifi/pull/5042#issuecomment-829469369


   @joewitt agreed. Definitely need some good analysis of the changes here. 
Good news is that the changes fall into 3 categories:
   - Updating the API and StandardProcessSession. These are the most critical 
but pretty low-risk because not a whole lot changed.
   - Updating the Processors. This is the largest part of the changeset by far. 
But tiny changes (mostly 1-liners) to many classes, which makes it look like a 
lot.
   - Updating stateless. This is a big part of the changes, but low risk in 
that stateless isn't as heavily used yet and is still pretty immature.
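The shape of the change under discussion, replacing a blocking `commit()` with a callback-accepting `commitAsync(...)` so a stateless runtime can defer completion to its transaction boundary, can be sketched conceptually. The class and method names below are illustrative, not NiFi's actual signatures.

```python
# Conceptual sketch of a commitAsync-style API: the caller registers
# completion callbacks instead of blocking, and the framework decides
# when the transaction actually completes. Names are hypothetical.
from typing import Callable, Optional

class Session:
    def __init__(self):
        self._pending = []

    def commit_async(self,
                     on_success: Optional[Callable[[], None]] = None,
                     on_failure: Optional[Callable[[Exception], None]] = None):
        # Register intent to commit; nothing is finalized yet.
        self._pending.append((on_success, on_failure))

    def complete_transaction(self):
        # Called by the framework at the transaction boundary: in the
        # traditional model right away, in a stateless model only once
        # the whole group of processors has succeeded.
        for on_success, _on_failure in self._pending:
            if on_success:
                on_success()
        self._pending.clear()

results = []
session = Session()
session.commit_async(on_success=lambda: results.append("acknowledged"))
assert results == []          # nothing committed yet
session.complete_transaction()
assert results == ["acknowledged"]
```

This is what allows the transaction-bounded semantics mentioned above while leaving synchronous callers unchanged.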






[GitHub] [nifi] gresockj commented on a change in pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-04-29 Thread GitBox


gresockj commented on a change in pull request #5034:
URL: https://github.com/apache/nifi/pull/5034#discussion_r623270921



##
File path: nifi-commons/nifi-vault-utils/pom.xml
##
@@ -0,0 +1,60 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-commons</artifactId>
+        <version>1.14.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-vault-utils</artifactId>
+    <dependencies>
+        <dependency>
+            <groupId>org.springframework.vault</groupId>
+            <artifactId>spring-vault-core</artifactId>
+            <version>2.3.2</version>
+        </dependency>
+        <dependency>
+            <groupId>org.springframework</groupId>
+            <artifactId>spring-core</artifactId>
+            <version>5.3.5</version>

Review comment:
   Since this code wouldn't be used yet (and thus no runtime issues should 
result), I'll create a JIRA ticket to upgrade Spring more widely.








[GitHub] [nifi] naddym opened a new pull request #5043: NIFI-8493: PutDatabaseRecord incorrect type resolution when column values are ordered differently

2021-04-29 Thread GitBox


naddym opened a new pull request #5043:
URL: https://github.com/apache/nifi/pull/5043


   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   When PutDatabaseRecord receives a record whose fields are ordered 
differently from the table's columns, it incorrectly resolves the column type 
and fails with exceptions such as a NumberFormatException when converting a 
string to an int. 
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi] joewitt commented on pull request #5042: NIFI-8469: Updated ProcessSession to have commitAsync methods and deprecated commit; updated stateless to make use of this improvement

2021-04-29 Thread GitBox


joewitt commented on pull request #5042:
URL: https://github.com/apache/nifi/pull/5042#issuecomment-829459036


   Wow.
   
   The 'old school' nifi in me is sad to see that this would mean 'commit' 
goes away.  But the implications this has for achieving better semantics for 
transaction-bounded groups of processors in stateless, versus the traditional 
nifi model (which remains unchanged), are pretty powerful.  Needs some good 
attention/testing/eyes on it.  






[GitHub] [nifi] markap14 commented on pull request #5007: [WIP] Introduce notion of an asynchronous session commit

2021-04-29 Thread GitBox


markap14 commented on pull request #5007:
URL: https://github.com/apache/nifi/pull/5007#issuecomment-829457252


   Thanks @gresockj. Closed pull request in favor of 
https://github.com/apache/nifi/pull/5042. This PR has the contents of this 
branch plus additional commits, and is pushed against a different branch that 
represents the Jira number.






[GitHub] [nifi] markap14 closed pull request #5007: [WIP] Introduce notion of an asynchronous session commit

2021-04-29 Thread GitBox


markap14 closed pull request #5007:
URL: https://github.com/apache/nifi/pull/5007


   






[GitHub] [nifi] markap14 opened a new pull request #5042: NIFI-8469: Updated ProcessSession to have commitAsync methods and deprecated commit; updated stateless to make use of this improvement

2021-04-29 Thread GitBox


markap14 opened a new pull request #5042:
URL: https://github.com/apache/nifi/pull/5042


   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Assigned] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Harjit Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harjit Singh reassigned NIFI-8486:
--

Assignee: Harjit Singh

> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Assignee: Harjit Singh
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * Firestore exports
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.





[jira] [Commented] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Mark Payne (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335708#comment-17335708
 ] 

Mark Payne commented on NIFI-8486:
--

[~harjitdotsingh] I have added you as a contributor in Jira. You should now be 
able to assign the Jira to yourself.

> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * Firestore exports
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.





[jira] [Commented] (NIFI-8486) Support more options for PutBigQueryBatch Load File Type

2021-04-29 Thread Harjit Singh (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335672#comment-17335672
 ] 

Harjit Singh commented on NIFI-8486:


Would like to work on this. Can you assign it to me?

 

> Support more options for PutBigQueryBatch Load File Type
> 
>
> Key: NIFI-8486
> URL: https://issues.apache.org/jira/browse/NIFI-8486
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Reese Ann
>Priority: Major
>
> The BigQuery API supports the following file type formats:
>  * Avro
>  * CSV
>  * JSON
>  * ORC
>  * Parquet
>  * Datastore exports
>  * Firestore exports
> The GCP PutBigQueryBatch processor only supports the first three. 
> It would be great to support others, particularly Parquet.





[jira] [Commented] (NIFI-553) Remove experimental tag from some processors

2021-04-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335594#comment-17335594
 ] 

ASF subversion and git services commented on NIFI-553:
--

Commit e1c99e3a5c1d138c322b412a66c4027359c84088 in nifi's branch 
refs/heads/main from Matt Burgess
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e1c99e3 ]

MINIFI-553: Fix timeout-based tests in StatusLoggerTest

This closes #5041

Signed-off-by: David Handermann 


> Remove experimental tag from some processors
> 
>
> Key: NIFI-553
> URL: https://issues.apache.org/jira/browse/NIFI-553
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 0.0.2
>Reporter: Michael W Moser
>Assignee: Joe Witt
>Priority: Trivial
> Fix For: 0.2.0
>
>
> Several processors have an @Tag{"experimental"} and they do not seem to 
> actually be experimental.  An "experimental" tag is odd to begin with.  
> Remove the "experimental" tag.
> Here is a list of standard processors using the "experimental" tag.
> Base64EncodeContent
> DetectDuplicate
> EncodeContent
> EvaluateXQuery





[GitHub] [nifi] asfgit closed pull request #5041: MINIFI-553: Fix timeout-based tests in StatusLoggerTest

2021-04-29 Thread GitBox


asfgit closed pull request #5041:
URL: https://github.com/apache/nifi/pull/5041


   






[GitHub] [nifi] exceptionfactory commented on pull request #4988: NIFI-7870 - Fix anonymous access control for advanced UI resources

2021-04-29 Thread GitBox


exceptionfactory commented on pull request #4988:
URL: https://github.com/apache/nifi/pull/4988#issuecomment-829343031


   The latest round of updates look good @thenatog! I will do some additional 
runtime verification.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on a change in pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-04-29 Thread GitBox


gresockj commented on a change in pull request #5034:
URL: https://github.com/apache/nifi/pull/5034#discussion_r623166797



##
File path: nifi-commons/nifi-vault-utils/pom.xml
##
@@ -0,0 +1,60 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.nifi</groupId>
+        <artifactId>nifi-commons</artifactId>
+        <version>1.14.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>nifi-vault-utils</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.springframework.vault</groupId>
+            <artifactId>spring-vault-core</artifactId>
+            <version>2.3.2</version>
+        </dependency>
+        <dependency>
+            <groupId>org.springframework</groupId>
+            <artifactId>spring-core</artifactId>
+            <version>5.3.5</version>

Review comment:
   Good call, I'll definitely take a look at the Spring versions.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on a change in pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-04-29 Thread GitBox


gresockj commented on a change in pull request #5034:
URL: https://github.com/apache/nifi/pull/5034#discussion_r623166495



##
File path: 
nifi-commons/nifi-vault-utils/src/main/java/org/apache/nifi/vault/StandardVaultCommunicationService.java
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.vault;
+
+import org.apache.nifi.util.FormatUtils;
+import org.apache.nifi.vault.config.VaultConfiguration;
+import org.apache.nifi.vault.config.VaultProperties;
+import org.springframework.vault.authentication.SimpleSessionManager;
+import org.springframework.vault.client.ClientHttpRequestFactoryFactory;
+import org.springframework.vault.core.VaultTemplate;
+import org.springframework.vault.support.Ciphertext;
+import org.springframework.vault.support.ClientOptions;
+import org.springframework.vault.support.Plaintext;
+import org.springframework.vault.support.SslConfiguration;
+
+import java.time.Duration;
+import java.util.Optional;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Implements the VaultCommunicationService using Spring Vault
+ */
+public class StandardVaultCommunicationService implements VaultCommunicationService {
+    private static final String HTTPS = "https";
+
+    private final VaultConfiguration vaultConfiguration;
+    private final VaultTemplate vaultTemplate;
+
+    /**
+     * Creates a VaultCommunicationService that uses Spring Vault.
+     * @param vaultProperties Properties to configure the service
+     * @throws VaultConfigurationException If the configuration was invalid
+     */
+    public StandardVaultCommunicationService(final VaultProperties vaultProperties) throws VaultConfigurationException {
+        this.vaultConfiguration = new VaultConfiguration(vaultProperties);
+
+        final SslConfiguration sslConfiguration = vaultProperties.getUri().contains(HTTPS)
+                ? vaultConfiguration.sslConfiguration() : SslConfiguration.unconfigured();
+
+        final ClientOptions clientOptions = getClientOptions(vaultProperties);
+
+        vaultTemplate = new VaultTemplate(vaultConfiguration.vaultEndpoint(),
+                ClientHttpRequestFactoryFactory.create(clientOptions, sslConfiguration),
+                new SimpleSessionManager(vaultConfiguration.clientAuthentication()));
+    }
+
+    private static ClientOptions getClientOptions(VaultProperties vaultProperties) {
+        final ClientOptions clientOptions = new ClientOptions();
+        Duration readTimeoutDuration = clientOptions.getReadTimeout();
+        Duration connectionTimeoutDuration = clientOptions.getConnectionTimeout();
+        final Optional<String> configuredReadTimeout = vaultProperties.getReadTimeout();
+        if (configuredReadTimeout.isPresent()) {
+            readTimeoutDuration = getDuration(configuredReadTimeout.get());
+        }
+        final Optional<String> configuredConnectionTimeout = vaultProperties.getConnectionTimeout();
+        if (configuredConnectionTimeout.isPresent()) {
+            connectionTimeoutDuration = getDuration(configuredConnectionTimeout.get());
+        }
+        return new ClientOptions(connectionTimeoutDuration, readTimeoutDuration);
+    }
+
+    private static Duration getDuration(String formattedDuration) {
+        final double duration = FormatUtils.getPreciseTimeDuration(formattedDuration, TimeUnit.MILLISECONDS);
+        return Duration.ofMillis(Double.valueOf(duration).longValue());
+    }
+
+    @Override
+    public String encrypt(String transitKey, byte[] plainText) {
+        return vaultTemplate.opsForTransit().encrypt(transitKey, Plaintext.of(plainText)).getCiphertext();

Review comment:
   To reduce the overhead, we could store VaultTransitTemplate as a field 
and initialize it in the constructor.  
   
   The idea going forward was to continue to add interface methods and other 
VaultTemplate calls like vaultTemplate.opsForKeyValue(...), which returns 
VaultKeyValueOperations.  So we could either go with storing all of these 
*Operations objects as fields in the StandardVaultCommunicationService, or go 
with separate interfaces and implementations.  I could see benefits of either 
approach, any strong opinions either way?
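   The field-caching idea can be sketched with stand-in types (the interface, the counter, and all names below are illustrative, not the real Spring Vault API; the actual `vaultTemplate.opsForTransit()` call would take the place of `template.opsForTransit()`):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CachingSketch {
    // Stand-in for Spring Vault's VaultTransitOperations (assumed shape).
    interface TransitOperations {
        String encrypt(String key, String plaintext);
    }

    // Stand-in for VaultTemplate; counts how many operations objects it builds.
    static class Template {
        final AtomicInteger created = new AtomicInteger();
        TransitOperations opsForTransit() {
            created.incrementAndGet();
            return (key, plaintext) -> key + ":" + plaintext; // fake "ciphertext"
        }
    }

    final Template template;
    private final TransitOperations transit; // built once in the constructor, reused

    CachingSketch(Template template) {
        this.template = template;
        this.transit = template.opsForTransit();
    }

    String encrypt(String key, String plaintext) {
        return transit.encrypt(key, plaintext);
    }

    public static void main(String[] args) {
        CachingSketch service = new CachingSketch(new Template());
        service.encrypt("k", "a");
        service.encrypt("k", "b");
        // Only one operations object was created despite two encrypt calls.
        System.out.println(service.template.created.get()); // prints 1
    }
}
```

   The same pattern would extend to `opsForKeyValue(...)`: each `*Operations` object is built once in the constructor and stored in a field, whichever of the two packaging approaches is chosen.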



[jira] [Assigned] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-04-29 Thread Guillaume Schaer (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guillaume Schaer reassigned NIFI-8501:
--

Assignee: Guillaume Schaer

> Add support for Azure Storage Client-Side Encryption
> 
>
> Key: NIFI-8501
> URL: https://issues.apache.org/jira/browse/NIFI-8501
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Guillaume Schaer
>Assignee: Guillaume Schaer
>Priority: Major
>  Labels: AZURE
>
> Microsoft allows blobs stored on Azure to be encrypted client-side using a 
> key wrapping algorithm. 
> Implementation details can be found here: 
> [https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]
> Adding support for such encryption method would offer more compatibility with 
> the Azure ecosystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8501) Add support for Azure Storage Client-Side Encryption

2021-04-29 Thread Guillaume Schaer (Jira)
Guillaume Schaer created NIFI-8501:
--

 Summary: Add support for Azure Storage Client-Side Encryption
 Key: NIFI-8501
 URL: https://issues.apache.org/jira/browse/NIFI-8501
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Guillaume Schaer


Microsoft allows blobs stored on Azure to be encrypted client-side using a key 
wrapping algorithm. 

Implementation details can be found here: 
[https://docs.microsoft.com/en-us/azure/storage/common/storage-client-side-encryption-java?tabs=java]

Adding support for such encryption method would offer more compatibility with 
the Azure ecosystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-29 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335560#comment-17335560
 ] 

David Handermann commented on NIFI-8435:


[~granthenke] might have a better handle on whether there are any particular 
features or fixes necessary in the newer version of Netty.  However, 
4.1.43.Final is over a year old and recent versions have included a large 
number of bug fixes, so it may not be a simple solution.  Another potential 
workaround is for the Kudu client to expose the allocator as a configurable 
option, which would allow NiFi PutKudu to set a different option and avoid the 
issue with the pooled allocator.
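A hedged sketch of that workaround at the JVM level: Netty selects its default ByteBufAllocator from the `io.netty.allocator.type` system property ("pooled" or "unpooled"), which in NiFi could be passed through an extra argument in conf/bootstrap.conf. Whether the Kudu client's bundled (possibly shaded) copy of Netty honors this property is an assumption, not something confirmed here.

```properties
# Hypothetical bootstrap.conf entry: ask Netty to use the unpooled allocator.
# The argument index (20) is arbitrary; pick any unused java.arg.N slot.
java.arg.20=-Dio.netty.allocator.type=unpooled
```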

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory and garbage collection can no 
> longer free it. We allow Java to use 31GB of memory, and as you can see, with 
> NiFi 1.11.4 it is used as it should be with GC. However, with NiFi 1.13.2 and 
> our actual load it fills up the memory relatively fast. 
> Manual GC via the VisualVM tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] mattyb149 commented on a change in pull request #5024: NIFI-8320: Fix column mismatch in PutDatabaseRecord

2021-04-29 Thread GitBox


mattyb149 commented on a change in pull request #5024:
URL: https://github.com/apache/nifi/pull/5024#discussion_r623136402



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/groovy/org/apache/nifi/processors/standard/TestPutDatabaseRecord.groovy
##
@@ -301,6 +301,73 @@ class TestPutDatabaseRecord {
 conn.close()
 }
 
+@Test
+void testInsertNonRequiredColumns() throws InitializationException, 
ProcessException, SQLException, IOException {
+recreateTable(createPersons)
+final MockRecordParser parser = new MockRecordParser()
+runner.addControllerService("parser", parser)
+runner.enableControllerService(parser)
+
+parser.addSchemaField("id", RecordFieldType.INT)
+parser.addSchemaField("name", RecordFieldType.STRING)
+parser.addSchemaField("dt", RecordFieldType.DATE)
+
+LocalDate testDate1 = LocalDate.of(2021, 1, 26)
+Date nifiDate1 = new 
Date(testDate1.atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli()) // in 
UTC
+Date jdbcDate1 = Date.valueOf(testDate1) // in local TZ
+LocalDate testDate2 = LocalDate.of(2021, 7, 26)
+Date nifiDate2 = new 
Date(testDate2.atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli()) // in 
UTC
+Date jdbcDate2 = Date.valueOf(testDate2) // in local TZ
+
+parser.addRecord(1, 'rec1', nifiDate1)
+parser.addRecord(2, 'rec2', nifiDate2)
+parser.addRecord(3, 'rec3', null)
+parser.addRecord(4, 'rec4', null)
+parser.addRecord(5, null, null)
+
+runner.setProperty(PutDatabaseRecord.RECORD_READER_FACTORY, 'parser')
+runner.setProperty(PutDatabaseRecord.STATEMENT_TYPE, 
PutDatabaseRecord.INSERT_TYPE)
+runner.setProperty(PutDatabaseRecord.TABLE_NAME, 'PERSONS')
+
+runner.enqueue(new byte[0])
+runner.run()
+
+runner.assertTransferCount(PutDatabaseRecord.REL_SUCCESS, 1)
+final Connection conn = dbcp.getConnection()
+final Statement stmt = conn.createStatement()

Review comment:
   Groovy doesn't have a try-with-resources per se, I'm going to move this 
test to a Java class and add the try-with-resources there




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] mattyb149 commented on a change in pull request #5024: NIFI-8320: Fix column mismatch in PutDatabaseRecord

2021-04-29 Thread GitBox


mattyb149 commented on a change in pull request #5024:
URL: https://github.com/apache/nifi/pull/5024#discussion_r623135925



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/groovy/org/apache/nifi/processors/standard/TestPutDatabaseRecord.groovy
##
@@ -301,6 +301,73 @@ class TestPutDatabaseRecord {
 conn.close()
 }
 
+@Test
+void testInsertNonRequiredColumns() throws InitializationException, 
ProcessException, SQLException, IOException {

Review comment:
   Yes, it will take a badly-behaved PutDatabaseRecord subclass to achieve, 
but we should add a test to cover the path regardless




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-29 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335553#comment-17335553
 ] 

Josef Zahner commented on NIFI-8435:


I don't understand the details, but wouldn't it be a quick fix to force PutKudu 
to use the netty version from NiFi v.1.11.4 in the main pom? Or are there any 
features required from the new netty version?

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory and garbage collection can no 
> longer free it. We allow Java to use 31GB of memory, and as you can see, with 
> NiFi 1.11.4 it is used as it should be with GC. However, with NiFi 1.13.2 and 
> our actual load it fills up the memory relatively fast. 
> Manual GC via the VisualVM tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] greyp9 commented on a change in pull request #5024: NIFI-8320: Fix column mismatch in PutDatabaseRecord

2021-04-29 Thread GitBox


greyp9 commented on a change in pull request #5024:
URL: https://github.com/apache/nifi/pull/5024#discussion_r623124847



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/groovy/org/apache/nifi/processors/standard/TestPutDatabaseRecord.groovy
##
@@ -301,6 +301,73 @@ class TestPutDatabaseRecord {
 conn.close()
 }
 
+@Test
+void testInsertNonRequiredColumns() throws InitializationException, 
ProcessException, SQLException, IOException {
+recreateTable(createPersons)
+final MockRecordParser parser = new MockRecordParser()
+runner.addControllerService("parser", parser)
+runner.enableControllerService(parser)
+
+parser.addSchemaField("id", RecordFieldType.INT)
+parser.addSchemaField("name", RecordFieldType.STRING)
+parser.addSchemaField("dt", RecordFieldType.DATE)
+
+LocalDate testDate1 = LocalDate.of(2021, 1, 26)
+Date nifiDate1 = new 
Date(testDate1.atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli()) // in 
UTC
+Date jdbcDate1 = Date.valueOf(testDate1) // in local TZ
+LocalDate testDate2 = LocalDate.of(2021, 7, 26)
+Date nifiDate2 = new 
Date(testDate2.atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli()) // in 
UTC
+Date jdbcDate2 = Date.valueOf(testDate2) // in local TZ
+
+parser.addRecord(1, 'rec1', nifiDate1)
+parser.addRecord(2, 'rec2', nifiDate2)
+parser.addRecord(3, 'rec3', null)
+parser.addRecord(4, 'rec4', null)
+parser.addRecord(5, null, null)
+
+runner.setProperty(PutDatabaseRecord.RECORD_READER_FACTORY, 'parser')
+runner.setProperty(PutDatabaseRecord.STATEMENT_TYPE, 
PutDatabaseRecord.INSERT_TYPE)
+runner.setProperty(PutDatabaseRecord.TABLE_NAME, 'PERSONS')
+
+runner.enqueue(new byte[0])
+runner.run()
+
+runner.assertTransferCount(PutDatabaseRecord.REL_SUCCESS, 1)
+final Connection conn = dbcp.getConnection()
+final Statement stmt = conn.createStatement()

Review comment:
   suggest use of try-with-resources for Connection and Statement
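
   The suggestion can be sketched without a JDBC driver by using a stand-in `AutoCloseable` (the `Resource` class below is hypothetical; in the test the real `Connection` and `Statement` would take its place):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderSketch {
    // Stand-in AutoCloseable so the sketch runs without a database.
    static class Resource implements AutoCloseable {
        static final List<String> CLOSED = new ArrayList<>();
        private final String name;
        Resource(String name) { this.name = name; }
        @Override
        public void close() { CLOSED.add(name); }
    }

    static List<String> runQuery() {
        Resource.CLOSED.clear();
        // try-with-resources closes both resources automatically, in reverse
        // declaration order, even if the body throws -- a guarantee that bare
        // conn.close() calls at the end of a test do not provide.
        try (Resource conn = new Resource("connection");
             Resource stmt = new Resource("statement")) {
            // stmt.executeQuery(...) would go here
        }
        return Resource.CLOSED;
    }

    public static void main(String[] args) {
        System.out.println(runQuery()); // prints [statement, connection]
    }
}
```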




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] exceptionfactory commented on a change in pull request #5034: NIFI-8445: Implementing VaultCommunicationService

2021-04-29 Thread GitBox


exceptionfactory commented on a change in pull request #5034:
URL: https://github.com/apache/nifi/pull/5034#discussion_r623084122



##
File path: 
nifi-commons/nifi-vault-utils/src/main/java/org/apache/nifi/vault/StandardVaultCommunicationService.java
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.vault;
+
+import org.apache.nifi.util.FormatUtils;
+import org.apache.nifi.vault.config.VaultConfiguration;
+import org.apache.nifi.vault.config.VaultProperties;
+import org.springframework.vault.authentication.SimpleSessionManager;
+import org.springframework.vault.client.ClientHttpRequestFactoryFactory;
+import org.springframework.vault.core.VaultTemplate;
+import org.springframework.vault.support.Ciphertext;
+import org.springframework.vault.support.ClientOptions;
+import org.springframework.vault.support.Plaintext;
+import org.springframework.vault.support.SslConfiguration;
+
+import java.time.Duration;
+import java.util.Optional;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Implements the VaultCommunicationService using Spring Vault
+ */
+public class StandardVaultCommunicationService implements VaultCommunicationService {
+    private static final String HTTPS = "https";
+
+    private final VaultConfiguration vaultConfiguration;
+    private final VaultTemplate vaultTemplate;
+
+    /**
+     * Creates a VaultCommunicationService that uses Spring Vault.
+     * @param vaultProperties Properties to configure the service
+     * @throws VaultConfigurationException If the configuration was invalid
+     */
+    public StandardVaultCommunicationService(final VaultProperties vaultProperties) throws VaultConfigurationException {
+        this.vaultConfiguration = new VaultConfiguration(vaultProperties);
+
+        final SslConfiguration sslConfiguration = vaultProperties.getUri().contains(HTTPS)
+                ? vaultConfiguration.sslConfiguration() : SslConfiguration.unconfigured();
+
+        final ClientOptions clientOptions = getClientOptions(vaultProperties);
+
+        vaultTemplate = new VaultTemplate(vaultConfiguration.vaultEndpoint(),
+                ClientHttpRequestFactoryFactory.create(clientOptions, sslConfiguration),
+                new SimpleSessionManager(vaultConfiguration.clientAuthentication()));
+    }
+
+    private static ClientOptions getClientOptions(VaultProperties vaultProperties) {
+        final ClientOptions clientOptions = new ClientOptions();
+        Duration readTimeoutDuration = clientOptions.getReadTimeout();
+        Duration connectionTimeoutDuration = clientOptions.getConnectionTimeout();
+        final Optional<String> configuredReadTimeout = vaultProperties.getReadTimeout();
+        if (configuredReadTimeout.isPresent()) {
+            readTimeoutDuration = getDuration(configuredReadTimeout.get());
+        }
+        final Optional<String> configuredConnectionTimeout = vaultProperties.getConnectionTimeout();
+        if (configuredConnectionTimeout.isPresent()) {
+            connectionTimeoutDuration = getDuration(configuredConnectionTimeout.get());
+        }
+        return new ClientOptions(connectionTimeoutDuration, readTimeoutDuration);
+    }
+
+    private static Duration getDuration(String formattedDuration) {
+        final double duration = FormatUtils.getPreciseTimeDuration(formattedDuration, TimeUnit.MILLISECONDS);
+        return Duration.ofMillis(Double.valueOf(duration).longValue());
+    }
+
+    @Override
+    public String encrypt(String transitKey, byte[] plainText) {

Review comment:
   Recommend marking these method parameters and others as `final`.
   ```suggestion
   public String encrypt(final String transitKey, final byte[] plainText) {
   ```

##
File path: 
nifi-commons/nifi-vault-utils/src/main/java/org/apache/nifi/vault/config/VaultEnvironment.java
##
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not 

[jira] [Assigned] (NIFI-8490) Back-end implementation of Composite Parameter Contexts

2021-04-29 Thread Joseph Gresock (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock reassigned NIFI-8490:


Assignee: Joseph Gresock

> Back-end implementation of Composite Parameter Contexts
> ---
>
> Key: NIFI-8490
> URL: https://issues.apache.org/jira/browse/NIFI-8490
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-29 Thread David Handermann (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335496#comment-17335496
 ] 

David Handermann commented on NIFI-8435:


For reference, there is an open issue with Netty, starting in version 
4.1.43.Final, that appears to have very similar behavior to the issue observed 
with PutKudu 1.13.2.  The comments on the issue mention several potential 
workarounds, but no great options for the purposes of the PutKudu processor.

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory and garbage collection can no 
> longer free it. We allow Java to use 31GB of memory, and as you can see, with 
> NiFi 1.11.4 it is used as it should be with GC. However, with NiFi 1.13.2 and 
> our actual load it fills up the memory relatively fast. 
> Manual GC via the VisualVM tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8493) PutDatabaseRecord incorrect type resolution for auto increment columns

2021-04-29 Thread Nadeem (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335486#comment-17335486
 ] 

Nadeem commented on NIFI-8493:
--

Thanks [~UliSotschok], I was able to replicate the issue even with a small 
table of 4 columns by interchanging varchar and bigint. The index-based mapping 
between the ColumnDescription and Record classes causes [1] to implicitly 
convert the int to varchar, which succeeds, but the insertion then fails

[1] 
[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutDatabaseRecord.java#L705]
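
A minimal sketch of why index-based column resolution breaks here (plain lists stand in for the table schema and the record's fields; names are illustrative, not the actual NiFi classes):

```java
import java.util.List;

public class ColumnLookupSketch {
    // Table column order includes the identity column first...
    static final List<String> TABLE_COLUMNS = List.of("id", "a", "b", "c");
    // ...but the record's fields skip it, so positions are shifted by one.
    static final List<String> RECORD_FIELDS = List.of("a", "b", "c");

    // Index-based lookup: record field i is paired with table column i,
    // which is off by one once the identity column is absent from the record.
    static String columnByIndex(int recordFieldIndex) {
        return TABLE_COLUMNS.get(recordFieldIndex);
    }

    // Name-based lookup: resolve the record field's name in the table schema,
    // so field order no longer matters.
    static String columnByName(int recordFieldIndex) {
        String fieldName = RECORD_FIELDS.get(recordFieldIndex);
        return TABLE_COLUMNS.get(TABLE_COLUMNS.indexOf(fieldName));
    }

    public static void main(String[] args) {
        System.out.println(columnByIndex(0) + " vs " + columnByName(0)); // prints id vs a
    }
}
```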

> PutDatabaseRecord incorrect type resolution for auto increment columns
> --
>
> Key: NIFI-8493
> URL: https://issues.apache.org/jira/browse/NIFI-8493
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.2
> Environment: Microsoft SQL Server 2019 (RTM-GDR) (KB4583458) - 
> 15.0.2080.9 (X64) 
>   Nov  6 2020 16:50:01 
>   Copyright (C) 2019 Microsoft Corporation
>   Standard Edition (64-bit) on Windows Server 2019 Datacenter 10.0  
> (Build 17763: ) (Hypervisor)
>Reporter: Julian
>Assignee: Nadeem
>Priority: Major
> Fix For: 1.13.3
>
>
> When there is a column where the value is "auto increment/identity", the type 
> resolution fails.
> This is similar to NIFI-8244 and was not resolved by NIFI-8237.
> When we INSERT new data we get this error message: _error: 
> Operandtypecollision: datetime2 is incompatible with smallint._
> The problem is that the method uses indexes and not names. In the record the 
> first value is '_a_', but the method uses '_id_' as the first value. Therefore 
> everything is off by one.
> Our table looks like this:
> create table t
> (
>     id bigint identity (0, 1)
>         constraint t_PK
>             primary key,
>     a int not null,
>     b bigint not null,
>     c float not null,
>     d datetime not null,
>     e smallint not null,
>     f float,
>     g float,
>     h datetime default getdate()
> )
>  
> Record:
> {
>     "d": 1619503081000,
>     "c": 0,
>     "a": 34,
>     "b": 34,
>     "e": 0,
>     "f": "1.1",
>     "g": "1.2",
>     "h": 1619503095159
> }
>  
> *What worked for us was changing this line in PutDatabaseRecord.executeDML():*
> Before:
> final ColumnDescription column = columns.get(currentFieldIndex);
> After:
> final ColumnDescription column = tableSchema.getColumns().get(
>     normalizeColumnName(recordReader.getSchema().getField(i).getFieldName(),
>         settings.translateFieldNames));
>  
> This change also has another benefit. The order of the fields in RecordReader 
> doesn't matter any more.
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7261) Automatic reload of truststore

2021-04-29 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-7261.

Fix Version/s: 1.14.0
 Assignee: Joseph Gresock
   Resolution: Fixed

> Automatic reload of truststore
> --
>
> Key: NIFI-7261
> URL: https://issues.apache.org/jira/browse/NIFI-7261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Security
>Reporter: Jens M Kofoed
>Assignee: Joseph Gresock
>Priority: Minor
>  Labels: truststore
> Fix For: 1.14.0
>
>
> When a new remote connection is going to be established between new systems, 
> both clusters on each end have to be restarted after the public keys have 
> been added to the truststore. It would be very good if the system could 
> automatically reload the truststore instead



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7134) Enable JettyServer to automatically detect keystore changes and update

2021-04-29 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann resolved NIFI-7134.

Fix Version/s: 1.14.0
   Resolution: Fixed

> Enable JettyServer to automatically detect keystore changes and update
> --
>
> Key: NIFI-7134
> URL: https://issues.apache.org/jira/browse/NIFI-7134
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Security
>Affects Versions: 1.11.1
>Reporter: patrick white
>Assignee: Joseph Gresock
>Priority: Minor
>  Labels: jetty, keystore, restart, security, tls
> Fix For: 1.14.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> TLS/keystore credential change currently requires a service restart to 
> update, [~alopresto] noted on 'users' that Jetty 9.3+ supports the ability to 
> dynamically update credentials, and provided reference [1].
> Request enabling NiFi JettyServer to support detection and reload of its 
> keystore when it changes, such as during credentials update or rotation, will 
> link this request to epic [2].
> [1] https://github.com/eclipse/jetty.project/issues/918
> [2] https://issues.apache.org/jira/browse/NIFI-5458
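The general "scan and reload" mechanism behind this request can be sketched with the JDK alone. This is a hedged, standalone illustration of the idea, not Jetty's or NiFi's implementation (Jetty ships its own scanner, per the linked issue); `KeystoreReloader` and its method names are made up for this sketch:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.security.KeyStore;

// Illustrative keystore auto-reloader: poll the keystore file's modification
// time and reload it when it changes, instead of requiring a service restart.
// Class and method names are hypothetical, for this sketch only.
public class KeystoreReloader {
    private final Path path;
    private final char[] password;
    private FileTime lastSeen;
    private KeyStore keyStore;

    public KeystoreReloader(Path path, char[] password) throws Exception {
        this.path = path;
        this.password = password;
        reload();
    }

    // Called periodically (e.g. from a ScheduledExecutorService);
    // reloads only when the file's timestamp has advanced.
    public boolean checkAndReload() throws Exception {
        FileTime current = Files.getLastModifiedTime(path);
        if (lastSeen != null && current.compareTo(lastSeen) <= 0) {
            return false; // unchanged, keep the current keystore
        }
        reload();
        return true;
    }

    private void reload() throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(path)) {
            ks.load(in, password);
        }
        this.keyStore = ks;
        this.lastSeen = Files.getLastModifiedTime(path);
    }

    public KeyStore getKeyStore() {
        return keyStore;
    }
}
```

A connector would then pull `getKeyStore()` (or rebuild its SSLContext) after a successful reload; the same polling approach applies equally to a truststore, which is why NIFI-7261 rides along with this ticket.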



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7134) Enable JettyServer to automatically detect keystore changes and update

2021-04-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335481#comment-17335481
 ] 

ASF subversion and git services commented on NIFI-7134:
---

Commit 54a0e27c937aeef98e17e999a6e61591a46bf91c in nifi's branch 
refs/heads/main from Joe Gresock
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=54a0e27 ]

NIFI-7134: Adding auto-reloading of Keystore and Truststore

- NIFI-7261 Included TrustStoreScanner for auto-reloading of truststore

This closes #4991

Signed-off-by: David Handermann 


> Enable JettyServer to automatically detect keystore changes and update
> --
>
> Key: NIFI-7134
> URL: https://issues.apache.org/jira/browse/NIFI-7134
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework, Security
>Affects Versions: 1.11.1
>Reporter: patrick white
>Assignee: Joseph Gresock
>Priority: Minor
>  Labels: jetty, keystore, restart, security, tls
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> TLS/keystore credential change currently requires a service restart to 
> update, [~alopresto] noted on 'users' that Jetty 9.3+ supports the ability to 
> dynamically update credentials, and provided reference [1].
> Request enabling NiFi JettyServer to support detection and reload of its 
> keystore when it changes, such as during credentials update or rotation, will 
> link this request to epic [2].
> [1] https://github.com/eclipse/jetty.project/issues/918
> [2] https://issues.apache.org/jira/browse/NIFI-5458



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7261) Automatic reload of truststore

2021-04-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335482#comment-17335482
 ] 

ASF subversion and git services commented on NIFI-7261:
---

Commit 54a0e27c937aeef98e17e999a6e61591a46bf91c in nifi's branch 
refs/heads/main from Joe Gresock
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=54a0e27 ]

NIFI-7134: Adding auto-reloading of Keystore and Truststore

- NIFI-7261 Included TrustStoreScanner for auto-reloading of truststore

This closes #4991

Signed-off-by: David Handermann 


> Automatic reload of truststore
> --
>
> Key: NIFI-7261
> URL: https://issues.apache.org/jira/browse/NIFI-7261
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Security
>Reporter: Jens M Kofoed
>Priority: Minor
>  Labels: truststore
>
> When a new remote connection is going to be established between new systems, 
> both clusters at each end have to be restarted after the public keys have 
> been added to the truststore. It would be very good if the system could 
> automatically reload the truststore instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #4991: NIFI-7134: Allowing auto-reloading of Jetty key/truststores

2021-04-29 Thread GitBox


asfgit closed pull request #4991:
URL: https://github.com/apache/nifi/pull/4991


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (NIFI-8493) PutDatabaseRecord incorrect type resolution for auto increment columns

2021-04-29 Thread Nadeem (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nadeem reassigned NIFI-8493:


Assignee: Nadeem

> PutDatabaseRecord incorrect type resolution for auto increment columns
> --
>
> Key: NIFI-8493
> URL: https://issues.apache.org/jira/browse/NIFI-8493
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.13.2
> Environment: Microsoft SQL Server 2019 (RTM-GDR) (KB4583458) - 
> 15.0.2080.9 (X64) 
>   Nov  6 2020 16:50:01 
>   Copyright (C) 2019 Microsoft Corporation
>   Standard Edition (64-bit) on Windows Server 2019 Datacenter 10.0  
> (Build 17763: ) (Hypervisor)
>Reporter: Julian
>Assignee: Nadeem
>Priority: Major
> Fix For: 1.13.3
>
>
> When there is a column where the value is "auto increment/identity", the type 
> resolution fails.
> This is similar to NIFI-8244 and was not resolved by NIFI-8237.
> When we INSERT new data we get this error message: _error: 
> Operand type clash: datetime2 is incompatible with smallint._
> The problem is that the method uses indexes rather than names: in the record 
> the first value is '_a_', but the method takes '_id_' as the first value, so 
> everything is off by one.
> Our table looks like this:
> create table t
> (
>   id bigint identity (0, 1)
>     constraint t_PK
>       primary key,
>   a int not null,
>   b bigint not null,
>   c float not null,
>   d datetime not null,
>   e smallint not null,
>   f float,
>   g float,
>   h datetime default getdate()
> )
>  
> Record:
> {
>   "d": 1619503081000,
>   "c": 0,
>   "a": 34,
>   "b": 34,
>   "e": 0,
>   "f": "1.1",
>   "g": "1.2",
>   "h": 1619503095159
> }
>  
> *What worked for us was changing this line in PutDatabaseRecord.executeDML():*
> Before:
> final ColumnDescription column = columns.get(currentFieldIndex);
> After:
> final ColumnDescription column = 
> tableSchema.getColumns().get(normalizeColumnName(recordReader.getSchema().getField(i).getFieldName(),
>  settings.translateFieldNames));
>  
> This change also has another benefit. The order of the fields in RecordReader 
> doesn't matter any more.
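The off-by-one described in this issue can be reduced to a few lines. A hypothetical demo using the column and field names from the table and record above (types are illustrative stand-ins, not NiFi code): the record does not carry the identity column "id", so a positional match binds record field 0 ("d", a datetime) to table column 0 ("id") and every later field is shifted, while a name-keyed lookup stays aligned.

```java
import java.util.List;
import java.util.Map;

// Demonstrates the misalignment: the table starts with an identity column
// the incoming record never carries, so index-based field/column pairing is
// shifted by one, while name-based pairing is immune to both the identity
// column and field order. Names mirror the issue's table and record.
public class IdentityOffsetDemo {
    public static void main(String[] args) {
        // Table columns in definition order (id is the identity column)
        List<String> columnsByIndex =
                List.of("id", "a", "b", "c", "d", "e", "f", "g", "h");
        // Record fields in the order the reader produced them
        List<String> recordFields =
                List.of("d", "c", "a", "b", "e", "f", "g", "h");

        // Positional pairing: "d" lands on the wrong column
        System.out.println("index-based: " + recordFields.get(0)
                + " bound to column " + columnsByIndex.get(0));

        // Name-keyed pairing: each field finds its own column type
        Map<String, String> typeByColumn = Map.of(
                "a", "int", "b", "bigint", "c", "float", "d", "datetime",
                "e", "smallint", "f", "float", "g", "float", "h", "datetime");
        System.out.println("name-based: d bound to " + typeByColumn.get("d"));
    }
}
```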



--
This message was sent by Atlassian Jira
(v8.3.4#803005)