[jira] [Closed] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-02-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-6878.


> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type property provides fixed options: UPDATE,
> INSERT, DELETE.
> Usually this meets our needs, but in actual applications I think it is not
> flexible enough.
> In some cases we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries the statement type in an
> attribute (cdc.event.type), and we need to convert the data to SQL (DML) in
> order. Today we have to use RouteOnAttribute to split the data into three
> branches, build the SQL statements separately, and finally use EnforceOrder
> to restore the order of the SQL statements.
> This would be easy if ConvertJSONToSQL supported a dynamic Statement Type.
> It is easy to implement this feature, just like PutDatabaseRecord.
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.
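As a rough illustration of the requested behaviour, the sketch below (my own
naming, not the code from the pull request that resolved this ticket) shows
how a processor could resolve the statement type per FlowFile from an
attribute such as statement.type or cdc.event.type and fall back to the
configured fixed option:

{code:java}
import java.util.Locale;
import java.util.Map;

// Illustrative sketch only; the class, constant, and attribute handling are
// assumptions modelled on PutDatabaseRecord's "Use statement.type Attribute"
// option, not the actual ConvertJSONToSQL implementation.
public class StatementTypeResolver {

    static final String USE_ATTRIBUTE = "Use statement.type Attribute";

    static String resolve(String configuredType, Map<String, String> attributes) {
        if (USE_ATTRIBUTE.equals(configuredType)) {
            // Prefer an explicit statement.type attribute, then the CDC event type.
            String fromAttribute = attributes.getOrDefault("statement.type",
                    attributes.get("cdc.event.type"));
            if (fromAttribute != null) {
                return fromAttribute.toUpperCase(Locale.ROOT); // INSERT / UPDATE / DELETE
            }
        }
        return configuredType; // fixed INSERT / UPDATE / DELETE option
    }

    public static void main(String[] args) {
        System.out.println(resolve(USE_ATTRIBUTE, Map.of("cdc.event.type", "insert"))); // INSERT
        System.out.println(resolve("DELETE", Map.of())); // DELETE
    }
}
{code}

With a per-FlowFile statement type like this, CaptureChangeMySQL output can be
converted in arrival order without the RouteOnAttribute/EnforceOrder detour
described above.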



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-02-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6878:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type property provides fixed options: UPDATE,
> INSERT, DELETE.
> Usually this meets our needs, but in actual applications I think it is not
> flexible enough.
> In some cases we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries the statement type in an
> attribute (cdc.event.type), and we need to convert the data to SQL (DML) in
> order. Today we have to use RouteOnAttribute to split the data into three
> branches, build the SQL statements separately, and finally use EnforceOrder
> to restore the order of the SQL statements.
> This would be easy if ConvertJSONToSQL supported a dynamic Statement Type.
> It is easy to implement this feature, just like PutDatabaseRecord.
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when 'Rollback On Failure' is false

2020-02-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Description: 
For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
processor will process all FlowFiles with the same fragment.identifier as a
single transaction, and in actuality that works.
But when some SQL in the transaction fails and 'Rollback On Failure' is false,
the database transaction will not roll back.
Sometimes we need the database transaction to roll back without rolling back
the FlowFiles; we need the failed database transaction to route to
REL_FAILURE.
If 'Support Fragmented Transactions' is true and 'Rollback On Failure' is
false, I think the processor should still support database transaction
rollback. For example, it could add a property (like 'Support Fragmented
Transactions RollBack') that indicates whether the processor rolls back the
database transaction when 'Support Fragmented Transactions' is true. Of
course, when 'Rollback On Failure' is true, 'Support Fragmented Transactions
RollBack' will be ignored.

  was:
For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
processor will process all FlowFiles with the same fragment.identifier as a
single transaction, and in actuality that works.
But when some SQL in the transaction fails and 'Rollback On Failure' is false,
the database transaction will not roll back.
Sometimes we need the database transaction to roll back without rolling back
the FlowFiles; we need the failed database transaction to route to
REL_FAILURE.
If 'Support Fragmented Transactions' is true and 'Rollback On Failure' is
false, I think the processor should still support database transaction
rollback. For example, it could add a property (like 'Support Fragmented
Transactions RollBack') that indicates whether the processor rolls back the
database transaction when 'Support Fragmented Transactions' is true. Of
course, when 'Rollback On Failure' is true, database transaction rollback
will be supported too.


> PutSql support database transaction rollback when 'Rollback On Failure' is
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with the same fragment.identifier as a
> single transaction, and in actuality that works.
> But when some SQL in the transaction fails and 'Rollback On Failure' is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling back
> the FlowFiles; we need the failed database transaction to route to
> REL_FAILURE.
> If 'Support Fragmented Transactions' is true and 'Rollback On Failure' is
> false, I think the processor should still support database transaction
> rollback. For example, it could add a property (like 'Support Fragmented
> Transactions RollBack') that indicates whether the processor rolls back the
> database transaction when 'Support Fragmented Transactions' is true. Of
> course, when 'Rollback On Failure' is true, 'Support Fragmented Transactions
> RollBack' will be ignored.
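To make the requested semantics concrete, here is a plain-JDBC sketch (an
illustration of the behaviour the ticket asks for, not PutSQL's actual code):
all statements of one fragment run in a single database transaction, and on
failure the database transaction is rolled back so the caller can route the
fragment to REL_FAILURE instead of rolling back the NiFi session.

{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class FragmentedTransactionSketch {

    /** Returns true when the whole fragment committed, false when it was rolled back. */
    static boolean executeFragment(Connection conn, List<String> sqlStatements) throws SQLException {
        boolean originalAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            for (String sql : sqlStatements) {
                stmt.executeUpdate(sql);
            }
            conn.commit();   // the whole fragment succeeded
            return true;
        } catch (SQLException e) {
            conn.rollback(); // undo the partial database transaction
            return false;    // caller routes the fragment's FlowFiles to failure
        } finally {
            conn.setAutoCommit(originalAutoCommit);
        }
    }
}
{code}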



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] ottobackwards commented on issue #4058: NIFI-7157 first attempt at a basic github workflow CI action

2020-02-16 Thread GitBox
ottobackwards commented on issue #4058: NIFI-7157 first attempt at a basic 
github workflow CI action
URL: https://github.com/apache/nifi/pull/4058#issuecomment-586810691
 
 
   @joewitt, can you clarify what this change means for contributors? I test 
my PRs in my personal Travis; will this work the same way?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NIFI-5775) DataTypeUtils "toString" incorrectly treats value as a "byte" when passing an array leading to ClassCastException

2020-02-16 Thread Joe Percivall (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17037919#comment-17037919
 ] 

Joe Percivall commented on NIFI-5775:
-

Hey Karthik,

Not exactly, the schema that would cause issues would be CHOICE[STRING,
ARRAY[STRING]]. This would result in a valid test case that is either
[{"path": "10.2.1.3"}] or [{"path": ["10.2.1.3"]}]. I'm having trouble with my
build environment attempting to test master (IntelliJ says: "Error:java:
invalid source release: 11"), so I can't test it at the moment, but the code
does look like it has changed since this was first created. If the test case
works when you change it as listed in the description, then I'd consider this
OBE.
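For anyone trying to reproduce this locally, here is a hedged sketch of the
conversion described above. The method names follow the nifi-record API as I
recall it from the 1.x line (RecordFieldType, DataTypeUtils); treat the exact
signatures as an assumption rather than a verified snippet.

{code:java}
import org.apache.nifi.serialization.record.DataType;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.util.DataTypeUtils;

public class ChoiceConversionSketch {

    public static void main(String[] args) {
        // CHOICE[STRING, ARRAY[STRING]] -- the schema that triggers the issue.
        final DataType choice = RecordFieldType.CHOICE.getChoiceDataType(
                RecordFieldType.STRING.getDataType(),
                RecordFieldType.ARRAY.getArrayDataType(RecordFieldType.STRING.getDataType()));

        // Plain string value, e.g. {"path": "10.2.1.3"}
        final Object asString = DataTypeUtils.convertType("10.2.1.3", choice, "path");

        // Array value, e.g. {"path": ["10.2.1.3"]} -- the combination that
        // originally failed with "java.lang.String cannot be cast to java.lang.Byte".
        final Object asArray = DataTypeUtils.convertType(new Object[]{"10.2.1.3"}, choice, "path");

        System.out.println(asString);
        System.out.println(asArray);
    }
}
{code}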


> DataTypeUtils "toString" incorrectly treats value as a "byte" when passing an 
> array leading to ClassCastException
> -
>
> Key: NIFI-5775
> URL: https://issues.apache.org/jira/browse/NIFI-5775
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Joe Percivall
>Assignee: karthik kadajji
>Priority: Major
>
> To reproduce, change this line[1] to either put "String" as the first choice 
> of record type or just set the key to use string. 
> The resulting error:
> {noformat}
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.lang.Byte
>   at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.toString(DataTypeUtils.java:530)
>   at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:147)
>   at 
> org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:115)
>   at 
> org.apache.nifi.json.WriteJsonResult.writeValue(WriteJsonResult.java:284)
>   at 
> org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:187)
>   at 
> org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:136)
>   at 
> org.apache.nifi.json.TestWriteJsonResult.testChoiceArray(TestWriteJsonResult.java:494)
> {noformat}
> [1] 
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/test/java/org/apache/nifi/json/TestWriteJsonResult.java#L479



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7159) Mongo processors appear to not support Decimal 128 data types

2020-02-16 Thread Mike Thomsen (Jira)
Mike Thomsen created NIFI-7159:
--

 Summary: Mongo processors appear to not support Decimal 128 data 
types
 Key: NIFI-7159
 URL: https://issues.apache.org/jira/browse/NIFI-7159
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Mike Thomsen
Assignee: Mike Thomsen


This is the verbatim response from the user posting on nifi-dev:

 

Here's a really stripped back example.

I have an input as below:

{
  "value": "123456.00"
}

And I want to write the value into MongoDB as a numeric value, but
critically not as a double (these will be currency values).

In order to try to enforce the type used to write to Mongo we created a
simple Avro schema; for the example above this would be as follows:

{
  "type": "record",
  "name": "TransactionEvent",
  "namespace": "com.example.demo",
  "fields": [
    {
      "name": "value",
      "type": [
        "null",
        {
          "type": "bytes",
          "logicalType": "decimal",
          "precision": 10,
          "scale": 2
        }
      ]
    }
  ]
}

Hoping that this would map to a Decimal128 in Mongo, however we consistently
see double as the type in Mongo regardless of any variations of the Avro
schema we have tried.

On having a quick look into the code I've identified 2 possible problem
areas:

1. The conversion of the Avro schema into the internal representation, which
seems to treat Avro logical decimal types as double (ref
org.apache.nifi.avro.AvroTypeUtil, line 343)
2. The Mongo processor, which uses this type information to decide what
Mongo types to persist data as.

For a quick win, which would hopefully have a smaller impact, I was hoping
that I could fork the Mongo processor and keep the changes local to that,
but since the information about the Avro logical type is lost before the
schema information gets to MongoDB I'm not sure that will be possible now.

When we reached this point, and the changes we were looking at seemed like
they could be a little more complex than hoped, we wanted to reach out to
see if:

1. We're doing something wrong
2. Anybody else has encountered a similar situation
3. If we did look to introduce changes, either to the Mongo processor or
more widely for support of BigDecimal, would this be of wider use?
 

It would appear to be a distinctly different type than Double:

 

https://mongodb.github.io/mongo-java-driver/3.5/javadoc/?org/bson/types/Decimal128.html
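For comparison, a small standalone sketch (it assumes the official MongoDB
Java driver on the classpath and is not the NiFi processor code) of the
difference between writing the value as a double and as a Decimal128:

{code:java}
import java.math.BigDecimal;

import org.bson.Document;
import org.bson.types.Decimal128;

public class Decimal128Sketch {

    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("123456.00"); // precision 10, scale 2

        // Current behaviour described above: the value ends up as a BSON double.
        Document asDouble = new Document("value", value.doubleValue());

        // Desired behaviour: an exact BSON Decimal128 (NumberDecimal) value.
        Document asDecimal = new Document("value", new Decimal128(value));

        System.out.println(asDouble.toJson());  // roughly {"value": 123456.0}
        System.out.println(asDecimal.toJson()); // roughly {"value": {"$numberDecimal": "123456.00"}}
    }
}
{code}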



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7114) NiFi not closing file handles

2020-02-16 Thread Paul Kelly (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17037758#comment-17037758
 ] 

Paul Kelly commented on NIFI-7114:
--

I was able to reproduce the issue with a stripped-down flow. I'm not sure
which component is causing the leak, but this at least represents both our
sending and receiving production flows. Please see the attached
reproduction.zip file. Inside are the sample flow.xml.gz file; lsof logs from
around the start and end of the test, which show that FIFO usage associated
with the NiFi process grew from 783 to 6,643 overnight; a log file tracking
FIFO usage throughout the night (watch -n 30 'date >> runningfifocount.log;lsof
| grep FIFO | wc -l >> runningfifocount.log'), which I forgot to strip down
before uploading (the final test starts around line 286, "Sun Feb 16 00:52:27
UTC 2020"); plus a thread dump and a heap dump from the morning.

To generate large sample files for the GetFile processor, I left the following
running in a separate terminal window: watch -n 1 'dd if=/dev/zero
of=file`date +%s` bs=32M count=3'

As also shown by [~vzolin], our FIFO usage doesn't start growing until almost
two hours in. At that point it jumps up and continues to grow steadily
afterwards.

Please let me know if there is anything else I can provide. Thank you for
looking into this. We aren't able to use NiFi >1.10.0 because of this issue.
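As a JVM-side complement to the external lsof loop (an illustrative sketch,
not part of the attached reproduction.zip), the open file descriptor count of
the NiFi process can also be sampled from inside the JVM on Unix-like systems:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class OpenFdSampler {

    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (!(os instanceof UnixOperatingSystemMXBean)) {
            System.err.println("Open FD count is not exposed on this platform");
            return;
        }
        UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
        while (true) {
            // Same 30 s cadence as the lsof loop quoted above.
            System.out.printf("%tT open fds: %d (max %d)%n", System.currentTimeMillis(),
                    unixOs.getOpenFileDescriptorCount(), unixOs.getMaxFileDescriptorCount());
            Thread.sleep(30_000L);
        }
    }
}
{code}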

> NiFi not closing file handles
> -
>
> Key: NIFI-7114
> URL: https://issues.apache.org/jira/browse/NIFI-7114
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.10.0, 1.11.0
> Environment: Amazon EC2 running either Amazon Linux 2 or Ubuntu 18.04.
> NiFi has been installed with no change to any configuration file.
>Reporter: Vinicius Zolin
>Priority: Major
> Attachments: destination.xml, lsof.log, lsof.zip, lsofAfter.log, 
> lsofBefore.log, openFiles.xlsx, reproduction.zip, source.xml
>
>
> Since at least version 1.10 NiFi stopped closing file handles. It opens circa 
> 500 files per hour (measured using lsof) without any apparent limit until it 
> crashes due to too many open files.
>  
> Increasing the computer open file limit is not a solution since NiFi will 
> still crash, it'll only take longer to do so.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7114) NiFi not closing file handles

2020-02-16 Thread Paul Kelly (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Kelly updated NIFI-7114:
-
Attachment: reproduction.zip

> NiFi not closing file handles
> -
>
> Key: NIFI-7114
> URL: https://issues.apache.org/jira/browse/NIFI-7114
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.10.0, 1.11.0
> Environment: Amazon EC2 running either Amazon Linux 2 or Ubuntu 18.04.
> NiFi has been installed with no change to any configuration file.
>Reporter: Vinicius Zolin
>Priority: Major
> Attachments: destination.xml, lsof.log, lsof.zip, lsofAfter.log, 
> lsofBefore.log, openFiles.xlsx, reproduction.zip, source.xml
>
>
> Since at least version 1.10 NiFi stopped closing file handles. It opens circa 
> 500 files per hour (measured using lsof) without any apparent limit until it 
> crashes due to too many open files.
>  
> Increasing the computer open file limit is not a solution since NiFi will 
> still crash, it'll only take longer to do so.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)