[jira] [Resolved] (NIFI-10352) Remove unused code from GenerateTableFetch.java

2022-08-23 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng resolved NIFI-10352.
---
Fix Version/s: 1.18.0
   Resolution: Fixed

> Remove unused code from GenerateTableFetch.java
> -
>
> Key: NIFI-10352
> URL: https://issues.apache.org/jira/browse/NIFI-10352
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Fix For: 1.18.0
>
> Attachments: image-2022-08-13-20-13-23-493.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> Remove the meaningless code in GenerateTableFetch.java; it is confusing when 
> reading the code.
> !image-2022-08-13-20-13-23-493.png|width=850,height=455!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-10352) Remove unused code from GenerateTableFetch.java

2022-08-23 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-10352:
--
Status: Patch Available  (was: In Progress)

> Remove unused code from GenerateTableFetch.java
> -
>
> Key: NIFI-10352
> URL: https://issues.apache.org/jira/browse/NIFI-10352
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Attachments: image-2022-08-13-20-13-23-493.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> Remove the meaningless code in GenerateTableFetch.java; it is confusing when 
> reading the code.
> !image-2022-08-13-20-13-23-493.png|width=850,height=455!





[jira] [Updated] (NIFI-10352) Remove unused code from GenerateTableFetch.java

2022-08-23 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-10352:
--
Status: Open  (was: Patch Available)

> Remove unused code from GenerateTableFetch.java
> -
>
> Key: NIFI-10352
> URL: https://issues.apache.org/jira/browse/NIFI-10352
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Attachments: image-2022-08-13-20-13-23-493.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> Remove the meaningless code in GenerateTableFetch.java; it is confusing when 
> reading the code.
> !image-2022-08-13-20-13-23-493.png|width=850,height=455!





[jira] [Updated] (NIFI-10352) Remove unused code from GenerateTableFetch.java

2022-08-13 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-10352:
--
Description: 
 

Remove the meaningless code in GenerateTableFetch.java; it is confusing when 
reading the code.

!image-2022-08-13-20-13-23-493.png|width=850,height=455!

  was:!image-2022-08-13-20-13-23-493.png|width=258,height=138!


> Remove unused code from GenerateTableFetch.java
> -
>
> Key: NIFI-10352
> URL: https://issues.apache.org/jira/browse/NIFI-10352
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Attachments: image-2022-08-13-20-13-23-493.png
>
>
>  
> Remove the meaningless code in GenerateTableFetch.java; it is confusing when 
> reading the code.
> !image-2022-08-13-20-13-23-493.png|width=850,height=455!





[jira] [Created] (NIFI-10352) Remove unused code from GenerateTableFetch.java

2022-08-13 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-10352:
-

 Summary: Remove unused code from GenerateTableFetch.java
 Key: NIFI-10352
 URL: https://issues.apache.org/jira/browse/NIFI-10352
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: ZhangCheng
Assignee: ZhangCheng
 Attachments: image-2022-08-13-20-13-23-493.png

!image-2022-08-13-20-13-23-493.png|width=258,height=138!





[jira] [Closed] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-20 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-9588.


> Update doc for `nifi.content.repository.archive.max.retention.period`
> -
>
> Key: NIFI-9588
> URL: https://issues.apache.org/jira/browse/NIFI-9588
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.0, 1.12.1, 1.14.0, 1.13.1, 1.13.2, 1.15.0, 1.15.1, 
> 1.15.2, 1.15.3
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Fix For: 1.16.0
>
> Attachments: image-2022-01-19-14-38-43-385.png, 
> image-2022-01-19-14-40-50-311.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The default value of `nifi.content.repository.archive.max.retention.period` 
> has been changed to `7 days`.
>  
>  
>  
> !image-2022-01-19-14-38-43-385.png|width=813,height=204!
> !image-2022-01-19-14-40-50-311.png|width=811,height=211!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-18 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9588:
-
Fix Version/s: 1.16.0
   Status: Patch Available  (was: In Progress)

> Update doc for `nifi.content.repository.archive.max.retention.period`
> -
>
> Key: NIFI-9588
> URL: https://issues.apache.org/jira/browse/NIFI-9588
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.15.3, 1.15.2, 1.15.1, 1.15.0, 1.13.2, 1.13.1, 1.14.0, 
> 1.12.1, 1.13.0
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Fix For: 1.16.0
>
> Attachments: image-2022-01-19-14-38-43-385.png, 
> image-2022-01-19-14-40-50-311.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default value of `nifi.content.repository.archive.max.retention.period` 
> has been changed to `7 days`.
>  
>  
>  
> !image-2022-01-19-14-38-43-385.png|width=813,height=204!
> !image-2022-01-19-14-40-50-311.png|width=811,height=211!





[jira] [Assigned] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-18 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reassigned NIFI-9588:


Assignee: ZhangCheng

> Update doc for `nifi.content.repository.archive.max.retention.period`
> -
>
> Key: NIFI-9588
> URL: https://issues.apache.org/jira/browse/NIFI-9588
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.0, 1.12.1, 1.14.0, 1.13.1, 1.13.2, 1.15.0, 1.15.1, 
> 1.15.2, 1.15.3
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
> Attachments: image-2022-01-19-14-38-43-385.png, 
> image-2022-01-19-14-40-50-311.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default value of `nifi.content.repository.archive.max.retention.period` 
> has been changed to `7 days`.
>  
>  
>  
> !image-2022-01-19-14-38-43-385.png|width=813,height=204!
> !image-2022-01-19-14-40-50-311.png|width=811,height=211!





[jira] [Updated] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-18 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9588:
-
Description: 
The default value of `nifi.content.repository.archive.max.retention.period` has 
been changed to `7 days`.

 

 

 

!image-2022-01-19-14-38-43-385.png|width=813,height=204!

!image-2022-01-19-14-40-50-311.png|width=811,height=211!

  was:
 

 

 

!image-2022-01-19-14-38-43-385.png|width=813,height=204!


> Update doc for `nifi.content.repository.archive.max.retention.period`
> -
>
> Key: NIFI-9588
> URL: https://issues.apache.org/jira/browse/NIFI-9588
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.0, 1.12.1, 1.14.0, 1.13.1, 1.13.2, 1.15.0, 1.15.1, 
> 1.15.2, 1.15.3
>Reporter: ZhangCheng
>Priority: Minor
> Attachments: image-2022-01-19-14-38-43-385.png, 
> image-2022-01-19-14-40-50-311.png
>
>
> The default value of `nifi.content.repository.archive.max.retention.period` 
> has been changed to `7 days`.
>  
>  
>  
> !image-2022-01-19-14-38-43-385.png|width=813,height=204!
> !image-2022-01-19-14-40-50-311.png|width=811,height=211!





[jira] [Updated] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-18 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9588:
-
Attachment: image-2022-01-19-14-40-50-311.png

> Update doc for `nifi.content.repository.archive.max.retention.period`
> -
>
> Key: NIFI-9588
> URL: https://issues.apache.org/jira/browse/NIFI-9588
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.0, 1.12.1, 1.14.0, 1.13.1, 1.13.2, 1.15.0, 1.15.1, 
> 1.15.2, 1.15.3
>Reporter: ZhangCheng
>Priority: Minor
> Attachments: image-2022-01-19-14-38-43-385.png, 
> image-2022-01-19-14-40-50-311.png
>
>
>  
>  
>  
> !image-2022-01-19-14-38-43-385.png|width=813,height=204!





[jira] [Updated] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-18 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9588:
-
Description: 
 

 

 

!image-2022-01-19-14-38-43-385.png|width=813,height=204!

  was: !image-2022-01-19-14-38-43-385.png! 


> Update doc for `nifi.content.repository.archive.max.retention.period`
> -
>
> Key: NIFI-9588
> URL: https://issues.apache.org/jira/browse/NIFI-9588
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.13.0, 1.12.1, 1.14.0, 1.13.1, 1.13.2, 1.15.0, 1.15.1, 
> 1.15.2, 1.15.3
>Reporter: ZhangCheng
>Priority: Minor
> Attachments: image-2022-01-19-14-38-43-385.png
>
>
>  
>  
>  
> !image-2022-01-19-14-38-43-385.png|width=813,height=204!





[jira] [Created] (NIFI-9588) Update doc for `nifi.content.repository.archive.max.retention.period`

2022-01-18 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-9588:


 Summary: Update doc for 
`nifi.content.repository.archive.max.retention.period`
 Key: NIFI-9588
 URL: https://issues.apache.org/jira/browse/NIFI-9588
 Project: Apache NiFi
  Issue Type: Improvement
Affects Versions: 1.15.3, 1.15.2, 1.15.1, 1.15.0, 1.13.2, 1.13.1, 1.14.0, 
1.12.1, 1.13.0
Reporter: ZhangCheng
 Attachments: image-2022-01-19-14-38-43-385.png

 !image-2022-01-19-14-38-43-385.png! 





[jira] [Updated] (NIFI-9169) Improvement For PutDatabaseRecord `Update Keys`, when we set sth like ${update.keys} but there is no 'update.keys' attribute of incoming flowfile

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Summary: Improvement For PutDatabaseRecord `Update Keys`, when we set sth 
like ${update.keys} but there is no 'update.keys' attribute of incoming 
flowfile  (was: Improv For PutDatabaseRecord `Update Keys`, when we set sth 
like ${update.keys} but there is no 'update.keys' attribute of incoming 
flowfile)

> Improvement For PutDatabaseRecord `Update Keys`, when we set sth like 
> ${update.keys} but there is no 'update.keys' attribute of incoming flowfile
> -
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-9169) Improvement For PutDatabaseRecord `Update Keys`, when we set sth like ${update.keys} but there is no 'update.keys' attribute of incoming flowfile

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Issue Type: Improvement  (was: Bug)

> Improvement For PutDatabaseRecord `Update Keys`, when we set sth like 
> ${update.keys} but there is no 'update.keys' attribute of incoming flowfile
> -
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.





[jira] [Updated] (NIFI-9169) Improv For PutDatabaseRecord `Update Keys`, when we set sth like ${update.keys} but there is no 'update.keys' attribute of incoming flowfile

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Summary: Improv For PutDatabaseRecord `Update Keys`, when we set sth like 
${update.keys} but there is no 'update.keys' attribute of incoming flowfile  
(was: Wrong For PutDatabaseRecord `Update Keys`, when we set sth like 
${update.keys} but there is no 'update.keys' attribute of incoming flowfile)

> Improv For PutDatabaseRecord `Update Keys`, when we set sth like 
> ${update.keys} but there is no 'update.keys' attribute of incoming flowfile
> 
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.





[jira] [Updated] (NIFI-9169) Wrong For PutDatabaseRecord `Update Keys`, when we set sth like ${update.keys} but there is no 'update.keys' attribute of incoming flowfile

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Summary: Wrong For PutDatabaseRecord `Update Keys`, when we set sth like 
${update.keys} but there is no 'update.keys' attribute of incoming flowfile  
(was: For PutDatabaseRecord `Update Keys`, if we set sth like ${update.keys} 
but there is no 'update.keys' attribute of incoming flowfile)

> Wrong For PutDatabaseRecord `Update Keys`, when we set sth like 
> ${update.keys} but there is no 'update.keys' attribute of incoming flowfile
> ---
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.





[jira] [Updated] (NIFI-9169) For PutDatabaseRecord `Update Keys`, if we set sth like ${update.keys} but there is no 'update.keys' attribute of incoming flowfile

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Summary: For PutDatabaseRecord `Update Keys`, if we set sth like 
${update.keys} but there is no 'update.keys' attribute of incoming flowfile  
(was: Improvement for PutDatabaseRecord `Update Keys`)

> For PutDatabaseRecord `Update Keys`, if we set sth like ${update.keys} but 
> there is no 'update.keys' attribute of incoming flowfile
> ---
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.





[jira] [Updated] (NIFI-9169) Improvement for PutDatabaseRecord `Update Keys`

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Description: 
In NiFi, if we set an Expression Language expression on a processor property, 
such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
not have a `schema.name` attribute, then the property evaluates to the empty 
string (""), not null.

So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
and some FlowFiles have an `update.keys` attribute while others do not, we 
will get wrong results.





  was:
For NIFI, If we set some EL to one processor, such as `${schema.name}` for 
PutDatabaseREcord, and the incoming flowfile does not have a `schema.name` 
attribute, then the result of PutDatabaseRecord property evalutes is Empty 
String(""), not NULL.

So for 





> Improvement for PutDatabaseRecord `Update Keys`
> ---
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.





[jira] [Updated] (NIFI-9169) Improvement for PutDatabaseRecord `Update Keys`

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Issue Type: Bug  (was: Improvement)

> Improvement for PutDatabaseRecord `Update Keys`
> ---
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for the PutDatabaseRecord `Update Keys` property, if we set ${update.keys} 
> and some FlowFiles have an `update.keys` attribute while others do not, we 
> will get wrong results.





[jira] [Updated] (NIFI-9169) Improvement for PutDatabaseRecord `Update Keys`

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Description: 
In NiFi, if we set an Expression Language expression on a processor property, 
such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
not have a `schema.name` attribute, then the property evaluates to the empty 
string (""), not null.

So for 




  was:
For NIFI, If we set some EL to one processor, such as `${schema.name}` for 
PutDatabaseREcord, and the incoming flowfile does not have a `schema.name` 
attribute, then the result of PutDatabaseRecord property evalutes is Empty 
String(""), not NULL.
{code:java}
if (expressions.size() == 1) {
final String evaluated = 
expressions.get(0).evaluate(evaluationContext, decorator);
return evaluated == null ? EMPTY_STRING : evaluated;
}
{code}

And maybe we don not want the Empty String , for example

{code:java}
// PutDatabaseRecord 
public static TableSchema from(final Connection conn, final String catalog, 
final String schema, final String tableName,
   final boolean translateColumnNames, 
final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
final DatabaseMetaData dmd = conn.getMetaData();

try (final ResultSet colrs = dmd.getColumns(catalog, schema, 
tableName, "%")) {
...
{code}

If an attribute does not exist, return NULL






> Improvement for PutDatabaseRecord `Update Keys`
> ---
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> So for 





[jira] [Updated] (NIFI-9169) Improvement for PutDatabaseRecord `Update Keys`

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Summary: Improvement for PutDatabaseRecord `Update Keys`  (was: Improvement 
for Expression Language when evaluated result is NULL)

> Improvement for PutDatabaseRecord `Update Keys`
> ---
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> {code:java}
> if (expressions.size() == 1) {
>     final String evaluated = expressions.get(0).evaluate(evaluationContext, decorator);
>     return evaluated == null ? EMPTY_STRING : evaluated;
> }
> {code}
> And maybe we do not want the empty string; for example:
> {code:java}
> // PutDatabaseRecord
> public static TableSchema from(final Connection conn, final String catalog,
>         final String schema, final String tableName,
>         final boolean translateColumnNames,
>         final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
>     final DatabaseMetaData dmd = conn.getMetaData();
>     try (final ResultSet colrs = dmd.getColumns(catalog, schema, tableName, "%")) {
>     ...
> {code}
> If an attribute does not exist, return null.



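The empty-string coercion quoted above can be illustrated outside NiFi. This is a minimal, hypothetical sketch, not NiFi code: the class name `ElEmptyStringDemo` and the `evaluate` method are stand-ins for evaluating a single `${attr}` expression against a FlowFile's attribute map.

```java
import java.util.Map;

// Hypothetical stand-in for evaluating a single ${attr} expression against a
// FlowFile's attribute map. Mirrors the quoted NiFi snippet: a null evaluation
// result is coerced to "" rather than returned as null.
public class ElEmptyStringDemo {
    private static final String EMPTY_STRING = "";

    public static String evaluate(Map<String, String> attributes, String name) {
        final String evaluated = attributes.get(name);
        // This coercion is why a caller cannot distinguish "attribute absent"
        // from "attribute set to the empty string".
        return evaluated == null ? EMPTY_STRING : evaluated;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of("update.keys", "id");
        System.out.println(evaluate(attrs, "update.keys")); // prints "id"
        System.out.println(evaluate(attrs, "schema.name")); // prints "" (not null)
    }
}
```

With this behavior, a property set to ${update.keys} silently evaluates to an empty update-key list for FlowFiles lacking the attribute, which is presumably the wrong result the issue reports when only some FlowFiles carry it.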


[jira] [Updated] (NIFI-9169) Improvement for Expression Language when evaluated result is NULL

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Description: 
In NiFi, if we set an Expression Language expression on a processor property, 
such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
not have a `schema.name` attribute, then the property evaluates to the empty 
string (""), not null.
{code:java}
if (expressions.size() == 1) {
    final String evaluated = expressions.get(0).evaluate(evaluationContext, decorator);
    return evaluated == null ? EMPTY_STRING : evaluated;
}
{code}

And maybe we do not want the empty string; for example:

{code:java}
// PutDatabaseRecord
public static TableSchema from(final Connection conn, final String catalog,
        final String schema, final String tableName,
        final boolean translateColumnNames,
        final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
    final DatabaseMetaData dmd = conn.getMetaData();
    try (final ResultSet colrs = dmd.getColumns(catalog, schema, tableName, "%")) {
    ...
{code}

If an attribute does not exist, return null.





  was:
For NIFI, If we set some EL to one processor, such as `${schema.name}` for 
PutDatabaseREcord, and the incoming flowfile does not have a `schema.name` 
attribute, then the result of PutDatabaseRecord property evalutes is Empty 
String(""), not NULL.
{code:java}
if (expressions.size() == 1) {
final String evaluated = 
expressions.get(0).evaluate(evaluationContext, decorator);
return evaluated == null ? EMPTY_STRING : evaluated;
}
{code}

And maybe we don not want the Empty String , for example

{code:java}
// PutDatabaseRecord 
public static TableSchema from(final Connection conn, final String catalog, 
final String schema, final String tableName,
   final boolean translateColumnNames, 
final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
final DatabaseMetaData dmd = conn.getMetaData();

try (final ResultSet colrs = dmd.getColumns(catalog, schema, 
tableName, "%")) {
...
{code}







> Improvement for Expression Language when evaluated result is NULL
> -
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> {code:java}
> if (expressions.size() == 1) {
>     final String evaluated = expressions.get(0).evaluate(evaluationContext, decorator);
>     return evaluated == null ? EMPTY_STRING : evaluated;
> }
> {code}
> And maybe we do not want the empty string; for example:
> {code:java}
> // PutDatabaseRecord
> public static TableSchema from(final Connection conn, final String catalog,
>         final String schema, final String tableName,
>         final boolean translateColumnNames,
>         final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
>     final DatabaseMetaData dmd = conn.getMetaData();
>     try (final ResultSet colrs = dmd.getColumns(catalog, schema, tableName, "%")) {
>     ...
> {code}
> If an attribute does not exist, return null.





[jira] [Updated] (NIFI-9169) Improvement for Expression Language when evaluated result is NULL

2021-08-26 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9169:
-
Description: 
In NiFi, if we set an Expression Language expression on a processor property, 
such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
not have a `schema.name` attribute, then the property evaluates to the empty 
string (""), not null.
{code:java}
if (expressions.size() == 1) {
    final String evaluated = expressions.get(0).evaluate(evaluationContext, decorator);
    return evaluated == null ? EMPTY_STRING : evaluated;
}
{code}

And maybe we do not want the empty string; for example:

{code:java}
// PutDatabaseRecord
public static TableSchema from(final Connection conn, final String catalog,
        final String schema, final String tableName,
        final boolean translateColumnNames,
        final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
    final DatabaseMetaData dmd = conn.getMetaData();
    try (final ResultSet colrs = dmd.getColumns(catalog, schema, tableName, "%")) {
    ...
{code}






> Improvement for Expression Language when evaluated result is NULL
> -
>
> Key: NIFI-9169
> URL: https://issues.apache.org/jira/browse/NIFI-9169
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> In NiFi, if we set an Expression Language expression on a processor property, 
> such as `${schema.name}` for PutDatabaseRecord, and the incoming FlowFile does 
> not have a `schema.name` attribute, then the property evaluates to the empty 
> string (""), not null.
> {code:java}
> if (expressions.size() == 1) {
>     final String evaluated = expressions.get(0).evaluate(evaluationContext, decorator);
>     return evaluated == null ? EMPTY_STRING : evaluated;
> }
> {code}
> And maybe we do not want the empty string; for example:
> {code:java}
> // PutDatabaseRecord
> public static TableSchema from(final Connection conn, final String catalog,
>         final String schema, final String tableName,
>         final boolean translateColumnNames,
>         final boolean includePrimaryKeys, ComponentLog log) throws SQLException {
>     final DatabaseMetaData dmd = conn.getMetaData();
>     try (final ResultSet colrs = dmd.getColumns(catalog, schema, tableName, "%")) {
>     ...
> {code}





[jira] [Created] (NIFI-9169) Improvement for Expression Language when evaluated result is NULL

2021-08-26 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-9169:


 Summary: Improvement for Expression Language when evaluated result 
is NULL
 Key: NIFI-9169
 URL: https://issues.apache.org/jira/browse/NIFI-9169
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: ZhangCheng
Assignee: ZhangCheng








[jira] [Updated] (NIFI-9078) Date value wrong when `Use Avro Logical Types` is true

2021-08-24 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9078:
-
Description: 
As the doc says, when `Use Avro Logical Types` is true, `DATE as logical 
'date-millis': written as int denoting days since Unix epoch (1970-01-01),`. 
But when the record writer is JsonRecordSetWriter, DATE is written as an int 
denoting millis since the Unix epoch.

Affected: ExecuteSQLRecord, QueryDatabaseTableRecord, JsonRecordSetWriter

  was:As doc says, when `DATE as logical 'date-millis': written as int denoting 
days since Unix epoch (1970-01-01),`


> Date value wrong when `Use Avro Logical Types` is true
> -
>
> Key: NIFI-9078
> URL: https://issues.apache.org/jira/browse/NIFI-9078
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>
> As the doc says, when `Use Avro Logical Types` is true, `DATE as logical
> 'date-millis': written as int denoting days since Unix epoch (1970-01-01)`.
> But when the record writer is JsonRecordSetWriter, DATE is written as an int
> denoting milliseconds since the Unix epoch.
> ExecuteSQLRecord, QueryDatabaseTableRecord, JsonRecordSetWriter
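The gap between the two encodings can be illustrated with plain `java.time`; this is a sketch of the spec'd versus observed behavior, not the writer code itself:

```java
import java.time.LocalDate;

public class AvroDateDemo {
    // Avro's 'date' logical type: an int counting days since 1970-01-01,
    // which is what the documentation quoted above promises.
    static long daysSinceEpoch(LocalDate d) {
        return d.toEpochDay();
    }

    // What the bug report observes instead: milliseconds since the epoch
    // (computed here at UTC midnight for illustration).
    static long millisSinceEpochUtc(LocalDate d) {
        return d.toEpochDay() * 86_400_000L;
    }

    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2021, 8, 24);
        System.out.println("days   = " + daysSinceEpoch(d));
        System.out.println("millis = " + millisSinceEpochUtc(d));
    }
}
```

A downstream Avro consumer that interprets the millisecond value as a day count would land tens of thousands of years in the future, which is how the mismatch surfaces.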



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-9078) Date value wrong when `Use Avro Logical Types` is true for JsonRecordSetWriter

2021-08-24 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9078:
-
Summary: Date value wrong when `Use Avro Logical Types` is true for 
JsonRecordSetWriter  (was: Date value wrong when `Use Avro Logical Types` is 
true)

> Date value wrong when `Use Avro Logical Types` is true for JsonRecordSetWriter
> -
>
> Key: NIFI-9078
> URL: https://issues.apache.org/jira/browse/NIFI-9078
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>
> As the doc says, when `Use Avro Logical Types` is true, `DATE as logical
> 'date-millis': written as int denoting days since Unix epoch (1970-01-01)`.
> But when the record writer is JsonRecordSetWriter, DATE is written as an int
> denoting milliseconds since the Unix epoch.
> ExecuteSQLRecord, QueryDatabaseTableRecord, JsonRecordSetWriter



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-9078) Date value wrong when `Use Avro Logical Types` is true

2021-08-24 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-9078:
-
Description: As doc says, when `DATE as logical 'date-millis': written as 
int denoting days since Unix epoch (1970-01-01),`

> Date value wrong when `Use Avro Logical Types` is true
> -
>
> Key: NIFI-9078
> URL: https://issues.apache.org/jira/browse/NIFI-9078
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>
> As doc says, when `DATE as logical 'date-millis': written as int denoting 
> days since Unix epoch (1970-01-01),`



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-9078) Date value wrong when `Use Avro Logical Types` is true

2021-08-24 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-9078:


 Summary: Date value wrong when `Use Avro Logical Types` is true
 Key: NIFI-9078
 URL: https://issues.apache.org/jira/browse/NIFI-9078
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: ZhangCheng






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-9064) ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is true

2021-08-19 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng resolved NIFI-9064.
--
Resolution: Fixed

> ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is 
> true 
> 
>
> Key: NIFI-9064
> URL: https://issues.apache.org/jira/browse/NIFI-9064
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.14.0
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
> Fix For: 1.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When the source db is Oracle, and the table has a `timestamp` column, using
> `ExecuteSQLRecord` (the same as `QueryDatabaseTableRecord`) with `Use Avro
> Logical Types` set to true, we will get something like this:
> ```
> Caused by: java.io.IOException: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [2021-08-19 10:58:50.01] of type class 
> oracle.sql.TIMESTAMP to Timestamp for field TS
>  at 
> org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:88)
>  at 
> org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:302)
>  ... 14 common frames omitted
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-9064) ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is true

2021-08-19 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401647#comment-17401647
 ] 

ZhangCheng edited comment on NIFI-9064 at 8/19/21, 1:13 PM:


Maybe I can fix this by making some change at DataTypeUtils.toTimestamp, something like
```
if ("oracle.sql.TIMESTAMP".equals(value.getClass().getName())) {
..
}
```


was (Author: ku_cheng):
Maybe I can fix this by making some change at DataTypeUtils.toTimestamp, something like
```
if ("oracle.sql.TIMESTAMP".equals(value.getClass().getName())) {
    String orcValue = value.toString();
    String orcFormat = "yyyy-MM-dd HH:mm:ss.SSS";
    DateFormat dateFormat = getDateFormat(orcFormat);
    if (orcValue.length() > orcFormat.length()) {
        orcValue = orcValue.substring(0, orcValue.indexOf(".") + 4);
    }
    if (orcValue.length() < orcFormat.length()) {
        int supplement = orcFormat.length() - orcValue.length();
        while (supplement > 0) {
            orcValue += '0';
            supplement--;
        }
    }
    try {
        final java.util.Date utilDate = dateFormat.parse(orcValue);
        return new Timestamp(utilDate.getTime());
    } catch (ParseException e) {
        throw new IllegalTypeConversionException("Could not convert value [" + value
                + "] of Oracle DB to Timestamp for field " + fieldName
                + " because the value is not in the expected date format: " + orcFormat);
    }
}
```

> ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is 
> true 
> 
>
> Key: NIFI-9064
> URL: https://issues.apache.org/jira/browse/NIFI-9064
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.14.0
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
> Fix For: 1.15.0
>
>
> When the source db is Oracle, and the table has a `timestamp` column, using
> `ExecuteSQLRecord` (the same as `QueryDatabaseTableRecord`) with `Use Avro
> Logical Types` set to true, we will get something like this:
> ```
> Caused by: java.io.IOException: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [2021-08-19 10:58:50.01] of type class 
> oracle.sql.TIMESTAMP to Timestamp for field TS
>  at 
> org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:88)
>  at 
> org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:302)
>  ... 14 common frames omitted
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-9064) ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is true

2021-08-19 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401647#comment-17401647
 ] 

ZhangCheng commented on NIFI-9064:
--

Maybe I can fix this by making some change at DataTypeUtils.toTimestamp, something like
```
if ("oracle.sql.TIMESTAMP".equals(value.getClass().getName())) {
    String orcValue = value.toString();
    String orcFormat = "yyyy-MM-dd HH:mm:ss.SSS";
    DateFormat dateFormat = getDateFormat(orcFormat);
    if (orcValue.length() > orcFormat.length()) {
        orcValue = orcValue.substring(0, orcValue.indexOf(".") + 4);
    }
    if (orcValue.length() < orcFormat.length()) {
        int supplement = orcFormat.length() - orcValue.length();
        while (supplement > 0) {
            orcValue += '0';
            supplement--;
        }
    }
    try {
        final java.util.Date utilDate = dateFormat.parse(orcValue);
        return new Timestamp(utilDate.getTime());
    } catch (ParseException e) {
        throw new IllegalTypeConversionException("Could not convert value [" + value
                + "] of Oracle DB to Timestamp for field " + fieldName
                + " because the value is not in the expected date format: " + orcFormat);
    }
}
```
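The truncate-or-pad normalization in the snippet above can be exercised in isolation. This is a standalone sketch, not NiFi code; `normalize` assumes the `yyyy-MM-dd HH:mm:ss.SSS` pattern from the proposal and a value that always contains a `.` fraction:

```java
import java.sql.Timestamp;
import java.text.DateFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class OracleTimestampDemo {
    // Bring an oracle.sql.TIMESTAMP string rendering to exactly millisecond
    // precision: longer fractions are truncated to 3 digits, shorter ones
    // are right-padded with '0' until the string matches the pattern length.
    static String normalize(String value) {
        final String format = "yyyy-MM-dd HH:mm:ss.SSS";
        if (value.length() > format.length()) {
            value = value.substring(0, value.indexOf('.') + 4);
        }
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < format.length()) {
            sb.append('0');
        }
        return sb.toString();
    }

    // Parse the normalized string into a java.sql.Timestamp.
    static Timestamp toTimestamp(String value) throws ParseException {
        DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        return new Timestamp(df.parse(normalize(value)).getTime());
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(normalize("2021-08-19 10:58:50.01"));        // padded
        System.out.println(normalize("2021-08-19 10:58:50.123456789")); // truncated
        System.out.println(toTimestamp("2021-08-19 10:58:50.01"));
    }
}
```

Note that the padding preserves the value (".01" seconds and ".010" are the same 10 ms), while truncation drops sub-millisecond precision, which matches the accuracy trade-off discussed later in this thread.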

> ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is 
> true 
> 
>
> Key: NIFI-9064
> URL: https://issues.apache.org/jira/browse/NIFI-9064
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.14.0
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
> Fix For: 1.15.0
>
>
> When the source db is Oracle, and the table has a `timestamp` column, using
> `ExecuteSQLRecord` (the same as `QueryDatabaseTableRecord`) with `Use Avro
> Logical Types` set to true, we will get something like this:
> ```
> Caused by: java.io.IOException: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [2021-08-19 10:58:50.01] of type class 
> oracle.sql.TIMESTAMP to Timestamp for field TS
>  at 
> org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:88)
>  at 
> org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:302)
>  ... 14 common frames omitted
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-9064) ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is true

2021-08-19 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17401646#comment-17401646
 ] 

ZhangCheng commented on NIFI-9064:
--

First, if we fix this issue the way `ExecuteSQL` does (`DefaultAvroSqlWriter` ->
`convertToAvroStream(...)` -> `rs.getTimestamp(..)`), we can add some code at
`ResultSetRecordSet.createRecord(final ResultSet rs)` along those lines.

But if we do so, we will always lose accuracy for Oracle Timestamp whether
`Use Avro Logical Types` is set to true or not. That doesn't seem appropriate.

> ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is 
> true 
> 
>
> Key: NIFI-9064
> URL: https://issues.apache.org/jira/browse/NIFI-9064
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.14.0
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
> Fix For: 1.15.0
>
>
> When the source db is Oracle, and the table has a `timestamp` column, using
> `ExecuteSQLRecord` (the same as `QueryDatabaseTableRecord`) with `Use Avro
> Logical Types` set to true, we will get something like this:
> ```
> Caused by: java.io.IOException: 
> org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
> Cannot convert value [2021-08-19 10:58:50.01] of type class 
> oracle.sql.TIMESTAMP to Timestamp for field TS
>  at 
> org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:88)
>  at 
> org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:302)
>  ... 14 common frames omitted
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-9064) ExecuteSQLRecord support Oracle timestamp when `Use Avro Logical Types` is true

2021-08-19 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-9064:


 Summary: ExecuteSQLRecord support Oracle timestamp when `Use Avro 
Logical Types` is true 
 Key: NIFI-9064
 URL: https://issues.apache.org/jira/browse/NIFI-9064
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.14.0
Reporter: ZhangCheng
Assignee: ZhangCheng
 Fix For: 1.15.0


When the source db is Oracle, and the table has a `timestamp` column, using
`ExecuteSQLRecord` (the same as `QueryDatabaseTableRecord`) with `Use Avro
Logical Types` set to true, we will get something like this:

```

Caused by: java.io.IOException: 
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: 
Cannot convert value [2021-08-19 10:58:50.01] of type class 
oracle.sql.TIMESTAMP to Timestamp for field TS
 at 
org.apache.nifi.processors.standard.sql.RecordSqlWriter.writeResultSet(RecordSqlWriter.java:88)
 at 
org.apache.nifi.processors.standard.AbstractExecuteSQL.lambda$onTrigger$1(AbstractExecuteSQL.java:302)
 ... 14 common frames omitted

```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8101) Improvement PutDatabaseRecord for refresh table schema cache

2021-01-08 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17261747#comment-17261747
 ] 

ZhangCheng commented on NIFI-8101:
--

Hi [~mattyb149], happy new year!

Firstly, it would be better to create a new Processor for creating (altering,
dropping) tables. That way the Processor's function is single and independent,
and we can design the flow more flexibly. (And if we want to develop new
Processors to synchronize the table structure, I think some of the ideas in
Kettle are very useful.)

Secondly, I think the 'PutDatabaseRecord' function should focus on the data,
and strive to ensure that the correct data is written to the target table, with
no missing data and no dirty data. We should follow the table structure of the
target table, instead of modifying the target table according to the data
structure. Modifying the target table based on the structure of the data is
useful sometimes, but the data content in the FlowFile should be treated as
indeterminate (including the structure of the data).

Even if we provide the ability to synchronize the table structure in our flow,
I think that modifying the target table should happen before writing data to
the target table, not while writing data with 'PutDatabaseRecord' or after some
error occurs. I always think that modifying the structure of the target table
is a very serious thing, and we should do it explicitly and visibly when we
need to.

Additionally, 'Unmatched Field Behavior' and 'Unmatched Column Behavior' are
very useful (I really like this design). There are always situations where the
incoming data has more or fewer columns than the target table, and these
columns are likely to be the ones the designer wants to ignore. NIFI-8101 is
just an enhancement and supplement of 'Unmatched Field Behavior' and
'Unmatched Column Behavior'. If users already know how to use PutDatabaseRecord
and understand 'Unmatched Field Behavior' and 'Unmatched Column Behavior', then
I believe they will easily accept the 'Refresh Cached Schema' (Refresh
Unmatched Fields/Refresh Unmatched Columns...).

Those are my thoughts; what do you think?

 

> Improvement  PutDatabaseRecord for refresh table schema cache
> -
>
> Key: NIFI-8101
> URL: https://issues.apache.org/jira/browse/NIFI-8101
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> * Sometimes the target table has changed and `PutDatabaseRecord` has cached
> outdated table schema information. Maybe we need a new property to tell
> `PutDatabaseRecord` to refresh the table schema cache under certain
> conditions.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8101) Improvement PutDatabaseRecord for refresh table schema cache

2020-12-17 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-8101:
-
Status: Patch Available  (was: In Progress)

> Improvement  PutDatabaseRecord for refresh table schema cache
> -
>
> Key: NIFI-8101
> URL: https://issues.apache.org/jira/browse/NIFI-8101
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Sometimes the target table has changed and `PutDatabaseRecord` has cached
> outdated table schema information. Maybe we need a new property to tell
> `PutDatabaseRecord` to refresh the table schema cache under certain
> conditions.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8101) Improvement PutDatabaseRecord for refresh table schema cache

2020-12-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-8101:
-
Description: 
* Sometimes the target table has changed and `PutDatabaseRecord` has cached
outdated table schema information. Maybe we need a new property to tell
`PutDatabaseRecord` to refresh the table schema cache under certain conditions.

 
 

  was:Sometimes,  the target table has changed and 


> Improvement  PutDatabaseRecord for refresh table schema cache
> -
>
> Key: NIFI-8101
> URL: https://issues.apache.org/jira/browse/NIFI-8101
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> * Sometimes the target table has changed and `PutDatabaseRecord` has cached
> outdated table schema information. Maybe we need a new property to tell
> `PutDatabaseRecord` to refresh the table schema cache under certain
> conditions.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8101) Improvement PutDatabaseRecord for refresh table schema cache

2020-12-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-8101:
-
Description: Sometimes,  the target table has changed and 

> Improvement  PutDatabaseRecord for refresh table schema cache
> -
>
> Key: NIFI-8101
> URL: https://issues.apache.org/jira/browse/NIFI-8101
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> Sometimes,  the target table has changed and 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8101) Improvement PutDatabaseRecord for refresh table schema cache

2020-12-16 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-8101:


 Summary: Improvement  PutDatabaseRecord for refresh table schema 
cache
 Key: NIFI-8101
 URL: https://issues.apache.org/jira/browse/NIFI-8101
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: ZhangCheng
Assignee: ZhangCheng






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8032) fix RecordPath Guide doc

2020-11-20 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-8032:
-
Status: Patch Available  (was: In Progress)

> fix RecordPath Guide doc
> 
>
> Key: NIFI-8032
> URL: https://issues.apache.org/jira/browse/NIFI-8032
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
> Attachments: image-2020-11-20-15-01-25-617.png, 
> image-2020-11-20-15-02-02-666.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> !image-2020-11-20-15-01-25-617.png!
>  
> !image-2020-11-20-15-02-02-666.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8032) fix RecordPath Guide doc

2020-11-19 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-8032:


 Summary: fix RecordPath Guide doc
 Key: NIFI-8032
 URL: https://issues.apache.org/jira/browse/NIFI-8032
 Project: Apache NiFi
  Issue Type: Bug
Reporter: ZhangCheng
Assignee: ZhangCheng
 Attachments: image-2020-11-20-15-01-25-617.png, 
image-2020-11-20-15-02-02-666.png

!image-2020-11-20-15-01-25-617.png!

 

!image-2020-11-20-15-02-02-666.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6061) PutDatabaseRecord does not properly handle BLOB/CLOB fields

2020-09-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6061:
-
Affects Version/s: 1.11.4
   Status: Patch Available  (was: In Progress)

> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> ---
>
> Key: NIFI-6061
> URL: https://issues.apache.org/jira/browse/NIFI-6061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: Matt Burgess
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=155069058-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7810) Improvement for 'Translate Field Names' for PutDatabaseRecord and

2020-09-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7810:
-
Summary: Improvement for 'Translate Field Names' for PutDatabaseRecord and  
 (was: Improvement for 'Translate Field Names')

> Improvement for 'Translate Field Names' for PutDatabaseRecord and 
> --
>
> Key: NIFI-7810
> URL: https://issues.apache.org/jira/browse/NIFI-7810
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help us
> map the field and column by
> {code:java}
> private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
>     return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
> }
> {code}
> but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think
> maybe we can define the rule using Expression Language.
> It's very useful for 'Translate Field Names', but sometimes there will be
> column names such as 'AB' and 'A_B' in the table, and
> `colName.toUpperCase().replace("_", "")` cannot help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7810) Improvement for 'Translate Field Names' for PutDatabaseRecord and ConvertJSONToSQL

2020-09-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7810:
-
Summary: Improvement for 'Translate Field Names' for PutDatabaseRecord and 
ConvertJSONToSQL  (was: Improvement for 'Translate Field Names' for 
PutDatabaseRecord and )

> Improvement for 'Translate Field Names' for PutDatabaseRecord and 
> ConvertJSONToSQL
> --
>
> Key: NIFI-7810
> URL: https://issues.apache.org/jira/browse/NIFI-7810
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help us
> map the field and column by
> {code:java}
> private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
>     return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
> }
> {code}
> but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think
> maybe we can define the rule using Expression Language.
> It's very useful for 'Translate Field Names', but sometimes there will be
> column names such as 'AB' and 'A_B' in the table, and
> `colName.toUpperCase().replace("_", "")` cannot help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7810) Improvement for 'Translate Field Names'

2020-09-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7810:
-
Description: 
'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help us
map the field and column by
{code:java}
private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
    return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
}
{code}
but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think maybe
we can define the rule using Expression Language.

It's very useful for 'Translate Field Names', but sometimes there will be
column names such as 'AB' and 'A_B' in the table, and
`colName.toUpperCase().replace("_", "")` cannot help.
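The collision mentioned above is easy to demonstrate; the sketch below copies the fixed rule quoted from the processors and shows two distinct column names mapping to the same key:

```java
public class ColumnNameDemo {
    // The fixed rule quoted above: uppercase the name and strip underscores.
    static String normalizeColumnName(String colName, boolean translateColumnNames) {
        return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
    }

    public static void main(String[] args) {
        // Both 'AB' and 'A_B' normalize to "AB", so a table containing both
        // columns cannot be mapped unambiguously under this rule.
        System.out.println(normalizeColumnName("AB", true));
        System.out.println(normalizeColumnName("A_B", true));
    }
}
```

An Expression Language-driven rule, as proposed, would let the flow designer pick a normalization that keeps such names distinct.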

  was:
'Translate Field Names'  in PutDatabaseRecord  and ConvertJSONToSQL can help us 
map the field and column by
{code:java}
private static String normalizeColumnName(final String colName, final boolean 
translateColumnNames) {
return translateColumnNames ? colName.toUpperCase().replace("_", "") : 
colName;
 }
{code}
but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think maybe 
we can define the rule using Expression Language.



> Improvement for 'Translate Field Names'
> ---
>
> Key: NIFI-7810
> URL: https://issues.apache.org/jira/browse/NIFI-7810
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help us
> map the field and column by
> {code:java}
> private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
>     return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
> }
> {code}
> but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think
> maybe we can define the rule using Expression Language.
> It's very useful for 'Translate Field Names', but sometimes there will be
> column names such as 'AB' and 'A_B' in the table, and
> `colName.toUpperCase().replace("_", "")` cannot help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6061) PutDatabaseRecord does not properly handle BLOB/CLOB fields

2020-09-24 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201419#comment-17201419
 ] 

ZhangCheng commented on NIFI-6061:
--

[~mattyb149] And please review this as quickly as possible ;). Actually, I have
other code waiting to be PR'd that relies on the code in this part :P.

> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> ---
>
> Key: NIFI-6061
> URL: https://issues.apache.org/jira/browse/NIFI-6061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=155069058-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6061) PutDatabaseRecord does not properly handle BLOB/CLOB fields

2020-09-24 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201413#comment-17201413
 ] 

ZhangCheng commented on NIFI-6061:
--

[~mattyb149] I didn't find problems with CLOB, and I found that the record
field type has been converted to Array[Number], not Array[Byte], so I only
fixed the BLOB problem in the new class AbstractDatabaseAdapter.java. If there
are other problems with record and SQL types, we can add some code in the
switch-case below:

{code:java}
void psSetValue(PreparedStatement ps, int index, Object value, int sqlType, int recordSqlType) throws SQLException, IOException {
    if (null == value) {
        ps.setNull(index, sqlType);
    } else {
        switch (sqlType) {
            case Types.BLOB:
                // resolve BLOB type for record
                if (Types.ARRAY == recordSqlType) {
                    Object[] objects = (Object[]) value;
                    byte[] byteArray = new byte[objects.length];
                    for (int k = 0; k < objects.length; k++) {
                        Object o = objects[k];
                        if (o instanceof Number) {
                            byteArray[k] = ((Number) o).byteValue();
                        }
                    }
                    try (InputStream inputStream = new ByteArrayInputStream(byteArray)) {
                        ps.setBlob(index, inputStream);
                    } catch (IOException e) {
                        throw new IOException("Unable to parse binary data " + value.toString(), e.getCause());
                    }
                } else {
                    try (InputStream inputStream = new ByteArrayInputStream(value.toString().getBytes())) {
                        ps.setBlob(index, inputStream);
                    } catch (IOException e) {
                        throw new IOException("Unable to parse binary data " + value.toString(), e.getCause());
                    }
                }
                break;
            // add other Types here to resolve data
            default:
                ps.setObject(index, value, sqlType);
        }
    }
}
{code}
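The Object[]-of-Numbers to byte[] narrowing at the heart of that switch-case can be tried on its own; this is an illustrative sketch with a hypothetical class name, not the patch itself:

```java
public class BlobBytesDemo {
    // Records hand PutDatabaseRecord a BLOB as Object[] of Numbers; before
    // calling PreparedStatement.setBlob, each element must be narrowed to a
    // byte. Non-Number elements are left as 0, mirroring the patch above.
    static byte[] toByteArray(Object[] objects) {
        byte[] bytes = new byte[objects.length];
        for (int i = 0; i < objects.length; i++) {
            if (objects[i] instanceof Number) {
                bytes[i] = ((Number) objects[i]).byteValue();
            }
        }
        return bytes;
    }

    public static void main(String[] args) {
        // 72 and 105 are the ASCII codes for 'H' and 'i'.
        byte[] b = toByteArray(new Object[] {72, 105});
        System.out.println(new String(b));
    }
}
```

Note that `Number.byteValue()` silently truncates values outside the byte range, which is acceptable here because record BLOB elements originate from actual bytes.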


> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> ---
>
> Key: NIFI-6061
> URL: https://issues.apache.org/jira/browse/NIFI-6061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=155069058-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (NIFI-6061) PutDatabaseRecord does not properly handle BLOB/CLOB fields

2020-09-24 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reassigned NIFI-6061:


Assignee: ZhangCheng

> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> ---
>
> Key: NIFI-6061
> URL: https://issues.apache.org/jira/browse/NIFI-6061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Assignee: ZhangCheng
>Priority: Major
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=155069058-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.





[jira] [Issue Comment Deleted] (NIFI-6061) PutDatabaseRecord does not properly handle BLOB/CLOB fields

2020-09-24 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6061:
-
Comment: was deleted

(was: For Oracle TIMESTAMP fields, when PutDatabaseRecord tries to insert one via 
setObject(), it throws exception ORA-01843: not a valid month.

I think this exception will be solved after fixing this bug.)

> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> ---
>
> Key: NIFI-6061
> URL: https://issues.apache.org/jira/browse/NIFI-6061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Priority: Major
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=155069058-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.





[jira] [Created] (NIFI-7824) GenerateTableFetch Improvement

2020-09-20 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7824:


 Summary: GenerateTableFetch Improvement
 Key: NIFI-7824
 URL: https://issues.apache.org/jira/browse/NIFI-7824
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: ZhangCheng
Assignee: ZhangCheng


‘Custom ORDER BY Column’ should have a higher priority than 'Maximum-value 
Columns' when generating query SQL with 'ORDER BY'.
For example: we usually use a timestamp column as the 'Maximum-value Columns', 
and many timestamp values are often the same. In this case, querying data 
often returns duplicate data because of 'ORDER BY timestamp'. So if we use the 
timestamp as 'Maximum-value Columns' and a unique column as ‘Custom ORDER BY 
Column’, and 'ORDER BY' that unique column, we would not query duplicate 
data.
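The priority rule described above can be sketched roughly as follows (names and signature are hypothetical, not GenerateTableFetch's actual API): the unique custom ORDER BY column, when configured, wins over the max-value column.

```java
public class OrderByClause {
    // Hypothetical sketch: prefer a user-supplied unique 'Custom ORDER BY Column'
    // over the 'Maximum-value Columns' column when building the paging query,
    // so ties on a timestamp column cannot shuffle rows between pages.
    public static String orderByClause(String customOrderByColumn, String maxValueColumn) {
        if (customOrderByColumn != null && !customOrderByColumn.trim().isEmpty()) {
            return " ORDER BY " + customOrderByColumn;
        }
        return " ORDER BY " + maxValueColumn;
    }
}
```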





[jira] [Updated] (NIFI-7810) Improvement for 'Translate Field Names'

2020-09-15 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7810:
-
Status: Patch Available  (was: In Progress)

> Improvement for 'Translate Field Names'
> ---
>
> Key: NIFI-7810
> URL: https://issues.apache.org/jira/browse/NIFI-7810
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help 
> us map fields to columns via
> {code:java}
> private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
>     return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
> }
> {code}
> but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think 
> we could instead define the rule using Expression Language.
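A minimal sketch of the idea (signature hypothetical, not NiFi's actual API): make the normalization rule pluggable, e.g. built from an Expression Language property, instead of hard-coding it.

```java
import java.util.function.UnaryOperator;

public class ColumnNameRules {
    // Hypothetical sketch: the caller supplies the normalization rule
    // instead of the fixed colName.toUpperCase().replace("_", "");
    // a null rule means "leave the column name untouched".
    public static String normalizeColumnName(final String colName, final UnaryOperator<String> rule) {
        return rule == null ? colName : rule.apply(colName);
    }
}
```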





[jira] [Updated] (NIFI-7810) Improvement for 'Translate Field Names'

2020-09-15 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7810:
-
Description: 
'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help us 
map fields to columns via
{code:java}
private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
    return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
}
{code}
but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think we 
could instead define the rule using Expression Language.


  was:PutDatabaseRecord 


> Improvement for 'Translate Field Names'
> ---
>
> Key: NIFI-7810
> URL: https://issues.apache.org/jira/browse/NIFI-7810
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> 'Translate Field Names' in PutDatabaseRecord and ConvertJSONToSQL can help 
> us map fields to columns via
> {code:java}
> private static String normalizeColumnName(final String colName, final boolean translateColumnNames) {
>     return translateColumnNames ? colName.toUpperCase().replace("_", "") : colName;
> }
> {code}
> but this rule `colName.toUpperCase().replace("_", "")` is fixed. I think 
> we could instead define the rule using Expression Language.





[jira] [Updated] (NIFI-7810) Improvement for 'Translate Field Names'

2020-09-15 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7810:
-
Description: PutDatabaseRecord 

> Improvement for 'Translate Field Names'
> ---
>
> Key: NIFI-7810
> URL: https://issues.apache.org/jira/browse/NIFI-7810
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> PutDatabaseRecord 





[jira] [Created] (NIFI-7810) Improvement for 'Translate Field Names'

2020-09-15 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7810:


 Summary: Improvement for 'Translate Field Names'
 Key: NIFI-7810
 URL: https://issues.apache.org/jira/browse/NIFI-7810
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: ZhangCheng
Assignee: ZhangCheng








[jira] [Commented] (NIFI-6061) PutDatabaseRecord does not properly handle BLOB/CLOB fields

2020-07-15 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17157981#comment-17157981
 ] 

ZhangCheng commented on NIFI-6061:
--

For Oracle TIMESTAMP fields, when PutDatabaseRecord tries to insert one via 
setObject(), it throws exception ORA-01843: not a valid month.

I think this exception will be solved after fixing this bug.

> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> ---
>
> Key: NIFI-6061
> URL: https://issues.apache.org/jira/browse/NIFI-6061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Matt Burgess
>Priority: Major
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=155069058-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.





[jira] [Closed] (NIFI-7606) Provide "NIFI_HOME()" function in Expression Language

2020-07-08 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-7606.


Abandoned

> Provide "NIFI_HOME()" function in Expression Language
> -
>
> Key: NIFI-7606
> URL: https://issues.apache.org/jira/browse/NIFI-7606
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver jars. I 
> find it useful to provide a function that returns the path where NiFi is 
> installed, which we would use like `$(NIFI_HOME):append('/jdbc/ojdbc8.jar'))` in 
> a `DbcpConnectionPoll` property. I think it would be useful for other 
> scenarios that need to indicate a file path.





[jira] [Resolved] (NIFI-7606) Provide "NIFI_HOME()" function in Expression Language

2020-07-08 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng resolved NIFI-7606.
--
Resolution: Abandoned

It's not necessary; ${NIFI_HOME} is usable as-is.

> Provide "NIFI_HOME()" function in Expression Language
> -
>
> Key: NIFI-7606
> URL: https://issues.apache.org/jira/browse/NIFI-7606
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver jars. I 
> find it useful to provide a function that returns the path where NiFi is 
> installed, which we would use like `$(NIFI_HOME):append('/jdbc/ojdbc8.jar'))` in 
> a `DbcpConnectionPoll` property. I think it would be useful for other 
> scenarios that need to indicate a file path.





[jira] [Updated] (NIFI-7606) Provide "NIFI_HOME()" function in Expression Language

2020-07-08 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7606:
-
Description: Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver 
jars. I find it useful to provide a function that returns the path where NiFi 
is installed, which we would use like 
`$(NIFI_HOME):append('/jdbc/ojdbc8.jar'))` in a `DbcpConnectionPoll` property. I 
think it would be useful for other scenarios that need to indicate a file path.  (was: 
Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver jars. I find 
it useful to provide a function that returns the path where NiFi is installed, 
which we would use like `$(NIFI_HOME().append('/jdbc/ojdbc8.jar'))` in 
a `DbcpConnectionPoll` property. I think it would be useful for other scenarios 
that need to indicate a file path.)

> Provide "NIFI_HOME()" function in Expression Language
> -
>
> Key: NIFI-7606
> URL: https://issues.apache.org/jira/browse/NIFI-7606
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver jars. I 
> find it useful to provide a function that returns the path where NiFi is 
> installed, which we would use like `$(NIFI_HOME):append('/jdbc/ojdbc8.jar'))` in 
> a `DbcpConnectionPoll` property. I think it would be useful for other 
> scenarios that need to indicate a file path.





[jira] [Updated] (NIFI-7606) Provide "NIFI_HOME()" function in Expression Language

2020-07-07 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7606:
-
Description: Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver 
jars. I find it useful to provide a function that returns the path where NiFi 
is installed, which we would use like 
`$(NIFI_HOME().append('/jdbc/ojdbc8.jar'))` in a `DbcpConnectionPoll` property. I 
think it would be useful for other scenarios that need to indicate a file path.

> Provide "NIFI_HOME()" function in Expression Language
> -
>
> Key: NIFI-7606
> URL: https://issues.apache.org/jira/browse/NIFI-7606
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>
> Usually, I make a folder '$NIFI_HOME/jdbc' to hold some driver jars. I 
> find it useful to provide a function that returns the path where NiFi is 
> installed, which we would use like `$(NIFI_HOME().append('/jdbc/ojdbc8.jar'))` in 
> a `DbcpConnectionPoll` property. I think it would be useful for other 
> scenarios that need to indicate a file path.





[jira] [Created] (NIFI-7606) Provide "NIFI_HOME()" function in Expression Language

2020-07-07 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7606:


 Summary: Provide "NIFI_HOME()" function in Expression Language
 Key: NIFI-7606
 URL: https://issues.apache.org/jira/browse/NIFI-7606
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: ZhangCheng
Assignee: ZhangCheng








[jira] [Updated] (NIFI-7484) fix ListFTP and FetchFTP doc

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7484:
-
Status: Patch Available  (was: In Progress)

> fix ListFTP and FetchFTP doc
> 
>
> Key: NIFI-7484
> URL: https://issues.apache.org/jira/browse/NIFI-7484
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ListFTP's doc, the WritesAttribute descriptions for "filename" and "path" 
> use 'SFTP'; change it to 'FTP'.
> FetchFTP's CapabilityDescription also uses 'SFTP'.





[jira] [Assigned] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reassigned NIFI-7403:


Assignee: ZhangCheng

> Put.java improvement(PutSQL's transactions support)
> ---
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The PutSQL processor supports , and if we set this 
> property to true, I think it means the PutSQL processor will execute the 
> SQLs of one transaction transactionally.
> But we find that when we set  false, the SQLs of one 
> transaction do not execute transactionally; some succeed and some fail. I 
> think that's wrong.
> I think, if we set  true, it should be executed 
> transactionally, no matter whether  is true or false.





[jira] [Updated] (NIFI-7483) fix TailFile doc, remove description about 'Rolling strategy'

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7483:
-
Status: Patch Available  (was: In Progress)

> fix TailFile doc, remove description about 'Rolling strategy'
> -
>
> Key: NIFI-7483
> URL: https://issues.apache.org/jira/browse/NIFI-7483
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 'Rolling Strategy' has already been removed, but TailFile's doc still mentions 
> it: the "Lookup frequency" and "Maximum age" property descriptions and 
> additionalDetails.html refer to the 'Rolling Strategy'. We should remove that.





[jira] [Assigned] (NIFI-7483) fix TailFile doc, remove description about 'Rolling strategy'

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reassigned NIFI-7483:


Assignee: ZhangCheng

> fix TailFile doc, remove description about 'Rolling strategy'
> -
>
> Key: NIFI-7483
> URL: https://issues.apache.org/jira/browse/NIFI-7483
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 'Rolling Strategy' has already been removed, but TailFile's doc still mentions 
> it: the "Lookup frequency" and "Maximum age" property descriptions and 
> additionalDetails.html refer to the 'Rolling Strategy'. We should remove that.





[jira] [Assigned] (NIFI-7410) Clob unreadable code when convertToAvroStream in JdbcCommon.java

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reassigned NIFI-7410:


Assignee: ZhangCheng

> Clob unreadable code when convertToAvroStream in JdbcCommon.java 
> -
>
> Key: NIFI-7410
> URL: https://issues.apache.org/jira/browse/NIFI-7410
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> when the ExecuteSQL or QueryDatabaseTable processor triggers the code below in 
> JdbcCommon.java, the CLOB would contain unreadable characters (with Chinese characters):
> {code:java}
> if (javaSqlType == CLOB) {
>     Clob clob = rs.getClob(i);
>     if (clob != null) {
>         long numChars = clob.length();
>         char[] buffer = new char[(int) numChars];
>         InputStream is = clob.getAsciiStream();
>         int index = 0;
>         int c = is.read();
>         while (c >= 0) {
>             buffer[index++] = (char) c;
>             c = is.read();
>         }
>         rec.put(i - 1, new String(buffer));
>         clob.free();
>     } else {
>         rec.put(i - 1, null);
>     }
>     continue;
> }
> {code}
> I know this can be resolved by using ExecuteSQLRecord and 
> QueryDatabaseTableRecord, which have a new avroWriter (by using a controller 
> service). So I think: can we change the DefaultAvroSqlWriter to the new 
> avroWriter?
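A sketch of one possible fix (the helper name is hypothetical): reading the CLOB through getCharacterStream() instead of getAsciiStream() preserves multi-byte characters, since getAsciiStream() narrows every character to a single byte.

```java
import java.io.IOException;
import java.io.Reader;
import java.sql.Clob;
import java.sql.SQLException;

public class ClobUtil {
    // Hypothetical helper: read the CLOB via its character stream so that
    // multi-byte characters (e.g. Chinese) survive the conversion.
    public static String readClob(final Clob clob) throws SQLException, IOException {
        final StringBuilder sb = new StringBuilder((int) clob.length());
        try (Reader reader = clob.getCharacterStream()) {
            final char[] buffer = new char[8192];
            int charsRead;
            while ((charsRead = reader.read(buffer)) != -1) {
                sb.append(buffer, 0, charsRead);
            }
        }
        return sb.toString();
    }
}
```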





[jira] [Assigned] (NIFI-7484) fix ListFTP and FetchFTP doc

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reassigned NIFI-7484:


Assignee: ZhangCheng

> fix ListFTP and FetchFTP doc
> 
>
> Key: NIFI-7484
> URL: https://issues.apache.org/jira/browse/NIFI-7484
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: ZhangCheng
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ListFTP's doc, the WritesAttribute descriptions for "filename" and "path" 
> use 'SFTP'; change it to 'FTP'.
> FetchFTP's CapabilityDescription also uses 'SFTP'.





[jira] [Updated] (NIFI-7484) fix ListFTP and FetchFTP doc

2020-05-25 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7484:
-
Description: 
In ListFTP's doc, the WritesAttribute descriptions for "filename" and "path" 
use 'SFTP'; change it to 'FTP'.
FetchFTP's CapabilityDescription also uses 'SFTP'.

  was:In ListFTP's doc, the WritesAttribute descriptions for "filename" and "path" 
use 'SFTP'; change it to 'FTP'.

Summary: fix ListFTP and FetchFTP doc  (was: fix ListFTP doc)

> fix ListFTP and FetchFTP doc
> 
>
> Key: NIFI-7484
> URL: https://issues.apache.org/jira/browse/NIFI-7484
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Minor
>
> In ListFTP's doc, the WritesAttribute descriptions for "filename" and "path" 
> use 'SFTP'; change it to 'FTP'.
> FetchFTP's CapabilityDescription also uses 'SFTP'.





[jira] [Created] (NIFI-7484) fix ListFTP doc

2020-05-25 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7484:


 Summary: fix ListFTP doc
 Key: NIFI-7484
 URL: https://issues.apache.org/jira/browse/NIFI-7484
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: ZhangCheng


In ListFTP's doc, the WritesAttribute descriptions for "filename" and "path" 
use 'SFTP'; change it to 'FTP'.





[jira] [Created] (NIFI-7483) fix TailFile doc, remove description about 'Rolling strategy'

2020-05-25 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7483:


 Summary: fix TailFile doc, remove description about 'Rolling 
strategy'
 Key: NIFI-7483
 URL: https://issues.apache.org/jira/browse/NIFI-7483
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: ZhangCheng


'Rolling Strategy' has already been removed, but TailFile's doc still mentions 
it: the "Lookup frequency" and "Maximum age" property descriptions and 
additionalDetails.html refer to the 'Rolling Strategy'. We should remove that.





[jira] [Resolved] (NIFI-7140) PutSql support database transaction rollback when is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng resolved NIFI-7140.
--

Replaced by NIFI-7403


> PutSql support database transaction rollback when is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor will process all FlowFiles with that fragment.identifier as a 
> single transaction.
> In actuality, it works.
> But when some SQL of the transaction fails and  is 
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back but do not want the 
> flowfile rollback; we need the failed database transaction to route to 
> REL_FAILURE.
> If  is true and  is 
> false, I think it should still support the capability of database 
> transaction rollback. For example, it should add a property (like Fragmented 
> Transactions RollBack) which can indicate whether the processor supports 
> database transaction rollback when 'Support Fragmented Transactions' is 
> true. Of course, when  is true,  will be ignored.





[jira] [Updated] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6878:
-
Status: Patch Available  (was: Reopened)

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options: 
> UPDATE, INSERT, DELETE.
> Usually, it can meet our needs. But in actual applications, I think it's not 
> flexible enough.
> In some cases, we need to indicate the Statement Type dynamically.
> For example, the data from CaptureChangeMySQL carries an attribute with the 
> statement type (cdc.event.type), and we need to convert the data to SQL (DML) 
> in order. We now have to use RouteOnAttribute to transfer data to three 
> branches, build SQL statements separately, and finally use 
> EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement Type. 
> It is easy to implement this feature just like PutDatabaseRecord. 
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.
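The dynamic resolution described above can be sketched roughly as follows (names hypothetical; PutDatabaseRecord's actual option string and attribute handling may differ):

```java
import java.util.Map;

public class StatementTypeResolver {
    // Hypothetical sketch: when configured to use the attribute, take the
    // statement type from the flowfile's cdc.event.type attribute; otherwise
    // fall back to the fixed configured value (UPDATE, INSERT, DELETE).
    public static String resolveStatementType(final String configured, final Map<String, String> attributes) {
        if ("Use statement.type Attribute".equals(configured)) {
            final String fromAttribute = attributes.get("cdc.event.type");
            return fromAttribute == null ? null : fromAttribute.toUpperCase();
        }
        return configured;
    }
}
```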





[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Affects Version/s: (was: 1.11.1)
   Status: Open  (was: Patch Available)

> PutSql support database transaction rollback when is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor will process all FlowFiles with that fragment.identifier as a 
> single transaction.
> In actuality, it works.
> But when some SQL of the transaction fails and  is 
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back but do not want the 
> flowfile rollback; we need the failed database transaction to route to 
> REL_FAILURE.
> If  is true and  is 
> false, I think it should still support the capability of database 
> transaction rollback. For example, it should add a property (like Fragmented 
> Transactions RollBack) which can indicate whether the processor supports 
> database transaction rollback when 'Support Fragmented Transactions' is 
> true. Of course, when  is true,  will be ignored.





[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Status: Patch Available  (was: Reopened)

> PutSql support database transaction rollback when is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with that fragment.identifier as a
> single transaction; in actuality, it works.
> But when some SQL in the transaction fails and <Rollback On Failure> is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling
> back the FlowFile; we need the failed database transaction routed to
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is
> false, I think it should still support database transaction rollback. For
> example, it could add a property (like <Support Fragmented Transactions
> RollBack>) indicating whether the processor supports database transaction
> rollback when 'Support Fragmented Transactions' is true. Of course, when
> <Rollback On Failure> is true, <Support Fragmented Transactions RollBack>
> will be ignored.
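The semantics requested above — every statement of a fragment takes effect or none do, and a failed fragment is surfaced so its FlowFiles can be routed to REL_FAILURE without rolling back the FlowFiles themselves — can be modeled outside NiFi. A minimal sketch with illustrative names (neither NiFi nor JDBC APIs; `"BAD"` is just a stand-in for a statement the database rejects):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified model of fragment-level transactionality: applying a fragment
 * either commits every statement or leaves the prior state untouched.
 */
public class FragmentModel {
    private final List<String> applied = new ArrayList<>();

    /** Apply one fragment transactionally; returns false if it was rolled back. */
    public boolean applyFragment(List<String> statements) {
        List<String> staged = new ArrayList<>();
        for (String sql : statements) {
            if (sql.contains("BAD")) {   // stand-in for a statement the database rejects
                return false;            // roll back: nothing staged is kept
            }
            staged.add(sql);
        }
        applied.addAll(staged);          // commit: all statements take effect together
        return true;
    }

    public List<String> applied() {
        return applied;
    }
}
```

A caller would route the whole fragment's FlowFiles to failure whenever `applyFragment` returns false, which is the behavior the issue asks PutSQL to offer even when rollback-on-failure is disabled.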



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Status: Reopened  (was: Closed)

> PutSql support database transaction rollback when <Rollback On Failure> is
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with that fragment.identifier as a
> single transaction; in actuality, it works.
> But when some SQL in the transaction fails and <Rollback On Failure> is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling
> back the FlowFile; we need the failed database transaction routed to
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is
> false, I think it should still support database transaction rollback. For
> example, it could add a property (like <Support Fragmented Transactions
> RollBack>) indicating whether the processor supports database transaction
> rollback when 'Support Fragmented Transactions' is true. Of course, when
> <Rollback On Failure> is true, <Support Fragmented Transactions RollBack>
> will be ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng reopened NIFI-6878:
--

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options:
> UPDATE, INSERT, DELETE.
> Usually this meets our needs, but in actual applications I think it's not
> flexible enough.
> In some cases, we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries a statement-type
> attribute (cdc.event.type), and we need to convert the data to SQL (DML)
> statements in order. Today we have to use RouteOnAttribute to split the
> data into three branches, build the SQL statements separately, and finally
> use EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement
> Type, and it is easy to implement this feature just like PutDatabaseRecord.
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.
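The attribute-driven routing described above boils down to mapping the CDC event type to a statement type. A hedged sketch of that mapping (the helper and its event-type strings are illustrative assumptions, not the ConvertJSONToSQL or CaptureChangeMySQL implementation):

```java
import java.util.Map;

public class StatementTypeFromAttribute {
    /** Map a cdc.event.type attribute value to a SQL statement type. */
    static String statementType(Map<String, String> attributes) {
        // cdc.event.type is the attribute the CDC source is assumed to set
        String event = attributes.getOrDefault("cdc.event.type", "");
        switch (event) {
            case "insert": return "INSERT";
            case "update": return "UPDATE";
            case "delete": return "DELETE";
            default: throw new IllegalArgumentException("unsupported event: " + event);
        }
    }

    public static void main(String[] args) {
        System.out.println(statementType(Map.of("cdc.event.type", "update"))); // → UPDATE
    }
}
```

With a lookup like this inside the processor, the RouteOnAttribute / EnforceOrder workaround in the description becomes unnecessary.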



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-14 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107174#comment-17107174
 ] 

ZhangCheng commented on NIFI-6878:
--

[~pvillard] I am so sorry :(. I mistakenly thought the PR was closed.

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options:
> UPDATE, INSERT, DELETE.
> Usually this meets our needs, but in actual applications I think it's not
> flexible enough.
> In some cases, we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries a statement-type
> attribute (cdc.event.type), and we need to convert the data to SQL (DML)
> statements in order. Today we have to use RouteOnAttribute to split the
> data into three branches, build the SQL statements separately, and finally
> use EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement
> Type, and it is easy to implement this feature just like PutDatabaseRecord.
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-05-13 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6878:
-
Status: Resolved  (was: Patch Available)

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type provides fixed options:
> UPDATE, INSERT, DELETE.
> Usually this meets our needs, but in actual applications I think it's not
> flexible enough.
> In some cases, we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries a statement-type
> attribute (cdc.event.type), and we need to convert the data to SQL (DML)
> statements in order. Today we have to use RouteOnAttribute to split the
> data into three branches, build the SQL statements separately, and finally
> use EnforceOrder to ensure the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement
> Type, and it is easy to implement this feature just like PutDatabaseRecord.
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-05-12 Thread ZhangCheng (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095088#comment-17095088
 ] 

ZhangCheng edited comment on NIFI-7403 at 5/12/20, 12:56 PM:
-

https://github.com/apache/nifi/pull/4266


was (Author: ku_cheng):
https://github.com/apache/nifi/pull/4239

> Put.java improvement(PutSQL's transactions support)
> ---
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> PutSQL supports <Support Fragmented Transactions>; if we set this
> property true, I think it means the PutSQL processor will execute the SQL
> statements of one transaction transactionally.
> But we find that when we set <Rollback On Failure> false, the SQL
> statements of one transaction do not execute transactionally: some succeed
> and some fail. I think it's wrong.
> I think, if we set <Support Fragmented Transactions> true, it should be
> executed transactionally, no matter whether <Rollback On Failure> is true
> or false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-05-12 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL supports <Support Fragmented Transactions>; if we set this property
true, I think it means the PutSQL processor will execute the SQL statements
of one transaction transactionally.

But we find that when we set <Rollback On Failure> false, the SQL statements
of one transaction do not execute transactionally: some succeed and some
fail. I think it's wrong.

I think, if we set <Support Fragmented Transactions> true, it should be
executed transactionally, no matter whether <Rollback On Failure> is true or
false.



  was:
PutSQL supports <Support Fragmented Transactions>; if we set this property
true, I think it means the PutSQL processor will execute the SQL statements
of one transaction transactionally.

But we find that when we set <Rollback On Failure> false, the SQL statements
of one transaction do not execute transactionally: some succeed and some
fail. I think it's wrong.

I think, if we set <Support Fragmented Transactions> true, it should be
executed transactionally, no matter whether <Rollback On Failure> is true or
false.

Looking at the code, only PutSQL has <Support Fragmented Transactions>; it
may be possible to improve this feature at a small cost.

modify code design:

step1: Maybe other processors would support <Support Fragmented
Transactions> (such as PutDatabaseRecord), so we should move <Support
Fragmented Transactions> from PutSQL.java to Put.java (I think it's a
rational design that `Put.java` defines the <Support Fragmented
Transactions> property).


{code:java}
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
{code}

step2: Additionally, I think Put.java can extract the Relationships of the
processors that use Put.java (PutSQL, PutDatabaseRecord, PutHiveQL...). We
can see that these processors have the same Relationships; I think this is
the `Put` base class's common feature.


{code:java}
static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("A FlowFile is routed to this relationship after the 
database is successfully updated")
.build();
static final Relationship REL_RETRY = new Relationship.Builder()
.name("retry")
.description("A FlowFile is routed to this relationship if the 
database cannot be updated but attempting the operation again may succeed")
.build();
static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("A FlowFile is routed to this relationship if the 
database cannot be updated and retrying the operation will also fail, "
+ "such as an invalid query or an integrity constraint 
violation")
.build();
{code}


step3: In Put.java's `onTrigger` method, after `putFlowFiles` and before
`onCompleted.apply`, we try to get the value of <Support Fragmented
Transactions>; if true, check the `transferredFlowFiles`. If any FlowFiles
did not route to `Success`, we should reroute those `transferredFlowFiles`
(retry > failure) and call `onFailed` (if it's not null).

{code:java}
 try {
putFlowFiles(context, session, functionContext, connection, 
flowFiles, result);
} catch (DiscontinuedException e) {
// Whether it was an error or semi normal is depends on the 
implementation and reason why it wanted to discontinue.
// So, no logging is needed here.
}
...

if(context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()){
//TODO   do sth
}

// OnCompleted processing.
if (onCompleted != null) {
onCompleted.apply(context, session, functionContext, 
connection);
}

// Transfer FlowFiles.
transferFlowFiles.apply(context, session, functionContext, 
result);
{code}
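The reroute rule sketched in step3 — if any FlowFile of the fragment missed `Success`, demote the whole fragment — can be modeled outside NiFi. This is a simplified stand-in, not the Put.java API; the precedence shown (failure wins over retry) is one reading of the "(retry > failure)" note above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RerouteSketch {
    enum Relationship { SUCCESS, RETRY, FAILURE }

    /**
     * Keep the routing when every FlowFile of the fragment reached SUCCESS;
     * otherwise reroute the whole fragment to a single non-success
     * relationship (FAILURE if any FlowFile failed, else RETRY).
     */
    static Map<String, Relationship> reroute(Map<String, Relationship> transferred) {
        boolean allSuccess = transferred.values().stream()
                .allMatch(r -> r == Relationship.SUCCESS);
        if (allSuccess) {
            return transferred;
        }
        Relationship worst = transferred.containsValue(Relationship.FAILURE)
                ? Relationship.FAILURE : Relationship.RETRY;
        Map<String, Relationship> rerouted = new LinkedHashMap<>();
        transferred.keySet().forEach(ff -> rerouted.put(ff, worst));
        return rerouted;
    }
}
```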




> Put.java improvement(PutSQL's transactions support)
> ---
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> PutSQL supports <Support Fragmented Transactions>; if we set this
> property true, I think it means the PutSQL processor will execute the SQL
> statements of one transaction transactionally.
> But we find that when we set <Rollback On Failure> false, the SQL
> statements of one transaction do not execute transactionally: some succeed
> and some fail. I think it's wrong.
> I think, if we set <Support Fragmented Transactions> true, it should be
> executed transactionally, no matter whether <Rollback On Failure> is true
> or false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7410) Clob unreadable code when convertToAvroStream in JdbcCommon.java

2020-04-30 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7410:
-
Affects Version/s: 1.11.4
   Labels: pull-request-available  (was: )
   Status: Patch Available  (was: Open)

> Clob unreadable code when convertToAvroStream in JdbcCommon.java 
> -
>
> Key: NIFI-7410
> URL: https://issues.apache.org/jira/browse/NIFI-7410
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the ExecuteSQL or QueryDatabaseTable processor triggers the code
> below in JdbcCommon.java, the CLOB becomes unreadable (with Chinese
> characters):
> {code:java}
> if (javaSqlType == CLOB) {
> Clob clob = rs.getClob(i);
> if (clob != null) {
> long numChars = clob.length();
> char[] buffer = new char[(int) numChars];
> InputStream is = clob.getAsciiStream();
> int index = 0;
> int c = is.read();
> while (c >= 0) {
> buffer[index++] = (char) c;
> c = is.read();
> }
> rec.put(i - 1, new String(buffer));
> clob.free();
> } else {
> rec.put(i - 1, null);
> }
> continue;
> }
> {code}
> I know this can be resolved by using ExecuteSQLRecord and
> QueryDatabaseTableRecord. They have a new avroWriter (by using a
> controller service), so I think: can we change the DefaultAvroSqlWriter to
> the new avroWriter?
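The garbling in the quoted snippet comes from reading the CLOB one byte at a time through getAsciiStream(), which cannot represent multi-byte characters. Reading through the character stream preserves them. A hedged sketch of the alternative read, using plain java.sql types (not a patch to JdbcCommon; SerialClob stands in for a driver-provided CLOB):

```java
import java.io.Reader;
import java.io.StringWriter;
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class ClobReadSketch {
    /** Read a CLOB via its character stream so multi-byte text survives. */
    static String readClob(Clob clob) throws Exception {
        StringWriter out = new StringWriter();
        try (Reader reader = clob.getCharacterStream()) {
            char[] buffer = new char[4096];
            int n;
            while ((n = reader.read(buffer)) >= 0) {
                out.write(buffer, 0, n);
            }
        }
        clob.free();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        Clob clob = new SerialClob("中文 text".toCharArray());
        // Chinese characters survive the round trip intact
        System.out.println(readClob(clob));
    }
}
```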



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7410) Clob unreadable code when convertToAvroStream in JdbcCommon.java

2020-04-30 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7410:
-
Description: 
When the ExecuteSQL or QueryDatabaseTable processor triggers the code below
in JdbcCommon.java, the CLOB becomes unreadable (with Chinese characters):
{code:java}
if (javaSqlType == CLOB) {
Clob clob = rs.getClob(i);
if (clob != null) {
long numChars = clob.length();
char[] buffer = new char[(int) numChars];
InputStream is = clob.getAsciiStream();
int index = 0;
int c = is.read();
while (c >= 0) {
buffer[index++] = (char) c;
c = is.read();
}
rec.put(i - 1, new String(buffer));
clob.free();
} else {
rec.put(i - 1, null);
}
continue;
}
{code}
I know this can be resolved by using ExecuteSQLRecord and
QueryDatabaseTableRecord. They have a new avroWriter (by using a controller
service), so I think: can we change the DefaultAvroSqlWriter to the new
avroWriter?




  was:
When the ExecuteSQL or QueryDatabaseTable processor triggers the code below
in JdbcCommon.java, the CLOB becomes unreadable (with Chinese characters):
{code:java}
if (javaSqlType == CLOB) {
Clob clob = rs.getClob(i);
if (clob != null) {
long numChars = clob.length();
char[] buffer = new char[(int) numChars];
InputStream is = clob.getAsciiStream();
int index = 0;
int c = is.read();
while (c >= 0) {
buffer[index++] = (char) c;
c = is.read();
}
rec.put(i - 1, new String(buffer));
clob.free();
} else {
rec.put(i - 1, null);
}
continue;
}
{code}
I know this can be resolved by using ExecuteSQLRecord and
QueryDatabaseTableRecord.

But I think this should be maintained too.




> Clob unreadable code when convertToAvroStream in JdbcCommon.java 
> -
>
> Key: NIFI-7410
> URL: https://issues.apache.org/jira/browse/NIFI-7410
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: ZhangCheng
>Priority: Major
>
> When the ExecuteSQL or QueryDatabaseTable processor triggers the code
> below in JdbcCommon.java, the CLOB becomes unreadable (with Chinese
> characters):
> {code:java}
> if (javaSqlType == CLOB) {
> Clob clob = rs.getClob(i);
> if (clob != null) {
> long numChars = clob.length();
> char[] buffer = new char[(int) numChars];
> InputStream is = clob.getAsciiStream();
> int index = 0;
> int c = is.read();
> while (c >= 0) {
> buffer[index++] = (char) c;
> c = is.read();
> }
> rec.put(i - 1, new String(buffer));
> clob.free();
> } else {
> rec.put(i - 1, null);
> }
> continue;
> }
> {code}
> I know this can be resolved by using ExecuteSQLRecord and
> QueryDatabaseTableRecord. They have a new avroWriter (by using a
> controller service), so I think: can we change the DefaultAvroSqlWriter to
> the new avroWriter?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7410) Clob unreadable code when convertToAvroStream in JdbcCommon.java

2020-04-29 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7410:


 Summary: Clob unreadable code when convertToAvroStream in 
JdbcCommon.java 
 Key: NIFI-7410
 URL: https://issues.apache.org/jira/browse/NIFI-7410
 Project: Apache NiFi
  Issue Type: Bug
Reporter: ZhangCheng


When the ExecuteSQL or QueryDatabaseTable processor triggers the code below
in JdbcCommon.java, the CLOB becomes unreadable (with Chinese characters):
{code:java}
if (javaSqlType == CLOB) {
Clob clob = rs.getClob(i);
if (clob != null) {
long numChars = clob.length();
char[] buffer = new char[(int) numChars];
InputStream is = clob.getAsciiStream();
int index = 0;
int c = is.read();
while (c >= 0) {
buffer[index++] = (char) c;
c = is.read();
}
rec.put(i - 1, new String(buffer));
clob.free();
} else {
rec.put(i - 1, null);
}
continue;
}
{code}
I know this can be resolved by using ExecuteSQLRecord and
QueryDatabaseTableRecord.

But I think this should be maintained too.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Fix Version/s: 1.12.0
   Labels: pull-request-available  (was: )
   Status: Patch Available  (was: Open)

https://github.com/apache/nifi/pull/4239

> Put.java improvement(PutSQL's transactions support)
> ---
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> PutSQL supports <Support Fragmented Transactions>; if we set this
> property true, I think it means the PutSQL processor will execute the SQL
> statements of one transaction transactionally.
> But we find that when we set <Rollback On Failure> false, the SQL
> statements of one transaction do not execute transactionally: some succeed
> and some fail. I think it's wrong.
> I think, if we set <Support Fragmented Transactions> true, it should be
> executed transactionally, no matter whether <Rollback On Failure> is true
> or false.
> Looking at the code, only PutSQL has <Support Fragmented Transactions>; it
> may be possible to improve this feature at a small cost.
> modify code design:
> step1: Maybe other processors would support <Support Fragmented
> Transactions> (such as PutDatabaseRecord), so we should move <Support
> Fragmented Transactions> from PutSQL.java to Put.java (I think it's a
> rational design that `Put.java` defines the <Support Fragmented
> Transactions> property).
> {code:java}
> public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
> PropertyDescriptor.Builder()
> .name("Support Fragmented Transactions")
>...
> {code}
> step2: Additionally, I think Put.java can extract the Relationships of
> the processors that use Put.java (PutSQL, PutDatabaseRecord,
> PutHiveQL...). We can see that these processors have the same
> Relationships; I think this is the `Put` base class's common feature.
> {code:java}
> static final Relationship REL_SUCCESS = new Relationship.Builder()
> .name("success")
> .description("A FlowFile is routed to this relationship after the 
> database is successfully updated")
> .build();
> static final Relationship REL_RETRY = new Relationship.Builder()
> .name("retry")
> .description("A FlowFile is routed to this relationship if the 
> database cannot be updated but attempting the operation again may succeed")
> .build();
> static final Relationship REL_FAILURE = new Relationship.Builder()
> .name("failure")
> .description("A FlowFile is routed to this relationship if the 
> database cannot be updated and retrying the operation will also fail, "
> + "such as an invalid query or an integrity constraint 
> violation")
> .build();
> {code}
> step3: In Put.java's `onTrigger` method, after `putFlowFiles` and before
> `onCompleted.apply`, we try to get the value of <Support Fragmented
> Transactions>; if true, check the `transferredFlowFiles`. If any FlowFiles
> did not route to `Success`, we should reroute those `transferredFlowFiles`
> (retry > failure) and call `onFailed` (if it's not null).
> {code:java}
>  try {
> putFlowFiles(context, session, functionContext, 
> connection, flowFiles, result);
> } catch (DiscontinuedException e) {
> // Whether it was an error or semi normal is depends on 
> the implementation and reason why it wanted to discontinue.
> // So, no logging is needed here.
> }
> ...
> 
> if(context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()){
> //TODO   do sth
> }
> // OnCompleted processing.
> if (onCompleted != null) {
> onCompleted.apply(context, session, functionContext, 
> connection);
> }
> // Transfer FlowFiles.
> transferFlowFiles.apply(context, session, functionContext, 
> result);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Status: Resolved  (was: Closed)

> PutSql support database transaction rollback when <Rollback On Failure> is
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with that fragment.identifier as a
> single transaction; in actuality, it works.
> But when some SQL in the transaction fails and <Rollback On Failure> is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling
> back the FlowFile; we need the failed database transaction routed to
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is
> false, I think it should still support database transaction rollback. For
> example, it could add a property (like <Support Fragmented Transactions
> RollBack>) indicating whether the processor supports database transaction
> rollback when 'Support Fragmented Transactions' is true. Of course, when
> <Rollback On Failure> is true, <Support Fragmented Transactions RollBack>
> will be ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-7140.


See also: https://issues.apache.org/jira/browse/NIFI-7403

> PutSql support database transaction rollback when <Rollback On Failure> is
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with that fragment.identifier as a
> single transaction; in actuality, it works.
> But when some SQL in the transaction fails and <Rollback On Failure> is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling
> back the FlowFile; we need the failed database transaction routed to
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is
> false, I think it should still support database transaction rollback. For
> example, it could add a property (like <Support Fragmented Transactions
> RollBack>) indicating whether the processor supports database transaction
> rollback when 'Support Fragmented Transactions' is true. Of course, when
> <Rollback On Failure> is true, <Support Fragmented Transactions RollBack>
> will be ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-7140.


> PutSql support database transaction rollback when <Rollback On Failure> is
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with that fragment.identifier as a
> single transaction; in actuality, it works.
> But when some SQL in the transaction fails and <Rollback On Failure> is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling
> back the FlowFile; we need the failed database transaction routed to
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is
> false, I think it should still support database transaction rollback. For
> example, it could add a property (like <Support Fragmented Transactions
> RollBack>) indicating whether the processor supports database transaction
> rollback when 'Support Fragmented Transactions' is true. Of course, when
> <Rollback On Failure> is true, <Support Fragmented Transactions RollBack>
> will be ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7140) PutSql support database transaction rollback when <Rollback On Failure> is false

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng resolved NIFI-7140.
--
Resolution: Not A Problem

> PutSql support database transaction rollback when <Rollback On Failure> is
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the
> processor will process all FlowFiles with that fragment.identifier as a
> single transaction; in actuality, it works.
> But when some SQL in the transaction fails and <Rollback On Failure> is
> false, the database transaction will not roll back.
> Sometimes we need the database transaction to roll back without rolling
> back the FlowFile; we need the failed database transaction routed to
> REL_FAILURE.
> If <Support Fragmented Transactions> is true and <Rollback On Failure> is
> false, I think it should still support database transaction rollback. For
> example, it could add a property (like <Support Fragmented Transactions
> RollBack>) indicating whether the processor supports database transaction
> rollback when 'Support Fragmented Transactions' is true. Of course, when
> <Rollback On Failure> is true, <Support Fragmented Transactions RollBack>
> will be ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL supports <Support Fragmented Transactions>; if we set this property
true, I think it means the PutSQL processor will execute the SQL statements
of one transaction transactionally.

But we find that when we set <Rollback On Failure> false, the SQL statements
of one transaction do not execute transactionally: some succeed and some
fail. I think it's wrong.

I think, if we set <Support Fragmented Transactions> true, it should be
executed transactionally, no matter whether <Rollback On Failure> is true or
false.

Looking at the code, only PutSQL has <Support Fragmented Transactions>; it
may be possible to improve this feature at a small cost.

modify code design:

step1: Maybe other processors would support <Support Fragmented
Transactions> (such as PutDatabaseRecord), so we should move <Support
Fragmented Transactions> from PutSQL.java to Put.java (I think it's a
rational design that `Put.java` defines the <Support Fragmented
Transactions> property).


{code:java}
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
{code}

step2: Additionally, I think Put.java can extract the Relationships of the
processors that use Put.java (PutSQL, PutDatabaseRecord, PutHiveQL...). We
can see that these processors have the same Relationships; I think this is
the `Put` base class's common feature.


{code:java}
static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("A FlowFile is routed to this relationship after the 
database is successfully updated")
.build();
static final Relationship REL_RETRY = new Relationship.Builder()
.name("retry")
.description("A FlowFile is routed to this relationship if the 
database cannot be updated but attempting the operation again may succeed")
.build();
static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("A FlowFile is routed to this relationship if the 
database cannot be updated and retrying the operation will also fail, "
+ "such as an invalid query or an integrity constraint 
violation")
.build();
{code}


step3: In Put.java's `onTrigger` method, after `putFlowFiles` and before
`onCompleted.apply`, we try to get the value of <Support Fragmented
Transactions>; if true, check the `transferredFlowFiles`. If any FlowFiles
did not route to `Success`, we should reroute those `transferredFlowFiles`
(retry > failure) and call `onFailed` (if it's not null).

{code:java}
 try {
putFlowFiles(context, session, functionContext, connection, 
flowFiles, result);
} catch (DiscontinuedException e) {
// Whether it was an error or semi normal is depends on the 
implementation and reason why it wanted to discontinue.
// So, no logging is needed here.
}
...

if(context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()){
//TODO   do sth
}

// OnCompleted processing.
if (onCompleted != null) {
onCompleted.apply(context, session, functionContext, 
connection);
}

// Transfer FlowFiles.
transferFlowFiles.apply(context, session, functionContext, 
result);
{code}



  was:
PutSQL supports <Support Fragmented Transactions>; if we set this property
true, I think it means the PutSQL processor will execute the SQL statements
of one transaction transactionally.

But we find that when we set <Rollback On Failure> false, the SQL statements
of one transaction do not execute transactionally: some succeed and some
fail. I think it's wrong.

I think, if we set <Support Fragmented Transactions> true, it should be
executed transactionally, no matter whether <Rollback On Failure> is true or
false.

Looking at the code, only PutSQL has <Support Fragmented Transactions>; it
may be possible to improve this feature at a small cost.

modify code design:

step1: Maybe other processors would support <Support Fragmented
Transactions> (such as PutDatabaseRecord), so we should move <Support
Fragmented Transactions> from PutSQL.java to Put.java (I think it's a
rational design that `Put.java` defines the <Support Fragmented
Transactions> property).


{code:java}
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
{code}


step2: in Put.java `onTrigger` method,  after the `putFlowFiles` and before the 
`onCompleted.apply`, we try to get the value of , if true 
, check the `transferredFlowFiles` , if there are flowfiles don't route to 
`Success`, we should reroute these `transferredFlowFiles`(retry > failure),and 
do `onFailed`(if it's not null)

{code:java}
 try {
putFlowFiles(context, session, functionContext, connection, 
flowFiles, result);
} catch (DiscontinuedException e) {
// Whether it was an error or semi normal is depends on the 
implementation and reason why it wanted to 

[jira] [Updated] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

Looking at the code, only PutSQL has 'Support Fragmented Transactions', so it may be possible to improve this feature at a small cost.

modify code design:

step1: Other processors may also want to support 'Support Fragmented Transactions' (such as PutDatabaseRecord), so we should move the property descriptor from PutSQL.java to Put.java (I think it is a rational design for `Put.java` to define the 'Support Fragmented Transactions' property).


{code:java}
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
{code}


step2: in Put.java's `onTrigger` method, after `putFlowFiles` and before `onCompleted.apply`, read the value of 'Support Fragmented Transactions'. If it is true, check the `transferredFlowFiles`; if any FlowFiles were not routed to `Success`, we should reroute all of these `transferredFlowFiles` (retry > failure) and call `onFailed` (if it is not null).

{code:java}
try {
    putFlowFiles(context, session, functionContext, connection, flowFiles, result);
} catch (DiscontinuedException e) {
    // Whether it was an error or semi-normal depends on the implementation
    // and the reason it wanted to discontinue. So, no logging is needed here.
}
...

if (context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()) {
    // TODO: reroute transferred FlowFiles so the batch fails or retries as a unit
}

// OnCompleted processing.
if (onCompleted != null) {
    onCompleted.apply(context, session, functionContext, connection);
}

// Transfer FlowFiles.
transferFlowFiles.apply(context, session, functionContext, result);
{code}

step3: Additionally, I think Put.java can also define the Relationships of the processors that use it (PutSQL, PutDatabaseRecord, PutHiveQL, ...). These processors all declare the same Relationships, and I think this is part of `Put`'s common behaviour.


{code:java}
static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("A FlowFile is routed to this relationship after the database is successfully updated")
.build();
static final Relationship REL_RETRY = new Relationship.Builder()
.name("retry")
.description("A FlowFile is routed to this relationship if the database cannot be updated but attempting the operation again may succeed")
.build();
static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("A FlowFile is routed to this relationship if the database cannot be updated and retrying the operation will also fail, "
        + "such as an invalid query or an integrity constraint violation")
.build();
{code}



[jira] [Updated] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

Looking at the code, only PutSQL has 'Support Fragmented Transactions', so it may be possible to improve this feature at a small cost.

modify code design:

step1: Because other processors may also support 'Support Fragmented Transactions' (such as PutDatabaseRecord), we should move the property descriptor from PutSQL.java to Put.java.


{code:java}
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
{code}


step2: in Put.java's `onTrigger`, after `putFlowFiles` and before `onCompleted.apply`, read the value of 'Support Fragmented Transactions'. If it is true, check the `transferredFlowFiles`; if any FlowFiles were not routed to `Success`, we should reroute all of these `transferredFlowFiles` (retry > failure) and call `onFailed` (if it is not null).

{code:java}
try {
    putFlowFiles(context, session, functionContext, connection, flowFiles, result);
} catch (DiscontinuedException e) {
    // Whether it was an error or semi-normal depends on the implementation
    // and the reason it wanted to discontinue. So, no logging is needed here.
}
...

if (context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()) {
    // TODO: reroute transferred FlowFiles so the batch fails or retries as a unit
}

// OnCompleted processing.
if (onCompleted != null) {
    onCompleted.apply(context, session, functionContext, connection);
}

// Transfer FlowFiles.
transferFlowFiles.apply(context, session, functionContext, result);
{code}

step3: Additionally, I think Put.java can also define the Relationships of the processors that use it (PutSQL, PutDatabaseRecord, PutHiveQL, ...). These processors all declare the same Relationships, and I think this is part of `Put`'s common behaviour.


{code:java}
static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("A FlowFile is routed to this relationship after the database is successfully updated")
.build();
static final Relationship REL_RETRY = new Relationship.Builder()
.name("retry")
.description("A FlowFile is routed to this relationship if the database cannot be updated but attempting the operation again may succeed")
.build();
static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("A FlowFile is routed to this relationship if the database cannot be updated and retrying the operation will also fail, "
        + "such as an invalid query or an integrity constraint violation")
.build();
{code}




[jira] [Updated] (NIFI-7403) Put.java improvement(PutSQL's transactions support)

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Summary: Put.java improvement(PutSQL's transactions support)  (was: PutSql 
improvement)

> Put.java improvement(PutSQL's transactions support)
> ---
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>
> PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.
> But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.
> If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.
> Looking at the code, only PutSQL has 'Support Fragmented Transactions', so this feature should be improvable at a small cost.
> modify code design:
> step1: Because other processors may also support 'Support Fragmented Transactions' (such as PutDatabaseRecord), we should move the property descriptor from PutSQL.java to Put.java
> {code:java}
> public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
> PropertyDescriptor.Builder()
> .name("Support Fragmented Transactions")
>...
> {code}
> step2: in Put.java's `onTrigger`, after `putFlowFiles` and before `onCompleted.apply`, read the value of 'Support Fragmented Transactions'. If it is true, check the `transferredFlowFiles`; if any FlowFiles were not routed to `Success`, we should reroute all of these `transferredFlowFiles` (retry > failure) and call `onFailed` (if it is not null)
> {code:java}
>  try {
> putFlowFiles(context, session, functionContext, 
> connection, flowFiles, result);
> } catch (DiscontinuedException e) {
> // Whether it was an error or semi normal is depends on 
> the implementation and reason why it wanted to discontinue.
> // So, no logging is needed here.
> }
> ...
> 
> if(context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()){
> //TODO   do sth
> }
> // OnCompleted processing.
> if (onCompleted != null) {
> onCompleted.apply(context, session, functionContext, 
> connection);
> }
> // Transfer FlowFiles.
> transferFlowFiles.apply(context, session, functionContext, 
> result);
> {code}
> step3: Additionally, I think Put.java can also define the Relationships of the processors that use it (PutSQL, PutDatabaseRecord, PutHiveQL, ...). These processors all declare the same Relationships, and I think this is part of `Put`'s common behaviour.
> {code:java}
> static final Relationship REL_SUCCESS = new Relationship.Builder()
> .name("success")
> .description("A FlowFile is routed to this relationship after the 
> database is successfully updated")
> .build();
> static final Relationship REL_RETRY = new Relationship.Builder()
> .name("retry")
> .description("A FlowFile is routed to this relationship if the 
> database cannot be updated but attempting the operation again may succeed")
> .build();
> static final Relationship REL_FAILURE = new Relationship.Builder()
> .name("failure")
> .description("A FlowFile is routed to this relationship if the 
> database cannot be updated and retrying the operation will also fail, "
> + "such as an invalid query or an integrity constraint 
> violation")
> .build();
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7403) PutSql improvement

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

Looking at the code, only PutSQL has 'Support Fragmented Transactions', so this feature should be improvable at a small cost.

modify code design:

step1: Because other processors may also support 'Support Fragmented Transactions' (such as PutDatabaseRecord), we should move the property descriptor from PutSQL.java to Put.java.


{code:java}
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
{code}


step2: in Put.java's `onTrigger`, after `putFlowFiles` and before `onCompleted.apply`, read the value of 'Support Fragmented Transactions'. If it is true, check the `transferredFlowFiles`; if any FlowFiles were not routed to `Success`, we should reroute all of these `transferredFlowFiles` (retry > failure) and call `onFailed` (if it is not null).

{code:java}
try {
    putFlowFiles(context, session, functionContext, connection, flowFiles, result);
} catch (DiscontinuedException e) {
    // Whether it was an error or semi-normal depends on the implementation
    // and the reason it wanted to discontinue. So, no logging is needed here.
}
...

if (context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()) {
    // TODO: reroute transferred FlowFiles so the batch fails or retries as a unit
}

// OnCompleted processing.
if (onCompleted != null) {
    onCompleted.apply(context, session, functionContext, connection);
}

// Transfer FlowFiles.
transferFlowFiles.apply(context, session, functionContext, result);
{code}

step3: Additionally, I think Put.java can also define the Relationships of the processors that use it (PutSQL, PutDatabaseRecord, PutHiveQL, ...). These processors all declare the same Relationships, and I think this is part of `Put`'s common behaviour.


{code:java}
static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("A FlowFile is routed to this relationship after the database is successfully updated")
.build();
static final Relationship REL_RETRY = new Relationship.Builder()
.name("retry")
.description("A FlowFile is routed to this relationship if the database cannot be updated but attempting the operation again may succeed")
.build();
static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("A FlowFile is routed to this relationship if the database cannot be updated and retrying the operation will also fail, "
        + "such as an invalid query or an integrity constraint violation")
.build();
{code}




[jira] [Updated] (NIFI-7403) PutSql improvement

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

Looking at the code, only PutSQL has 'Support Fragmented Transactions', so this feature should be improvable at a small cost.

modify code design:

step1: Because other processors may also support 'Support Fragmented Transactions' (such as PutDatabaseRecord), we should move the property descriptor from PutSQL.java to Put.java.

```java
public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
PropertyDescriptor.Builder()
.name("Support Fragmented Transactions")
   ...
```

step2: in Put.java's `onTrigger`, after `putFlowFiles` and before `onCompleted.apply`, read the value of 'Support Fragmented Transactions'. If it is true, check the `transferredFlowFiles`; if any FlowFiles were not routed to `Success`, we should reroute all of these `transferredFlowFiles` (retry > failure) and call `onFailed` (if it is not null).

```java
try {
    putFlowFiles(context, session, functionContext, connection, flowFiles, result);
} catch (DiscontinuedException e) {
    // Whether it was an error or semi-normal depends on the implementation
    // and the reason it wanted to discontinue. So, no logging is needed here.
}
...

if (context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()) {
    // TODO: reroute transferred FlowFiles so the batch fails or retries as a unit
}

// OnCompleted processing.
if (onCompleted != null) {
    onCompleted.apply(context, session, functionContext, connection);
}

// Transfer FlowFiles.
transferFlowFiles.apply(context, session, functionContext, result);

```

  was:
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

Looking at the code, only PutSQL has 'Support Fragmented Transactions', so this feature should be improvable at a small cost.


> PutSql improvement
> --
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>
> PutSQL processor support ,if we set this 
> property true, I think it means The PutSQL processor will excute these sqls 
> of one transaction Transactionally!!
> But we find that when we set the  false, those sqls of 
> one transaction do not excute transactionally,some sucess and some failure. I 
> think it's wrong.
> I think, if we set  true, it should be 
> executed Transactionally, no matter  is true or false.
> I see the code, only PutSQL has the ,  it 
> should be improve this feature at a small cost.
> modify code design:
> step1: at account of that maybe Other Processors support the   Fragmented Transactions>(such as PutDatabaseRecord), we should move the  
>  from PutSQL.java to Put.java
> ```java
> public static final PropertyDescriptor SUPPORT_TRANSACTIONS = new 
> PropertyDescriptor.Builder()
> .name("Support Fragmented Transactions")
>...
> ```
> step2: in Put.java onTrigger,  after the `putFlowFiles` and before the 
> `onCompleted.apply`, we try to get the value of , if 
> true , check the `transferredFlowFiles` , if there are flowfiles don't route 
> to `Success`, we should reroute these `transferredFlowFiles`(retry > 
> failure),and do `onFailed`(if it's not null)
> ```java
>  try {
> putFlowFiles(context, session, functionContext, 
> connection, flowFiles, result);
> } catch (DiscontinuedException e) {
> // Whether it was an error or semi normal is depends on 
> the implementation and reason why it wanted to discontinue.
> // So, no logging is needed here.
> }
> ...
> 
> if(context.getProperty(SUPPORT_TRANSACTIONS).asBoolean()){
> //TODO   do sth
> }
> // OnCompleted processing.
> if (onCompleted != null) {
> onCompleted.apply(context, session, 

[jira] [Updated] (NIFI-7403) PutSql improvement

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
Description: 
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

Looking at the code, only PutSQL has 'Support Fragmented Transactions', so this feature should be improvable at a small cost.

  was:
PutSQL processor support ,if we set this 
property true, I think it means The PutSQL processor will excute these sqls of 
one transaction Transactionally!!

But we find that when we set the  false, those sqls of one 
transaction do not excute transactionally,some sucess and some failure. I think 
it's wrong.

I think, if we set  true, it should be 
executed Transactionally, no matter  is true or false.


> PutSql improvement
> --
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>
> PutSQL processor support ,if we set this 
> property true, I think it means The PutSQL processor will excute these sqls 
> of one transaction Transactionally!!
> But we find that when we set the  false, those sqls of 
> one transaction do not excute transactionally,some sucess and some failure. I 
> think it's wrong.
> I think, if we set  true, it should be 
> executed Transactionally, no matter  is true or false.
> I see the code, only PutSQL has the ,  it 
> should be improve this feature at a small cost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7403) PutSql improvement

2020-04-28 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7403:
-
  Component/s: Extensions
Affects Version/s: 1.11.4
  Description: 
PutSQL processor supports 'Support Fragmented Transactions'; if we set this property to true, I take it to mean that PutSQL will execute the SQL statements of one fragmented transaction transactionally.

But we find that when 'Rollback On Failure' is false, the SQL statements of one transaction are not executed transactionally: some succeed and some fail. I think that is wrong.

If we set 'Support Fragmented Transactions' to true, the statements should be executed transactionally, no matter whether 'Rollback On Failure' is true or false.

> PutSql improvement
> --
>
> Key: NIFI-7403
> URL: https://issues.apache.org/jira/browse/NIFI-7403
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.4
>Reporter: ZhangCheng
>Priority: Major
>
> PutSQL processor support ,if we set this 
> property true, I think it means The PutSQL processor will excute these sqls 
> of one transaction Transactionally!!
> But we find that when we set the  false, those sqls of 
> one transaction do not excute transactionally,some sucess and some failure. I 
> think it's wrong.
> I think, if we set  true, it should be 
> executed Transactionally, no matter  is true or false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-7403) PutSql improvement

2020-04-28 Thread ZhangCheng (Jira)
ZhangCheng created NIFI-7403:


 Summary: PutSql improvement
 Key: NIFI-7403
 URL: https://issues.apache.org/jira/browse/NIFI-7403
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: ZhangCheng






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-03-01 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-6878.


> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type property provides fixed options: UPDATE, INSERT, DELETE.
> Usually this meets our needs, but in real applications I think it is not flexible enough.
> In some cases we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries the statement type in an attribute (cdc.event.type), and we need to convert the data to SQL (DML) in order. Today we have to use RouteOnAttribute to split the data into three branches, build the SQL statements separately, and finally use EnforceOrder to restore the order of the SQL statements.
> But it would be easy if ConvertJSONToSQL supported a dynamic Statement Type; it is easy to implement this feature just like PutDatabaseRecord.
> In practice, I did use PutDatabaseRecord instead of ConvertJSONToSQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
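The dynamic Statement Type described in NIFI-6878 above can be modelled in a few lines. This is a hedged sketch, not the actual ConvertJSONToSQL change: the option string "Use statement.type Attribute" and the attribute name `statement.type` mirror PutDatabaseRecord's behaviour, but the class and method names here are illustrative.

```java
import java.util.Locale;
import java.util.Map;
import java.util.Set;

public class StatementTypeResolver {
    static final Set<String> SUPPORTED = Set.of("INSERT", "UPDATE", "DELETE");
    static final String USE_ATTRIBUTE = "Use statement.type Attribute";

    /**
     * Resolve the statement type for one FlowFile: use the configured value
     * directly, or fall back to the FlowFile's statement.type attribute when
     * the property is set to "Use statement.type Attribute".
     */
    static String resolve(String configured, Map<String, String> attributes) {
        String raw = USE_ATTRIBUTE.equals(configured) ? attributes.get("statement.type") : configured;
        if (raw == null) {
            throw new IllegalArgumentException("statement.type attribute is missing");
        }
        String type = raw.toUpperCase(Locale.ROOT);
        if (!SUPPORTED.contains(type)) {
            throw new IllegalArgumentException("Unsupported statement type: " + raw);
        }
        return type;
    }
}
```

With CDC data, an upstream UpdateAttribute could copy cdc.event.type into statement.type, which would remove the RouteOnAttribute/EnforceOrder workaround described in the ticket.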


[jira] [Updated] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-03-01 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6878:
-
Status: Resolved  (was: Closed)

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL  Statement Type provides fixed options : 
> UPDATE,INSERT,DELETE. 
> Usually, it can meet our needs. But  in actual application,I think It's not 
> flexible enough.
>  In some cases, we need to dynamically indicate the Statement Type.
> For example,the data from CpatureChangeMysql owns  the attribute  of 
> statement  type(cdc.event.type, we need to convert the data to sql(DML) 
> orderly; And we now have to use RouteOnAttribute to transfer data to three 
> branches , Build SQL statement separately ,finally,we have to use 
> EnforceOrder  to ensure the order of SQL statements.
> But it will be easy if ConvertJSONToSQL  supports dynamical Statement Type . 
> It is easy to implement this feature just like PutDatabaseRecord. 
> In practice, I did use PutDatabaseRecord   instead of ConvertJSONToSQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-02-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng closed NIFI-6878.


> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL  Statement Type provides fixed options : 
> UPDATE,INSERT,DELETE. 
> Usually, it can meet our needs. But  in actual application,I think It's not 
> flexible enough.
>  In some cases, we need to dynamically indicate the Statement Type.
> For example,the data from CpatureChangeMysql owns  the attribute  of 
> statement  type(cdc.event.type, we need to convert the data to sql(DML) 
> orderly; And we now have to use RouteOnAttribute to transfer data to three 
> branches , Build SQL statement separately ,finally,we have to use 
> EnforceOrder  to ensure the order of SQL statements.
> But it will be easy if ConvertJSONToSQL  supports dynamical Statement Type . 
> It is easy to implement this feature just like PutDatabaseRecord. 
> In practice, I did use PutDatabaseRecord   instead of ConvertJSONToSQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6878) ConvertJSONToSQL Improvement. Statement Type Support "Use statement.type Attribute" or Supports Expression Language

2020-02-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-6878:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ConvertJSONToSQL Improvement. Statement Type Support  "Use statement.type 
> Attribute" or Supports Expression Language 
> -
>
> Key: NIFI-6878
> URL: https://issues.apache.org/jira/browse/NIFI-6878
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: ZhangCheng
>Assignee: Matt Burgess
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ConvertJSONToSQL's Statement Type property offers fixed options: UPDATE, 
> INSERT, DELETE.
> Usually this meets our needs, but in real applications it is not flexible 
> enough. In some cases we need to indicate the Statement Type dynamically.
> For example, data from CaptureChangeMySQL carries the statement type in its 
> cdc.event.type attribute, and we need to convert that data to SQL (DML) in 
> order. Today we have to use RouteOnAttribute to split the data into three 
> branches, build the SQL statements separately, and finally use EnforceOrder 
> to restore the order of the SQL statements.
> This would be much simpler if ConvertJSONToSQL supported a dynamic Statement 
> Type, and it is easy to implement, just as PutDatabaseRecord does. In 
> practice, I used PutDatabaseRecord instead of ConvertJSONToSQL.





[jira] [Updated] (NIFI-7140) PutSql support database transaction rollback when 'Rollback On Failure' is false

2020-02-16 Thread ZhangCheng (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhangCheng updated NIFI-7140:
-
Description: 
For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
processor processes all FlowFiles with a given fragment.identifier as a single 
transaction; in practice, this works.
But when some SQL in the transaction fails and 'Rollback On Failure' is false, 
the database transaction is not rolled back.
Sometimes we need the database transaction to roll back without rolling the 
FlowFiles back; we need the failed database transaction routed to REL_FAILURE.
If 'Support Fragmented Transactions' is true and 'Rollback On Failure' is 
false, I think the processor should still support database transaction 
rollback. For example, it could add a property (like 'Support Fragmented 
Transactions RollBack') that indicates whether the processor supports database 
transaction rollback when 'Support Fragmented Transactions' is true. Of 
course, when 'Rollback On Failure' is true, 'Support Fragmented Transactions 
RollBack' will be ignored.

  was:
For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
processor processes all FlowFiles with a given fragment.identifier as a single 
transaction; in practice, this works.
But when some SQL in the transaction fails and 'Rollback On Failure' is false, 
the database transaction is not rolled back.
Sometimes we need the database transaction to roll back without rolling the 
FlowFiles back; we need the failed database transaction routed to REL_FAILURE.
If 'Support Fragmented Transactions' is true and 'Rollback On Failure' is 
false, I think the processor should still support database transaction 
rollback. For example, it could add a property (like 'Support Fragmented 
Transactions RollBack') that indicates whether the processor supports database 
transaction rollback when 'Support Fragmented Transactions' is true. Of 
course, when 'Rollback On Failure' is true, database transaction rollback will 
be supported too.


> PutSql support database transaction rollback when 'Rollback On Failure' is 
> false
> 
>
> Key: NIFI-7140
> URL: https://issues.apache.org/jira/browse/NIFI-7140
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.11.1
>Reporter: ZhangCheng
>Priority: Major
>
> For the PutSQL processor, if 'Support Fragmented Transactions' is true, the 
> processor processes all FlowFiles with a given fragment.identifier as a 
> single transaction; in practice, this works.
> But when some SQL in the transaction fails and 'Rollback On Failure' is 
> false, the database transaction is not rolled back.
> Sometimes we need the database transaction to roll back without rolling the 
> FlowFiles back; we need the failed database transaction routed to 
> REL_FAILURE.
> If 'Support Fragmented Transactions' is true and 'Rollback On Failure' is 
> false, I think the processor should still support database transaction 
> rollback. For example, it could add a property (like 'Support Fragmented 
> Transactions RollBack') that indicates whether the processor supports 
> database transaction rollback when 'Support Fragmented Transactions' is 
> true. Of course, when 'Rollback On Failure' is true, 'Support Fragmented 
> Transactions RollBack' will be ignored.
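
The requested semantics — roll back the database transaction for a failed fragment, but route to failure instead of rolling the FlowFiles back onto the queue — can be sketched abstractly. This is a hypothetical illustration, not PutSQL's implementation: the FragmentedTransactionSketch class, its Outcome enum, and the Deque of applied statements (a stand-in for a real JDBC Connection's rollback/commit) are all invented for the example.

```java
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

public class FragmentedTransactionSketch {
    enum Outcome { SUCCESS, ROUTED_TO_FAILURE }

    // Apply all statements of one fragment as a single unit. If any statement
    // fails, undo the ones already applied (the database-level rollback the
    // issue asks for) and report ROUTED_TO_FAILURE, so the FlowFiles go to
    // REL_FAILURE rather than back onto the input queue.
    static Outcome processFragment(List<String> statements, Predicate<String> execute,
                                   Deque<String> applied) {
        for (String sql : statements) {
            if (execute.test(sql)) {
                applied.push(sql);        // statement succeeded within the transaction
            } else {
                applied.clear();          // stand-in for Connection.rollback()
                return Outcome.ROUTED_TO_FAILURE;
            }
        }
        return Outcome.SUCCESS;           // stand-in for Connection.commit()
    }
}
```

The key design point is that the database rollback and the session (FlowFile) rollback are decoupled: the transaction is undone either way, but the FlowFiles are transferred to a failure relationship instead of being requeued.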




