[jira] [Updated] (NIFI-7909) ConvertRecord writes invalid data when converting long to int

2020-10-12  Mike Thomsen (Jira)


 [ https://issues.apache.org/jira/browse/NIFI-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Thomsen updated NIFI-7909:
-------------------------------
Fix Version/s: 1.13.0
   Resolution: Fixed
       Status: Resolved  (was: Patch Available)

> ConvertRecord writes invalid data when converting long to int
> -------------------------------------------------------------
>
>                 Key: NIFI-7909
>                 URL: https://issues.apache.org/jira/browse/NIFI-7909
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Christophe Monnet
>            Assignee: Matt Burgess
>            Priority: Major
>             Fix For: 1.13.0
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1602145019023300]
> {quote}I use ConvertRecord to convert from JSON to Avro.
>  For a field declared as "int" in the Avro schema, if the JSON payload
> contains a number that is too big, NiFi does not throw an error but writes
> "crap" into the Avro file. Is that intended?
> When using avro-tools it throws an exception:
> org.codehaus.jackson.JsonParseException: Numeric value (2156760545) out of
> range of int
> ConvertRecord is configured with JsonTreeReader (Infer Schema strategy)
> and AvroRecordSetWriter (Use 'Schema Text' Property).
>  So I guess NiFi converts an inferred long to an explicitly specified int?
>  How can I make NiFi less lenient? I would prefer a failure to wrong data
> in the output.
> {quote}
> Workaround: use ValidateRecord.
> I'm also wondering if the ConsumeKafkaRecord processors could be affected.
>  
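For context: Java's long-to-int narrowing keeps only the low 32 bits, which is the likely mechanism behind the wrong value the reporter saw in the Avro output. A minimal sketch in plain Java (not NiFi code; the class name is made up), using the value from the report:

{code:java}
public class IntOverflowDemo {
    public static void main(String[] args) {
        // 2156760545 exceeds Integer.MAX_VALUE (2147483647)
        long value = 2156760545L;
        // A primitive narrowing cast silently keeps the low 32 bits
        int truncated = (int) value;
        System.out.println(truncated); // prints -2138206751, no error raised
    }
}
{code}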



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7909) ConvertRecord writes invalid data when converting long to int

2020-10-12  Matt Burgess (Jira)


 [ https://issues.apache.org/jira/browse/NIFI-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Burgess updated NIFI-7909:
-------------------------------
Affects Version/s:   (was: 1.12.1)
                     (was: 1.11.4)
           Status: Patch Available  (was: In Progress)

> ConvertRecord writes invalid data when converting long to int
> -------------------------------------------------------------
>
>                 Key: NIFI-7909
>                 URL: https://issues.apache.org/jira/browse/NIFI-7909
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Christophe Monnet
>            Assignee: Matt Burgess
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1602145019023300]
> {quote}I use ConvertRecord to convert from JSON to Avro.
>  For a field declared as "int" in the Avro schema, if the JSON payload
> contains a number that is too big, NiFi does not throw an error but writes
> "crap" into the Avro file. Is that intended?
> When using avro-tools it throws an exception:
> org.codehaus.jackson.JsonParseException: Numeric value (2156760545) out of
> range of int
> ConvertRecord is configured with JsonTreeReader (Infer Schema strategy)
> and AvroRecordSetWriter (Use 'Schema Text' Property).
>  So I guess NiFi converts an inferred long to an explicitly specified int?
>  How can I make NiFi less lenient? I would prefer a failure to wrong data
> in the output.
> {quote}
> Workaround: use ValidateRecord.
> I'm also wondering if the ConsumeKafkaRecord processors could be affected.
>  
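The fail-fast behavior the reporter asks for corresponds to checked narrowing in Java: Math.toIntExact throws instead of truncating. A minimal sketch of that contrast (illustrative only; this is not the actual NiFi patch, and the class name is made up):

{code:java}
public class StrictNarrowing {
    public static void main(String[] args) {
        long value = 2156760545L;
        try {
            // Math.toIntExact throws ArithmeticException when the value
            // does not fit in an int, instead of silently truncating
            int safe = Math.toIntExact(value);
            System.out.println(safe);
        } catch (ArithmeticException e) {
            // prints: rejected: integer overflow
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}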





[jira] [Updated] (NIFI-7909) ConvertRecord writes invalid data when converting long to int

2020-10-09  Christophe Monnet (Jira)


 [ https://issues.apache.org/jira/browse/NIFI-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christophe Monnet updated NIFI-7909:

Description: 
[https://apachenifi.slack.com/archives/C0L9UPWJZ/p1602145019023300]
{quote}I use ConvertRecord to convert from JSON to Avro.
 For a field declared as "int" in the Avro schema, if the JSON payload contains
a number that is too big, NiFi does not throw an error but writes "crap" into
the Avro file. Is that intended?

When using avro-tools it throws an exception:
org.codehaus.jackson.JsonParseException: Numeric value (2156760545) out of
range of int

ConvertRecord is configured with JsonTreeReader (Infer Schema strategy) and
AvroRecordSetWriter (Use 'Schema Text' Property).
 So I guess NiFi converts an inferred long to an explicitly specified int?
 How can I make NiFi less lenient? I would prefer a failure to wrong data in
the output.
{quote}
Workaround: use ValidateRecord.

I'm also wondering if the ConsumeKafkaRecord processors could be affected.

 

  was:
[https://apachenifi.slack.com/archives/C0L9UPWJZ/p1602145019023300]
{quote}I use ConvertRecord to convert from JSON to Avro.
For a field declared as "int" in the Avro schema, if the JSON payload contains
a number that is too big, NiFi does not throw an error but writes "crap" into
the Avro file. Is that intended?

When using avro-tools it throws an exception:
org.codehaus.jackson.JsonParseException: Numeric value (2156760545) out of
range of int

ConvertRecord is configured with JsonTreeReader (Infer Schema strategy) and
AvroRecordSetWriter (Use 'Schema Text' Property).
So I guess NiFi converts an inferred long to an explicitly specified int?
How can I make NiFi less lenient? I would prefer a failure to wrong data in
the output.
{quote}
 


> ConvertRecord writes invalid data when converting long to int
> -------------------------------------------------------------
>
>                 Key: NIFI-7909
>                 URL: https://issues.apache.org/jira/browse/NIFI-7909
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.11.4, 1.12.1
>            Reporter: Christophe Monnet
>            Priority: Major
>
> [https://apachenifi.slack.com/archives/C0L9UPWJZ/p1602145019023300]
> {quote}I use ConvertRecord to convert from JSON to Avro.
>  For a field declared as "int" in the Avro schema, if the JSON payload
> contains a number that is too big, NiFi does not throw an error but writes
> "crap" into the Avro file. Is that intended?
> When using avro-tools it throws an exception:
> org.codehaus.jackson.JsonParseException: Numeric value (2156760545) out of
> range of int
> ConvertRecord is configured with JsonTreeReader (Infer Schema strategy)
> and AvroRecordSetWriter (Use 'Schema Text' Property).
>  So I guess NiFi converts an inferred long to an explicitly specified int?
>  How can I make NiFi less lenient? I would prefer a failure to wrong data
> in the output.
> {quote}
> Workaround: use ValidateRecord.
> I'm also wondering if the ConsumeKafkaRecord processors could be affected.
>  
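For comparison, the strict behavior quoted from avro-tools can be reproduced with Apache Avro's Java JsonDecoder, which rejects a JSON number that does not fit a declared "int" field. A minimal sketch; the class name, record name "Rec", and field name "id" are made up for illustration:

{code:java}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;

public class AvroStrictDecode {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema with a single int field
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Rec\","
            + "\"fields\":[{\"name\":\"id\",\"type\":\"int\"}]}");
        String json = "{\"id\": 2156760545}";
        GenericDatumReader<GenericRecord> reader =
            new GenericDatumReader<>(schema);
        // Fails fast with an out-of-range error (e.g. JsonParseException:
        // Numeric value (2156760545) out of range of int), matching the
        // avro-tools exception quoted above
        GenericRecord rec = reader.read(null,
            DecoderFactory.get().jsonDecoder(schema, json));
        System.out.println(rec);
    }
}
{code}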


