[ https://issues.apache.org/jira/browse/NIFI-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415523#comment-15415523 ]

ASF GitHub Bot commented on NIFI-2531:
--------------------------------------

Github user bbende commented on the issue:

    https://github.com/apache/nifi/pull/823
  
    Tested this out with MySQL, created a table like the following:
    
    ```
    mysql> CREATE TABLE BIGINT_TEST (id bigint(20) unsigned, name varchar(255));
    Query OK, 0 rows affected (0.02 sec)
    
    mysql> insert into BIGINT_TEST (id, name) values (22222222222222222222, "test");
    Query OK, 1 row affected, 1 warning (0.00 sec)
    
    mysql> select * from BIGINT_TEST;
    +----------------------+------+
    | id                   | name |
    +----------------------+------+
    | 18446744073709551615 | test |
    +----------------------+------+
    1 row in set (0.00 sec)
    ```
    Verified I got the error before your patch, then applied it and got a
    different error:
    
    ```
    java.lang.ArithmeticException: BigInteger out of long range
        at java.math.BigInteger.longValueExact(BigInteger.java:4383) ~[na:1.8.0_74]
        at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:125) ~[na:na]
    ```
    
    I understand there isn't much we can do here because the value is too
    big for a long, but would we be better off representing BIGINT as a
    string in the schema and output so that we never run into an error?
    
    I realize we would lose the typing then, but I'm not sure whether having
    some of the data typed and some failing with errors is better or worse.
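
    A minimal sketch of what that could look like, assuming the schema is
    built with Avro's SchemaBuilder the way JdbcCommon builds it today; the
    record and field names here are made up for illustration:

    ```
    import java.math.BigInteger;

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;

    public class BigIntAsStringSketch {
        public static void main(String[] args) {
            // Model the BIGINT column as ["null", "string"] instead of
            // ["null", "long"] so values beyond Long.MAX_VALUE still fit.
            Schema schema = SchemaBuilder.record("row").fields()
                    .name("id").type().unionOf().nullType()
                    .and().stringType().endUnion().noDefault()
                    .endRecord();

            // 2^64 - 1, the value MySQL stored in the test above; it
            // overflows a signed long but serializes fine as a string.
            BigInteger id = new BigInteger("18446744073709551615");
            GenericRecord rec = new GenericData.Record(schema);
            rec.put("id", id.toString());
            System.out.println(rec); // {"id": "18446744073709551615"}
        }
    }
    ```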


> SQL-to-Avro processors do not convert BIGINT correctly
> ------------------------------------------------------
>
>                 Key: NIFI-2531
>                 URL: https://issues.apache.org/jira/browse/NIFI-2531
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.0.0, 0.7.0
>            Reporter: Matt Burgess
>            Assignee: Matt Burgess
>             Fix For: 1.0.0
>
>
> For the SQL-to-Avro processors that use JdbcCommon (such as ExecuteSQL), if a
> BigInteger object is being put into an Avro record, it is put in as a String.
> However, when the Avro schema is created and the SQL type of the column is
> BIGINT, the schema contains the expected type "long" (actually a union of
> null and long, to allow for null values). This causes errors such as:
> UnresolvedUnionException: not in union: ["null", "long"]
> If a BigInteger is retrieved from the result set and the SQL type is BIGINT,
> its value is expected to fit into 8 bytes and should thus be converted to a
> long before being stored in the Avro record.
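
For reference, a minimal sketch of the conversion described above, assuming
the JDBC driver returns a java.math.BigInteger for BIGINT columns; the helper
and field name are hypothetical, while the actual change belongs in
JdbcCommon.convertToAvroStream:

```
import java.math.BigInteger;

import org.apache.avro.generic.GenericRecord;

public class BigIntToLongSketch {

    // Hypothetical helper: narrow a BigInteger from a BIGINT column to a
    // long before storing it in the Avro record, instead of putting it in
    // as a String and violating the ["null", "long"] union.
    static void putBigIntColumn(final GenericRecord rec, final String field, final Object value) {
        if (value instanceof BigInteger) {
            // longValueExact() throws ArithmeticException when the value
            // does not fit in 8 bytes (e.g. MySQL's unsigned 2^64 - 1).
            rec.put(field, ((BigInteger) value).longValueExact());
        } else {
            rec.put(field, value);
        }
    }
}
```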



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
