Yuanhao Zhu created NIFI-13633:
----------------------------------

             Summary: AllowScientificNotation default of false causes existing 
services to fail
                 Key: NIFI-13633
                 URL: https://issues.apache.org/jira/browse/NIFI-13633
             Project: Apache NiFi
          Issue Type: Bug
          Components: Core Framework
    Affects Versions: 1.27.0, 1.26.0
            Reporter: Yuanhao Zhu


Recently, after upgrading to NiFi 1.27.0 (we skipped 1.26.0 because the Azure 
Key Vault-related functions were not working properly due to dependency 
conflicts), our ConvertRecord processor (ParquetRecordReader and 
JsonRecordSetWriter) is no longer able to convert Parquet files and complains 
about not being able to handle "NaN".

 
2024-08-06 06:42:20,852 ERROR [Timer-Driven Process Thread-17] 
o.a.n.processors.standard.ConvertRecord 
ConvertRecord[id=2f9e8b7a-045c-301d-f79c-8f4b417eba69] Failed to write 
MapRecord[{sampling_rate=NaN, 
qid_mapping=2::0::11::0_1::1::0::3::0_1::0::6::0_8, 
ingestion_time=[Ljava.lang.Object;@2ac05a68, 
z_timestamp=[Ljava.lang.Object;@6239681b, value_string=null, asset_id=9668491, 
source=protobuf-bridge, error=NaN, value=0.0}] with reader schema ... and 
writer schema ... as a JSON Object due to java.lang.NumberFormatException: 
Character N is neither a decimal digit number, decimal point, nor "e" notation 
exponential mark.
 

After some investigation in your repository, I found that this happens because 
when the AllowScientificNotation option of the JsonRecordSetWriter is set to 
false (the default, which is why the record writer failed after the upgrade), 
the value is parsed by the BigDecimal class, which does not support "NaN".
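
For reference, the failure can be reproduced outside NiFi with a minimal 
standalone snippet (a sketch of the underlying JDK behaviour, not NiFi code): 
BigDecimal rejects "NaN" with exactly the NumberFormatException quoted above, 
while Double.parseDouble accepts it.

```java
import java.math.BigDecimal;

public class NaNParseDemo {
    public static void main(String[] args) {
        // A plain decimal string parses fine.
        System.out.println(new BigDecimal("0.0"));

        // "NaN" is not a valid BigDecimal literal, so this throws
        // NumberFormatException -- the same failure mode seen in the
        // ConvertRecord error above.
        try {
            new BigDecimal("NaN");
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }

        // Double.parseDouble, by contrast, accepts "NaN".
        System.out.println(Double.parseDouble("NaN"));
    }
}
```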

 

I'm not sure whether this is the expected behaviour, but I thought it would be 
good to let you know that this change can cause existing services to fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
