timeabarna commented on a change in pull request #5358:
URL: https://github.com/apache/nifi/pull/5358#discussion_r702616830



##########
File path: nifi-nar-bundles/nifi-hive-bundle/nifi-hive3-processors/src/main/java/org/apache/nifi/util/hive/HiveJdbcCommon.java
##########
@@ -149,7 +155,16 @@ public static long convertToAvroStream(final ResultSet rs, final OutputStream ou
                         // org.apache.avro.AvroRuntimeException: Unknown datum type java.lang.Byte
                         rec.put(i - 1, ((Byte) value).intValue());
 
-                    } else if (value instanceof BigDecimal || value instanceof BigInteger) {
+                    } else if (value instanceof BigDecimal) {
+                        if (useLogicalTypes) {
+                            final int precision = meta.getPrecision(i) > 1 ? meta.getPrecision(i) : 10;
+                            final int scale = meta.getScale(i) > 0 ? meta.getScale(i) : 0;
+                            rec.put(i - 1, AvroTypeUtil.convertToAvroObject(value, LogicalTypes.decimal(precision, scale).addToSchema(Schema.create(Schema.Type.BYTES))));
+                        } else {
+                            rec.put(i - 1, value.toString());
+                        }
+
+                    } else if (value instanceof BigInteger) {

Review comment:
       @mattyb149 Thanks for your recommendations. As far as I can see, JdbcCommon writes BigInteger values to Avro as either long or string regardless of whether Avro Logical Types are enabled. Wouldn't applying the same behavior to HiveJdbcCommon cause backward compatibility issues? A flow already built around BigInteger being serialized as a string could break if the value sometimes arrives as a long. Would you prefer this conversion to happen only when logical types are turned on?
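To make the question concrete, here is a minimal, self-contained sketch of the gating being discussed. The helper name `convertBigInteger` and the flag-based dispatch are hypothetical illustrations, not NiFi code: with `useLogicalTypes` on, the value would go out as a long; with it off, the existing string behavior would be preserved for backward compatibility.

```java
import java.math.BigInteger;

public class BigIntConversionSketch {

    // Hypothetical helper sketching the proposed gating: emit BigInteger
    // as a long only when logical types are enabled, otherwise keep the
    // legacy string representation so existing flows are not broken.
    static Object convertBigInteger(final BigInteger value, final boolean useLogicalTypes) {
        if (useLogicalTypes) {
            // longValueExact() throws ArithmeticException if the value
            // does not fit in 64 bits, rather than silently truncating.
            return value.longValueExact();
        }
        return value.toString();
    }

    public static void main(String[] args) {
        System.out.println(convertBigInteger(BigInteger.valueOf(42), true));
        System.out.println(convertBigInteger(BigInteger.valueOf(42), false));
    }
}
```

Under this sketch, flows that rely on the string form keep working unless they explicitly opt in to logical types.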




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]