[
https://issues.apache.org/jira/browse/KAFKA-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17438256#comment-17438256
]
Arjun Satish commented on KAFKA-10457:
--------------------------------------
Looks like we would need a new [data
type|https://github.com/apache/kafka/blob/3.0.0/connect/api/src/main/java/org/apache/kafka/connect/data/Schema.java#L45-L107]
to accommodate large integers. Right now, the only integer precisions we
support are INT8, INT16, INT32, and INT64.
Adding a new type would let
[JsonConverter|https://github.com/apache/kafka/blob/3.0.0/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L70-L73]
handle such values correctly.
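
For context, the truncation in the schemaless path boils down to narrowing a BigInteger to long, which wraps around in two's complement. A minimal, self-contained sketch (class name is illustrative, not Kafka code):

```java
import java.math.BigInteger;

public class LongTruncationDemo {
    public static void main(String[] args) {
        // One more than Long.MAX_VALUE, as in the failing test below
        BigInteger value = BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE);

        // BigInteger.longValue() keeps only the low 64 bits, so the
        // value wraps around to Long.MIN_VALUE
        long truncated = value.longValue();

        System.out.println(value);      // 9223372036854775808
        System.out.println(truncated);  // -9223372036854775808
    }
}
```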
> JsonConverter.toConnectData trims BigInteger to Long for schema-less case
> -------------------------------------------------------------------------
>
> Key: KAFKA-10457
> URL: https://issues.apache.org/jira/browse/KAFKA-10457
> Project: Kafka
> Issue Type: Bug
> Components: KafkaConnect
> Reporter: Oleksandr Diachenko
> Assignee: Oleksandr Diachenko
> Priority: Critical
>
>
> When _JsonConverter_ is configured with _schemas.enable=false_ and a value
> exceeding _Long.MAX_VALUE_ is passed, the result is incorrect because the
> converter truncates it to _Long:_
> {code:java}
> Map<String, Boolean> props = Collections.singletonMap("schemas.enable", false);
> converter.configure(props, true);
> BigInteger value = BigInteger.valueOf(Long.MAX_VALUE).add(new BigInteger("1"));
> String msg = value.toString();
> SchemaAndValue schemaAndValue = converter.toConnectData(TOPIC, msg.getBytes());
> assertNull(schemaAndValue.schema());
> assertEquals(value, schemaAndValue.value());
> {code}
>
> Fails with:
>
> {code:java}
> expected:<9223372036854775808> but was:<-9223372036854775808>
> Expected :9223372036854775808
> Actual :-9223372036854775808
> {code}
>
>
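
One way the schemaless path could preserve precision, sketched below with illustrative names (this is not Kafka's actual converter API): narrow to Long only when the parsed integer fits in a signed 64-bit value, and otherwise keep the BigInteger.

```java
import java.math.BigInteger;

// Hypothetical helper illustrating the idea; method and class names
// are made up for this sketch, not taken from JsonConverter.
public class SafeIntegralParse {
    static Object parseIntegral(String text) {
        BigInteger big = new BigInteger(text);
        // bitLength() excludes the sign bit, so values representable
        // as a signed 64-bit long have bitLength() < 64
        if (big.bitLength() < 64) {
            return big.longValue();  // safe narrowing, no wraparound
        }
        return big;                  // preserve full precision
    }

    public static void main(String[] args) {
        System.out.println(parseIntegral("42").getClass().getSimpleName());
        System.out.println(parseIntegral("9223372036854775808").getClass().getSimpleName());
    }
}
```

The `bitLength() < 64` test accepts both `Long.MIN_VALUE` and `Long.MAX_VALUE` while rejecting `Long.MAX_VALUE + 1`, so downstream consumers still see plain Longs for the common case.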
--
This message was sent by Atlassian Jira
(v8.3.4#803005)