avocader opened a new pull request #9320: URL: https://github.com/apache/kafka/pull/9320
The `org.apache.kafka.connect.data.Values#parse` method parses integers larger than `Long.MAX_VALUE` as `double` with `Schema.FLOAT64_SCHEMA`, so precision is lost for these larger integers. For example:

```
SchemaAndValue schemaAndValue = Values.parseString("9223372036854775808");
```

returns:

```
SchemaAndValue{schema=Schema{FLOAT64}, value=9.223372036854776E18}
```

The method also parses values that could be represented as `FLOAT32` with `FLOAT64`.

This PR changes the parsing logic to use `FLOAT32`/`FLOAT64` only for numbers that have a fraction part (`decimal.scale() != 0`), and to use the arbitrary-precision `org.apache.kafka.connect.data.Decimal` otherwise. It also updates the method to parse numbers that can be represented as `float` with `FLOAT32` instead of `FLOAT64`. Unit tests were added covering parsing of `BigInteger`, `Byte`, `Short`, `Integer`, `Long`, `Float`, and `Double` values. A minimal sketch of the proposed schema selection follows the checklist below.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
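For illustration only, here is a minimal sketch of the schema-selection logic described above, using a hypothetical standalone `parseNumber` helper. The actual change lives inside `Values#parse`, and the exact range and float round-trip checks there may differ.

```
import java.math.BigDecimal;

import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;

public class NumericParsingSketch {

    // Hypothetical helper showing the proposed schema selection; not the patch itself.
    static SchemaAndValue parseNumber(String token) {
        BigDecimal decimal = new BigDecimal(token);
        if (decimal.scale() != 0) {
            // Has a fraction part: keep the floating-point schemas, preferring
            // FLOAT32 when the double value survives a round trip through float.
            double d = decimal.doubleValue();
            float f = (float) d;
            if ((double) f == d) {
                return new SchemaAndValue(Schema.FLOAT32_SCHEMA, f);
            }
            return new SchemaAndValue(Schema.FLOAT64_SCHEMA, d);
        }
        // Integral value: use the smallest fixed-width integer schema that fits ...
        try {
            return new SchemaAndValue(Schema.INT8_SCHEMA, decimal.byteValueExact());
        } catch (ArithmeticException e) { /* does not fit in INT8 */ }
        try {
            return new SchemaAndValue(Schema.INT16_SCHEMA, decimal.shortValueExact());
        } catch (ArithmeticException e) { /* does not fit in INT16 */ }
        try {
            return new SchemaAndValue(Schema.INT32_SCHEMA, decimal.intValueExact());
        } catch (ArithmeticException e) { /* does not fit in INT32 */ }
        try {
            return new SchemaAndValue(Schema.INT64_SCHEMA, decimal.longValueExact());
        } catch (ArithmeticException e) { /* does not fit in INT64 */ }
        // ... and fall back to the arbitrary-precision Decimal logical type
        // instead of FLOAT64, so no digits are lost.
        return new SchemaAndValue(Decimal.schema(decimal.scale()), decimal);
    }

    public static void main(String[] args) {
        // Long.MAX_VALUE + 1 now keeps all of its digits instead of becoming a FLOAT64.
        System.out.println(parseNumber("9223372036854775808"));
    }
}
```

With this selection, `parseNumber("9223372036854775808")` yields a `Decimal`-typed `BigDecimal` rather than a lossy `FLOAT64`, while values with a fraction part still map to `FLOAT32`/`FLOAT64`.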