openinx opened a new pull request #1271:
URL: https://github.com/apache/iceberg/pull/1271


   This PR addresses the bug reported in https://github.com/apache/iceberg/issues/1269. 
It mainly fixes two sub-issues: 
   
   1.  When writing a decimal (precision <= 18) into a Hive ORC file, the ORC 
writer may scale the decimal down by stripping trailing zeros. For example, given 
the value 10.100 of type `Decimal(10, 3)`, Hive ORC removes the trailing zeros and 
stores it as 101*10^(-1), i.e. precision 3 and scale 1. The scale of the decimal 
read back from the Hive ORC file is therefore not guaranteed to equal 3, so both 
the Spark ORC reader and the generic ORC reader need to rescale the value to the 
declared scale of 3. Otherwise, the unit tests break. 
   
   2. The long value of a zoned timestamp can be negative, but the Spark ORC 
reader/writer did not consider this case and used plain `/` and `%` for the 
arithmetic. It should use `Math.floorDiv` and `Math.floorMod` instead, which 
round toward negative infinity. 
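
   The rescaling in point 1 can be sketched with plain `java.math.BigDecimal` 
(this is an illustration of the behavior, not the actual Iceberg reader code):

```java
import java.math.BigDecimal;

public class DecimalRescaleSketch {
    public static void main(String[] args) {
        // Hive ORC may strip trailing zeros: 10.100 is stored as 101 * 10^(-1),
        // so reading it back yields a value with scale 1, not the declared scale 3.
        BigDecimal stored = new BigDecimal("10.1"); // scale 1, as read from the file

        // The reader must rescale to the declared type Decimal(10, 3).
        BigDecimal rescaled = stored.setScale(3);   // 10.100, scale 3

        System.out.println(rescaled);               // prints 10.100
        // BigDecimal.equals compares scale as well as value, which is why the
        // unit tests fail without the rescaling step.
        System.out.println(rescaled.equals(new BigDecimal("10.100"))); // true
        System.out.println(stored.equals(new BigDecimal("10.100")));   // false
    }
}
```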
   
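   The difference in point 2 shows up as soon as the timestamp is before the 
epoch. A minimal sketch, splitting epoch microseconds into seconds and the 
sub-second remainder:

```java
public class NegativeTimestampSketch {
    public static void main(String[] args) {
        long micros = -1_000_001L; // 1.000001 seconds before the epoch

        // Truncating division rounds toward zero and gives a negative remainder:
        System.out.println(micros / 1_000_000L);  // -1
        System.out.println(micros % 1_000_000L);  // -1

        // Floor-based arithmetic rounds toward negative infinity and keeps the
        // remainder in [0, 1_000_000), which is what a seconds/micros split needs:
        System.out.println(Math.floorDiv(micros, 1_000_000L)); // -2
        System.out.println(Math.floorMod(micros, 1_000_000L)); // 999999
    }
}
```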


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
