tgravescs commented on pull request #31284:
URL: https://github.com/apache/spark/pull/31284#issuecomment-765478254


   I'm a bit confused by your description; it would be nice to add more detail. 
Looking at the code, I think what you are saying is that you read the value as a 
long from the parquet file but then downcast it to an int when writing it to the 
column vector, correct?  If you don't downcast to an int, then the 
OnHeapColumnVector blows up - but why does it blow up?
   
   Perhaps clarify what you mean by "write it" when you say "Spark will read it 
as a long but write it as an int by downcasting it."
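
   To make sure I'm reading it right, here is a minimal sketch of what I think is 
being described - this is my own illustration, not code from the PR, and it assumes 
the vector is allocated for IntegerType so it only reserves int storage:

   ```scala
   import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
   import org.apache.spark.sql.types.IntegerType

   object DowncastSketch {
     def main(args: Array[String]): Unit = {
       // Vector allocated for IntegerType, so only int storage is reserved.
       val vector = new OnHeapColumnVector(/* capacity = */ 4, IntegerType)

       // Hypothetical value decoded from the parquet page as a long.
       val decoded: Long = 42L

       // Writing it requires the downcast in question.
       vector.putInt(0, decoded.toInt)
       println(vector.getInt(0)) // 42

       // vector.putLong(0, decoded) // presumably this is what blows up,
       //                            // since no long storage was allocated
       vector.close()
     }
   }
   ```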


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]