cloud-fan commented on pull request #31284:
URL: https://github.com/apache/spark/pull/31284#issuecomment-769154273


   > The reason this happens is that the `readLong` method tries to write the long 
to a `WritableColumnVector` that was initialized to accept only ints, which 
leads to a `NullPointerException`.
   
   Why did Spark allocate an int column vector in the first place? If the 
Parquet field is INT64, we should use a long column vector.
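
   To illustrate the failure mode being discussed, here is a minimal, 
self-contained sketch (hypothetical classes, not Spark's actual 
`WritableColumnVector` API) of why writing a long into a vector allocated 
for ints surfaces as a `NullPointerException`: only the backing array for 
the declared type is allocated, so the long array is still null.

   ```java
   // Hypothetical mini column vector mimicking the pattern: only the array
   // matching the declared type is allocated at construction time.
   class MiniColumnVector {
       private int[] intData;
       private long[] longData;

       MiniColumnVector(String type, int capacity) {
           if (type.equals("int")) {
               intData = new int[capacity];   // long array stays null
           } else if (type.equals("long")) {
               longData = new long[capacity]; // int array stays null
           }
       }

       void putInt(int rowId, int value) {
           intData[rowId] = value;
       }

       // If the vector was declared as "int", longData is null here,
       // so this dereference throws NullPointerException.
       void putLong(int rowId, long value) {
           longData[rowId] = value;
       }
   }

   public class Main {
       public static void main(String[] args) {
           // Vector sized for ints, but the reader calls putLong on it,
           // mirroring readLong writing an INT64 value into an int vector.
           MiniColumnVector v = new MiniColumnVector("int", 4);
           try {
               v.putLong(0, 42L);
           } catch (NullPointerException e) {
               System.out.println("NPE: long array was never allocated");
           }
       }
   }
   ```

   The fix direction suggested above is to pick the vector's element type 
from the Parquet physical type (INT64 -> long) at allocation time, so the 
correct backing array exists before any `putLong` call.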


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
