lxian commented on pull request #31998:
URL: https://github.com/apache/spark/pull/31998#issuecomment-850280282


   > @lxian In the current approach we'd have to copy values from one vector to 
another. I think a better and more efficient approach may be to feed the row 
indexes to `VectorizedRleValuesReader#readXXX` and skip rows if they are not in 
the range, so basically we increment both `rowId` and row indexes in parallel.
   
   In `VectorizedRleValuesReader` the values are read either in a batch or in a simple loop. I am wondering whether putting the row-index filtering there would slow it down.
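   To make the proposed approach concrete, here is a minimal sketch of the idea of advancing `rowId` and a sorted list of wanted row indexes in parallel, keeping only matching rows. The class and method names are hypothetical, not the actual `VectorizedRleValuesReader` API, and it operates on an already-decoded batch rather than the RLE stream:

```java
import java.util.Arrays;

// Hypothetical sketch (not the real VectorizedRleValuesReader API):
// walk a decoded batch of values and a sorted array of wanted row
// indexes in lockstep, copying only the rows whose index is wanted.
public class RowIndexSkipSketch {

    /**
     * @param rowId      row index of values[0]; values covers rows
     *                   [rowId, rowId + values.length)
     * @param values     decoded values for the batch
     * @param rowIndexes sorted row indexes the caller wants to keep
     * @return           values for the kept rows, in order
     */
    static int[] filterByRowIndex(int rowId, int[] values, long[] rowIndexes) {
        int[] out = new int[values.length];
        int outPos = 0;
        int idxPos = 0;  // cursor into rowIndexes, advanced in parallel
        for (int i = 0; i < values.length; i++) {
            long currentRow = rowId + i;
            // skip wanted indexes that fall before the current row
            while (idxPos < rowIndexes.length && rowIndexes[idxPos] < currentRow) {
                idxPos++;
            }
            if (idxPos < rowIndexes.length && rowIndexes[idxPos] == currentRow) {
                out[outPos++] = values[i];  // row is wanted: keep its value
            }
            // otherwise the row is dropped without any extra copy
        }
        return Arrays.copyOf(out, outPos);
    }

    public static void main(String[] args) {
        int[] batch = {10, 11, 12, 13, 14};   // values for rows 100..104
        long[] wanted = {101, 103, 104};
        System.out.println(Arrays.toString(filterByRowIndex(100, batch, wanted)));
        // prints [11, 13, 14]
    }
}
```

   Since both `rowId` and the index cursor only move forward, the whole batch is filtered in a single O(n) pass, which is the property that makes pushing the filter into the batch/loop read paths plausible.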




