akhalymon-cv commented on pull request #34709:
URL: https://github.com/apache/spark/pull/34709#issuecomment-979084706


   @peter-toth thanks for looking into the patch. Regarding partitioning - 
that's a valid point; I pruned myWhereClause too hastily without considering that it 
handles partitioning. Sampling is only supported by Postgres, so it doesn't 
affect this use case much. As for limiting, maybe the user can specify it 
manually in the query? One option I can think of is modifying the query to 
include the partitioning conditions, which may not be easy to do. As far as I 
understand now, we won't be able to add partitioning without nesting the 
user query, which is what we are already doing (see the sketch below). @peter-toth what do you think?
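
   To make the nesting idea concrete, here is a rough sketch (not the actual Spark internals; the alias name and the partition predicate are purely illustrative) of how a user-supplied query would have to be wrapped as a subquery so that a per-partition predicate can be appended to it:

   ```scala
   object NestedQuerySketch {
     // Wrap the user's query as a subquery so that an extra WHERE clause
     // (e.g. a partition predicate) can be appended without rewriting the query itself.
     def partitionedQuery(userQuery: String, partitionPredicate: String): String =
       s"SELECT * FROM ($userQuery) spark_gen_alias WHERE $partitionPredicate"

     def main(args: Array[String]): Unit = {
       val userQuery = "SELECT id, name FROM people WHERE active = true"
       // Hypothetical bounds for one of the JDBC partitions
       println(partitionedQuery(userQuery, "id >= 0 AND id < 1000"))
       // SELECT * FROM (SELECT id, name FROM people WHERE active = true) spark_gen_alias
       //   WHERE id >= 0 AND id < 1000
     }
   }
   ```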


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


