kbendick commented on issue #2765:
URL: https://github.com/apache/iceberg/issues/2765#issuecomment-907499637


   Hi @nautilus28. Sorry to leave this hanging for so long.
   
   There is a bug in Spark (it has been patched, but I don't think the fix has 
shipped in a release yet) where queries without any unresolved fields (such as 
named columns in the query) don't parse properly. There's also a bug where 
MERGE INTO queries are resolved by ordinal position (and not by field name) in 
Spark, which has also been patched, but I don't believe we'll see those patches 
upstream until Spark 3.2. That said, I don't think either of those is what 
you're hitting.
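   
   To make that second bug concrete, here is a minimal sketch (the table and 
column names are hypothetical) of the kind of MERGE INTO where aligning the 
assignments by ordinal position, rather than by name, could end up writing 
values into the wrong columns:
   
```scala
// Hypothetical Iceberg tables: db.target(id BIGINT, category STRING, amount BIGINT)
// and db.updates with the same columns. The UPDATE SET clause lists the columns in
// a different order than the target schema, which is exactly the situation where
// aligning assignments by position instead of by name would mismatch them.
spark.sql("""
  MERGE INTO db.target t
  USING db.updates s
  ON t.id = s.id
  WHEN MATCHED THEN
    UPDATE SET t.amount = s.amount, t.category = s.category
  WHEN NOT MATCHED THEN
    INSERT (id, category, amount) VALUES (s.id, s.category, s.amount)
""")
```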
   
   Is this just a concern about predicate pushdown? I do seem to recall that, 
at least in older versions, predicates aren't always pushed down when the types 
differ (especially, I think, when the values come from subqueries). I'd have to 
look into it again.
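   
   One quick way to check is to compare physical plans and see whether the 
predicate ends up on the Iceberg scan itself or in a separate Filter node above 
it. This is only a sketch (the table and column names are hypothetical), but 
it's the pattern I'd look at:
   
```scala
// Literal type matches the column type (say `amount` is a BIGINT), so the filter
// should be eligible for pushdown and appear on the scan node in the plan.
spark.sql("SELECT * FROM db.events WHERE amount = 42L").explain()

// Literal is wider than the column type (say `small_id` is an INT and the literal
// only fits in a BIGINT). Spark then compares cast(small_id AS BIGINT) against the
// literal, and a cast wrapped around the column is the kind of thing that can keep
// the predicate from being pushed down, at least in older Spark versions, leaving
// it in a post-scan Filter node instead.
spark.sql("SELECT * FROM db.events WHERE small_id = 3000000000L").explain()
```
   
   If the predicate only shows up in that post-scan Filter (often with the 
column wrapped in a cast), it isn't being pushed down to the Iceberg scan.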
   
   I'll see if I can recall (or find somebody who knows more than I do) the 
conditions under which predicate pushdown does or does not occur.
   
   If you can upgrade to Spark 3.1, that would be ideal. Now that we support 
Spark 3.1, I'll admit I'm always a bit suspicious of Spark 3.0.x behavior in 
general (though I can't confirm that's the issue here).




