Github user viirya commented on the issue:

    https://github.com/apache/spark/pull/16785
  
    > this looks like a very big hammer to solve this problem. Can't we try a
    > different approach?
    > I think we should try to avoid optimizing already optimized code snippets,
    > you might be able to do this using some kind of a fence. It would even be
    > better if we would have a recursive node.
    
    @cloud-fan @hvanhovell Ok. I've figured out how to add a check that reduces
the candidate constraints considered for aliasing. It achieves the same speed-up
(cutting the benchmark running time in half) without the parallel-collection
hammer.
    
    Could you find time to take a look? Thanks.


