Github user robbinspg commented on the pull request:

    https://github.com/apache/spark/pull/10421#issuecomment-168919276
  
    Merging this into the 1.6 stream has caused a failure in the test
    
    org.apache.spark.sql.execution.ExchangeCoordinatorSuite: "determining the
    number of reducers: aggregate operator"
    
    A change in master updated the expected partition sizes in 
    ExchangeCoordinatorSuite to new values. I do not understand why the change in 
    this PR affects the input partition sizes, but it does.
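    For context on why the suite is sensitive to partition sizes: the 
coordinator decides the number of post-shuffle partitions by merging adjacent 
pre-shuffle partitions until a target input size is reached, so any change to 
the observed byte sizes can shift the expected partition count. A minimal 
sketch of that packing logic (illustrative names only, not Spark's actual 
API):

    ```java
    // Hedged sketch (not Spark's actual code): pack adjacent pre-shuffle
    // partition sizes (in bytes) into post-shuffle partitions, starting a
    // new partition once adding the next input would exceed the target size.
    public class CoalesceSketch {
        public static int numPostShufflePartitions(long[] sizes, long targetSize) {
            int partitions = 0;
            long current = 0L;
            for (long s : sizes) {
                // Close the current partition if this input would push it past the target.
                if (current > 0L && current + s > targetSize) {
                    partitions++;
                    current = 0L;
                }
                current += s;
            }
            if (current > 0L) {
                partitions++; // the final, partially filled partition
            }
            return partitions;
        }
    }
    ```

    Under this model, changing the per-partition input sizes (as this PR 
apparently does) directly changes the resulting partition count the suite 
asserts on.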
    
    I think this is a test issue rather than an issue with this PR. Should I 
    open a new JIRA to fix the expected partition sizes?
