Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/15848
  
    In my personal opinion, I think we should fix these together here. Maybe we could:
    
    - Run the tests with the `--fail-never` flag in Maven in order to list 
the failing tests. If they are few and fixable, then we can fix them here.
    
    - If they are few but it is unclear why they fail, I think we could mark 
them `ignore` here (personally I do not prefer this, though I guess it _might_ 
be acceptable).
    
    - If there are a lot of them (or there are assumed to be), I think we 
should fix them incrementally, one by one.
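    
    For reference, a sketch of the first step. `--fail-never` is a real Maven 
flag that lets the build continue past failing modules so every failure is 
collected in one run; the `-pl sql/hive` module selection below is only an 
example, not a suggestion of where the failures are.
    
    ```shell
    # Build once without tests, then run the tests without stopping at the
    # first failing module; --fail-never keeps going so we can list every
    # failure at the end of the run. The -pl module here is just an example.
    ./build/mvn -DskipTests clean package
    ./build/mvn --fail-never test -pl sql/hive
    ```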
    
    I think marking them ignored sounds pretty unsafe if there are a lot. Those 
tests exist to catch regressions, and we would not be able to detect a 
regression until the tests are actually run. The Spark 2.1 release is close, 
and I am worried we will meet unexpected test failures when this is actually 
backported.
    
    If this PR blocks the release, ignoring them _might_ be acceptable if it 
were the only choice, but that does not seem to be the case.

