Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/16586
  
    They all pass when run individually with `test-only` (please check the logs above).
    
    ```
    org.apache.spark.scheduler.SparkListenerSuite:
     - local metrics (8 seconds, 656 milliseconds)
    
    org.apache.spark.sql.hive.execution.HiveQuerySuite:
     - constant null testing (531 milliseconds)
    
    org.apache.spark.sql.hive.execution.AggregationQuerySuite:
     - udaf with all data types (4 seconds, 285 milliseconds)
    
    org.apache.spark.sql.hive.StatisticsSuite:
     - verify serialized column stats after analyzing columns (2 seconds, 844 milliseconds)
    
    org.apache.spark.sql.hive.execution.SQLQuerySuite:
     - dynamic partition value test (1 second, 407 milliseconds)
     - SPARK-6785: HiveQuerySuite - Date cast (188 milliseconds)
    ```
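    
    For reference, running a suite individually with `test-only` looks roughly like the following; the sbt project prefixes and exact form below are illustrative and may need adjusting for a given checkout:
    
    ```
    build/sbt "core/test-only org.apache.spark.scheduler.SparkListenerSuite"
    build/sbt "hive/test-only org.apache.spark.sql.hive.execution.HiveQuerySuite"
    ```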
    
    Although I am still wondering why these particular tests seem flakier than the others (judging from observations in the builds), I think it is fair to say that, at least in the way I run them, the Spark tests are able to pass on Windows.
    
    If this sounds reasonable, let me remove `[WIP]` and try to make these tests more stable on Windows in the future.

