Github user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18207#discussion_r120393293
  
    --- Diff: docs/index.md ---
    @@ -26,15 +26,13 @@ Spark runs on both Windows and UNIX-like systems (e.g. 
Linux, Mac OS). It's easy
     locally on one machine --- all you need is to have `java` installed on 
your system `PATH`,
     or the `JAVA_HOME` environment variable pointing to a Java installation.
     
    -Spark runs on Java 8+, Python 2.6+/3.4+ and R 3.1+. For the Scala API, 
Spark {{site.SPARK_VERSION}}
    +Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, 
Spark {{site.SPARK_VERSION}}
     uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a 
compatible Scala version
     ({{site.SCALA_BINARY_VERSION}}.x).
     
    -Note that support for Java 7 was removed as of Spark 2.2.0.
    +Note that support for Java 7, Python 2.6 and old Hadoop versions before 
2.6.5 were removed as of Spark 2.2.0.
    --- End diff --
    
    Did you test on Python 2.6 during RC4 testing, and does it happen to still work in 2.2.0?
    Actually, Jenkins has not run those tests for over two months (since March 29th). As @JoshRosen said, there is no guard anymore to guarantee it remains supported in 2.2.1 or later.
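    The "guard" discussed above refers to a check that fails fast on an unsupported interpreter. A minimal sketch of that kind of runtime guard, assuming the minimum versions stated in the diff (Python 2.7+/3.4+ for Spark 2.2.0); this is an illustration, not Spark's actual startup code:

    ```python
    import sys

    # Minimum supported versions, taken from the docs change under review
    # (assumptions for illustration, not Spark's real build configuration).
    MIN_PY2 = (2, 7)
    MIN_PY3 = (3, 4)

    def check_python_version(version_info=sys.version_info):
        """Return True if the interpreter meets the documented minimum."""
        major, minor = version_info[0], version_info[1]
        if major == 2:
            return (major, minor) >= MIN_PY2
        return (major, minor) >= MIN_PY3

    if not check_python_version():
        raise RuntimeError(
            "Python %d.%d is not supported; Spark 2.2.0 requires "
            "Python 2.7+ or 3.4+" % (sys.version_info[0], sys.version_info[1])
        )
    ```

    Without a check like this running in CI, a drop in 2.6 compatibility can go unnoticed, which is the concern raised above.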


