Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18207#discussion_r120308919
  
    --- Diff: docs/index.md ---
    @@ -26,15 +26,13 @@ Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). It's easy
     locally on one machine --- all you need is to have `java` installed on your system `PATH`,
     or the `JAVA_HOME` environment variable pointing to a Java installation.
     
    -Spark runs on Java 8+, Python 2.6+/3.4+ and R 3.1+. For the Scala API, Spark {{site.SPARK_VERSION}}
    +Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark {{site.SPARK_VERSION}}
     uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
     ({{site.SCALA_BINARY_VERSION}}.x).
     
    -Note that support for Java 7 was removed as of Spark 2.2.0.
    +Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0.
    --- End diff --
    
    It sounds like Python 2.6 is still more supported than not in 2.2.x, but we should call it unsupported for 2.3.x.
    
    Let's say that Python 2.6 support was really _removed_ for 2.3.0, rather than 2.2.0.
    It's fine to add a note about Hadoop versions before 2.6 being unsupported in 2.2.0; that's true.
    
    Then we can merge this in master. It's not essential to merge it for 2.2, as the existing text is not actually wrong.


