GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22615#discussion_r222097177
  
    --- Diff: docs/building-spark.md ---
    @@ -49,25 +49,20 @@ To create a Spark distribution like those distributed by the
     to be runnable, use `./dev/make-distribution.sh` in the project root directory. It can be configured
     with Maven profile settings and so on like the direct Maven build. Example:
     
    -    ./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr -Phadoop-2.7 -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes
    +    ./dev/make-distribution.sh --name custom-spark --pip --r --tgz -Psparkr -Phive -Phive-thriftserver -Pmesos -Pyarn -Pkubernetes
     
     This will build a Spark distribution along with Python pip and R packages. For more information on usage, run `./dev/make-distribution.sh --help`.
     
     ## Specifying the Hadoop Version and Enabling YARN
     
     You can specify the exact version of Hadoop to compile against through the `hadoop.version` property.
    -If unset, Spark will build against Hadoop 2.6.X by default.
     
     You can enable the `yarn` profile and optionally set the `yarn.version` property if it is different
     from `hadoop.version`.
     
    -Examples:
    +Example:
     
    -    # Apache Hadoop 2.6.X
    -    ./build/mvn -Pyarn -DskipTests clean package
    -
    -    # Apache Hadoop 2.7.X and later
    -    ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
    +    ./build/mvn -Pyarn -Dhadoop.version=2.7.7 -DskipTests clean package
    --- End diff --
    
    Yeah, the point here was just to show a different Hadoop version. I could make it a 2.8.x version.
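    For reference, a 2.8.x variant of that example could look like the sketch below (the exact patch release, 2.8.5 here, is illustrative only and not taken from the diff):

        ./build/mvn -Pyarn -Dhadoop.version=2.8.5 -DskipTests clean package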


---
