GitHub user srowen opened a pull request:

    https://github.com/apache/spark/pull/2814

    SPARK-1209 [CORE] SparkHadoopUtil should not use package org.apache.hadoop

    (This is just a look at what completely moving the classes would look like.
    I know Patrick flagged that as maybe not OK, although it's private?)
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/srowen/spark SPARK-1209

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/2814.patch
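
For example, one way to fetch and apply that patch locally (a sketch, not part
of the original instructions; it assumes curl is available and your working
tree is clean) is:

    $ curl -sSL https://github.com/apache/spark/pull/2814.patch | git am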

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2814
    
----
commit b5b82e28724f94573644a68a794ce33649ea6ea3
Author: Sean Owen <[email protected]>
Date:   2014-10-13T19:43:36Z

    Refer to "self-contained" rather than "standalone" apps to avoid confusion
    with standalone deployment mode. And fix placement of reference to this in
    MLlib docs.

commit ec1b84a331d2cfe1a9b2dc2553a117026728bb01
Author: Sean Owen <[email protected]>
Date:   2014-10-13T19:48:42Z

    Move SparkHadoopMapRedUtil / SparkHadoopMapReduceUtil from
    org.apache.hadoop to org.apache.spark

----
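
As a rough sketch, the package move in the second commit amounts to something
like the following (illustrative only, not the actual diff from this PR; the
trait member shown is a placeholder):

    // Before: the trait lived under Hadoop's namespace, e.g.
    //   package org.apache.hadoop.mapred
    //   trait SparkHadoopMapRedUtil { ... }

    // After: the same trait under Spark's own namespace
    package org.apache.spark.mapred

    import org.apache.hadoop.mapred.TaskAttemptID

    private[spark] trait SparkHadoopMapRedUtil {
      // Placeholder helper; the real trait's members are not reproduced here
      def newTaskAttemptID(jtIdentifier: String, jobId: Int, isMap: Boolean,
                           taskId: Int, attemptId: Int): TaskAttemptID = ???
    }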


