GitHub user witgo reopened a pull request:

    https://github.com/apache/spark/pull/1022

    SPARK-1719: spark.*.extraLibraryPath isn't applied on yarn

    Fix: spark.executor.extraLibraryPath isn't applied on yarn

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/witgo/spark SPARK-1719

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/1022.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1022
    
----
commit b23e9c3e4085c0a7faf2c51fd350ad1233aa7a40
Author: Prashant Sharma <[email protected]>
Date:   2014-07-11T18:52:35Z

    [SPARK-2437] Rename MAVEN_PROFILES to SBT_MAVEN_PROFILES and add 
SBT_MAVEN_PROPERTIES
    
    NOTE: It is not possible to use both the env variable `SBT_MAVEN_PROFILES` 
and the `-P` flag at the same time. If specified, `-P` takes precedence.
    
    Author: Prashant Sharma <[email protected]>
    
    Closes #1374 from ScrapCodes/SPARK-2437/rename-MAVEN_PROFILES and squashes 
the following commits:
    
    8694bde [Prashant Sharma] [SPARK-2437] Rename MAVEN_PROFILES to 
SBT_MAVEN_PROFILES and add SBT_MAVEN_PROPERTIES

commit cbff18774b0a2f346901ddf2f566be50561a57c7
Author: Kousuke Saruta <[email protected]>
Date:   2014-07-12T04:10:26Z

    [SPARK-2457] Inconsistent description in README about build option
    
    Now we should use -Pyarn instead of SPARK_YARN when building, but the 
README still says the following:
    
        For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.x, Cloudera CDH MRv2, and 
other Hadoop versions
        with YARN, also set `SPARK_YARN=true`:
    
          # Apache Hadoop 2.0.5-alpha
          $ sbt/sbt -Dhadoop.version=2.0.5-alpha -Pyarn assembly
    
          # Cloudera CDH 4.2.0 with MapReduce v2
          $ sbt/sbt -Dhadoop.version=2.0.0-cdh4.2.0 -Pyarn assembly
    
          # Apache Hadoop 2.2.X and newer
          $ sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly
    
    Author: Kousuke Saruta <[email protected]>
    
    Closes #1382 from sarutak/SPARK-2457 and squashes the following commits:
    
    e7b2d64 [Kousuke Saruta] Replaced "SPARK_YARN=true" with "-Pyarn" in README

commit 55960869358d4f8aa5b2e3b17d87b0b02ba9acdd
Author: DB Tsai <[email protected]>
Date:   2014-07-12T06:04:43Z

    [SPARK-1969][MLlib] Online summarizer APIs for mean, variance, min, and max
    
    This basically moves the private ColumnStatisticsAggregator class out of 
RowMatrix into a publicly available DeveloperApi, with documentation and unit 
tests.
    
    Changes:
    1) Moved the private implementation from 
org.apache.spark.mllib.linalg.ColumnStatisticsAggregator to 
org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
    2) When creating a MultivariateOnlineSummarizer object, the number of 
columns is not needed in the constructor; it is determined when users add the 
first sample.
    3) Added API documentation for MultivariateOnlineSummarizer.
    4) Added unit tests for MultivariateOnlineSummarizer.
    
    Author: DB Tsai <[email protected]>
    
    Closes #955 from dbtsai/dbtsai-summarizer and squashes the following 
commits:
    
    b13ac90 [DB Tsai] dbtsai-summarizer

commit d38887b8a0d00a11d7cf9393e7cb0918c3ec7a22
Author: Li Pu <[email protected]>
Date:   2014-07-12T06:26:47Z

    use specialized axpy in RowMatrix for SVD
    
    After running some more tests on a large matrix, I found that the BV axpy 
(breeze/linalg/Vector.scala, axpy) is slower than the BSV axpy 
(breeze/linalg/operators/SparseVectorOps.scala, sv_dv_axpy): 8s vs. 2s for 
each multiplication. The BV axpy operates on an iterator while the BSV axpy 
directly operates on the underlying array. I think the overhead comes from 
creating the iterator (with a zip) and advancing the pointers.
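
    For illustration (not part of this patch), a minimal sketch of reaching 
the specialized kernel via Breeze, assuming Breeze on the classpath:

      import breeze.linalg.{axpy, DenseVector, SparseVector}

      val y = DenseVector.zeros[Double](5)
      val x = new SparseVector(Array(0, 3), Array(1.0, 2.0), 5)
      // y += 0.1 * x; a SparseVector argument dispatches to sv_dv_axpy,
      // which walks the underlying arrays directly instead of an iterator.
      axpy(0.1, x, y)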
    
    Author: Li Pu <[email protected]>
    Author: Xiangrui Meng <[email protected]>
    Author: Li Pu <[email protected]>
    
    Closes #1378 from vrilleup/master and squashes the following commits:
    
    6fb01a3 [Li Pu] use specialized axpy in RowMatrix
    5255f2a [Li Pu] Merge remote-tracking branch 'upstream/master'
    7312ec1 [Li Pu] very minor comment fix
    4c618e9 [Li Pu] Merge pull request #1 from mengxr/vrilleup-master
    a461082 [Xiangrui Meng] make superscript show up correctly in doc
    861ec48 [Xiangrui Meng] simplify axpy
    62969fa [Xiangrui Meng] use BDV directly in symmetricEigs change the 
computation mode to local-svd, local-eigs, and dist-eigs update tests and docs
    c273771 [Li Pu] automatically determine SVD compute mode and parameters
    7148426 [Li Pu] improve RowMatrix multiply
    5543cce [Li Pu] improve svd api
    819824b [Li Pu] add flag for dense svd or sparse svd
    eb15100 [Li Pu] fix binary compatibility
    4c7aec3 [Li Pu] improve comments
    e7850ed [Li Pu] use aggregate and axpy
    827411b [Li Pu] fix EOF new line
    9c80515 [Li Pu] use non-sparse implementation when k = n
    fe983b0 [Li Pu] improve scala style
    96d2ecb [Li Pu] improve eigenvalue sorting
    e1db950 [Li Pu] SPARK-1782: svd for sparse matrix using ARPACK

commit 2245c87af4f507cda361e16f322a14eac25b38fd
Author: Daniel Darabos <[email protected]>
Date:   2014-07-12T07:07:42Z

    Use the Executor's ClassLoader in sc.objectFile().
    
    This makes it possible to read classes from the object file that were 
defined in the user-provided jars. (By default ObjectInputStream uses 
latestUserDefinedLoader, which may or may not be the right one.)
    
    I created this because I ran into the following problem. I have x: RDD[X] 
with X defined in the jar that I provide to SparkContext. I save it with 
x.saveAsObjectFile("x") and try to load it with sc.objectFile[X]("x"). It 
fails with ClassNotFoundException.
    
    After a good while of debugging I figured out that Utils.deserialize() most 
likely uses the ClassLoader of Utils. This is the bootstrap ClassLoader, so it 
is not aware of the dynamically added jars. This patch fixes the issue.
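
    For reference, a minimal sketch (not the actual patch) of deserializing 
with an explicit ClassLoader:

      import java.io._

      def deserialize[T](bytes: Array[Byte], loader: ClassLoader): T = {
        val in = new ObjectInputStream(new ByteArrayInputStream(bytes)) {
          // Resolve classes against the given loader instead of
          // latestUserDefinedLoader.
          override def resolveClass(desc: ObjectStreamClass): Class[_] =
            Class.forName(desc.getName, false, loader)
        }
        in.readObject().asInstanceOf[T]
      }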
    
    A more robust fix would be to always default to 
Thread.currentThread.getContextClassLoader. This would prevent this problem 
from biting anyone in the future. It would be a bit harder to test though. On 
the topic of testing, if you'd like to see tests for this, I will need some 
hand-holding. Thanks!
    
    Author: Daniel Darabos <[email protected]>
    
    Closes #181 from darabos/master and squashes the following commits:
    
    45a011a [Daniel Darabos] Add test for SPARK-1877. (Fixed in 52eb54d.)
    e13e090 [Daniel Darabos] Merge branch 'master' of 
https://github.com/apache/spark
    61fe0d0 [Daniel Darabos] Fix style (line too long).
    1b5df2c [Daniel Darabos] Use the Executor's ClassLoader in sc.objectFile(). 
This makes it possible to read classes from the object file which were 
specified in the user-provided jars. (By default ObjectInputStream uses 
latestUserDefinedLoader, which may or may not be the right one.)

commit 7a0135293192aaefc6ae20b57e15a90945bd8a4e
Author: Ankur Dave <[email protected]>
Date:   2014-07-12T19:05:34Z

    [SPARK-2455] Mark (Shippable)VertexPartition serializable
    
    VertexPartition and ShippableVertexPartition are contained in RDDs but are 
not marked Serializable, leading to NotSerializableExceptions when using Java 
serialization.
    
    The fix is simply to mark them as Serializable. This PR does that and adds 
a test for serializing them using Java and Kryo serialization.
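
    An illustrative sketch of the shape of the fix, using a hypothetical Part 
class rather than the real one:

      import java.io.{ByteArrayOutputStream, ObjectOutputStream}

      // The fix amounts to mixing in Serializable:
      class Part(val ids: Array[Long]) extends Serializable

      val out = new ByteArrayOutputStream()
      new ObjectOutputStream(out).writeObject(new Part(Array(1L, 2L)))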
    
    Author: Ankur Dave <[email protected]>
    
    Closes #1376 from ankurdave/SPARK-2455 and squashes the following commits:
    
    ed4a51b [Ankur Dave] Make (Shippable)VertexPartition serializable
    1fd42c5 [Ankur Dave] Add failing tests for Java serialization

commit 7e26b57615f6c1d3f9058f9c19c05ec91f017f4c
Author: Michael Armbrust <[email protected]>
Date:   2014-07-12T19:07:27Z

    [SPARK-2441][SQL] Add more efficient distinct operator.
    
    Author: Michael Armbrust <[email protected]>
    
    Closes #1366 from marmbrus/partialDistinct and squashes the following 
commits:
    
    12a31ab [Michael Armbrust] Add more efficient distinct operator.

commit 1a7d7cc85fb24de21f1cde67d04467171b82e845
Author: Michael Armbrust <[email protected]>
Date:   2014-07-12T19:13:32Z

    [SPARK-2405][SQL] Reuse the same byte buffers when creating a new 
instance of InMemoryRelation
    
    Reuse byte buffers when creating unique attributes for multiple instances 
of an InMemoryRelation in a single query plan.
    
    Author: Michael Armbrust <[email protected]>
    
    Closes #1332 from marmbrus/doubleCache and squashes the following commits:
    
    4a19609 [Michael Armbrust] Clean up concurrency story by calculating 
buffers in the constructor.
    b39c931 [Michael Armbrust] Allocations are kind of a side effect.
    f67eff7 [Michael Armbrust] Reuse the same byte buffers when creating a 
new instance of InMemoryRelation

commit 4c8be64e768fe71643b37f1e82f619c8aeac6eff
Author: Sandy Ryza <[email protected]>
Date:   2014-07-12T23:55:15Z

    SPARK-2462.  Make Vector.apply public.
    
    Apologies if there's an already-discussed reason I missed for why this 
doesn't make sense.
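
    A quick sketch of what this exposes:

      import org.apache.spark.mllib.linalg.Vectors

      val v = Vectors.dense(1.0, 2.0, 3.0)
      val first = v(0)  // Vector.apply: access the i-th element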
    
    Author: Sandy Ryza <[email protected]>
    
    Closes #1389 from sryza/sandy-spark-2462 and squashes the following commits:
    
    2e5e201 [Sandy Ryza] SPARK-2462.  Make Vector.apply public.

commit 635888cbed0e3f4127252fb84db449f0cc9ed659
Author: Sean Owen <[email protected]>
Date:   2014-07-14T02:27:43Z

    SPARK-2363. Clean MLlib's sample data files
    
    (Just made a PR for this; mengxr was the original reporter.)
    
    MLlib has sample data under several folders:
    1) data/mllib
    2) data/
    3) mllib/data/*
    Per previous discussion with Matei Zaharia, we want to put them under 
`data/mllib` and clean outdated files.
    
    Author: Sean Owen <[email protected]>
    
    Closes #1394 from srowen/SPARK-2363 and squashes the following commits:
    
    54313dd [Sean Owen] Move ML example data from /mllib/data/ and /data/ into 
/data/mllib/

commit aab5349660109481ee944721d611771da5a93109
Author: Prashant Sharma <[email protected]>
Date:   2014-07-14T07:42:59Z

    Made rdd.py pep8 compliant by using Autopep8 and a little manual editing.
    
    Author: Prashant Sharma <[email protected]>
    
    Closes #1354 from ScrapCodes/pep8-comp-1 and squashes the following commits:
    
    9858ea8 [Prashant Sharma] Code Review
    d8851b7 [Prashant Sharma] Found # noqa works even inside comment blocks. 
Not sure if it works with all versions of python.
    10c0cef [Prashant Sharma] Made rdd.py pep8 compliant by using Autopep8 and 
a little manual tweaking.

commit 38ccd6ebd412cfbf82ae9d8a0998ff697db11455
Author: Daoyuan <[email protected]>
Date:   2014-07-14T17:40:44Z

    Move some test files to match the source code
    
    Just moves some test suites to their corresponding packages
    
    Author: Daoyuan <[email protected]>
    
    Closes #1401 from adrian-wang/movetestfiles and squashes the following 
commits:
    
    d1a6803 [Daoyuan] Move some test files to match src code

commit d60b09bb60cff106fa0acddebf35714503b20f03
Author: Zongheng Yang <[email protected]>
Date:   2014-07-14T20:22:24Z

    [SPARK-2443][SQL] Fix slow read from partitioned tables
    
    This fix achieves a performance boost comparable to [PR 
#1390](https://github.com/apache/spark/pull/1390) by moving an array update 
and deserializer initialization out of a potentially very long loop. Suggested 
by yhuai. The results below are updated for this fix.
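
    A generic sketch of the pattern (hypothetical names, not the actual 
patch): hoist invariant work out of the hot loop.

      def process(rows: Array[String]): Array[Int] = {
        val parse: String => Int = _.trim.toInt  // built once, not per row
        val out = new Array[Int](rows.length)    // allocated once
        var i = 0
        while (i < rows.length) {
          out(i) = parse(rows(i))
          i += 1
        }
        out
      }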
    
    ## Benchmarks
    Generated a local text file with 10M rows of simple key-value pairs. The 
data is loaded as a table through Hive. Results are obtained on my local 
machine using hive/console.
    
    Without the fix:
    
    Type | Non-partitioned | Partitioned (1 part)
    ------------ | ------------ | -------------
    First run | 9.52s end-to-end (1.64s Spark job) | 36.6s (28.3s)
    Stabilized runs | 1.21s (1.18s) | 27.6s (27.5s)
    
    With this fix:
    
    Type | Non-partitioned | Partitioned (1 part)
    ------------ | ------------ | -------------
    First run | 9.57s (1.46s) | 11.0s (1.69s)
    Stabilized runs | 1.13s (1.10s) | 1.23s (1.19s)
    
    Author: Zongheng Yang <[email protected]>
    
    Closes #1408 from concretevitamin/slow-read-2 and squashes the following 
commits:
    
    d86e437 [Zongheng Yang] Move update & initialization out of potentially 
long loop.

commit 3dd8af7a6623201c28231f4b71f59ea4e9ae29bf
Author: li-zhihui <[email protected]>
Date:   2014-07-14T20:32:49Z

    [SPARK-1946] Submit tasks after (configured ratio) executors have been 
registered
    
    Because submitting tasks and registering executors are asynchronous, in 
most situations tasks of early stages run without preferred locality.
    
    A simple workaround is sleeping for a few seconds in the application, so 
that executors have enough time to register.
    
    This PR adds two configuration properties to make TaskScheduler submit 
tasks only after a certain fraction of executors have been registered.
    
    # Submit tasks only once (registered executors / total executors) has 
reached this ratio; the default value is 0
    spark.scheduler.minRegisteredExecutorsRatio = 0.8
    
    # Whether or not minRegisteredExecutorsRatio has been reached, submit 
tasks after maxRegisteredWaitingTime (milliseconds); the default value is 
30000
    spark.scheduler.maxRegisteredExecutorsWaitingTime = 5000
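
    For example, a sketch of setting these via SparkConf (property names as 
introduced by this PR):

      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.scheduler.minRegisteredExecutorsRatio", "0.8")
        .set("spark.scheduler.maxRegisteredExecutorsWaitingTime", "30000")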
    
    Author: li-zhihui <[email protected]>
    
    Closes #900 from li-zhihui/master and squashes the following commits:
    
    b9f8326 [li-zhihui] Add logs & edit docs
    1ac08b1 [li-zhihui] Add new configs to user docs
    22ead12 [li-zhihui] Move waitBackendReady to postStartHook
    c6f0522 [li-zhihui] Bug fix: numExecutors wasn't set & use constant 
DEFAULT_NUMBER_EXECUTORS
    4d6d847 [li-zhihui] Move waitBackendReady to TaskSchedulerImpl.start & some 
code refactor
    0ecee9a [li-zhihui] Move waitBackendReady from DAGScheduler.submitStage to 
TaskSchedulerImpl.submitTasks
    4261454 [li-zhihui] Add docs for new configs & code style
    ce0868a [li-zhihui] Code style, rename configuration property name of 
minRegisteredRatio & maxRegisteredWaitingTime
    6cfb9ec [li-zhihui] Code style, revert default minRegisteredRatio of yarn 
to 0, driver get --num-executors in yarn/alpha
    812c33c [li-zhihui] Fix driver lost --num-executors option in yarn-cluster 
mode
    e7b6272 [li-zhihui] support yarn-cluster
    37f7dc2 [li-zhihui] support yarn mode(percentage style)
    3f8c941 [li-zhihui] submit stage after (configured ratio of) executors have 
been registered

commit 9fe693b5b6ed6af34ee1e800ab89c8a11991ea38
Author: Takuya UESHIN <[email protected]>
Date:   2014-07-14T22:42:28Z

    [SPARK-2446][SQL] Add BinaryType support to Parquet I/O.
    
    Note that this commit changes the semantics when loading in data that was 
created with prior versions of Spark SQL.  Before, we were writing out strings 
as Binary data without adding any other annotations. Thus, when data is read in 
from prior versions, data that was StringType will now become BinaryType.  
Users that need strings can CAST that column to a String.  It was decided that 
while this breaks compatibility, it does make us compatible with other systems 
(Hive, Thrift, etc) and adds support for Binary data, so this is the right 
decision long term.
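
    A hedged sketch of the suggested workaround, assuming a SQLContext named 
sqlContext and a hypothetical table old_table:

      // Recover strings from columns written by an older Spark SQL version.
      val recovered =
        sqlContext.sql("SELECT CAST(value AS STRING) AS value FROM old_table")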
    
    To support `BinaryType`, the following changes are needed:
    - Make `StringType` use `OriginalType.UTF8`
    - Add `BinaryType` using `PrimitiveTypeName.BINARY` without `OriginalType`
    
    Author: Takuya UESHIN <[email protected]>
    
    Closes #1373 from ueshin/issues/SPARK-2446 and squashes the following 
commits:
    
    ecacb92 [Takuya UESHIN] Add BinaryType support to Parquet I/O.
    616e04a [Takuya UESHIN] Make StringType use OriginalType.UTF8.

commit e2255e4b2c404f31ac9f7af9ed445141af980973
Author: Takuya UESHIN <[email protected]>
Date:   2014-07-15T06:06:35Z

    [SPARK-2467] Revert SparkBuild to publish-local to both .m2 and .ivy2.
    
    Author: Takuya UESHIN <[email protected]>
    
    Closes #1398 from ueshin/issues/SPARK-2467 and squashes the following 
commits:
    
    7f01d58 [Takuya UESHIN] Revert SparkBuild to publish-local to both .m2 and 
.ivy2.

commit 1f99fea53b5ff994dd4a12b44625d35186e269ff
Author: William Benton <[email protected]>
Date:   2014-07-15T06:09:13Z

    SPARK-2486: Utils.getCallSite is now resilient to bogus frames
    
    When running Spark under certain instrumenting profilers,
    Utils.getCallSite could crash with an NPE.  This commit
    makes it more resilient to failures occurring while inspecting
    stack frames.
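
    A minimal sketch of the hardening pattern (not the exact patch): skip 
frames with missing information instead of assuming they are well-formed:

      val frames = Thread.currentThread.getStackTrace
        .filter(el => el != null && el.getMethodName != null)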
    
    Author: William Benton <[email protected]>
    
    Closes #1413 from willb/spark-2486 and squashes the following commits:
    
    b7c0274 [William Benton] Use explicit null checks instead of Try()
    0f0c1ae [William Benton] Utils.getCallSite is now resilient to bogus frames

commit a2aa7bebae31e1e7ec23d31aaa436283743b283b
Author: Aaron Davidson <[email protected]>
Date:   2014-07-15T06:38:12Z

    Add/increase severity of warning in documentation of groupBy()
    
    groupBy()/groupByKey() is notorious for being a very convenient API that 
can lead to poor performance when used incorrectly.
    
    This PR just makes it clear that users should be cautious not to rely on 
this API when they really want a different (more performant) one, such as 
reduceByKey().
    
    (Note that one source of confusion is the name; this groupBy() is not the 
same as a SQL GROUP-BY, which is used for aggregation and is more similar in 
nature to Spark's reduceByKey().)
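
    A small sketch of the advice, assuming a SparkContext named sc:

      val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

      // Shuffles every value across the network:
      val summedViaGroup = pairs.groupByKey().mapValues(_.sum)

      // Combines map-side first; usually far less data is shuffled:
      val summedViaReduce = pairs.reduceByKey(_ + _)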
    
    Author: Aaron Davidson <[email protected]>
    
    Closes #1380 from aarondav/warning and squashes the following commits:
    
    f60da39 [Aaron Davidson] Give better advice
    d0afb68 [Aaron Davidson] Add/increase severity of warning in documentation 
of groupBy()

commit c6d75745de58ff1445912bf72a58b6ad2b3f863c
Author: Kousuke Saruta <[email protected]>
Date:   2014-07-15T06:55:39Z

    [SPARK-2390] Files in staging directory cannot be deleted and waste HDFS 
space
    
    When running jobs in YARN cluster mode with the HistoryServer, the files in 
the staging directory (~/.sparkStaging on HDFS) cannot be deleted.
    HistoryServer uses the directory where the event log is written, and the 
directory is represented as an instance of o.a.h.f.FileSystem created by using 
FileSystem.get.
    
    On the other hand, ApplicationMaster has an instance named fs, which is 
also created by using FileSystem.get.
    
    FileSystem.get returns the same cached instance when the URI passed to the 
method represents the same file system and the method is called by the same 
user.
    Because of this behavior, when the directory for the event log is on HDFS, 
the fs of ApplicationMaster and the fileSystem of FileLogger are the same 
instance.
    When shutting down ApplicationMaster, fileSystem.close is called in 
FileLogger#stop, which is invoked indirectly by SparkContext#stop.
    
    ApplicationMaster#cleanupStagingDir is also called by a JVM shutdown hook, 
and in this method fs.delete(stagingDirPath) is invoked.
    Because fs.delete in ApplicationMaster is called after fileSystem.close in 
FileLogger, fs.delete fails and the files in the staging directory are never 
deleted.
    
    I think calling fileSystem.close is not needed.
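
    A sketch of the caching behavior described above (Hadoop client API):

      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.FileSystem

      val hadoopConf = new Configuration()
      val fs1 = FileSystem.get(hadoopConf)
      val fs2 = FileSystem.get(hadoopConf)
      assert(fs1 eq fs2)  // same cached instance for the same URI and user
      fs1.close()         // fs2 is now closed too, for every other holder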
    
    Author: Kousuke Saruta <[email protected]>
    
    Closes #1326 from sarutak/SPARK-2390 and squashes the following commits:
    
    10e1a88 [Kousuke Saruta] Removed fileSystem.close from FileLogger.scala not 
to prevent any other FileSystem operation

commit c7c7ac83392b10abb011e6aead1bf92e7c73695e
Author: Michael Armbrust <[email protected]>
Date:   2014-07-15T07:13:51Z

    [SPARK-2485][SQL] Lock usage of hive client.
    
    Author: Michael Armbrust <[email protected]>
    
    Closes #1412 from marmbrus/lockHiveClient and squashes the following 
commits:
    
    4bc9d5a [Michael Armbrust] protected[hive]
    22e9177 [Michael Armbrust] Add comments.
    7aa8554 [Michael Armbrust] Don't lock on hive's object.
    a6edc5f [Michael Armbrust] Lock usage of hive client.

commit 7446f5ff93142d2dd5c79c63fa947f47a1d4db8b
Author: lianhuiwang <[email protected]>
Date:   2014-07-15T07:22:06Z

    Discard completedDrivers beyond the retention limit
    
    When the number of completedDrivers exceeds the threshold, the first 
Max(spark.deploy.retainedDrivers, 1) entries will be discarded.
    
    Author: lianhuiwang <[email protected]>
    
    Closes #1114 from lianhuiwang/retained-drivers and squashes the following 
commits:
    
    8789418 [lianhuiwang] discarded exceeded completedDrivers

commit dd95abada78b4d0aec97dacda50fdfd74464b073
Author: Reynold Xin <[email protected]>
Date:   2014-07-15T08:46:57Z

    [SPARK-2399] Add support for LZ4 compression.
    
    Based on Greg Bowyer's patch from JIRA 
https://issues.apache.org/jira/browse/SPARK-2399
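
    A sketch of enabling it, assuming the codec class name added by this 
patch:

      import org.apache.spark.SparkConf

      val conf = new SparkConf().set(
        "spark.io.compression.codec", "org.apache.spark.io.LZ4CompressionCodec")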
    
    Author: Reynold Xin <[email protected]>
    
    Closes #1416 from rxin/lz4 and squashes the following commits:
    
    6c8fefe [Reynold Xin] Fixed typo.
    8a14d38 [Reynold Xin] [SPARK-2399] Add support for LZ4 compression.

commit 52beb20f7904e0333198b9b14619366ddf53ab85
Author: DB Tsai <[email protected]>
Date:   2014-07-15T09:14:58Z

    [SPARK-2477][MLlib] Using appendBias for adding intercept in 
GeneralizedLinearAlgorithm
    
    Instead of prependOne, which GeneralizedLinearAlgorithm currently uses, we 
would like to use appendBias in order to 1) keep the indices of the original 
training set unchanged by adding the intercept as the last element of the 
vector, and 2) use the same public API for consistently adding the intercept.
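
    A sketch of the difference, assuming MLUtils.appendBias as described:

      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.util.MLUtils

      val x = Vectors.dense(1.0, 2.0)
      // appendBias keeps existing feature indices and adds the bias last:
      val withBias = MLUtils.appendBias(x)  // [1.0, 2.0, 1.0]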
    
    Author: DB Tsai <[email protected]>
    
    Closes #1410 from dbtsai/SPARK-2477_intercept_with_appendBias and squashes 
the following commits:
    
    011432c [DB Tsai] From Alpine Data Labs

commit 8f1d4226c285e33d2fb839d3163bb374eb6db0e7
Author: Reynold Xin <[email protected]>
Date:   2014-07-15T09:15:29Z

    Update README.md to include a slightly more informative project description.
    
    (cherry picked from commit 401083be9f010f95110a819a49837ecae7d9c4ec)
    Signed-off-by: Reynold Xin <[email protected]>

commit 6555618c8f39b4e7da9402c3fd9da7a75bf7794e
Author: Reynold Xin <[email protected]>
Date:   2014-07-15T09:20:01Z

    README update: added "for Big Data".

commit 04b01bb101eeaf76c2e7c94c291669f0b2372c9a
Author: Alexander Ulanov <[email protected]>
Date:   2014-07-15T15:40:22Z

    [MLLIB] [SPARK-2222] Add multiclass evaluation metrics
    
    Adding two classes:
    1) MulticlassMetrics implements various multiclass evaluation metrics
    2) MulticlassMetricsSuite implements unit tests for MulticlassMetrics
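
    A usage sketch, assuming a SparkContext named sc and an RDD of 
(prediction, label) pairs:

      import org.apache.spark.mllib.evaluation.MulticlassMetrics

      val predictionAndLabels =
        sc.parallelize(Seq((0.0, 0.0), (1.0, 1.0), (1.0, 0.0)))
      val metrics = new MulticlassMetrics(predictionAndLabels)
      println(metrics.confusionMatrix)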
    
    Author: Alexander Ulanov <[email protected]>
    Author: unknown <[email protected]>
    Author: Xiangrui Meng <[email protected]>
    
    Closes #1155 from avulanov/master and squashes the following commits:
    
    2eae80f [Alexander Ulanov] Merge pull request #1 from mengxr/avulanov-master
    5ebeb08 [Xiangrui Meng] minor updates
    79c3555 [Alexander Ulanov] Addressing reviewers comments mengxr
    0fa9511 [Alexander Ulanov] Addressing reviewers comments mengxr
    f0dadc9 [Alexander Ulanov] Addressing reviewers comments mengxr
    4811378 [Alexander Ulanov] Removing println
    87fb11f [Alexander Ulanov] Addressing reviewers comments mengxr. Added 
confusion matrix
    e3db569 [Alexander Ulanov] Addressing reviewers comments mengxr. Added true 
positive rate and false positive rate. Test suite code style.
    a7e8bf0 [Alexander Ulanov] Addressing reviewers comments mengxr
    c3a77ad [Alexander Ulanov] Addressing reviewers comments mengxr
    e2c91c3 [Alexander Ulanov] Fixes to multiclass metrics
    d5ce981 [unknown] Comments about Double
    a5c8ba4 [unknown] Unit tests. Class rename
    fcee82d [unknown] Unit tests. Class rename
    d535d62 [unknown] Multiclass evaluation

commit cb09e93c1d7ef9c8f0a1abe4e659783c74993a4e
Author: William Benton <[email protected]>
Date:   2014-07-15T16:13:39Z

    Reformat multi-line closure argument.
    
    Author: William Benton <[email protected]>
    
    Closes #1419 from willb/reformat-2486 and squashes the following commits:
    
    2676231 [William Benton] Reformat multi-line closure argument.

commit 9dd635eb5df52835b3b7f4f2b9c789da9e813c71
Author: witgo <[email protected]>
Date:   2014-07-15T17:46:17Z

    SPARK-2480: Resolve sbt warnings "NOTE: SPARK_YARN is deprecated, please 
use -Pyarn flag"
    
    Author: witgo <[email protected]>
    
    Closes #1404 from witgo/run-tests and squashes the following commits:
    
    f703aee [witgo] fix Note: implicit method fromPairDStream is not applicable 
here because it comes after the application point and it lacks an explicit 
result type
    2944f51 [witgo] Remove "NOTE: SPARK_YARN is deprecated, please use -Pyarn 
flag"
    ef59c70 [witgo] fix Note: implicit method fromPairDStream is not applicable 
here because it comes after the application point and it lacks an explicit 
result type
    6cefee5 [witgo] Remove "NOTE: SPARK_YARN is deprecated, please use -Pyarn 
flag"

commit 72ea56da8e383c61c6f18eeefef03b9af00f5158
Author: witgo <[email protected]>
Date:   2014-07-15T18:52:56Z

    SPARK-1291: Link the spark UI to RM ui in yarn-client mode
    
    Author: witgo <[email protected]>
    
    Closes #1112 from witgo/SPARK-1291 and squashes the following commits:
    
    6022bcd [witgo] review commit
    1fbb925 [witgo] add addAmIpFilter to yarn alpha
    210299c [witgo] review commit
    1b92a07 [witgo] review commit
    6896586 [witgo] Add comments to addWebUIFilter
    3e9630b [witgo] review commit
    142ee29 [witgo] review commit
    1fe7710 [witgo] Link the spark UI to RM ui in yarn-client mode

commit e7ec815d9a2b0f89a56dc7dd3106c31a09492028
Author: Reynold Xin <[email protected]>
Date:   2014-07-15T20:13:33Z

    Added LZ4 to compression codec in configuration page.
    
    Author: Reynold Xin <[email protected]>
    
    Closes #1417 from rxin/lz4 and squashes the following commits:
    
    472f6a1 [Reynold Xin] Set the proper default.
    9cf0b2f [Reynold Xin] Added LZ4 to compression codec in configuration page.

----

