GitHub user alexrovner opened a pull request:
https://github.com/apache/spark/pull/5746
Spark 5529 backport 1.3
Still running tests on this branch. I mechanically applied the changes based
on https://github.com/apache/spark/pull/4369 without fully understanding what's
actually happening, since I am not familiar with the codebase. Feedback would be
appreciated.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/alexrovner/spark SPARK-5529-backport-1.3
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/5746.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #5746
----
commit f92876a5430d2b87e2df66a4eddc833d568e3e1a
Author: Liang-Chi Hsieh <[email protected]>
Date: 2015-03-02T21:11:17Z
[Minor] Fix doc typo for describing primitiveTerm effectiveness condition
It should be `true` instead of `false`?
Author: Liang-Chi Hsieh <[email protected]>
Closes #4762 from viirya/doc_fix and squashes the following commits:
2e37482 [Liang-Chi Hsieh] Fix doc.
(cherry picked from commit 3f9def81170c24f24f4a6b7ca7905de4f75e11e0)
Signed-off-by: Michael Armbrust <[email protected]>
commit a83b9bbb242ff6dedf261b8838add5a391d5bd36
Author: q00251598 <[email protected]>
Date: 2015-03-02T21:16:29Z
[SPARK-6040][SQL] Fix the percent bug in tablesample
A HiveQL expression like `select count(1) from src tablesample(1 percent);`
means take a 1% sample. But it is treated as 100% in the current version of
Spark.
Author: q00251598 <[email protected]>
Closes #4789 from watermen/SPARK-6040 and squashes the following commits:
2453ebe [q00251598] check and adjust the fraction.
(cherry picked from commit 582e5a24c55e8c876733537c9910001affc8b29b)
Signed-off-by: Michael Armbrust <[email protected]>
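The "check and adjust the fraction" fix above can be sketched roughly as
follows (illustrative Python, not the actual Spark SQL parser code; the
function name is hypothetical): the `n PERCENT` literal must be validated and
divided by 100 before being handed to the sample operator.

```python
def tablesample_fraction(percent):
    """Convert a TABLESAMPLE(n PERCENT) literal to the 0.0-1.0 fraction
    expected by the underlying sample operator. Before the fix, the
    percent value was effectively passed through unadjusted, so
    1 PERCENT behaved like a 100% sample."""
    if not (0.0 <= percent <= 100.0):
        raise ValueError(
            "sampling percentage must be between 0 and 100: %r" % percent)
    return percent / 100.0
```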
commit 650d1e7fb13545d0d102de9bb6e11ab4f9ef6359
Author: Marcelo Vanzin <[email protected]>
Date: 2015-03-02T22:41:43Z
[SPARK-6050] [yarn] Relax matching of vcore count in received containers.
Some YARN configurations return a vcore count for allocated
containers that does not match the requested resource. That means
Spark would always ignore those containers. So relax the matching
of the vcore count to allow Spark jobs to run.
Author: Marcelo Vanzin <[email protected]>
Closes #4818 from vanzin/SPARK-6050 and squashes the following commits:
991c803 [Marcelo Vanzin] Remove config option, standardize on legacy
behavior (no vcore matching).
8c9c346 [Marcelo Vanzin] Restrict lax matching to vcores only.
3359692 [Marcelo Vanzin] [SPARK-6050] [yarn] Add config option to do lax
resource matching.
(cherry picked from commit 6b348d90f475440c285a4b636134ffa9351580b9)
Signed-off-by: Thomas Graves <[email protected]>
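The relaxed matching described above can be sketched as follows (illustrative
Python, not the actual YARN allocator code; the function name and parameters
are hypothetical): an allocated container is matched against the request on
memory only, and the vcore count reported by YARN is not compared.

```python
# Some YARN schedulers (e.g. when using a memory-only resource
# calculator) report a vcore count for allocated containers that
# differs from what was requested.

def matches_request(requested_mem, requested_vcores,
                    allocated_mem, allocated_vcores):
    """Lax (legacy) behavior: accept the container if memory matches;
    the vcore count reported by YARN is intentionally ignored."""
    return allocated_mem == requested_mem

# A strict comparison would also require
# allocated_vcores == requested_vcores, which rejects perfectly
# usable containers under such configurations.
```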
commit 3899c7c2ccc38bbe76a775ba3eee200bd4c192ac
Author: Michael Armbrust <[email protected]>
Date: 2015-03-03T00:10:54Z
[SPARK-6114][SQL] Avoid metastore conversions before plan is resolved
Author: Michael Armbrust <[email protected]>
Closes #4855 from marmbrus/explodeBug and squashes the following commits:
a712249 [Michael Armbrust] [SPARK-6114][SQL] Avoid metastore conversions
before plan is resolved
(cherry picked from commit 8223ce6a81e4cc9fdf816892365fcdff4006c35e)
Signed-off-by: Michael Armbrust <[email protected]>
commit 866f2814a48a34820da9069378c2cbbb3589fb0f
Author: Cheng Lian <[email protected]>
Date: 2015-03-03T00:18:00Z
[SPARK-6082] [SQL] Provides better error message for malformed rows when
caching tables
Constructs like Hive `TRANSFORM` may generate malformed rows (via badly
authored external scripts for example). I'm a bit hesitant to have this
feature, since it introduces per-tuple cost when caching tables. However,
considering caching tables is usually a one-time cost, this is probably worth
having.
Author: Cheng Lian <[email protected]>
Closes #4842 from liancheng/spark-6082 and squashes the following commits:
b05dbff [Cheng Lian] Provides better error message for malformed rows when
caching tables
(cherry picked from commit 1a49496b4a9df40c74739fc0fb8a21c88a477075)
Signed-off-by: Michael Armbrust <[email protected]>
commit 8100b79c292349aaeefe7ff9545afb9e526c2bff
Author: Andrew Or <[email protected]>
Date: 2015-03-03T00:34:32Z
[SPARK-6066] Make event log format easier to parse
Some users have reported difficulty in parsing the new event log format.
Because we embed the metadata at the beginning of the file, we must leave the
metadata uncompressed when compressing the event log, since that information
is needed to parse the log later. This means we end up with a partially
compressed file if event logging compression is turned on. The old format
looks like:
```
sparkVersion = 1.3.0
compressionCodec = org.apache.spark.io.LZFCompressionCodec
=== LOG_HEADER_END ===
// actual events, could be compressed bytes
```
The new format in this patch puts the compression codec in the log file
name instead. It also removes the metadata header altogether along with the
Spark version, which was not needed. The new file name looks something like:
```
app_without_compression
app_123.lzf
app_456.snappy
```
I tested this with and without compression, using different compression
codecs and event logging directories. I verified that both the `Master` and the
`HistoryServer` can render both compressed and uncompressed logs as before.
Author: Andrew Or <[email protected]>
Closes #4821 from andrewor14/event-log-format and squashes the following
commits:
8511141 [Andrew Or] Fix test
654883d [Andrew Or] Add back metadata with Spark version
7f537cd [Andrew Or] Address review feedback
7d6aa61 [Andrew Or] Make codec an extension
59abee9 [Andrew Or] Merge branch 'master' of github.com:apache/spark into
event-log-format
27c9a6c [Andrew Or] Address review feedback
519e51a [Andrew Or] Address review feedback
ef69276 [Andrew Or] Merge branch 'master' of github.com:apache/spark into
event-log-format
88a091d [Andrew Or] Add tests for new format and file name
f32d8d2 [Andrew Or] Fix tests
8db5a06 [Andrew Or] Embed metadata in the event log file name instead
(cherry picked from commit 6776cb33ea691f7843b956b3e80979282967e826)
Signed-off-by: Patrick Wendell <[email protected]>
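The new naming scheme described above can be sketched as follows (a
hypothetical helper, not the actual `Master`/`HistoryServer` code): the
compression codec is recovered from the event log's file extension, and a
missing or unrecognized extension means the log is uncompressed.

```python
import os

# Illustrative subset of codec short names; the real set of supported
# codecs lives in Spark's io.CompressionCodec implementations.
KNOWN_CODECS = {"lzf", "snappy"}

def codec_from_log_name(path):
    """Return the compression codec short name encoded in the event log
    file name (e.g. "app_123.lzf" -> "lzf"), or None if the log is
    uncompressed (e.g. "app_without_compression")."""
    ext = os.path.splitext(path)[1].lstrip(".")
    return ext if ext in KNOWN_CODECS else None
```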
commit ea69cf28e6874d205fca70872a637547407bc08b
Author: Andrew Or <[email protected]>
Date: 2015-03-03T00:36:42Z
[SPARK-6048] SparkConf should not translate deprecated configs on set
There are multiple issues with translating on set outlined in the JIRA.
This PR reverts the translation logic added to `SparkConf`. In the future,
after the 1.3.0 release we will figure out a way to reorganize the internal
structure more elegantly. For now, let's preserve the existing semantics of
`SparkConf` since it's a public interface. Unfortunately this means duplicating
some code for now, but this is all internal and we can always clean it up later.
Author: Andrew Or <[email protected]>
Closes #4799 from andrewor14/conf-set-translate and squashes the following
commits:
11c525b [Andrew Or] Move warning to driver
10e77b5 [Andrew Or] Add documentation for deprecation precedence
a369cb1 [Andrew Or] Merge branch 'master' of github.com:apache/spark into
conf-set-translate
c26a9e3 [Andrew Or] Revert all translate logic in SparkConf
fef6c9c [Andrew Or] Restore deprecation logic for
spark.executor.userClassPathFirst
94b4dfa [Andrew Or] Translate on get, not set
(cherry picked from commit 258d154c9f1afdd52dce19f03d81683ee34effac)
Signed-off-by: Patrick Wendell <[email protected]>
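The "translate on get, not set" semantics described above can be sketched as
follows (illustrative Python, not the actual `SparkConf` implementation; the
class is hypothetical, though the two config key names are real Spark
configs): the conf stores exactly what the user set, and deprecated keys are
only resolved when a value is read.

```python
# Map of deprecated key -> current key.
DEPRECATED = {
    "spark.yarn.user.classpath.first": "spark.executor.userClassPathFirst",
}

class Conf:
    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        # No translation on set: preserve exactly what the user wrote.
        self._settings[key] = value

    def get(self, key, default=None):
        if key in self._settings:
            return self._settings[key]
        # Translate on get: fall back to a deprecated alias if one was set.
        for old, new in DEPRECATED.items():
            if new == key and old in self._settings:
                return self._settings[old]
        return default
```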
commit 1b8ab5752fccbc08c3f76c50bc384b89231d0a78
Author: Xiangrui Meng <[email protected]>
Date: 2015-03-03T01:14:34Z
[SPARK-6121][SQL][MLLIB] simpleString for UDT
`df.dtypes` shows `null` for UDTs. This PR uses `udt` by default and
`VectorUDT` overwrites it with `vector`.
jkbradley davies
Author: Xiangrui Meng <[email protected]>
Closes #4858 from mengxr/SPARK-6121 and squashes the following commits:
34f0a77 [Xiangrui Meng] simpleString for UDT
(cherry picked from commit 2db6a853a53b4c25e35983bc489510abb8a73e1d)
Signed-off-by: Xiangrui Meng <[email protected]>
commit 11389f0262b1a755effc8cbd247aa199c0d8fd9d
Author: Xiangrui Meng <[email protected]>
Date: 2015-03-03T02:10:50Z
[SPARK-5537] Add user guide for multinomial logistic regression
This is based on #4801 from dbtsai. The linear method guide is re-organized
a little bit for this change.
Closes #4801
Author: Xiangrui Meng <[email protected]>
Author: DB Tsai <[email protected]>
Closes #4861 from mengxr/SPARK-5537 and squashes the following commits:
47af0ac [Xiangrui Meng] update user guide for multinomial logistic
regression
cdc2e15 [Xiangrui Meng] Merge remote-tracking branch 'apache/master' into
AlpineNow-mlor-doc
096d0ca [DB Tsai] first commit
(cherry picked from commit 9d6c5aeebd3c7f8ff6defe3bccd8ff12ed918293)
Signed-off-by: Xiangrui Meng <[email protected]>
commit ffd0591094a1a3edafed5c5f3ff1ca1b6048bf46
Author: Tathagata Das <[email protected]>
Date: 2015-03-03T02:40:46Z
[SPARK-6127][Streaming][Docs] Add Kafka to Python api docs
davies
Author: Tathagata Das <[email protected]>
Closes #4860 from tdas/SPARK-6127 and squashes the following commits:
82de92a [Tathagata Das] Add Kafka to Python api docs
(cherry picked from commit 9eb22ece115c69899d100cecb8a5e20b3a268649)
Signed-off-by: Tathagata Das <[email protected]>
commit 1b490e91fd6b5d06d9caeb50e597639ccfc0bc3b
Author: Yin Huai <[email protected]>
Date: 2015-03-03T03:31:55Z
[SPARK-5950][SQL]Insert array into a metastore table saved as parquet
should work when using datasource api
This PR contains the following changes:
1. Add a new method, `DataType.equalsIgnoreCompatibleNullability`, which is
a middle ground between DataType's equality check and
`DataType.equalsIgnoreNullability`. For two data types `from` and `to`, it
checks `equalsIgnoreNullability` as well as whether the nullability of `from`
is compatible with that of `to`. For example, the nullability of `ArrayType(IntegerType,
containsNull = false)` is compatible with that of `ArrayType(IntegerType,
containsNull = true)` (for an array without null values, we can always say it
may contain null values). However, the nullability of `ArrayType(IntegerType,
containsNull = true)` is incompatible with that of `ArrayType(IntegerType,
containsNull = false)` (for an array that may have null values, we cannot say
it does not have null values).
2. For the `resolved` field of `InsertIntoTable`, use
`equalsIgnoreCompatibleNullability` to replace the equality check of the data
types.
3. For our data source write path, when appending data, we always use the
schema of the existing table to write the data. This is important for Parquet,
since nullability directly impacts the way values are encoded/decoded. If we
do not do this, we may see corrupted values when reading from a set of Parquet
files generated with different nullability settings.
4. When generating a new parquet table, we always set
nullable/containsNull/valueContainsNull to true. So, we will not face
situations that we cannot append data because containsNull/valueContainsNull in
an Array/Map column of the existing table has already been set to `false`. This
change makes the whole data pipeline more robust.
5. Update the equality check of the JSON relation. Since JSON does not really
care about nullability, `equalsIgnoreNullability` seems a better choice for
comparing schemata of JSON tables.
JIRA: https://issues.apache.org/jira/browse/SPARK-5950
Thanks viirya for the initial work in #4729.
cc marmbrus liancheng
Author: Yin Huai <[email protected]>
Closes #4826 from yhuai/insertNullabilityCheck and squashes the following
commits:
3b61a04 [Yin Huai] Revert change on equals.
80e487e [Yin Huai] asNullable in UDT.
587d88b [Yin Huai] Make methods private.
0cb7ea2 [Yin Huai] marmbrus's comments.
3cec464 [Yin Huai] Cheng's comments.
486ed08 [Yin Huai] Merge remote-tracking branch 'upstream/master' into
insertNullabilityCheck
d3747d1 [Yin Huai] Remove unnecessary change.
8360817 [Yin Huai] Merge remote-tracking branch 'upstream/master' into
insertNullabilityCheck
8a3f237 [Yin Huai] Use equalsIgnoreNullability instead of equality check.
0eb5578 [Yin Huai] Fix tests.
f6ed813 [Yin Huai] Update old parquet path.
e4f397c [Yin Huai] Unit tests.
b2c06f8 [Yin Huai] Ignore nullability in JSON relation's equality check.
8bd008b [Yin Huai] nullable, containsNull, and valueContainsNull will be
always true for parquet data.
bf50d73 [Yin Huai] When appending data, we use the schema of the existing
table instead of the schema of the new data.
0a703e7 [Yin Huai] Test failed again since we cannot read correct content.
9a26611 [Yin Huai] Make InsertIntoTable happy.
8f19fe5 [Yin Huai] equalsIgnoreCompatibleNullability
4ec17fd [Yin Huai] Failed test.
(cherry picked from commit 12599942e69e4d73040f3a8611661a0862514ffc)
Signed-off-by: Michael Armbrust <[email protected]>
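The compatibility rule from point 1 above can be sketched for array types as
follows (illustrative Python, not the actual Catalyst implementation; the
names are hypothetical): `containsNull` may go from `false` to `true` when
writing, but not the other way around.

```python
from collections import namedtuple

# Minimal stand-in for Catalyst's ArrayType.
ArrayType = namedtuple("ArrayType", ["element_type", "contains_null"])

def compatible_nullability(from_type, to_type):
    """True if the types are equal up to nullability AND the nullability
    of `from_type` is compatible with `to_type`: an array without nulls
    can be stored where nulls are allowed (False -> True), but an array
    that may contain nulls cannot be stored where nulls are forbidden
    (True -> False)."""
    return (from_type.element_type == to_type.element_type
            and (to_type.contains_null or not from_type.contains_null))
```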
commit 4e6e0086c27e3f37eda6391c063b481896a69476
Author: Reynold Xin <[email protected]>
Date: 2015-03-03T06:14:08Z
[SPARK-5310][SQL] Fixes to Docs and Datasources API
- Various Fixes to docs
- Make data source traits actually interfaces
Based on #4862 but with fixed conflicts.
Author: Reynold Xin <[email protected]>
Author: Michael Armbrust <[email protected]>
Closes #4868 from marmbrus/pr/4862 and squashes the following commits:
fe091ea [Michael Armbrust] Merge remote-tracking branch 'origin/master'
into pr/4862
0208497 [Reynold Xin] Test fixes.
34e0a28 [Reynold Xin] [SPARK-5310][SQL] Various fixes to Spark SQL docs.
(cherry picked from commit 54d19689ff8d786acde5b8ada6741854ffadadea)
Signed-off-by: Michael Armbrust <[email protected]>
commit 62c53be2a9ef805545c314ffbbfafdcf2fced9f2
Author: Xiangrui Meng <[email protected]>
Date: 2015-03-03T06:27:01Z
[SPARK-6097][MLLIB] Support tree model save/load in PySpark/MLlib
Similar to `MatrixFactorizationModel`, we only need wrappers to support
save/load for tree models in Python.
jkbradley
Author: Xiangrui Meng <[email protected]>
Closes #4854 from mengxr/SPARK-6097 and squashes the following commits:
4586a4d [Xiangrui Meng] fix more typos
8ebcac2 [Xiangrui Meng] fix python style
91172d8 [Xiangrui Meng] fix typos
201b3b9 [Xiangrui Meng] update user guide
b5158e2 [Xiangrui Meng] support tree model save/load in PySpark/MLlib
(cherry picked from commit 7e53a79c30511dbd0e5d9878a4b8b0f5bc94e68b)
Signed-off-by: Xiangrui Meng <[email protected]>
commit 81648a7b12b5e6e4178d7e8191cca2ec45331580
Author: Joseph K. Bradley <[email protected]>
Date: 2015-03-03T06:33:51Z
[SPARK-6120] [mllib] Warnings about memory in tree, ensemble model save
Issue: When the Python DecisionTree example in the programming guide is
run, it runs out of Java Heap Space when using the default memory settings for
the spark shell.
This prints a warning.
CC: mengxr
Author: Joseph K. Bradley <[email protected]>
Closes #4864 from jkbradley/dt-save-heap and squashes the following commits:
02e8daf [Joseph K. Bradley] fixed based on code review
7ecb1ed [Joseph K. Bradley] Added warnings about memory when calling tree
and ensemble model save with too small a Java heap size
(cherry picked from commit c2fe3a6ff1a48a9da54d2c2c4d80ecd06cdeebca)
Signed-off-by: Xiangrui Meng <[email protected]>
commit 841d2a27fe62ccfb399007f54e7d9c9191e71c1c
Author: DB Tsai <[email protected]>
Date: 2015-03-03T06:37:12Z
[SPARK-5537][MLlib][Docs] Add user guide for multinomial logistic regression
Adding more description on top of #4861.
Author: DB Tsai <[email protected]>
Closes #4866 from dbtsai/doc and squashes the following commits:
37e9d07 [DB Tsai] doc
(cherry picked from commit b196056190c569505cc32669d1aec30ed9d70665)
Signed-off-by: Xiangrui Meng <[email protected]>
commit 1aa8461652ab02c3d5961dfb7b87d44f43d56093
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T09:38:07Z
HOTFIX: Bump HBase version in MapR profiles.
After #2982 (SPARK-4048) we rely on the newer HBase packaging format.
commit ae60eb9984de56c36c7f63220f9281fbaac10931
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T08:38:12Z
BUILD: Minor tweaks to internal build scripts
This adds two features:
1. The ability to publish with a different maven version than
that specified in the release source.
2. Forking of different Zinc instances during the parallel dist
creation (to help with some stability issues).
commit ce7158cf70c1003c1011d9a755813b31feae91e4
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T10:19:19Z
Adding CHANGES.txt for Spark 1.3
commit 4fee08ef0141b1be5684d78b6fe9cb93c98b0bc4
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T10:20:03Z
Revert "Preparing development version 1.3.1-SNAPSHOT"
This reverts commit 2ab0ba04f66683be25cbe0e83cecf2bdcb0f13ba.
commit b012ed189844d2a515a882144364921caf89b4c0
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T10:20:05Z
Revert "Preparing Spark release v1.3.0-rc1"
This reverts commit f97b0d4a6b26504916816d7aefcf3132cd1da6c2.
commit 3af26870e5163438868c4eb2df88380a533bb232
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T10:23:07Z
Preparing Spark release v1.3.0-rc2
commit 05d5a29eb3193aeb57d177bafe39eb75edce72a1
Author: Patrick Wendell <[email protected]>
Date: 2015-03-03T10:23:07Z
Preparing development version 1.3.1-SNAPSHOT
commit ee4929d1d38d83382ccdc22bf99f61f24f989c8b
Author: Andrew Or <[email protected]>
Date: 2015-03-03T21:04:15Z
Revert "[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to
ensure deleting the temp file"
This reverts commit 25fae8e7e6c93b7817771342d370b73b40dcf92e.
commit 8446ad0ebd2abb10ef405dc2ce53d2724604ce83
Author: Sean Owen <[email protected]>
Date: 2015-03-03T21:40:11Z
SPARK-1911 [DOCS] Warn users if their assembly jars are not built with Java
6
Add warning about building with Java 7+ and running the JAR on early Java 6.
CC andrewor14
Author: Sean Owen <[email protected]>
Closes #4874 from srowen/SPARK-1911 and squashes the following commits:
79fa2f6 [Sean Owen] Add warning about building with Java 7+ and running the
JAR on early Java 6.
(cherry picked from commit e750a6bfddf1d7bf7d3e99a424ec2b83a18b40d9)
Signed-off-by: Andrew Or <[email protected]>
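The incompatibility behind the warning above can be checked mechanically: a
`.class` file records a major version in its header (50 for Java 6, 51 for
Java 7), and a Java 6 runtime refuses classes with a higher major version. A
rough sketch (hypothetical helper, not part of the Spark build scripts):

```python
import struct

def class_major_version(class_bytes):
    """Read the major version from the 8-byte class file header:
    4-byte magic (0xCAFEBABE), 2-byte minor, 2-byte major."""
    magic, _minor, major = struct.unpack(">IHH", class_bytes[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a class file")
    return major

def needs_java7_or_later(class_bytes):
    # Major version 50 corresponds to Java 6; anything above it
    # (51 = Java 7, ...) will not load on an early Java 6 runtime.
    return class_major_version(class_bytes) > 50
```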
commit 9a0b75cdd85c1933c590480dd233b1803726ed71
Author: Imran Rashid <[email protected]>
Date: 2015-03-03T23:33:19Z
[SPARK-5949] HighlyCompressedMapStatus needs more classes registered w/ kryo
https://issues.apache.org/jira/browse/SPARK-5949
Author: Imran Rashid <[email protected]>
Closes #4877 from squito/SPARK-5949_register_roaring_bitmap and squashes
the following commits:
7e13316 [Imran Rashid] style style style
5f6bb6d [Imran Rashid] more style
709bfe0 [Imran Rashid] style
a5cb744 [Imran Rashid] update tests to cover both types of
RoaringBitmapContainers
09610c6 [Imran Rashid] formatting
f9a0b7c [Imran Rashid] put primitive array registrations together
97beaf8 [Imran Rashid] SPARK-5949 HighlyCompressedMapStatus needs more
classes registered w/ kryo
(cherry picked from commit 1f1fccc5ceb0c5b7656a0594be3a67bd3b432e85)
Signed-off-by: Reynold Xin <[email protected]>
commit 9f249779ffe65131d1b9b95154a8ccd3a89fe022
Author: Xiangrui Meng <[email protected]>
Date: 2015-03-04T07:52:02Z
[SPARK-6141][MLlib] Upgrade Breeze from 0.10 to 0.11 to fix convergence bug
LBFGS and OWLQN in Breeze 0.10 have a convergence check bug.
This is fixed in 0.11; see the description in the Breeze project for details:
https://github.com/scalanlp/breeze/pull/373#issuecomment-76879760
Author: Xiangrui Meng <[email protected]>
Author: DB Tsai <[email protected]>
Author: DB Tsai <[email protected]>
Closes #4879 from dbtsai/breeze and squashes the following commits:
d848f65 [DB Tsai] Merge pull request #1 from mengxr/AlpineNow-breeze
c2ca6ac [Xiangrui Meng] upgrade to breeze-0.11.1
35c2f26 [Xiangrui Meng] fix LRSuite
397a208 [DB Tsai] upgrade breeze
(cherry picked from commit 76e20a0a03cf2c02db35e00271924efb070eaaa5)
Signed-off-by: Xiangrui Meng <[email protected]>
commit 035243d738aabaa7290797514f56a7e638d18abf
Author: Cheng Lian <[email protected]>
Date: 2015-03-04T11:39:02Z
[SPARK-6136] [SQL] Removed JDBC integration tests which depend on
docker-client
Integration test suites in the JDBC data source (`MySQLIntegration` and
`PostgresIntegration`) depend on docker-client 2.7.5, which transitively
depends on Guava 17.0. Unfortunately, Guava 17.0 is causing test runtime binary
compatibility issues when Spark is compiled against Hive 0.12.0, or Hadoop 2.4.
Considering `MySQLIntegration` and `PostgresIntegration` are ignored right
now, I'd suggest moving them from the Spark project to the [Spark integration
tests] [1] project. This PR removes both the JDBC data source integration tests
and the docker-client test dependency.
[1]: https://github.com/databricks/spark-integration-tests
Author: Cheng Lian <[email protected]>
Closes #4872 from liancheng/remove-docker-client and squashes the following
commits:
1f4169e [Cheng Lian] Removes DockerHacks
159b24a [Cheng Lian] Removed JDBC integration tests which depends on
docker-client
(cherry picked from commit 76b472f12a57bb5bec7b3791660eb47e9177da7f)
Signed-off-by: Cheng Lian <[email protected]>
commit bfa4e3194f14d941cd8fa61a845d892a5c5027c6
Author: Liang-Chi Hsieh <[email protected]>
Date: 2015-03-04T12:23:43Z
[SPARK-6134][SQL] Fix wrong datatype for casting FloatType and default
LongType value in defaultPrimitive
In `CodeGenerator`, the casting on `FloatType` should use `FloatType`
instead of `IntegerType`.
Besides, `defaultPrimitive` for `LongType` should be `-1L` instead of `1L`.
Author: Liang-Chi Hsieh <[email protected]>
Closes #4870 from viirya/codegen_type and squashes the following commits:
76311dd [Liang-Chi Hsieh] Fix wrong datatype for casting on FloatType. Fix
the wrong value for LongType in defaultPrimitive.
(cherry picked from commit aef8a84e42351419a67d56abaf1ee75a05eb11ea)
Signed-off-by: Cheng Lian <[email protected]>
commit 3fc74f45a9ec9544c04c71ce1412f6bc2eb6e75b
Author: Marcelo Vanzin <[email protected]>
Date: 2015-03-04T20:58:39Z
[SPARK-6144] [core] Fix addFile when source files are on "hdfs:"
The code failed in two modes: it complained when it tried to re-create a
directory that already existed, and it was placing some files in the wrong
parent directory. The patch fixes both issues.
Author: Marcelo Vanzin <[email protected]>
Author: trystanleftwich <[email protected]>
Closes #4894 from vanzin/SPARK-6144 and squashes the following commits:
100b3a1 [Marcelo Vanzin] Style fix.
58266aa [Marcelo Vanzin] Fix fetchHcfs file for directories.
91733b7 [trystanleftwich] [SPARK-6144]When in cluster mode using ADD JAR
with a hdfs:// sourced jar will fail
(cherry picked from commit 3a35a0dfe940843c3f3a5f51acfe24def488faa9)
Signed-off-by: Andrew Or <[email protected]>
commit a0aa24a63e8549c36a938248305734ff6cfa6cc6
Author: Cheng Lian <[email protected]>
Date: 2015-03-05T04:52:58Z
[SPARK-6149] [SQL] [Build] Excludes Guava 15 referenced by
jackson-module-scala_2.10
This PR excludes Guava 15.0 from the SBT build, to make Spark SQL CLI
(`bin/spark-sql`) work when compiled against Hive 0.12.0.
Author: Cheng Lian <[email protected]>
Closes #4890 from liancheng/exclude-guava-15 and squashes the following
commits:
91ae9fa [Cheng Lian] Moves Guava 15 exclusion from SBT build to POM
282bd2a [Cheng Lian] Excludes Guava 15 referenced by
jackson-module-scala_2.10
(cherry picked from commit 1aa90e39e33caa497971544ee7643fb3ff048c12)
Signed-off-by: Patrick Wendell <[email protected]>
----