GitHub user soda123 opened a pull request:
https://github.com/apache/spark/pull/12690
Branch 1.5: Some Suggestions on Cleaning a Concrete Mixer
Last time we talked about safety when operating machinery; maintenance and cleaning after the machine is taken out of service are just as important.
Mini concrete mixer for sale: http://www.dwblockmakingmachine.com/product/concrete-mixer/dw35-disiel-concrete-mixer.html
Mobile concrete mixer manufacturer: http://daswellchina.com/products/planetary-concrete-mixer/mp2000-planetary-concrete-mixers.html
Concrete batching machine: http://www.concrete-batching-plants.org/concrete-batching-plants/stationary-concrete-batching-plants.html
Here are some tips on cleaning the concrete mixer. We share them again because they are very important.
The cleaning process you use will depend on whether the cement mix is fresh or dried. Fresh cement is easy to remove with water and hydro stone within ten to fifteen minutes. If you need to get into the mixing drum, be careful and be sure to cut off the power. With a rubber mallet, tap the mixing drum gently several times until the dried chunks of concrete come loose. When hitting the drum, be careful not to make dents.
If there is still concrete left on the mixer, you can use chemicals such as hydrochloric acid; keep the mixer spinning slowly while doing so.
Be sure that the water is drained completely, especially in winter, and let the mixer dry thoroughly before storing it.
Cleaning the mixer is easy, but it takes patience. Clean it every day while the cement is still fresh and wet. Doing this not only gives your concrete cement mixer a long life, it also keeps operation safe.
Pioneer offers concrete mixers with excellent performance to match a concrete batching plant. The following are our products:
JZC series portable concrete mixer (http://www.portable-concrete-mixer.net): JZC250, JZC350, JZC500, JZC300T, JZC350T
JZM small concrete mixer (http://www.daswellchina.com/products/concrete-mixers/small-concrete-mixer.html): JZM350, JZM500
JS series forced type concrete mixer: JS500, JS750, JS1000, JS1500, JS2000, JS3000, JS4000 (see also the MP1000 planetary concrete mixer: http://www.dwmixers.com/products/planetary-concrete-mixers/mp1000-planetary-concrete-mixer.html)
JSII twin shaft concrete mixer (http://www.dwmixers.com/products/concrete-mixer/twin-shaft-concrete-mixer.html): JS500II, JS750II, JS1000II, JS1500II, JS2000II, JS3000II, JS4000II
JZD series diesel concrete mixer: JZD300, JZD350, JZD300T, JZD350T
We sincerely welcome your inquiries about concrete batching plants, and our sales team will respond as soon as possible. We also offer 24/7 phone service.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/apache/spark branch-1.5
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/12690.patch
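For example, assuming a local clone of the repository, the patch could be applied with something like:
$ curl -L https://github.com/apache/spark/pull/12690.patch | git am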
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #12690
----
commit f8909a6c67420c27570b1691268a965c032ba32d
Author: Nithin Asokan <[email protected]>
Date: 2015-09-12T08:50:49Z
[SPARK-10554] [CORE] Fix NPE with ShutdownHook
https://issues.apache.org/jira/browse/SPARK-10554
Fixes NPE when ShutdownHook tries to cleanup temporary folders
Author: Nithin Asokan <[email protected]>
Closes #8720 from nasokan/SPARK-10554.
(cherry picked from commit 8285e3b0d3dc0eff669eba993742dfe0401116f9)
Signed-off-by: Sean Owen <[email protected]>
commit 4586f218839561aa55488f10b548a93a0f6a33e6
Author: Iulian Dragos <[email protected]>
Date: 2015-09-13T10:00:08Z
[SPARK-6350] [MESOS] [BACKPORT] Fine-grained mode scheduler respects
spark.mesos.mesosExecutor.cores when launching Mesos executors (regression)
(cherry picked from commit 03e8d0a620301c0bfd2bbf21415f7d794da19603)
backported to branch-1.5 /cc andrewor14
Author: Iulian Dragos <[email protected]>
Closes #8732 from dragos/issue/mesos/fine-grained-maxExecutorCores-1.5.
commit 5f58704c98c26e9399f984a3de5b74664cdad76d
Author: Kousuke Saruta <[email protected]>
Date: 2015-09-14T19:06:23Z
[SPARK-10584] [DOC] [SQL] Documentation about
spark.sql.hive.metastore.version is wrong.
The default value of hive metastore version is 1.2.1 but the documentation
says the value of `spark.sql.hive.metastore.version` is 0.13.1.
Also, we cannot get the default value by
`sqlContext.getConf("spark.sql.hive.metastore.version")`.
Author: Kousuke Saruta <[email protected]>
Closes #8739 from sarutak/SPARK-10584.
(cherry picked from commit cf2821ef5fd9965eb6256e8e8b3f1e00c0788098)
Signed-off-by: Yin Huai <[email protected]>
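As a quick check, the effective value can be read through the SQL configuration API. This is a minimal sketch only, assuming an existing HiveContext named `sqlContext`; it is not code from the patch:
```scala
// Sketch: read the effective metastore version, falling back to the 1.2.1
// default documented above. The property itself is normally set before the
// context starts, e.g. --conf spark.sql.hive.metastore.version=0.13.1
val metastoreVersion = sqlContext.getConf("spark.sql.hive.metastore.version", "1.2.1")
println(s"Talking to Hive metastore version $metastoreVersion")
```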
commit 5b7067c91f359356c5f65ea679d47dd6bf8b2eac
Author: Nick Pritchard <[email protected]>
Date: 2015-09-14T20:27:45Z
[SPARK-10573] [ML] IndexToString output schema should be StringType
Fixes bug where IndexToString output schema was DoubleType. Correct me if
I'm wrong, but it doesn't seem like the output needs to have any "ML Attribute"
metadata.
Author: Nick Pritchard <[email protected]>
Closes #8751 from pnpritchard/SPARK-10573.
(cherry picked from commit 8a634e9bcc671167613fb575c6c0c054fb4b3479)
Signed-off-by: Xiangrui Meng <[email protected]>
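For context, here is a minimal, hypothetical usage of the transformer in question (names and data are illustrative, and an existing `sqlContext` is assumed); after this fix the output column's data type is StringType:
```scala
// Illustrative sketch, not taken from the patch: map label indices back to strings.
import org.apache.spark.ml.feature.IndexToString

val df = sqlContext.createDataFrame(Seq((0, 0.0), (1, 1.0), (2, 1.0)))
  .toDF("id", "categoryIndex")
val converter = new IndexToString()
  .setInputCol("categoryIndex")
  .setOutputCol("originalCategory")
  .setLabels(Array("a", "b"))
val converted = converter.transform(df)
// With this change, the output column is StringType rather than DoubleType.
println(converted.schema("originalCategory").dataType)
```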
commit a0d564a102eb930f3c061d7827abbcea50ccbb68
Author: Davies Liu <[email protected]>
Date: 2015-09-14T21:10:54Z
[SPARK-10522] [SQL] Nanoseconds of Timestamp in Parquet should be positive
Or Hive can't read it back correctly.
Thanks vanzin for reporting this.
Author: Davies Liu <[email protected]>
Closes #8674 from davies/positive_nano.
(cherry picked from commit 7e32387ae6303fd1cd32389d47df87170b841c67)
Signed-off-by: Davies Liu <[email protected]>
commit 0e1c9d9ff7f9c8f9ae40179c19abbd1d211d142e
Author: Tom Graves <[email protected]>
Date: 2015-09-14T22:05:19Z
[SPARK-10549] scala 2.11 spark on yarn with security - Repl doesn't work
Make this lazy so that it can set the yarn mode before creating the
securityManager.
Author: Tom Graves <[email protected]>
Author: Thomas Graves <[email protected]>
Closes #8719 from tgravescs/SPARK-10549.
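A generic illustration of the ordering that `lazy` enables (all names here are hypothetical, not Spark internals): a `lazy val` is only constructed on first use, so state written beforehand is visible when it is finally built:
```scala
// Hypothetical sketch: defer construction with lazy so that configuration
// written earlier (here, the YARN flag) is seen at initialization time.
object LazyInitSketch {
  var yarnMode = false
  lazy val securityDescription = s"security manager created with yarnMode=$yarnMode"

  def main(args: Array[String]): Unit = {
    yarnMode = true                 // set the mode first...
    println(securityDescription)    // ...then force initialization: prints yarnMode=true
  }
}
```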
commit eb0cb25bb81a5aa271d2a0266e5a31b36d1fc071
Author: Forest Fang <[email protected]>
Date: 2015-09-14T22:07:13Z
[SPARK-10543] [CORE] Peak Execution Memory Quantile should be Per-task Basis
Read `PEAK_EXECUTION_MEMORY` using `update` to get the per-task partial value
instead of the cumulative value.
I tested with this workload:
```scala
val size = 1000
val repetitions = 10
val data = sc.parallelize(1 to size, 5)
  .map(x => (util.Random.nextInt(size / repetitions), util.Random.nextDouble))
  .toDF("key", "value")
val res = data.toDF.groupBy("key").agg(sum("value")).count
```
Before/after screenshots of the stage page, plus the tasks view, were attached to the pull request.
cc andrewor14 I'd appreciate feedback on this since I think you introduced the display of this metric.
Author: Forest Fang <[email protected]>
Closes #8726 from saurfang/stagepage.
(cherry picked from commit fd1e8cddf2635c55fec2ac6e1f1c221c9685af0f)
Signed-off-by: Andrew Or <[email protected]>
commit 5db51f91131e867fd27cb6b0457a2698925cd920
Author: Andrew Or <[email protected]>
Date: 2015-09-14T22:09:43Z
[SPARK-10564] ThreadingSuite: assertion failures in threads don't fail the
test (round 2)
This is a follow-up patch to #8723. I missed one case there.
Author: Andrew Or <[email protected]>
Closes #8727 from andrewor14/fix-threading-suite.
(cherry picked from commit 7b6c856367b9c36348e80e83959150da9656c4dd)
Signed-off-by: Andrew Or <[email protected]>
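The underlying issue, sketched generically (this is not the actual test code): an assertion thrown in a spawned thread does not fail the spawning test unless it is captured and re-thrown on the main thread:
```scala
// Illustrative only: capture a failure from a worker thread and surface it
// in the thread that runs the test; otherwise the test passes silently.
@volatile var failure: Option[Throwable] = None
val worker = new Thread(new Runnable {
  def run(): Unit =
    try assert(1 + 1 == 3) catch { case t: Throwable => failure = Some(t) }
})
worker.start()
worker.join()
failure.foreach(t => throw t)
```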
commit d5c0361e7f2535a3893a7172d21881b18aa919d6
Author: Davies Liu <[email protected]>
Date: 2015-09-15T02:46:34Z
[SPARK-10542] [PYSPARK] fix serialize namedtuple
Author: Davies Liu <[email protected]>
Closes #8707 from davies/fix_namedtuple.
commit 7286c2ba6f5b98b02ff98e7287de23fd9c19f789
Author: Jacek Laskowski <[email protected]>
Date: 2015-09-15T06:40:29Z
Small fixes to docs
Links now work properly + consistent use of *Spark standalone cluster* (Spark uppercase + the rest lowercase -- this seems to be agreed upon elsewhere in the docs).
Author: Jacek Laskowski <[email protected]>
Closes #8759 from jaceklaskowski/docs-submitting-apps.
(cherry picked from commit 833be73314b85b390a9007ed6ed63dc47bbd9e4f)
Signed-off-by: Reynold Xin <[email protected]>
commit 997be78c3a291f86e348d626ae89745ead625251
Author: Andrew Or <[email protected]>
Date: 2015-09-15T23:46:34Z
[SPARK-10548] [SPARK-10563] [SQL] Fix concurrent SQL executions / branch-1.5
*Note: this is for branch-1.5 only*
This is the same as #8710 but affects only SQL. The more general fix for
SPARK-10563 is considered risky to backport into a maintenance release, so it
is disabled by default and enabled only in SQL.
Author: Andrew Or <[email protected]>
Closes #8721 from andrewor14/concurrent-sql-executions-1.5 and squashes the
following commits:
3b9b462 [Andrew Or] Merge branch 'branch-1.5' of github.com:apache/spark
into concurrent-sql-executions-1.5
4435db7 [Andrew Or] Clone properties only for SQL for backward compatibility
0b7e5ab [Andrew Or] Clone parent local properties on inherit
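The scenario this change is meant to support, as a rough sketch (assumes an existing `sqlContext`; thread handling simplified, and not code from the patch): several queries issued from different threads of one application, which previously could inherit the launching thread's execution-id property and fail the concurrent-execution check:
```scala
// Rough sketch of the concurrent-query pattern.
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val counts = (1 to 4).map { i =>
  Future {
    // Each query runs on its own thread; for SQL, local properties are now
    // cloned rather than shared with the launching thread.
    sqlContext.range(0, 1000 * i).count()
  }
}
counts.foreach(f => println(Await.result(f, 1.minute)))
```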
commit 2bbcbc65917fd5f0eba23d0cb000eb9c26c0165b
Author: Josh Rosen <[email protected]>
Date: 2015-09-16T00:11:21Z
[SPARK-10381] Fix mixup of taskAttemptNumber & attemptId in
OutputCommitCoordinator
When speculative execution is enabled, consider a scenario where the
authorized committer of a particular output partition fails during the
OutputCommitter.commitTask() call. In this case, the OutputCommitCoordinator is
supposed to release that committer's exclusive lock on committing once that
task fails. However, due to a unit mismatch (we used task attempt number in one
place and task attempt id in another) the lock will not be released, causing
Spark to go into an infinite retry loop.
This bug was masked by the fact that the OutputCommitCoordinator does not
have enough end-to-end tests (the current tests use many mocks). Other factors
contributing to this bug are the fact that we have many similarly-named
identifiers that have different semantics but the same data types (e.g.
attemptNumber and taskAttemptId, with inconsistent variable naming which makes
them difficult to distinguish).
This patch adds a regression test and fixes this bug by always using task
attempt numbers throughout this code.
Author: Josh Rosen <[email protected]>
Closes #8544 from JoshRosen/SPARK-10381.
(cherry picked from commit 38700ea40cb1dd0805cc926a9e629f93c99527ad)
Signed-off-by: Josh Rosen <[email protected]>
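As an aside on the "same type, different meaning" problem described above, one generic way to make such mix-ups impossible is to give each identifier its own wrapper type. This is purely illustrative and is not what the patch does (the patch standardizes on attempt numbers instead):
```scala
// Hypothetical sketch: distinct value classes turn an attemptNumber/attemptId
// mix-up into a compile-time error instead of a silent unit mismatch.
case class TaskAttemptNumber(value: Int) extends AnyVal
case class TaskAttemptId(value: Long) extends AnyVal

def authorizedCommitter(partition: Int, attempt: TaskAttemptNumber): Boolean = {
  // look up the committer by partition and *attempt number*, never by attempt id
  attempt.value == 0
}
```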
commit 4c4a9ba28d9052fad45caca9a1eba9ef9db309d5
Author: Luciano Resende <[email protected]>
Date: 2015-09-16T09:47:30Z
[SPARK-10511] [BUILD] Reset git repository before packaging source distro
The calculation of the Spark version downloads Scala and Zinc into the build directory, which inflates the size of the source distribution. Resetting the repo before packaging the source distribution fixes this issue.
Author: Luciano Resende <[email protected]>
Closes #8774 from lresende/spark-10511.
(cherry picked from commit 1894653edce718e874d1ddc9ba442bce43cbc082)
Signed-off-by: Sean Owen <[email protected]>
commit eae1566de4a3548273edae9d13da4fbae87d9447
Author: yangping.wu <[email protected]>
Date: 2015-09-17T16:52:40Z
[SPARK-10660] Doc describe error in the "Running Spark on YARN" page
In the Configuration section, the **spark.yarn.driver.memoryOverhead** and
**spark.yarn.am.memoryOverhead**'s default values should be "driverMemory *
0.10, with minimum of 384" and "AM memory * 0.10, with minimum of 384"
respectively, because from Spark 1.4.0 the **MEMORY_OVERHEAD_FACTOR** is set
to 0.10, not 0.07.
Author: yangping.wu <[email protected]>
Closes #8797 from 397090770/SparkOnYarnDocError.
(cherry picked from commit c88bb5df94f9696677c3a429472114bc66f32a52)
Signed-off-by: Marcelo Vanzin <[email protected]>
commit 9f8fb3385fb14bc8b83772bf138e777beb5d7157
Author: Liang-Chi Hsieh <[email protected]>
Date: 2015-09-17T17:02:15Z
[SPARK-10642] [PYSPARK] Fix crash when calling rdd.lookup() on tuple keys
JIRA: https://issues.apache.org/jira/browse/SPARK-10642
When calling `rdd.lookup()` on an RDD with tuple keys, `portable_hash` will
return a long. That causes `DAGScheduler.submitJob` to throw
`java.lang.ClassCastException: java.lang.Long cannot be cast to
java.lang.Integer`.
Author: Liang-Chi Hsieh <[email protected]>
Closes #8796 from viirya/fix-pyrdd-lookup.
(cherry picked from commit 136c77d8bbf48f7c45dd7c3fbe261a0476f455fe)
Signed-off-by: Davies Liu <[email protected]>
commit 88176d1a283e215843dfd4d6b8fa9b9c5e9b00b2
Author: Josiah Samuel <[email protected]>
Date: 2015-09-17T17:18:21Z
[SPARK-10172] [CORE] disable sort in HistoryServer webUI
This pull request is to address the JIRA SPARK-10172 (History Server web UI
gets messed up when sorting on any column).
The content of the table gets messed up due to the rowspan attribute of the table data (cell) during sorting.
The current table sort library used in the Spark UI (sorttable.js) doesn't support/handle cells (td) with rowspans.
The fix disables the table sort in the web UI when there are jobs listed with multiple attempts.
Author: Josiah Samuel <[email protected]>
Closes #8506 from josiahsams/SPARK-10172.
(cherry picked from commit 81b4db374dd61b6f1c30511c70b6ab2a52c68faa)
Signed-off-by: Marcelo Vanzin <[email protected]>
commit fd58ed48d5d10167b7667d2243fbb3d75b93c0fc
Author: Michael Armbrust <[email protected]>
Date: 2015-09-17T18:05:30Z
[SPARK-10650] Clean before building docs
The [published docs for
1.5.0](http://spark.apache.org/docs/1.5.0/api/java/org/apache/spark/streaming/)
have a bunch of test classes in them. The only way I can reproduce this is to
`test:compile` before running `unidoc`. To prevent this from happening again,
I've added a clean before doc generation.
Author: Michael Armbrust <[email protected]>
Closes #8787 from marmbrus/testsInDocs.
(cherry picked from commit e0dc2bc232206d2f4da4278502c1f88babc8b55a)
Signed-off-by: Michael Armbrust <[email protected]>
commit 464d6e7d1be33f4bb11b38c0bcabb8d90aed1246
Author: Yin Huai <[email protected]>
Date: 2015-09-17T18:14:52Z
[SPARK-10639] [SQL] Need to convert UDAF's result from scala to sql type
https://issues.apache.org/jira/browse/SPARK-10639
Author: Yin Huai <[email protected]>
Closes #8788 from yhuai/udafConversion.
(cherry picked from commit aad644fbe29151aec9004817d42e4928bdb326f3)
Signed-off-by: Michael Armbrust <[email protected]>
Conflicts:
sql/core/src/test/scala/org/apache/spark/sql/UserDefinedTypeSuite.scala
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala
commit 153a23af7d815ce1cc275f5e9657649a5d2de1cc
Author: Josh Rosen <[email protected]>
Date: 2015-09-17T18:40:24Z
[SPARK-10657] Remove SCP-based Jenkins log archiving
As of https://issues.apache.org/jira/browse/SPARK-7561, we no longer need
to use our custom SCP-based mechanism for archiving Jenkins logs on the master
machine; this has been superseded by the use of a Jenkins plugin which archives
the logs and provides public links to view them.
Per shaneknapp, we should remove this log syncing mechanism if it is no
longer necessary; removing the need to SCP from the Jenkins workers to the
masters is a desired step as part of some larger Jenkins infra refactoring.
Author: Josh Rosen <[email protected]>
Closes #8793 from JoshRosen/remove-jenkins-ssh-to-master.
(cherry picked from commit f1c911552cf5d0d60831c79c1881016293aec66c)
Signed-off-by: Josh Rosen <[email protected]>
commit dc5ae033427033f16efe2e9fd7726a21ea36a2e5
Author: linweizhong <[email protected]>
Date: 2015-09-18T05:25:24Z
[SPARK-9522] [SQL] SparkSubmit process can not exit if kill application
when HiveThriftServer was starting
When we start HiveThriftServer, we start SparkContext first and then HiveServer2. If we kill the application while HiveServer2 is starting, SparkContext stops successfully, but the SparkSubmit process cannot exit.
Author: linweizhong <[email protected]>
Closes #7853 from Sephiroth-Lin/SPARK-9522.
(cherry picked from commit 93c7650ab60a839a9cbe8b4ea1d5eda93e53ebe0)
Signed-off-by: Yin Huai <[email protected]>
commit f97db949923f60f805663849693c8077e8398b8c
Author: Felix Bechstein <[email protected]>
Date: 2015-09-18T05:42:46Z
docs/running-on-mesos.md: state default values in default column
This PR simply uses the default value column for defaults.
Author: Felix Bechstein <[email protected]>
Closes #8810 from felixb/fix_mesos_doc.
(cherry picked from commit 9a56dcdf7f19c9f7f913a2ce9bc981cb43a113c5)
Signed-off-by: Reynold Xin <[email protected]>
commit 2c6a51e1443aa6bf1401319560e0b5387160bce5
Author: navis.ryu <[email protected]>
Date: 2015-09-18T07:43:02Z
[SPARK-10684] [SQL] StructType.interpretedOrdering need not to be serialized
Kryo fails with buffer overflow even with max value (2G).
{noformat}
org.apache.spark.SparkException: Kryo serialization failed: Buffer
overflow. Available: 0, required: 1
Serialization trace:
containsChild (org.apache.spark.sql.catalyst.expressions.BoundReference)
child (org.apache.spark.sql.catalyst.expressions.SortOrder)
array (scala.collection.mutable.ArraySeq)
ordering (org.apache.spark.sql.catalyst.expressions.InterpretedOrdering)
interpretedOrdering (org.apache.spark.sql.types.StructType)
schema (org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema). To
avoid this, increase spark.kryoserializer.buffer.max value.
at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}
Author: navis.ryu <[email protected]>
Closes #8808 from navis/SPARK-10684.
(cherry picked from commit e3b5d6cb29e0f983fcc55920619e6433298955f5)
Signed-off-by: Reynold Xin <[email protected]>
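For reference, the workaround named in the stack trace itself (raising the Kryo buffer ceiling) looks roughly like the sketch below, with illustrative values; the actual fix in this commit is simply to stop serializing `interpretedOrdering`:
```scala
// Sketch of the configuration-level workaround mentioned in the error message.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.max", "512m")
```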
commit e1e781f04963a69f9b2d0be664ed2457016d94d2
Author: Cheng Lian <[email protected]>
Date: 2015-09-18T19:19:08Z
[SPARK-10540] Fixes flaky all-data-type test
This PR breaks the original test case into multiple ones (one test case for
each data type). In this way, test failure output can be much more readable.
Within each test case, we build a table with two columns: one holds the data type under test, and the other is an "index" column, which is used to sort the DataFrame and work around [SPARK-10591] [1]
[1]: https://issues.apache.org/jira/browse/SPARK-10591
Author: Cheng Lian <[email protected]>
Closes #8768 from liancheng/spark-10540/test-all-data-types.
(cherry picked from commit 00a2911c5bea67a1a4796fb1d6fd5d0a8ee79001)
Signed-off-by: Yin Huai <[email protected]>
commit 3df52ccfa701f759bd60fe048d47d3664769b37f
Author: Yijie Shen <[email protected]>
Date: 2015-09-18T20:20:13Z
[SPARK-10539] [SQL] Project should not be pushed down through Intersect or
Except #8742
Intersect and Except are both set operators, and they use all the columns to compare equality between rows. When their Project parent is pushed down, the relations they are based on change, so the transformation is not equivalent.
JIRA: https://issues.apache.org/jira/browse/SPARK-10539
I added some comments based on the fix of
https://github.com/apache/spark/pull/8742.
Author: Yijie Shen <[email protected]>
Author: Yin Huai <[email protected]>
Closes #8823 from yhuai/fix_set_optimization.
(cherry picked from commit c6f8135ee52202bd86adb090ab631e80330ea4df)
Signed-off-by: Yin Huai <[email protected]>
Conflicts:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
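A small counterexample may help show why the pushdown is unsound (data and names are made up; assumes an existing `sqlContext`): when the non-projected columns differ, projecting before the Intersect manufactures matches the original query would not produce:
```scala
// Illustrative only: left and right agree on "key" but differ on "value".
val left  = sqlContext.createDataFrame(Seq((1, "a"))).toDF("key", "value")
val right = sqlContext.createDataFrame(Seq((1, "b"))).toDF("key", "value")

left.intersect(right).select("key").show()                // no rows: the correct answer
left.select("key").intersect(right.select("key")).show()  // one row (1): the bad rewrite's answer
```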
commit 4051fffaa2533dacda7ec91650cc0675ce8a65cc
Author: Holden Karau <[email protected]>
Date: 2015-09-18T20:47:14Z
[SPARK-10449] [SQL] Don't merge decimal types with incompatible precision or scales
From JIRA: Schema merging should only handle struct fields. But currently
we also reconcile decimal precision and scale information.
Author: Holden Karau <[email protected]>
Closes #8634 from holdenk/SPARK-10449-dont-merge-different-precision.
(cherry picked from commit 3a22b1004f527d54d399dd0225cd7f2f8ffad9c5)
Signed-off-by: Cheng Lian <[email protected]>
commit a6c315358b4517c461beabd5cd319d56d9fddd57
Author: Mingyu Kim <[email protected]>
Date: 2015-09-18T22:40:58Z
[SPARK-10611] Clone Configuration for each task for NewHadoopRDD
This patch attempts to fix the Hadoop Configuration thread safety issue for
NewHadoopRDD in the same way SPARK-2546 fixed the issue for HadoopRDD.
Author: Mingyu Kim <[email protected]>
Closes #8763 from mingyukim/mkim/SPARK-10611.
(cherry picked from commit 8074208fa47fa654c1055c48cfa0d923edeeb04f)
Signed-off-by: Josh Rosen <[email protected]>
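The essence of that approach, sketched loosely (not the actual RDD code; the object and method names are hypothetical): give each task its own copy of the Hadoop `Configuration`, taking a lock while copying because the copy constructor itself reads the shared object:
```scala
// Loose sketch of per-task cloning of a shared Hadoop Configuration.
import org.apache.hadoop.conf.Configuration

object ConfCloneSketch {
  private val cloneLock = new Object

  def taskLocalConf(shared: Configuration): Configuration =
    cloneLock.synchronized {
      new Configuration(shared) // each task then mutates only its private copy
    }
}
```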
commit b3f1e653320e074fe78971a2a3b659c36da20b45
Author: Cheng Lian <[email protected]>
Date: 2015-09-19T01:42:20Z
[SPARK-10623] [SQL] Fixes ORC predicate push-down
When pushing down a leaf predicate, ORC `SearchArgument` builder requires
an extra "parent" predicate (any one among `AND`/`OR`/`NOT`) to wrap the leaf
predicate. E.g., to push down `a < 1`, we must build `AND(a < 1)` instead.
Fortunately, when actually constructing the `SearchArgument`, the builder will
eliminate all those unnecessary wrappers.
This PR is based on #8783 authored by zhzhan. I also took the chance to simplify `OrcFilters` a little bit to improve readability.
Author: Cheng Lian <[email protected]>
Closes #8799 from liancheng/spark-10623/fix-orc-ppd.
(cherry picked from commit 22be2ae147a111e88896f6fb42ed46bbf108a99b)
Signed-off-by: Yin Huai <[email protected]>
Conflicts:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFilters.scala
commit 49355d0e032cfe82b907e6cb45c0b894387ba46b
Author: Andrew Or <[email protected]>
Date: 2015-09-19T06:58:25Z
[SPARK-10474] [SQL] Aggregation fails to allocate memory for pointer array
When `TungstenAggregation` hits memory pressure, it switches from
hash-based to sort-based aggregation in-place. However, in the process we try
to allocate the pointer array for writing to the new `UnsafeExternalSorter`
*before* actually freeing the memory from the hash map. This led to the following exception:
```
java.io.IOException: Could not acquire 65536 bytes of memory
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.initializeForWriting(UnsafeExternalSorter.java:169)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.spill(UnsafeExternalSorter.java:220)
at org.apache.spark.sql.execution.UnsafeKVExternalSorter.<init>(UnsafeKVExternalSorter.java:126)
at org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.destructAndCreateExternalSorter(UnsafeFixedWidthAggregationMap.java:257)
at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.switchToSortBasedAggregation(TungstenAggregationIterator.scala:435)
```
Author: Andrew Or <[email protected]>
Closes #8827 from andrewor14/allocate-pointer-array.
(cherry picked from commit 7ff8d68cc19299e16dedfd819b9e96480fa6cf44)
Signed-off-by: Andrew Or <[email protected]>
commit aaae67df9c03013af0677eb5c1146784f6efc3d1
Author: Kousuke Saruta <[email protected]>
Date: 2015-09-19T08:59:36Z
[SPARK-10584] [SQL] [DOC] Documentation about the compatible Hive version
is wrong.
In Spark 1.5.0, Spark SQL is compatible with Hive 0.12.0 through 1.2.1 but
the documentation is wrong.
/CC yhuai
Author: Kousuke Saruta <[email protected]>
Closes #8776 from sarutak/SPARK-10584-2.
(cherry picked from commit d507f9c0b7f7a524137a694ed6443747aaf90463)
Signed-off-by: Cheng Lian <[email protected]>
commit 9b74fecb3f6091bdf2f3785490c1de0d9042c338
Author: Alexis Seigneurin <[email protected]>
Date: 2015-09-19T11:01:22Z
Fixed links to the API
Submitting this change on the master branch as requested in
https://github.com/apache/spark/pull/8819#issuecomment-141505941
Author: Alexis Seigneurin <[email protected]>
Closes #8838 from aseigneurin/patch-2.
(cherry picked from commit d83b6aae8b4357c56779cc98804eb350ab8af62d)
Signed-off-by: Sean Owen <[email protected]>
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes to enable it, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]