Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-82491054
Following Thomas's [earlier
comment](https://github.com/apache/spark/pull/4688#issuecomment-76224212)
* Yes, slider does keytabs. For deploying things like
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83972651
Most of the sync changes in HADOOP-1170 were to deal with HBase's
expectations. Even if you don't think it matters for its current use in Spark,
you should make
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/5119
SPARK-6433 patch 001: test JARs are built; sql/hive pulls in spark-sql ...
1. Test JARs are built and published
1. log4j.resources is explicitly excluded. Without this, downstream test
run
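The two build changes listed above can be sketched as a `maven-jar-plugin` configuration. This is a minimal illustration, not the actual pom from the patch; the plugin placement and exclusion path may differ:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <!-- build and publish a -tests JAR alongside the main artifact -->
      <goals>
        <goal>test-jar</goal>
      </goals>
      <configuration>
        <!-- keep this module's log4j config out of the test JAR so it
             cannot contaminate downstream test runs -->
        <excludes>
          <exclude>log4j.properties</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>
```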
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/4491#discussion_r26760844
--- Diff: core/src/main/scala/org/apache/spark/crypto/CipherSuite.scala ---
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/4491#discussion_r26760061
--- Diff:
core/src/main/scala/org/apache/spark/crypto/CryptoInputStream.scala ---
@@ -0,0 +1,428 @@
+/*
+ * Licensed to the Apache Software
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/4491#discussion_r26760312
--- Diff:
core/src/main/scala/org/apache/spark/crypto/CryptoInputStream.scala ---
@@ -0,0 +1,428 @@
+/*
+ * Licensed to the Apache Software
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-83615286
1. The `InterfaceAudience.Private` tags in Hadoop are a `please don't
use` hint, although if you look at YARN AMs, they end up using
stuff which
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/5070
SPARK-6389 YARN app diagnostics report doesn't report NPEs
Trivial patch to implicitly call `Exception.toString()` over
`Exception.getMessage()`; this defaults to including the exception
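The difference the patch relies on can be shown with plain JDK calls; `DiagnosticsDemo` and `describe` are illustrative names, not code from the PR:

```java
// Illustrates why toString() beats getMessage() for diagnostics: an NPE
// raised without a message reports null from getMessage(), while
// toString() always includes the exception class name.
public class DiagnosticsDemo {
    static String describe(Throwable t) {
        return t.toString();
    }

    public static void main(String[] args) {
        Exception npe = new NullPointerException();
        System.out.println("getMessage(): " + npe.getMessage()); // getMessage(): null
        System.out.println("toString():   " + describe(npe));    // toString():   java.lang.NullPointerException
    }
}
```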
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5070#issuecomment-82419544
I think you'd be welcome to take a quick look for other occurrences in
the code base that plainly need the same treatment.
As I come across them
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5070#issuecomment-82450470
Actually, there are a couple more in the same file. I'll update this pull
with the other ones
---
If your project is set up for it, you can reply to this email
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-85448943
Updated patch with the indentation corrected; plugin version entrusted to
the Apache parent template
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r27116385
--- Diff: pom.xml ---
@@ -1472,6 +1473,45 @@
<groupId>org.scalatest</groupId>
<artifactId>scalatest-maven-plugin</artifactId>
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-86009798
[YARN-896](https://issues.apache.org/jira/browse/YARN-896) is where to go
to look for issues related to long-lived services.
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-88430604
OK
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-86937523
I did a clean build and it didn't work, at least not with this command:
```
mvn clean install -DskipTests -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0
```
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-87403955
It leaves the `log4j.properties` out of every test JAR, to stop it
contaminating downstream tests. You don't want to be trying to debug exactly
which log4j file
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-87436152
To clarify something: do you expect there to be a `log4j.properties` file
in the slider assembly JAR? Because there isn't one, not in trunk@ 0e2753ff
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-87034493
I commented it out again and it did work, so I am now concluding I trust Maven
even less than before. Pushed a new commit with the JAR execution omitted.
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5119#issuecomment-86659856
As an experiment, I changed the plugin declaration to exclude the main JAR
phase, that is, commented out this bit:
```xml
<execution>
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-85992286
On AM failover, YARN refreshes the tokens for the AM so that the NM
localizer can restart the app. When the AM collects these tokens (by the normal
mechanism
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925189
--- Diff: pom.xml ---
@@ -1472,6 +1474,46 @@
<groupId>org.scalatest</groupId>
<artifactId>scalatest-maven-plugin</artifactId>
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/4491#issuecomment-84934768
@kellyzly
I don't understand why we need to make CryptoOutputStream.scala#close
safe. Is there a situation where
multiple threads call this function
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925330
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/QueryTest.scala ---
@@ -1,140 +0,0 @@
-/*
--- End diff --
yes. These are the two
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925134
--- Diff: pom.xml ---
@@ -1472,6 +1474,46 @@
<groupId>org.scalatest</groupId>
<artifactId>scalatest-maven-plugin</artifactId>
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5119#discussion_r26925089
--- Diff: pom.xml ---
@@ -158,6 +158,7 @@
<fasterxml.jackson.version>2.4.4</fasterxml.jackson.version>
<snappy.version>1.1.1.6
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5447#issuecomment-92483319
This may seem a silly question, but why not just use `File.toURI`? It does
handle Windows paths robustly
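A minimal sketch of the suggestion, using only JDK classes (`PathToUriDemo` is an invented name): `File.toURI` percent-encodes unsafe characters and round-trips back to the same path, which is the robustness being referred to:

```java
import java.io.File;
import java.net.URI;

// File.toURI() deals with platform separators and percent-encoding, so no
// hand-rolled "file://" string concatenation is needed.
public class PathToUriDemo {
    static URI toUri(String path) {
        return new File(path).toURI();
    }

    public static void main(String[] args) {
        URI uri = toUri("/tmp/spark test/app.jar");
        System.out.println(uri);                     // spaces come out percent-encoded
        System.out.println(new File(uri).getPath()); // round-trips to the original path
    }
}
```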
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-95675032
There's no obvious reason why the Jenkins build failed; the console says
all the tests passed.
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/5592
SPARK-7009 Build assembly JAR via ant to avoid zip64 problems
This is the ~30 line patch to have Ant generate a zip32 artifact straight
off the shaded JAR. As noted however, the line class
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-94830807
Note that this adds a new profile `hadoop-2.6`, to pull in the 2.6 JARs and
conditionally add yarn/history source tests to the build...without that the
tests
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-95282650
This iteration has a simpler service flush/shutdown logic, with specific
messages for each action queued, and no attempt to trigger the yarn service
stop when
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5637#issuecomment-95486611
The difference between this patch and #5592 is that the latter was trying
to use zip in Ant to handle the JAR creation everywhere. This patch doesn't
do
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/5592
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5592#issuecomment-94509316
The only reason that Jenkins didn't fail the build is that the assembly
package target was skipped.
Interestingly, the console logs show that Spark stream
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-98220352
I'll mark the `PrivilegedFunction` as private; all it does is take a
function `() => T` and run it as a `PrivilegedExceptionAction`, so making
`UGI.doAs` slightly
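A hedged sketch of what such a wrapper might look like in Java, using `Callable<T>` as the closest JDK analogue to Scala's `() => T`; the names are illustrative and this is not Spark's actual helper:

```java
import java.security.PrivilegedExceptionAction;
import java.util.concurrent.Callable;

// Adapts a plain function to the PrivilegedExceptionAction<T> interface
// that UserGroupInformation.doAs expects. Both interfaces have a single
// method returning T that throws Exception, so a method reference suffices.
public class PrivilegedAdapter {
    static <T> PrivilegedExceptionAction<T> asPrivileged(Callable<T> f) {
        return f::call;
    }

    // convenience wrapper so callers don't handle the checked exception
    static <T> T runQuietly(Callable<T> f) {
        try {
            return asPrivileged(f).run();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(runQuietly(() -> "ran as privileged action"));
    }
}
```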
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6033#issuecomment-100831678
Jetty would catch them and generate a 500 response with no body, and not
log it *as far as I could find*.
This patch catches and logs via the spark-log4j
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6033#issuecomment-100916317
(I'm doing some more work on this; adding the Jetty exception details, and
in SPARK-1537, verification that the error text makes it through). If that
works
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6033#issuecomment-100917262
Update: OK, with a Jetty error handler set up right, exceptions are turned
into 500 errors with a `<pre>`-formatted stack trace:
```
<html>
<head>
<meta
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6033#issuecomment-100926303
Latest patch logs then rethrows; a Jetty error handler created on Jetty
server startup will catch this and convert it into a 500+ error page. The Jetty
handler
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/6033
SPARK-7508 JettyUtils-generated servlets to log report all errors
Patch for SPARK-7508
This logs at warn, then generates a response which includes the message body
and stack trace
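The message-plus-stack-trace body described here can be sketched with JDK classes alone; the names are invented and this is not the patch's code:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Builds a text error body containing the exception message and the full
// stack trace, the kind of payload a servlet could return with a 500.
public class ErrorBodyDemo {
    static String errorBody(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw, true));
        return sw.toString(); // first line is the toString(), rest is the trace
    }

    public static void main(String[] args) {
        System.out.println(errorBody(new IllegalStateException("bad request state")));
    }
}
```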
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/6191
SPARK-7669 Builds against Hadoop 2.6+ get inconsistent curator depend…
This adds a new profile, `hadoop-2.6`, copying over the hadoop-2.4
properties, updating ZK to 3.4.6 and making
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/6191#discussion_r30439495
--- Diff: pom.xml ---
@@ -705,7 +706,7 @@
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-93582949
This is a WiP build, with a lot more tests, with integration ones going all
the way from a wired up spark context to an in-memory ATS server; this needs
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r28507287
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryService.scala
---
@@ -0,0 +1,630 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r28507280
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryService.scala
---
@@ -0,0 +1,630 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r28507293
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryService.scala
---
@@ -0,0 +1,630 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r28507400
--- Diff:
yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -56,10 +59,16 @@ private[spark] class
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/5423
SPARK-1537 Application Timeline Server integration
This is a snapshot of the work in progress. It's a superset of zhzhan's work,
compiling against the master branch and with a lot more tests
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/6394#discussion_r31629791
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -253,10 +269,26 @@ private[yarn] class YarnAllocator
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/6394#discussion_r31629703
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -253,10 +269,26 @@ private[yarn] class YarnAllocator
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-110415267
Latest commit demonstrates SPARK-8275, HistoryServer caches incomplete App
UIs. After this test run I'm going to comment out the assertions in question so
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-110712388
There's a deliberate failing test in this patch, which shows that Jenkins
hasn't been doing the test runs.
I've been using the profile `-Phadoop-2.6
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-110354197
I'll make `ApplicationListingResults` `private[spark]`; of the other
warnings, the two `logInfo` callouts are clearly confusion about the use of the
word `class
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-114253979
`CacheEntry` is an unintentionally public class; will fix. I have no idea where
the others came from
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-114254408
Style check: not enough spaces, by the look of things.
```
[error]
/home/jenkins/workspace/SparkPullRequestBuilder@2/core/src/main/scala/org/apache/spark
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/6935
SPARK-7889 Jobs progress of apps on complete page of HistoryServer shows
uncompleted
This patch pulls all the application cache logic out of the `HistoryServer`
and into its own
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-115744548
If you want to do tests, grab the code
*
[WebsiteIntegrationSuite](https://github.com/steveloughran/spark/blob/stevel/feature/SPARK-1537-ATS/yarn/history/src
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-115745128
Test failed in PySpark. Unless those are functional tests which work with a
mini YARN cluster and history server, it's not this patch which broke things
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-115745220
...but will make those case classes private. I'd expected them to be
already, but I guess not
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-114656223
Yes, independent. SPARK-1537 implements the HistoryProvider in YARN, gets
it from ATS, etc. The JIRA we are looking at here, where my patch is
[pull/6935
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-114656473
I should add that I did think about actually using the last-updated
information to decide whether to refresh or not, and decided simply having a
timeout
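The timeout policy described above can be sketched as follows; the class and field names are illustrative, not the actual cache code:

```java
// A fixed-window staleness check: an entry is refreshed once "window"
// milliseconds have passed since it was cached, with no comparison of
// last-updated timestamps from the backing store.
public class RefreshPolicy {
    final long windowMillis;

    RefreshPolicy(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    boolean needsRefresh(long cachedAtMillis, long nowMillis) {
        return nowMillis - cachedAtMillis >= windowMillis;
    }

    public static void main(String[] args) {
        RefreshPolicy policy = new RefreshPolicy(10_000); // 10 second window
        System.out.println(policy.needsRefresh(0, 5_000));  // false: still fresh
        System.out.println(policy.needsRefresh(0, 12_000)); // true: window elapsed
    }
}
```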
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-115622133
I probably tested different bits of it ... I verified that updated apps
were coming in, but no, not a full functional test of the Web UI, as that would
take a lot
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6748#issuecomment-110907285
I hadn't seen the other work you are doing.
How about I worry about the Hadoop 1.2 thrift while you get the other bit in,
and I can build
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/6748
WiP: SPARK-8064 Upgrade to Hive 1.2
This is a work-in-progress branch to add Hive 1.2.0 support.
Current status: it compiles, initial instantiations don't throw up obscure
linkage
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-112366005
1. I can do the test as part of SPARK-1537; indeed, I already have it
replicated, I've just turned that test off. That code already has everything
needed
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5998#issuecomment-112388090
This patch doesn't have a JIRA entry; created
[SPARK-8394](https://issues.apache.org/jira/browse/SPARK-8394) for changelog
generation
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/6394#discussion_r31427020
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -253,10 +269,26 @@ private[yarn] class YarnAllocator
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6441#issuecomment-107459495
Regarding backporting, it could be enough to start with copying
`SparkFunSuite` into the 1.3 and 1.4 branches, so that any new patches written
against trunk can
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-117066169
OK, I've done the shallow changes, will now:
1. rename the health check methods to make clear that what they are doing is
more of a connectivity check
2. add the attempt
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-118005136
Jenkins playing up, perhaps
```
::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/6748
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6748#issuecomment-118012547
I'm going to cancel this and build up a new one, focusing on the hive
thriftserver
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/7188
SPARK-8789 improve SQLQuerySuite resilience by dropping tables in setup
Drops tables used in tests before attempting to create them again. That
way, if the previous test failed
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/7188#issuecomment-118042608
Hmmm, its `withTable` would do it, wouldn't it?
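`withTable` is a loan-pattern test helper; here is a minimal Java sketch of the same idea (all names invented, with a set standing in for the SQL catalog):

```java
import java.util.HashSet;
import java.util.Set;

// Loan pattern: create the named table, run the body, and always drop the
// table afterwards, so a failed test cannot leave state for the next one.
public class WithTableDemo {
    static final Set<String> tables = new HashSet<>();

    static void withTable(String name, Runnable body) {
        tables.add(name);          // stand-in for CREATE TABLE
        try {
            body.run();
        } finally {
            tables.remove(name);   // stand-in for DROP TABLE, runs even on failure
        }
    }

    public static void main(String[] args) {
        try {
            withTable("t1", () -> { throw new RuntimeException("test failure"); });
        } catch (RuntimeException expected) {
            // the table was still dropped despite the failure
        }
        System.out.println(tables.isEmpty()); // true
    }
}
```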
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33465613
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -220,9 +220,9 @@ private[spark] object JettyUtils extends Logging
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33466420
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33465769
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnEventListener.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33467960
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33465661
--- Diff: docs/monitoring.md ---
@@ -256,6 +256,157 @@ still required, though there is only one application
available. Eg. to see the
running app
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33466020
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33459858
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/5423#issuecomment-116667971
Thanks for sitting down to review it; it has grown to handle the end-to-end
problem, auth, unreliable endpoints, etc., where some complexity is always the
curse
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33467225
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-116592795
I like the Selenium test; it could be combined with the provider I wrote
which lets us programmatically create our own history, so add changes we can
look
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33486012
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnEventListener.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33488891
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33491590
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33488124
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33488601
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33488512
--- Diff:
yarn/history/src/main/scala/org/apache/spark/deploy/history/yarn/YarnHistoryProvider.scala
---
@@ -0,0 +1,1015 @@
+/*
+ * Licensed
GitHub user steveloughran opened a pull request:
https://github.com/apache/spark/pull/7191
SPARK-8064, build against Hive 1.2.1
Cherry picked the parts of the initial SPARK-8064 WiP branch needed to get
sql/hive to compile against hive 1.2.1. That's the ASF release packaged under
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33495775
--- Diff: docs/monitoring.md ---
@@ -256,6 +256,157 @@ still required, though there is only one application
available. Eg. to see the
running app
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33496039
--- Diff: docs/monitoring.md ---
@@ -256,6 +256,157 @@ still required, though there is only one application
available. Eg. to see the
running app
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/5423#discussion_r33496190
--- Diff: docs/monitoring.md ---
@@ -256,6 +256,157 @@ still required, though there is only one application
available. Eg. to see the
running app
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-116795511
(BTW, I'm not going to be looking at this for a couple of weeks; focusing
on a (big) patch
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/7191#discussion_r35808519
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -190,10 +191,14 @@ private[hive
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/7191#discussion_r35810466
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/parquet/ParquetCompatibilityTest.scala
---
@@ -31,6 +31,8 @@ import org.apache.spark.util.Utils
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/7191#discussion_r35810709
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLSessionManager.scala
---
@@ -55,12 +56,14 @@ private
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/7191#discussion_r35812664
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -515,16 +530,18 @@ class HiveContext(sc: SparkContext) extends
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-126943233
1. This is a rebased patch; apologies to anyone who has branched off it.
2. It uses our own spark-project/hive version, where hive-exec only
contains the core
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-127048445
Latest patch includes the commits from lliancheng for HiveSubmit missing
spark-hive parquet dependencies on the SBT test runs