GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/9559
[SPARK-11583] Make MapStatus use less memory usage
In the resolved issue https://issues.apache.org/jira/browse/SPARK-11271, as
I said, using BitSet can save ~20% memory usage compared
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9559#issuecomment-154983945
retest plz
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/9661
[SPARK-11583][Core] MapStatus using RoaringBitmap more properly
1. test cases
1.1 sparse case: for each task, 10 blocks contain data, the others don't
sc.makeRDD(1 to 40950, 4095).groupBy(x =>
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/9661#discussion_r44745422
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -154,15 +155,17 @@ private[spark] class HighlyCompressedMapStatus
private
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/9661#discussion_r44751632
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -154,15 +155,17 @@ private[spark] class HighlyCompressedMapStatus
private
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9661#issuecomment-156359312
Regarding the questions in my last comment:
### continuous
```scala
scala> import org.roaringbitmap._
import org.roaringbitmap._
scala> val r
```
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/9661#discussion_r44759326
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -154,15 +155,17 @@ private[spark] class HighlyCompressedMapStatus
private
Github user yaooqinn closed the pull request at:
https://github.com/apache/spark/pull/9559
---
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9559#issuecomment-156309802
OK, closing this pr; see #9661
---
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9661#issuecomment-156327126
## test cases
sparse case: 4085 empty
```scala
sc.makeRDD(1 to 40950, 4095).groupBy(x=>x).top(5)
```
dense case: 95 empty
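A hedged sketch of the trade-off these two test cases probe (this is not Spark's actual HighlyCompressedMapStatus code; the type names and the density threshold are illustrative): when few blocks are empty, a small sorted index array is cheapest, while a mostly-empty shuffle output is better served by a bitmap.

```scala
import java.util.BitSet

sealed trait EmptyBlocks { def contains(i: Int): Boolean }

// Sparse case: few empty blocks, so a sorted array of indices stays tiny.
final case class SparseEmptyBlocks(indices: Array[Int]) extends EmptyBlocks {
  def contains(i: Int): Boolean = java.util.Arrays.binarySearch(indices, i) >= 0
}

// Dense case: many empty blocks, so a fixed-size bitmap is more compact.
final case class DenseEmptyBlocks(bits: BitSet) extends EmptyBlocks {
  def contains(i: Int): Boolean = bits.get(i)
}

// Pick the representation from the observed density (threshold is illustrative).
def trackEmptyBlocks(numBlocks: Int, empty: Seq[Int]): EmptyBlocks =
  if (empty.size < numBlocks / 64) {
    SparseEmptyBlocks(empty.sorted.toArray)
  } else {
    val bs = new BitSet(numBlocks)
    empty.foreach(i => bs.set(i))
    DenseEmptyBlocks(bs)
  }
```

With 4095 tasks and only 10 non-empty blocks each, the dense branch wins; with 95 empty out of 4095, the sparse branch does.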
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/9661#discussion_r45007852
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -21,24 +21,24 @@ import java.io.{EOFException, IOException
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/9661#discussion_r45007872
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -173,18 +172,15 @@ private[spark] object HighlyCompressedMapStatus
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9661#issuecomment-157253352
$ git push https://github.com/yaooqinn/spark.git mapstatus-roaring:test
Counting objects: 5581, done.
Delta compression using up to 4 threads.
Compressing
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9559#issuecomment-155252219
@rxin replaced HashSet with OpenHashSet
---
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9559#issuecomment-155252403
@andrewor14 Thanks for your advice
---
Github user yaooqinn closed the pull request at:
https://github.com/apache/spark/pull/9661
---
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9661#issuecomment-157300997
@davies thanks.
---
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/10462
[Core] Remove redundant collection conversions.
Remove redundant collection conversions, e.g. calling `toArray` on what is already an `Array`.
You can merge this pull request into a Git repository by running:
$ git pull
Github user yaooqinn closed the pull request at:
https://github.com/apache/spark/pull/10462
---
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/9987#issuecomment-189095247
@vanzin
```
16/02/26 10:35:04 WARN TaskSetManager: Lost task 213.0 in stage 1.0 (TID
16971, SZV141645): FetchFailed(BlockManagerId(118, 192.168.75.193
```
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7770#discussion_r53543399
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -349,6 +365,23 @@ private[ui] class StagePage(parent: StagesTab) extends
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/3438#issuecomment-208769628
Is there any progress on this mechanism study?
---
Github user yaooqinn commented on the pull request:
https://github.com/apache/spark/pull/11499#issuecomment-192824744
LGTM! Thanks!
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/16955
@vanzin Thanks, all tests passed
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/16955
@srowen @jerryshao Thanks for your comments. I have added some
descriptions both in the JIRA and here; please check whether they are OK.
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/16955#discussion_r101491787
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/CredentialUpdater.scala
---
@@ -55,14 +55,10 @@ private[spark
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/16955
[SPARK-19626] Update credentials using spark.yarn.credentials.updateTime
## What changes were proposed in this pull request?
Update credentials using spark.yarn.credentials.updateTime
## How
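A minimal sketch of the idea behind SPARK-19626 (not the actual patch): schedule the credential refresh on a background thread using a configurable interval rather than a hard-coded one. The helper name and the callback shape are illustrative.

```scala
import java.util.concurrent.{Executors, ScheduledExecutorService, TimeUnit}

// Schedule periodic credential refreshes at a configurable interval,
// e.g. one read from spark.yarn.credentials.updateTime.
def scheduleCredentialUpdates(
    updateTimeMs: Long,
    updateCredentials: () => Unit): ScheduledExecutorService = {
  val pool = Executors.newSingleThreadScheduledExecutor()
  pool.scheduleWithFixedDelay(
    new Runnable { def run(): Unit = updateCredentials() },
    updateTimeMs, updateTimeMs, TimeUnit.MILLISECONDS)
  pool
}
```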
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/16955
cc again @jerryshao
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/16955
ping @srowen, would you plz verify this patch again? thanks.
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15071
@hvanhovell
For the current `BroadcastHashJoinExec`, when the key is not unique, we
generate join code like this:
```
while (matches.hasNext) {
  matched = matches.next
```
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15071
@hvanhovell thanks very much for your suggestions. I have added my comments
to the description, and I will run benchmark later to see if it works
---
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/15071
[WIP][SPARK-17517][SQL]Improve generated Code for BroadcastHashJoinExec
## What changes were proposed in this pull request?
For current `BroadcastHashJoinExec`, we generate join code
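A hand-written toy analogue of the loop being discussed (this is not the actual generated code, and the helper name is hypothetical): with a non-unique key, each stream row must iterate over every matching build-side row, whereas a unique key needs only one lookup.

```scala
// Toy analogue of the generated non-unique-key join loop.
def joinNonUnique[K, L, R](
    stream: Seq[(K, L)],
    buildMap: Map[K, Seq[R]]): Seq[(L, R)] =
  for {
    (k, l) <- stream
    r <- buildMap.getOrElse(k, Seq.empty)  // the "while (matches.hasNext)" part
  } yield (l, r)
```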
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/15227
[SPARK-17655][SQL] Remove unused variable declarations and definitions in a
WholeStageCodeGened stage
## What changes were proposed in this pull request?
A WholeStageCodeGened stage
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15071
cc @davies
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/15071#discussion_r79637407
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/BroadcastHashJoinBenchmark.scala
---
@@ -0,0 +1,84 @@
+/*
+ * Licensed
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15071
@hvanhovell I have added a benchmark test for this, could you please help
me to review? thanks.
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15071
@hvanhovell
I think variable-length fields may lead to memory overlap in the
`BuildLeft` case, since we are reusing the `BufferHolder` to avoid writing
the stream side repeatedly
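A toy model of the buffer-reuse hazard mentioned above (this is not Spark's BufferHolder; the class is a simplified illustration): a reused buffer tracks a cursor, and variable-length writes only stay safe if the cursor is reset per row, because each new write lands on top of the previous row's bytes.

```scala
// Simplified stand-in for a reusable row buffer.
final class ToyBufferHolder(size: Int) {
  val buffer = new Array[Byte](size)
  var cursor = 0
  def reset(): Unit = cursor = 0
  // Returns the offset where this field begins.
  def write(bytes: Array[Byte]): Int = {
    val start = cursor
    System.arraycopy(bytes, 0, buffer, cursor, bytes.length)
    cursor += bytes.length
    start
  }
}
```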
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/15193
[SQL] Make RowBasedKeyValueBatch reuse valueRow too
## What changes were proposed in this pull request?
reuse the cached valueRow in RowBasedKeyValueBatch
## How was this patch
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15193
cc @ooq @sameeragarwal @davies is this correct and necessary?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/15071
ping @hvanhovell
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/11045
Hi @winningsix, I am interested in your idea, and I am confused about
the field `userName` used in your code: 1) where is it initialized? 2) is it
used for privilege checking?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/11045
@winningsix Glad to hear from you. And we can only use this through Spark
ThriftServer, right?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/16445
ping @srowen would you plz take a look at this pr?
---
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/16445
[SPARK-19043][SQL]Make SparkSQLSessionManager more configurable
## What changes were proposed in this pull request?
To make SparkSQLSessionManager's background operation thread pool
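A minimal sketch of what "more configurable" means here (illustrative only; the config key below is assumed, not the one the patch actually introduces): size the background-operation thread pool from configuration instead of a hard-coded constant.

```scala
import java.util.concurrent.{ExecutorService, Executors}

// Build the background-operation pool from a configured size.
// NOTE: "spark.sql.thriftServer.async.threads" is an assumed key for illustration.
def backgroundOperationPool(conf: Map[String, String]): ExecutorService = {
  val size = conf.getOrElse("spark.sql.thriftServer.async.threads", "100").toInt
  Executors.newFixedThreadPool(size)
}
```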
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17387
@jerryshao This may also fix SPARK-19995 and SPARK-19997 on yarn for
those apps that take `SparkSQLCLIDriver.main` as the entry point, plz take a look.
@vanzin @tgravescs @mridulm @dongjoon
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17335
Will this patch cause the problem described in
https://issues.apache.org/jira/browse/SPARK-15754?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17335
@subrotosanyal would you please help to describe
https://github.com/apache/spark/pull/13499 in detail? Thanks
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17387
Yes, this seems to fix only the local mode; in standalone mode it still has
the problem of HDFS token loss. @tgravescs @jerryshao
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17387
cc @tgravescs, tested with a secured HDFS in standalone mode and it works fine.
This pr also has a lot of yarn-specific security arguments to rename.
cc @jerryshao plz take a look
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/17430#discussion_r108055529
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -190,6 +190,7 @@ private[deploy] class SparkSubmitArguments(args
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/17430#discussion_r108061061
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -190,6 +190,7 @@ private[deploy] class SparkSubmitArguments(args
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17430
ok to test, if I am in the whitelist...
---
https://github.com/apache/spark/pull/17430#discussion_r108094996
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -190,6 +190,7 @@ private[deploy] class SparkSubmitArguments(args
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17430
@srowen could you please help me verify this pr? thank you.
---
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/17430
[SPARK-20096][Spark Submit][Minor] Expose the right queue name instead of null
when set by --conf or a config file
## What changes were proposed in this pull request?
While submitting apps with -v
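A minimal sketch of the fallback the title describes, assuming the --queue flag value and the spark properties are already parsed into the shapes below (the helper name is hypothetical): when the flag is absent, fall back to the `spark.yarn.queue` entry supplied via --conf or a properties file instead of printing null.

```scala
// Resolve the queue name for verbose output: flag first, then conf entry.
def effectiveQueue(
    queueArg: Option[String],
    sparkProperties: Map[String, String]): Option[String] =
  queueArg.orElse(sparkProperties.get("spark.yarn.queue"))
```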
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/17430#discussion_r108616336
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -148,6 +148,17 @@ class SparkSubmitSuite
appArgs.childArgs
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17430
ping @felixcheung, would you please take a look again?
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/17430#discussion_r108051074
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -190,6 +190,7 @@ private[deploy] class SparkSubmitArguments(args
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/17430#discussion_r108051067
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -307,7 +308,7 @@ private[deploy] class SparkSubmitArguments(args
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17333
@jerryshao
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17335
With the credentials provided by HiveCredentialProvider and configured by
`hive.metastore.kerberos.principal`, do we still need to re-login with
`spark.yarn.principal` in order to connect to the metastore
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17335
I have tested this with my kerberized hdfs and it works for me. LGTM,
thanks.
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17333
see #17335 for more details, duplicated & closed
---
Github user yaooqinn closed the pull request at:
https://github.com/apache/spark/pull/17333
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17335
The dbs and tbls may be created on hdfs by the real user, so the
proxy user may have no rights to them, causing errors such as:
```
Error: java.lang.RuntimeException: Cannot create staging
```
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/17333
ping @vanzin , can you take a look at this
---
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/17333
[SPARK-19997][SQL] Fix proxy ugi being unable to get a TGT, which causes a
metastore connection problem
## What changes were proposed in this pull request?
Pass the real user ugi instead of proxy ugi
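A toy model of the fix's idea (this is deliberately not the Hadoop UserGroupInformation API; the case class and helper below are a simplified illustration): a proxy UGI carries no TGT of its own, so the metastore connection should be made with the real user's UGI.

```scala
// Simplified stand-in for a UGI: a proxy UGI points at its real user.
final case class ToyUgi(user: String, hasTgt: Boolean, realUser: Option[ToyUgi])

// When the current UGI is a proxy, use the real user's UGI for the metastore.
def ugiForMetastore(current: ToyUgi): ToyUgi =
  current.realUser.getOrElse(current)
```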
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/17387
[SPARK-20060][Deploy][Kerberos][Spark Shell] Obtain credentials for proxy
user before talking to hive metastore
## What changes were proposed in this pull request?
For **Spark on non
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
@cloud-fan would you plz take a look; this pr focuses on the issue of
spark.hadoop.* properties not being respected by the CliSessionState (cliState);
most of them take effect when we call
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
Hi @gatorsmile, I added some UTs in CliSuite, please check!
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r130792745
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
---
@@ -283,4 +283,17 @@ class CliSuite extends
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
There is a bug in HiveClientImpl about reusing cliSessionState, see
[HiveClientImpl.scala#L140](https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/client
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
@vanzin
> the configuration of the execution Hive
Does this mean a hive client initialized by
[HiveUtils.newClientForExecution](https://github.com/apache/spark/blob/master/
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131068575
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131068501
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131074348
--- Diff: docs/configuration.md ---
@@ -2335,5 +2335,59 @@ The location of these configuration files varies
across Hadoop versions, but
a common
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131329214
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -134,6 +135,16 @@ private[hive
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131320143
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -157,12 +168,8 @@ private[hive
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131320120
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -157,12 +168,8 @@ private[hive
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131320240
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -404,6 +404,13 @@ private[spark] object HiveUtils extends Logging
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131321807
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -134,6 +135,16 @@ private[hive
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18648
ping @gatorsmile could you help to review this?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18648
@jiangxb1987 Could this pr be merged?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18648
ping @jiangxb1987 @cloud-fan again
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18648
ping @cloud-fan would you take another look?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18648
@cloud-fan
---
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/18648
[SPARK-21428] Set IsolatedClientLoader off while using builtin Hive jars
for reusing CliSessionState
## What changes were proposed in this pull request?
Set isolated to false while using
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/18668
[SPARK-21451][SQL]get `spark.hadoop.*` properties from sysProps to hiveconf
## What changes were proposed in this pull request?
get `spark.hadoop.*` properties from sysProps
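A minimal sketch of the idea in SPARK-21451 (illustrative, not the actual patch): copy `spark.hadoop.*` entries from the system properties into the Hive configuration, stripping the prefix, so the CLI's session sees them.

```scala
// Extract hadoop/hive conf entries from spark.hadoop.*-prefixed properties.
def hadoopPropsFromSysProps(sysProps: Map[String, String]): Map[String, String] =
  sysProps.collect {
    case (k, v) if k.startsWith("spark.hadoop.") =>
      k.stripPrefix("spark.hadoop.") -> v
  }
```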
GitHub user yaooqinn opened a pull request:
https://github.com/apache/spark/pull/18666
[SPARK-21449][SQL][Hive]Close HiveClient's SessionState to delete residual
dirs
## What changes were proposed in this pull request?
When sparkSession.stop() is called, close
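A minimal sketch of the cleanup the title describes, using plain NIO rather than Hive's SessionState (the helper name is hypothetical): on session stop, delete the scratch directories recursively, children before parents, so no residual dirs are left behind.

```scala
import java.nio.file.{Files, Path}
import java.util.Comparator

// Delete each scratch directory recursively, deepest paths first.
def closeSession(scratchDirs: Seq[Path]): Unit =
  scratchDirs.filter(d => Files.exists(d)).foreach { dir =>
    val walk = Files.walk(dir)
    try walk.sorted(Comparator.reverseOrder[Path]()).forEach(p => Files.delete(p))
    finally walk.close()
  }
```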
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
ping @cloud-fan @gatorsmile
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r128170401
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -404,6 +404,13 @@ private[spark] object HiveUtils extends Logging
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r128186557
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -404,6 +404,13 @@ private[spark] object HiveUtils extends Logging
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
@cloud-fan tests passed, mind taking a look?
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
ping @cloud-fan
---
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
ping @cloud-fan again
---
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131513659
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -99,17 +100,30 @@ class SparkHadoopUtil extends Logging
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131513657
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -404,6 +405,8 @@ private[spark] object HiveUtils extends Logging
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
@gatorsmile CliSuite will get nothing configured here because the
cliSessionState is not reused as we expected; see
https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18648#discussion_r131851577
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveCliSessionStateSuite.scala
---
@@ -0,0 +1,57
Github user yaooqinn commented on a diff in the pull request:
https://github.com/apache/spark/pull/18648#discussion_r131807040
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -312,7 +323,7 @@ private[spark] object HiveUtils extends Logging