Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/19294
@szhem @mridulm Sorry for the late reply. I just came back from vacation.
Sure. I'll try this PR with SHC.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
@steveloughran Thanks Steve.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/18127
Thanks, @HyukjinKwon. Yes, but I will come back here after I finish other
work. Do I need to close this for now and reopen it at that time?
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/18127
I think the build errors are not related to these code changes:
```
java.lang.RuntimeException: 1 fatal warnings
at scala.sys.package$.error(package.scala:27
```
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/18127
Jenkins, test this please.
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/18127
[SPARK-6628][SQL][Branch-2.1] Fix ClassCastException when executing sql
statement 'insert into' on hbase table
## What changes were proposed in this pull request?
The issue of SPARK-6628
Github user weiqingy closed the pull request at:
https://github.com/apache/spark/pull/17989
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17989
Spark 1.6 is a little old. I'll move the change to branch-2.1.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17989
Jenkins, test this please
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17989#discussion_r116631706
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveWriterContainers.scala ---
@@ -70,8 +70,11 @@ private[hive] class SparkHiveWriterContainer
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/17989
[SPARK-6628][SQL] Fix ClassCastException when executing sql statement
'insert into' on hbase table
## What changes were proposed in this pull request?
The major issue of SPARK-6628
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
Thanks, @vanzin .
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r113103389
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2606,4 +2607,19 @@ class SQLQuerySuite extends QueryTest
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r113061767
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -146,6 +149,7 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r112374078
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -146,6 +149,7 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r112019602
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
Jenkins, test this please
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
The failures, I think, were not triggered by this code change. Will
re-trigger Jenkins.
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111870688
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -1021,4 +1021,19 @@ class UtilsSuite extends SparkFunSuite
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111300718
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r110511571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107713074
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int = 10240
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107456555
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int = 10240
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107331216
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107215428
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107069396
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107063651
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int = 10240
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r106978348
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int = 10240
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r106977805
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int = 10240
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r106977152
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
Hi, @rxin Could you please review this PR? Thanks.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
Jenkins, test this please
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/17342
`org.apache.spark.storage.BlockManagerProactiveReplicationSuite.proactive
block replication - 3 replicas - 2 block manager deletions` failed, but it
passed locally.
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/17342
[SPARK-18910][SPARK-12868] Allow adding jars from hdfs
## What changes were proposed in this pull request?
Spark 2.2 is going to be cut; it would be great if SPARK-12868 could be resolved
before
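The feature under discussion can be illustrated with a short, hedged sketch (the jar path is invented for illustration; the `sc.addJar` call is commented out because it needs a live SparkContext):

```scala
import java.net.URI

// Hedged sketch of what SPARK-12868 is about: Spark should accept a jar
// located on HDFS, not only on the local filesystem. The path below is
// illustrative, not taken from the PR.
val hdfsJar = "hdfs:///tmp/udfs/example-udf.jar"
val scheme = new URI(hdfsJar).getScheme  // "hdfs" -- the scheme this PR enables
// With the fix applied, on a live SparkContext:
// sc.addJar(hdfsJar)
println(scheme)  // hdfs
```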
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16426
Thanks, @srowen. I have updated the title.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16426
Hi, @srowen Could you please review this PR again? Thanks.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16426
Retest this please
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16426
Hi, @srowen Thanks for the review. I have updated the PR. The current PR is
mainly about examples (and two suites: Yarn suite and SparkContext suite).
I went through the tests. I found
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16426
Thanks for the comments, @dongjoon-hyun .
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16426
[MINOR][TEST] Add `finally` clause for `sc.stop()`
## What changes were proposed in this pull request?
Add `finally` clause for `sc.stop()` in the `test("register and deregister
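A minimal sketch of the cleanup pattern the PR title describes (the `FakeContext` stand-in is invented here so the pattern can be shown without a cluster; the real change wraps the test's SparkContext):

```scala
// Illustrative only: a stand-in for SparkContext.
class FakeContext {
  var stopped = false
  def stop(): Unit = stopped = true
}

val sc = new FakeContext
try {
  // test body that may throw an assertion error
  require(1 + 1 == 2)
} finally {
  sc.stop()  // runs even if the body above throws, so no context leaks
}
println(sc.stopped)  // true
```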
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16176
Thanks for the review. @cloud-fan
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16176
Yes, I have updated the PR. @cloud-fan
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16176
Hi, @cloud-fan I have updated the PR to add more encoders. Could you
please review this PR again? Thanks.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16223
Thanks @srowen for the review.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16223
Hi, @srowen The previous
[PR#16159](https://github.com/apache/spark/pull/16159#issuecomment-265000724)
broke
[spark-master-compile-sbt-scala-2.10](https://amplab.cs.berkeley.edu/jenkins/view
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16223
[SPARK-18697][BUILD] Upgrade sbt plugins
## What changes were proposed in this pull request?
This PR is to upgrade sbt plugins. The following sbt plugins will be
upgraded
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16176
Retest this please.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16176
The only failure is
[org.apache.spark.sql.kafka010.KafkaSourceStressForDontFailOnDataLossSuite.stress
test for
failOnDataLoss=false](https://github.com/apache/spark/blob/master/external/kafka
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16159
Thanks, @srowen @zsxwing . My bad. I am looking into it.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16176
@gatorsmile Could you please review this PR? Thanks.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16159
Thanks for the review, @srowen .
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16176
[SPARK-18746][SQL] Add newBigDecimalEncoder
## What changes were proposed in this pull request?
Add `newBigDecimalEncoder` in `SQLImplicits.scala`.
## How was this patch tested?
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/16159#discussion_r91006592
--- Diff: project/SparkBuild.scala ---
@@ -596,19 +596,17 @@ object Hive {
}
object Assembly {
- import sbtassembly.AssemblyUtils
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16159
Retest this please.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16159
It failed the unit test:
`org.apache.spark.sql.streaming.StreamingQueryListenerSuite.single listener,
check trigger events are generated correctly`. However, I just ran it locally,
and it passed
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16159
The references for upgrading sbt-assembly are as follows:
https://github.com/sbt/sbt-assembly/blob/master/Migration.md
https://github.com/sbt/sbt-assembly/blob/0.12.0/src/main/scala
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16159
[SPARK-18697][BUILD] Upgrade sbt plugins
## What changes were proposed in this pull request?
This PR is to upgrade sbt plugins. The following sbt plugins will be
upgraded
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
Hi, @srowen I have fixed the deprecation warnings. We should also use
`mimaBinaryIssueFilters` and `mimaPreviousArtifacts` instead of the deprecated
`binaryIssueFilters` and `previousArtifacts` in `MimaBuild.scala`
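A hedged sbt sketch of the rename described above (the setting names follow the sbt-mima-plugin deprecations mentioned in the comment; the right-hand sides are illustrative, not the actual `MimaBuild.scala` contents):

```scala
// before (deprecated keys):
//   binaryIssueFilters ++= ignoredABIProblems
//   previousArtifacts  := Set(previousSparkArtifact)
// after (mima-prefixed keys):
//   mimaBinaryIssueFilters ++= ignoredABIProblems
//   mimaPreviousArtifacts  := Set(previousSparkArtifact)
```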
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
Hi, @srowen Thanks for the review. I have updated the PR to fix deprecation
warnings (about '`previousArtifact`', '`/`', '`stringToReference`', '`<+= `'
operator, '`t3ToTable3`', '`<<=`'
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
I will update the PR to fix the deprecation warnings in
project/SparkBuild.scala.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16062
@srowen Thanks for the review.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
Thanks, @dongjoon-hyun. Yes, good catch. I have updated the description.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
@srowen Thanks for the information.
For the sbt update, I think the only file that currently needs to change is
`project/build.properties`. The [Jenkins build console
output](https
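For reference, the change described above would amount to a one-line edit (version number taken from the PR title; a sketch, not the actual diff):

```properties
# project/build.properties
sbt.version=0.13.13
```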
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
@JoshRosen OK. I will update Zinc to 0.3.11, the latest stable version (The
version of Zinc downloaded from "`Download the latest stable version`" in [Zinc
github](https://github.com/t
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
@srowen Thanks for the feedback. I will check the scripts in `build/`. Yes,
I will upgrade Zinc too. Should I create another JIRA to track the Zinc
upgrade, or can we merge the upgrade
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
@JoshRosen Thanks for the reply. I have checked the status of each sbt
plugin. Except for the plugins below, all the other plugins are up to date.
- sbt-assembly: 0.11.2 -> 0.1
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
Hi, @JoshRosen I am wondering whether I should also upgrade the sbt plugins in
this PR. What do you think of this upgrade? Your suggestions would be helpful. Thanks.
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16069
[WIP][SPARK-18638][BUILD] Upgrade sbt to 0.13.13
## What changes were proposed in this pull request?
This PR is to upgrade sbt from 0.13.11 to 0.13.13.
The release notes since
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16062
[SPARK-18629][SQL] Fix numPartition of JDBCSuite Testcase
## What changes were proposed in this pull request?
Fix numPartition of JDBCSuite Testcase.
## How was this patch tested?
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15941
Thanks for reviewing this, @srowen
---
Github user weiqingy closed the pull request at:
https://github.com/apache/spark/pull/15960
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15960
Thanks @rxin @srowen @holdenk.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15960
@srowen Ok. Thanks for the reply.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15960
Hi, @rxin @srowen Thanks for the prompt feedback and suggestions. Yes, I
understand and agree with your concerns. The motivation for creating this PR is
that I think the redundant string interpolators
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/15960
[SPARK-18521] Add `NoRedundantStringInterpolator` Scala rule
## What changes were proposed in this pull request?
This PR is to add a new scala style rule 'NoRedundantStringInterpolator
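A hedged illustration of the kind of code such a rule would presumably flag (the rule's actual matching logic is not shown in this thread):

```scala
// The `s` prefix below is redundant: the string contains no `$` placeholders,
// so a plain string literal would do. A NoRedundantStringInterpolator-style
// rule would warn on it.
val flagged = s"no placeholders here"

// Interpolation actually used -- no warning expected.
val name = "Spark"
val fine = s"hello, $name"

println(flagged)  // no placeholders here
println(fine)     // hello, Spark
```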
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15941
Hi, @rxin I am not sure whether I will find more for the doc. If it is OK with
you, I can just mark this PR as WIP. If I find more, I will add them in. Thanks.
---
If your project is set up for it, you
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15941
@srowen Thanks for the review. I have updated the PR.
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/15941
[SQL][DOC] Fix incorrect `code` tag in docs/sql-programming-guide.md
## What changes were proposed in this pull request?
This PR is to fix incorrect `code` tag in `sql-programming-guide.md
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15886
Thanks for the review, @srowen.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15869
Hi, @srowen I have updated the title and description. Thanks.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15869
@tgravescs Thanks for the reply. I have updated the PR.
---
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15886
@srowen Thanks for reviewing this minor PR. I found these typos when
I was working on my own jobs and needed to read this documentation, so the PR
is pretty small. I will pay attention
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15886#discussion_r88070180
--- Diff: docs/sql-programming-guide.md ---
@@ -1029,7 +1029,7 @@ following command:
bin/spark-shell --driver-class-path postgresql-9.4.1207.jar --jars
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15563#discussion_r88064926
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -207,6 +207,10 @@ package object config {
.booleanConf
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15869
Hi, @tgravescs What do you think about `spark.driver.memory`,
`spark.executor.memory`, and `spark.executor.instances`?
- Move `spark.executor.instances` from `running-on-yarn.md
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r88059588
--- Diff: docs/running-on-yarn.md ---
@@ -495,6 +468,20 @@ To use a custom metrics.properties for the application master and executors, upd
name
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15869
@rxin Thanks for commenting. The reason I created two PRs is that I did not
notice these issues at the same time. This one is still under discussion, and I
am not sure whether it will be merged; however
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/15886
[MINOR][DOC] Fix typos in the 'configuration', 'monitoring' and
'sql-programming-guide' documentation
## What changes were proposed in this pull request?
Fix typos in the 'configuration
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15869
Hi, @srowen I have replied to your comments and updated the PR. Could you
please review it again? Thanks.
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87945871
--- Diff: docs/running-on-yarn.md ---
@@ -495,6 +468,20 @@ To use a custom metrics.properties for the application master and executors, upd
name
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87945875
--- Diff: docs/running-on-yarn.md ---
@@ -495,6 +468,20 @@ To use a custom metrics.properties for the application master and executors, upd
name
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15869#discussion_r87945049
--- Diff: docs/running-on-yarn.md ---
@@ -118,19 +118,6 @@ To use a custom metrics.properties for the application master and executors, upd
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/15869
[YARN][DOC] Update Yarn configuration doc
## What changes were proposed in this pull request?
- Add documentation for two yarn configurations:
`spark.yarn.report.interval
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15858
@srowen Thanks for the review.
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15563#discussion_r87688398
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2578,26 +2579,38 @@ private[util] object CallerContext extends Logging
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15563#discussion_r87688391
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
@@ -207,6 +207,10 @@ package object config {
.booleanConf
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15563
Thanks a lot for the review, @tgravescs @mridulm
---
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/15858
[MINOR][YARN] Define 'spark.yarn.am.port' in yarn config object
## What changes were proposed in this pull request?
This PR is to define 'spark.yarn.am.port' in yarn config.scala and use
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/15563
@mridulm @tgravescs It seems Jenkins isn't working?
---
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15563#discussion_r87131269
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2587,17 +2589,16 @@ private[spark] class CallerContext(
taskId: Option[Long