Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78272/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78272 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78272/testReport)**
for PR 18114 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78272 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78272/testReport)**
for PR 18114 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122879650
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78271/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78271 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78271/testReport)**
for PR 18114 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18114
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78270/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18355
**[Test build #78270 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78270/testReport)**
for PR 18355 at commit
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122875289
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18356#discussion_r122875121
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -181,6 +182,10 @@ case class DataSource(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18356#discussion_r122874997
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -181,6 +182,10 @@ case class DataSource(
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18356
To avoid potential issues, could you revert all the unrelated changes?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18356#discussion_r122874896
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -222,12 +223,10 @@ case class
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18114
For the `column_datetime_diff_functions`:
![image](https://user-images.githubusercontent.com/11082368/27315654-9ba01c08-552f-11e7-973e-f8351cb50aae.png)
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18114
For the date-time functions, I created two groups: one for arithmetic
functions that work with two columns (`column_datetime_diff_functions`), and the
other for functions that work with only one
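The two-column arithmetic the first group refers to (the SparkR `datediff(end, start)` pattern) can be illustrated in plain Scala with `java.time`; the dates below are arbitrary examples, not from the PR:

```scala
import java.time.LocalDate
import java.time.temporal.ChronoUnit

// Day difference between two date values, mirroring what a two-column
// function like datediff computes per row.
val start = LocalDate.of(2017, 6, 1)
val end = LocalDate.of(2017, 6, 19)
val diff = ChronoUnit.DAYS.between(start, end) // 18 days
```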
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18114
**[Test build #78271 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78271/testReport)**
for PR 18114 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18356
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18356
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78268/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18356
**[Test build #78268 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78268/testReport)**
for PR 18356 at commit
Github user lawlietAi commented on the issue:
https://github.com/apache/spark/pull/18359
Sorry, I'm confused about how to operate GitHub. What should I do?
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/17395
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18359
Can one of the admins verify this patch?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18328#discussion_r122872220
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala ---
@@ -106,6 +105,11 @@ class CacheManager extends Logging {
GitHub user lawlietAi opened a pull request:
https://github.com/apache/spark/pull/18359
Update Word2Vec.scala
## What changes were proposed in this pull request?
the word2vec model needs an independent function to calculate the cosine
similarity. We also desire a function
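A standalone helper of the kind this comment asks for could be sketched as follows; this is an illustration only, and the function name and signature are assumptions, not Word2Vec's actual API:

```scala
// Illustrative sketch: a standalone cosine-similarity helper of the kind
// the comment requests; name and signature are hypothetical, not Spark's API.
def cosineSimilarity(v1: Array[Double], v2: Array[Double]): Double = {
  require(v1.length == v2.length, "vectors must have the same length")
  val dot = v1.zip(v2).map { case (a, b) => a * b }.sum
  val norm1 = math.sqrt(v1.map(x => x * x).sum)
  val norm2 = math.sqrt(v2.map(x => x * x).sum)
  require(norm1 > 0 && norm2 > 0, "vectors must be non-zero")
  dot / (norm1 * norm2)
}
```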
Github user actuaryzhang closed the pull request at:
https://github.com/apache/spark/pull/18140
---
GitHub user actuaryzhang reopened a pull request:
https://github.com/apache/spark/pull/18140
[SPARK-20917][ML][SparkR] SparkR supports string encoding consistent with R
## What changes were proposed in this pull request?
Add `stringIndexerOrderType` to `spark.glm` and
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18140
you can close and re-open this PR on github here
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/18140
How do I do that?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17758
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17758
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78267/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17758
**[Test build #78267 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78267/testReport)**
for PR 17758 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17758
yea, I've already found the cause; to fix the issue, it's okay to check
name duplication for partition columns in `getOrInferFileFormatSchema` as
@gatorsmile suggested
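A name-duplication check along those lines can be sketched in isolation; the object and method below are hypothetical stand-ins for Spark's internal `SchemaUtils`, written only to illustrate the idea of a case-sensitivity-aware duplicate check:

```scala
// Hypothetical sketch of a duplicate-column-name check in the spirit of
// Spark's SchemaUtils; the object and method names here are illustrative.
object SchemaCheck {
  def checkColumnNameDuplication(names: Seq[String], caseSensitive: Boolean): Unit = {
    // Normalize case first when the analysis is case-insensitive.
    val normalized = if (caseSensitive) names else names.map(_.toLowerCase)
    val dups = normalized.groupBy(identity).collect {
      case (name, group) if group.size > 1 => name
    }
    if (dups.nonEmpty) {
      throw new IllegalArgumentException(
        s"Found duplicate column(s): ${dups.mkString(", ")}")
    }
  }
}
```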
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18140
can you kick AppVeyor?
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18320
> Also I'd suggest not committing this to branch-2.2 -- if we want to just
fix the CentOS tests we can have a different change for the older branches
agreed, this won't run as a part of
Github user darionyaphet commented on a diff in the pull request:
https://github.com/apache/spark/pull/18288#discussion_r122869702
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMRelation.scala ---
@@ -91,12 +91,10 @@ private[libsvm] class LibSVMFileFormat
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18025
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18358
Can one of the admins verify this patch?
---
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18358
[SPARK-21148] [CORE] Set SparkUncaughtExceptionHandler to the Master
## What changes were proposed in this pull request?
Adding the default UncaughtExceptionHandler to the Master as
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18025
merged to master, thanks!
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18025
The AppVeyor failure is unfortunate, but it passed before a doc-only change.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18355
**[Test build #78270 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78270/testReport)**
for PR 18355 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18025
haha. I like the `\emph`
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17758
I think we should figure out
https://issues.apache.org/jira/browse/SPARK-21144 first. It doesn't make sense
to have duplicated columns between partition columns and data columns.
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122868692
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -62,13 +63,8 @@ case class
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122867659
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -62,13 +63,8 @@ case class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122867252
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -328,6 +333,9 @@ case class DataSource(
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122866890
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/util/SchemaUtils.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122866830
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -355,12 +356,12 @@ object ViewHelper {
analyzedPlan:
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122866332
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -328,6 +333,9 @@ case class DataSource(
Github user yssharma commented on the issue:
https://github.com/apache/spark/pull/18029
@budde @brkyvz could you suggest whether the current patch seems OK, or whether
I should make something similar to the case class/trait?
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122865863
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -355,12 +356,12 @@ object ViewHelper {
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122865721
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/util/SchemaUtils.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18320
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78269/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18320
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18320
**[Test build #78269 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78269/testReport)**
for PR 18320 at commit
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122863353
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -328,6 +333,9 @@ case class DataSource(
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122862830
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -248,6 +249,10 @@ private[hive] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122862386
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -687,4 +688,52 @@ class DataFrameReaderWriterSuite
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18343
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/14085
@zenglinxi0615
This PR is about adding all files in a directory recursively, so there is no
need to enumerate all the filenames, right? I think this can be pretty useful,
especially in a production env.
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18343
thanks, merging to master/2.2!
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18320
I also tested the current state on CentOS for sure.
---
Github user saturday-shi commented on the issue:
https://github.com/apache/spark/pull/18230
@vanzin [Xing Shi
(saturday_s)](https://issues.apache.org/jira/secure/ViewProfile.jspa?name=saturday_s),
thanks.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122861395
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122860937
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user fjh100456 commented on the issue:
https://github.com/apache/spark/pull/18351
Yes, it should be. @ajbozarth
The screenshot: @zhuoliu
![default](https://user-images.githubusercontent.com/26785576/27312007-89a3eca6-5597-11e7-81fe-7dcff2c2a861.png)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18320
**[Test build #78269 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78269/testReport)**
for PR 18320 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18357
Can one of the admins verify this patch?
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122860370
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122860310
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <- socketConnection(
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18357
[SPARK-21146] [CORE] Worker should handle and shutdown when any thread gets
UncaughtException
## What changes were proposed in this pull request?
Adding the default
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/18350
thanks @srowen
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18356
@gatorsmile This PR includes the whole set of changes in #17758; did you
originally mean this PR should include only a part of them, to fix this issue only?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18356
**[Test build #78268 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78268/testReport)**
for PR 18356 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18356
cc: @gatorsmile
---
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/18356
[SPARK-21144][SQL][BRANCH-2.2] Check column name duplication in read/write
paths
## What changes were proposed in this pull request?
This pr fixed unexpected results when the data schema and
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78266/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18355
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18355
**[Test build #78266 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78266/testReport)**
for PR 18355 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18348
@srowen
Sorry, for the last two or three days I did not deal with my JIRA in time.
Please help to review the code, thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17758
**[Test build #78267 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78267/testReport)**
for PR 17758 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15821
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/78265/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15821
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15821
**[Test build #78265 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/78265/testReport)**
for PR 15821 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122855545
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,15 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking = TRUE, timeout =
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18343
Agreed. The `hugeBlockSizes` map is not supposed to have too many records,
but only a few huge blocks.
LGTM
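The idea can be sketched in isolation: record sizes only for blocks at or above a threshold, so the map stays small. All names and values below are illustrative, not Spark's actual shuffle code:

```scala
// Illustrative only: track sizes just for "huge" blocks, keeping the map small.
val threshold: Long = 1L << 20 // 1 MiB, an assumed cutoff
val blockSizes: Seq[Long] = Seq(100L, 2L << 20, 500L, 5L << 20, 300L)
val hugeBlockSizes: Map[Int, Long] =
  blockSizes.zipWithIndex.collect {
    case (size, blockId) if size >= threshold => blockId -> size
  }.toMap
// Only the two large entries are retained.
```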
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18346
Btw, even if we can evaluate all children expressions of `CodegenFallback`
with the codegen path, we still can't do whole-stage codegen with the plans
including `CodegenFallback` expressions. We just can do
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r122853060
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -182,6 +183,10 @@ case class DataSource(
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18346
Thanks @dbtsai for the comment.
Yeah, I've also tried to let `CodegenFallback` evaluate all its children
under codegen path in parallel with this PR. It works.
Of course the
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
yes, but we only need read access.
---
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14742
yes, we can close this, but it would be great if you could show us a way to
access the grouping columns from SparkR in #14431
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122848957
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,15 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking = TRUE, timeout =
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122847863
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,15 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking = TRUE, timeout =
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/18346
Thanks, @viirya for this PR.
We hit this issue, and @viirya was kindly helping us to find the root
cause. This approach LGTM. One alternative approach we took in the end to
unblock our
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/18355#discussion_r122847078
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQueryManager.scala
---
@@ -332,5 +332,6 @@ class StreamingQueryManager private[sql]
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/18355#discussion_r122846910
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/StateStoreCoordinatorSuite.scala
---
@@ -107,6 +115,43 @@ class
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/18355#discussion_r122846679
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -36,20 +37,22 @@ import