dongjoon-hyun commented on a change in pull request #26842:
[SPARK-29392][CORE][SQL] More removal of 'foo Symbol syntax for Scala 2.13
URL: https://github.com/apache/spark/pull/26842#discussion_r356380007
##
File path: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
SparkQA commented on issue #26773: [SPARK-30126][CORE]sparkContext.addFile and
sparkContext.addJar fails when file path contains spaces
URL: https://github.com/apache/spark/pull/26773#issuecomment-564352441
**[Test build #115139 has
AmplabJenkins removed a comment on issue #26512: [SPARK-29493][SQL] Arrow
MapType support
URL: https://github.com/apache/spark/pull/26512#issuecomment-564351609
Merged build finished. Test PASSed.
This is an automated
AmplabJenkins removed a comment on issue #26512: [SPARK-29493][SQL] Arrow
MapType support
URL: https://github.com/apache/spark/pull/26512#issuecomment-564351617
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #26512: [SPARK-29493][SQL] Arrow MapType
support
URL: https://github.com/apache/spark/pull/26512#issuecomment-564351617
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #26512: [SPARK-29493][SQL] Arrow MapType
support
URL: https://github.com/apache/spark/pull/26512#issuecomment-564351609
Merged build finished. Test PASSed.
SparkQA removed a comment on issue #26512: [SPARK-29493][SQL] Arrow MapType
support
URL: https://github.com/apache/spark/pull/26512#issuecomment-564294611
**[Test build #115132 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115132/testReport)**
for PR
SparkQA commented on issue #26512: [SPARK-29493][SQL] Arrow MapType support
URL: https://github.com/apache/spark/pull/26512#issuecomment-564351125
**[Test build #115132 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115132/testReport)**
for PR 26512 at
dongjoon-hyun edited a comment on issue #26824: [SPARK-30197][PYSPARK] Add
minimum `requirements-dev.txt` file to `python` directory
URL: https://github.com/apache/spark/pull/26824#issuecomment-564350827
In this PR, I found that the installed minimum versions work with Apache
Spark in
dongjoon-hyun commented on issue #26824: [SPARK-30197][PYSPARK] Add minimum
`requirements-dev.txt` file to `python` directory
URL: https://github.com/apache/spark/pull/26824#issuecomment-564350827
In this PR, I found that the installed minimum versions work with Apache
Spark in terms of
manuzhang commented on issue #26813: [SPARK-30188][SQL][WIP] Enable adaptive
query execution by default
URL: https://github.com/apache/spark/pull/26813#issuecomment-564350695
I've seen a regression with [tpcds
dongjoon-hyun commented on issue #26824: [SPARK-30197][PYSPARK] Add minimum
`requirements-dev.txt` file to `python` directory
URL: https://github.com/apache/spark/pull/26824#issuecomment-564350388
This was not for just testing. What I wanted is to make the minimum required
versions
zhengruifeng commented on a change in pull request #26803: [SPARK-30178][ML]
RobustScaler support large numFeatures
URL: https://github.com/apache/spark/pull/26803#discussion_r356377461
##
File path: mllib/src/main/scala/org/apache/spark/ml/feature/RobustScaler.scala
##
ulysses-you commented on issue #26831: [SPARK-30201][SQL] HiveOutputWriter
standardOI should use ObjectInspectorCopyOption.DEFAULT
URL: https://github.com/apache/spark/pull/26831#issuecomment-564349554
> This isn't really my area, but I don't quite understand how this arises in
practice?
ulysses-you edited a comment on issue #26831: [SPARK-30201][SQL]
HiveOutputWriter standardOI should use ObjectInspectorCopyOption.DEFAULT
URL: https://github.com/apache/spark/pull/26831#issuecomment-564349554
> This isn't really my area, but I don't quite understand how this arises in
oopDaniel commented on issue #26821: [SPARK-20656][CORE]Support Incremental
parsing of event logs in SHS
URL: https://github.com/apache/spark/pull/26821#issuecomment-564349390
I was also working on a PR for incremental parsing in SHS, and there
are two ways to skip the parsed
beliefer commented on a change in pull request #26656: [SPARK-27986][SQL]
Support ANSI SQL filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#discussion_r356376953
##
File path:
beliefer commented on a change in pull request #26656: [SPARK-27986][SQL]
Support ANSI SQL filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#discussion_r356376920
##
File path:
AmplabJenkins removed a comment on issue #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#issuecomment-564348605
Test PASSed.
Refer to this link for build results (access rights
AmplabJenkins removed a comment on issue #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#issuecomment-564348598
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#issuecomment-564348598
Merged build finished. Test PASSed.
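For context on SPARK-30126 above: a raw space is an illegal character in a URI, so naive URI construction from a user-supplied path fails, which is a plausible source of the reported `addFile`/`addJar` failure. The sketch below is illustrative plain Java, not Spark's actual code path, and the path is hypothetical:

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class SpacePathDemo {
    public static void main(String[] args) {
        String path = "/tmp/my dir/lib.jar";   // hypothetical path containing a space

        // Constructing a URI directly from the raw string throws,
        // because an unencoded space is illegal in a URI.
        boolean naiveFailed = false;
        try {
            new URI(path);
        } catch (URISyntaxException e) {
            naiveFailed = true;
        }

        // File.toURI() percent-encodes the space instead of rejecting it.
        URI encoded = new File(path).toURI();

        System.out.println(naiveFailed);                          // true
        System.out.println(encoded.toString().contains("%20"));   // true
    }
}
```

The usual fix for this class of bug is to go through `File.toURI()` (or an equivalent encoder) rather than building the URI from the raw string.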
AmplabJenkins commented on issue #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#issuecomment-564348605
Test PASSed.
Refer to this link for build results (access rights to CI
AmplabJenkins removed a comment on issue #26843: [SPARK-30209][SQL][WEB-UI]
Display stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564347298
Test FAILed.
Refer to this link for build results (access rights to CI
AmplabJenkins removed a comment on issue #26843: [SPARK-30209][SQL][WEB-UI]
Display stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564347296
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #26843: [SPARK-30209][SQL][WEB-UI] Display
stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564347298
Test FAILed.
Refer to this link for build results (access rights to CI server
AmplabJenkins commented on issue #26843: [SPARK-30209][SQL][WEB-UI] Display
stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564347296
Merged build finished. Test FAILed.
SparkQA removed a comment on issue #26843: [SPARK-30209][SQL][WEB-UI] Display
stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564316347
**[Test build #115135 has
SparkQA commented on issue #26843: [SPARK-30209][SQL][WEB-UI] Display stageId,
attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564346970
**[Test build #115135 has
yaooqinn commented on issue #26833: [WIP][SPARK-30203][SQL] store assignable if
there is an appropriate user-defined cast function
URL: https://github.com/apache/spark/pull/26833#issuecomment-564345854
I got it. thanks. @gengliangwang
yaooqinn closed pull request #26833: [WIP][SPARK-30203][SQL] store assignable
if there is an appropriate user-defined cast function
URL: https://github.com/apache/spark/pull/26833
srowen commented on issue #26831: [SPARK-30201][SQL] HiveOutputWriter
standardOI should use ObjectInspectorCopyOption.DEFAULT
URL: https://github.com/apache/spark/pull/26831#issuecomment-564345649
This isn't really my area, but I don't quite understand how this arises in
practice? what
srowen commented on a change in pull request #25899: [SPARK-29089][SQL]
Parallelize blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#discussion_r356373498
##
File path:
DeliangFan edited a comment on issue #26834: [SPARK-30204][K8S] Support for
configure Pod DNS for Kubernetes
URL: https://github.com/apache/spark/pull/26834#issuecomment-564343951
> Why can't you use pod templates for this? It seems all you're doing is
adding things to the pod definition,
DeliangFan closed pull request #26834: [SPARK-30204][K8S] Support for configure
Pod DNS for Kubernetes
URL: https://github.com/apache/spark/pull/26834
DeliangFan commented on issue #26834: [SPARK-30204][K8S] Support for configure
Pod DNS for Kubernetes
URL: https://github.com/apache/spark/pull/26834#issuecomment-564343951
> Why can't you use pod templates for this? It seems all you're doing is
adding things to the pod definition, which
AmplabJenkins removed a comment on issue #26586: [SPARK-29950][k8s] Blacklist
deleted executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564342586
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #26586: [SPARK-29950][k8s] Blacklist
deleted executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564342581
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #26841: [SPARK-29152][2.4][CORE]Executor
Plugin shutdown when dynamic allocation is enabled
URL: https://github.com/apache/spark/pull/26841#issuecomment-564342678
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #26841:
[SPARK-29152][2.4][CORE]Executor Plugin shutdown when dynamic allocation is
enabled
URL: https://github.com/apache/spark/pull/26841#issuecomment-564342678
Test PASSed.
Refer to this link for build results (access rights to CI server
AmplabJenkins removed a comment on issue #26841:
[SPARK-29152][2.4][CORE]Executor Plugin shutdown when dynamic allocation is
enabled
URL: https://github.com/apache/spark/pull/26841#issuecomment-564342669
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26841: [SPARK-29152][2.4][CORE]Executor
Plugin shutdown when dynamic allocation is enabled
URL: https://github.com/apache/spark/pull/26841#issuecomment-564342669
Merged build finished. Test PASSed.
SparkQA commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564342543
Kubernetes integration test status failure
URL:
AmplabJenkins commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564342581
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564342586
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
SparkQA commented on issue #26841: [SPARK-29152][2.4][CORE]Executor Plugin
shutdown when dynamic allocation is enabled
URL: https://github.com/apache/spark/pull/26841#issuecomment-564342318
**[Test build #115134 has
SparkQA removed a comment on issue #26841: [SPARK-29152][2.4][CORE]Executor
Plugin shutdown when dynamic allocation is enabled
URL: https://github.com/apache/spark/pull/26841#issuecomment-564308446
**[Test build #115134 has
viirya commented on a change in pull request #26828: [SPARK-30198][Core]
BytesToBytesMap does not grow internal long array as expected
URL: https://github.com/apache/spark/pull/26828#discussion_r356370350
##
File path:
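SPARK-30198 above concerns when `BytesToBytesMap` grows its internal long array. The general invariant is textbook load-factor logic: grow (typically by doubling) once the number of keys crosses `capacity * loadFactor`, otherwise probe chains degrade and inserts can fail. The sketch below is that generic invariant, with hypothetical constants; it is not the actual `BytesToBytesMap` implementation:

```java
public class GrowthSketch {
    static int capacity = 8;             // hypothetical initial capacity
    static final double LOAD_FACTOR = 0.5;
    static int numKeys = 0;

    // Called after each insert: grow before the table gets too full.
    static void maybeGrow() {
        if (numKeys >= capacity * LOAD_FACTOR) {
            capacity *= 2;               // double the backing array
            // ...rehash existing entries into the new array here...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            numKeys++;
            maybeGrow();
        }
        // 10 keys at load factor 0.5 force 8 -> 16 -> 32.
        System.out.println(capacity);    // 32
    }
}
```

A bug of the kind the PR title describes would show up as this check never (or too rarely) firing, so the array stays at its old size as keys accumulate.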
07ARB commented on a change in pull request #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#discussion_r356367431
##
File path:
HeartSaVioR commented on a change in pull request #26771: [SPARK-30143][SS] Add
a timeout on stopping a streaming query
URL: https://github.com/apache/spark/pull/26771#discussion_r356363756
##
File path:
HeartSaVioR commented on a change in pull request #26771: [SPARK-30143][SS] Add
a timeout on stopping a streaming query
URL: https://github.com/apache/spark/pull/26771#discussion_r356365709
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQuery.scala
HeartSaVioR commented on a change in pull request #26771: [SPARK-30143][SS] Add
a timeout on stopping a streaming query
URL: https://github.com/apache/spark/pull/26771#discussion_r356365353
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
07ARB commented on a change in pull request #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#discussion_r356364635
##
File path:
viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose
nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356364276
##
File path:
SparkQA commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564335835
Kubernetes integration test starting
URL:
SparkQA commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564335067
**[Test build #115138 has
07ARB commented on a change in pull request #26773:
[SPARK-30126][CORE]sparkContext.addFile and sparkContext.addJar fails when file
path contains spaces
URL: https://github.com/apache/spark/pull/26773#discussion_r356362974
##
File path:
dbtsai commented on a change in pull request #26751: [SPARK-30107][SQL] Expose
nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356361601
##
File path:
AmplabJenkins commented on issue #26840: [SPARK-30038][SQL] DESCRIBE FUNCTION
should do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564331699
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26840: [SPARK-30038][SQL] DESCRIBE FUNCTION
should do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564331709
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #26840: [SPARK-30038][SQL] DESCRIBE
FUNCTION should do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564331709
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #26840: [SPARK-30038][SQL] DESCRIBE
FUNCTION should do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564331699
Merged build finished. Test PASSed.
dbtsai commented on a change in pull request #26751: [SPARK-30107][SQL] Expose
nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356359449
##
File path:
SparkQA commented on issue #26840: [SPARK-30038][SQL] DESCRIBE FUNCTION should
do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564330858
**[Test build #115137 has
ulysses-you commented on issue #26831: [SPARK-30201][SQL] HiveOutputWriter
standardOI should use ObjectInspectorCopyOption.DEFAULT
URL: https://github.com/apache/spark/pull/26831#issuecomment-564329957
ping @cloud-fan @srowen. I think it's your area.
vanzin commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564329788
retest this please
AmplabJenkins removed a comment on issue #26840: [SPARK-30038][SQL] DESCRIBE
FUNCTION should do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564263442
Can one of the admins verify this patch?
viirya commented on issue #26840: [SPARK-30038][SQL] DESCRIBE FUNCTION should
do multi-catalog resolution
URL: https://github.com/apache/spark/pull/26840#issuecomment-564327942
ok to test.
HyukjinKwon edited a comment on issue #26824: [SPARK-30197][PYSPARK] Add
minimum `requirements-dev.txt` file to `python` directory
URL: https://github.com/apache/spark/pull/26824#issuecomment-564319786
@dongjoon-hyun we can manually set a specific version for CI specifically to
test a
AmplabJenkins removed a comment on issue #26586: [SPARK-29950][k8s] Blacklist
deleted executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564319552
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
HyukjinKwon commented on issue #26824: [SPARK-30197][PYSPARK] Add minimum
`requirements-dev.txt` file to `python` directory
URL: https://github.com/apache/spark/pull/26824#issuecomment-564319786
@dongjoon-hyun we can manually set a specific version for CI specifically to
test a specific
AmplabJenkins commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564319542
Merged build finished. Test FAILed.
AmplabJenkins removed a comment on issue #26586: [SPARK-29950][k8s] Blacklist
deleted executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564319542
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564319552
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
SparkQA removed a comment on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564289061
**[Test build #115131 has
SparkQA commented on issue #26586: [SPARK-29950][k8s] Blacklist deleted
executors in K8S with dynamic allocation.
URL: https://github.com/apache/spark/pull/26586#issuecomment-564319068
**[Test build #115131 has
AmplabJenkins removed a comment on issue #26102: [SPARK-29448][SQL] Support the
`INTERVAL` type by Parquet datasource
URL: https://github.com/apache/spark/pull/26102#issuecomment-564317585
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26102: [SPARK-29448][SQL] Support the
`INTERVAL` type by Parquet datasource
URL: https://github.com/apache/spark/pull/26102#issuecomment-564317585
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #25899: [SPARK-29089][SQL] Parallelize
blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#issuecomment-564317541
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #25899: [SPARK-29089][SQL] Parallelize
blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#issuecomment-564317551
Test PASSed.
Refer to this link for build results (access rights to
AmplabJenkins removed a comment on issue #26102: [SPARK-29448][SQL] Support the
`INTERVAL` type by Parquet datasource
URL: https://github.com/apache/spark/pull/26102#issuecomment-564317595
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #26102: [SPARK-29448][SQL] Support the
`INTERVAL` type by Parquet datasource
URL: https://github.com/apache/spark/pull/26102#issuecomment-564317595
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #25899: [SPARK-29089][SQL] Parallelize
blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#issuecomment-564317541
Merged build finished. Test PASSed.
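SPARK-29089 above parallelizes blocking FileSystem calls in `DataSource#checkAndGlobPathIfNecessary`. The general pattern can be sketched with a plain `ExecutorService`: submit each blocking check as a task, then collect the futures. The pool size, path list, and `blockingCheck` stand-in below are hypothetical, not Spark's actual helpers:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelGlobSketch {
    // Stand-in for a blocking FileSystem existence/glob check.
    static boolean blockingCheck(String path) {
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        return !path.isEmpty();
    }

    public static void main(String[] args) throws Exception {
        List<String> paths = List.of("/data/a", "/data/b", "/data/c");
        ExecutorService pool = Executors.newFixedThreadPool(Math.min(paths.size(), 8));
        try {
            // Submit all checks first so they run concurrently...
            List<Future<Boolean>> futures = new ArrayList<>();
            for (String p : paths) {
                futures.add(pool.submit(() -> blockingCheck(p)));
            }
            // ...then block on the results.
            for (Future<Boolean> f : futures) {
                if (!f.get()) throw new IllegalArgumentException("Path does not exist");
            }
            System.out.println("all paths ok");
        } finally {
            pool.shutdown();
        }
    }
}
```

With N paths the total wall-clock time is roughly one blocking call (bounded by the pool size) instead of N sequential calls, which is the speedup the PR is after.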
AmplabJenkins commented on issue #25899: [SPARK-29089][SQL] Parallelize
blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#issuecomment-564317551
Test PASSed.
Refer to this link for build results (access rights to CI
AmplabJenkins removed a comment on issue #26843: [SPARK-30209][SQL][WEB-UI]
Display stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564317272
Test PASSed.
Refer to this link for build results (access rights to CI
AmplabJenkins removed a comment on issue #26843: [SPARK-30209][SQL][WEB-UI]
Display stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564317268
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26843: [SPARK-30209][SQL][WEB-UI] Display
stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564317272
Test PASSed.
Refer to this link for build results (access rights to CI server
AmplabJenkins commented on issue #26843: [SPARK-30209][SQL][WEB-UI] Display
stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564317268
Merged build finished. Test PASSed.
SparkQA commented on issue #26102: [SPARK-29448][SQL] Support the `INTERVAL`
type by Parquet datasource
URL: https://github.com/apache/spark/pull/26102#issuecomment-564316859
**[Test build #115128 has
SparkQA removed a comment on issue #26102: [SPARK-29448][SQL] Support the
`INTERVAL` type by Parquet datasource
URL: https://github.com/apache/spark/pull/26102#issuecomment-564223627
**[Test build #115128 has
cozos commented on a change in pull request #25899: [SPARK-29089][SQL]
Parallelize blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#discussion_r356343985
##
File path:
cozos commented on a change in pull request #25899: [SPARK-29089][SQL]
Parallelize blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#discussion_r356344102
##
File path:
cozos commented on a change in pull request #25899: [SPARK-29089][SQL]
Parallelize blocking FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#discussion_r356344148
##
File path:
SparkQA commented on issue #25899: [SPARK-29089][SQL] Parallelize blocking
FileSystem calls in DataSource#checkAndGlobPathIfNecessary
URL: https://github.com/apache/spark/pull/25899#issuecomment-564316342
**[Test build #115136 has
SparkQA commented on issue #26843: [SPARK-30209][SQL][WEB-UI] Display stageId,
attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564316347
**[Test build #115135 has
AmplabJenkins removed a comment on issue #26843: [SPARK-30209][SQL][WEB-UI]
Display stageId, attemptId and taskId for max metrics in Spark UI.
URL: https://github.com/apache/spark/pull/26843#issuecomment-564312504
Can one of the admins verify this patch?
AmplabJenkins removed a comment on issue #26473:
[SPARK-29864][SPARK-29920][SQL] Strict parsing of day-time strings to intervals
URL: https://github.com/apache/spark/pull/26473#issuecomment-564312522
Merged build finished. Test PASSed.