AngersZh commented on issue #26053: [SPARK-29379][SQL]SHOW FUNCTIONS show
'!=', '<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-540937912
> Keeping it consistent sounds fine for now but I think we should fix this
ambiguity between functions and
HyukjinKwon commented on issue #26053: [SPARK-29379][SQL]SHOW FUNCTIONS show
'!=', '<>' , 'between', 'case'
URL: https://github.com/apache/spark/pull/26053#issuecomment-540942383
Alright, let's deal with it later separately.
yaooqinn commented on issue #26080: [SPARK-29425][SQL] The ownership of a
database should be respected
URL: https://github.com/apache/spark/pull/26080#issuecomment-540945353
cc @cloud-fan @gatorsmile @wangyum
MaxGekk commented on a change in pull request #26055: [SPARK-29368][SQL][TEST]
Port interval.sql
URL: https://github.com/apache/spark/pull/26055#discussion_r333855323
##
File path: sql/core/src/test/resources/sql-tests/inputs/postgreSQL/interval.sql
##
@@ -0,0 +1,330 @@
turboFei opened a new pull request #26086: [SPARK-29302] Make the file name of
a task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086
### What changes were proposed in this pull request?
Now, for a dynamic partition overwrite operation, the
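The idea in this PR title can be sketched as follows. This is a hypothetical Python illustration of the concept, not Spark's actual (Scala) implementation; the helper name and layout are invented. Embedding the task attempt id plus a random suffix in each staging file name keeps concurrent attempts of the same task from colliding on one file.

```python
import uuid

def staging_file_name(partition: str, task_attempt_id: int) -> str:
    """Hypothetical helper: build a per-attempt staging file name.

    Including both the task attempt id and a random suffix means two
    speculative attempts writing the same partition never produce the
    same file name, so neither attempt can clobber the other's output.
    """
    return f"{partition}/part-{task_attempt_id:05d}-{uuid.uuid4().hex}.parquet"
```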
itsvikramagr commented on a change in pull request #24922: [SPARK-28120][SS]
Rocksdb state storage implementation
URL: https://github.com/apache/spark/pull/24922#discussion_r333870578
##
File path: sql/core/pom.xml
##
@@ -147,6 +147,12 @@
mockito-core
test
Udbhav30 commented on issue #25398: [SPARK-28659][SQL] Use data source if
convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#issuecomment-540964779
> If you only target to fix Hive ser/de to respect compression, why don't
you set Hive compression
Udbhav30 commented on a change in pull request #25398: [SPARK-28659][SQL] Use
data source if convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#discussion_r333874882
##
File path:
viirya opened a new pull request #26087: [SPARK-29427][SQL] Create
KeyValueGroupedDataset from existing columns in DataFrame
URL: https://github.com/apache/spark/pull/26087
### What changes were proposed in this pull request?
This PR proposes to add groupByRelationKey
advancedxy commented on a change in pull request #26040: [SPARK-9853][Core]
Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333857916
##
File path:
advancedxy commented on a change in pull request #26040: [SPARK-9853][Core]
Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333843027
##
File path: core/src/main/scala/org/apache/spark/storage/BlockId.scala
##
advancedxy commented on a change in pull request #26040: [SPARK-9853][Core]
Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333848114
##
File path:
advancedxy commented on a change in pull request #26040: [SPARK-9853][Core]
Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333855219
##
File path:
advancedxy commented on a change in pull request #26040: [SPARK-9853][Core]
Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333857410
##
File path:
advancedxy commented on a change in pull request #26040: [SPARK-9853][Core]
Optimize shuffle fetch of continuous partition IDs
URL: https://github.com/apache/spark/pull/26040#discussion_r333860050
##
File path:
turboFei edited a comment on issue #26086: [WIP][SPARK-29302] Make the file
name of a task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-540960138
But we need to skip sending TaskCommitMessage if a task cannot commit.
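The guard described in this comment can be sketched as follows. This is a hypothetical Python illustration of the idea, not Spark's Scala code; the function and payload shape are invented. Only a task attempt that the commit coordinator authorizes should report its added files back to the driver.

```python
def maybe_commit_message(can_commit: bool, added_abs_path_files: dict):
    """Hypothetical sketch: emit a commit payload only for the winning attempt.

    A losing speculative attempt returns None instead of a
    TaskCommitMessage-like payload, so the files it staged are never
    recorded by the driver.
    """
    if not can_commit:
        return None
    return {"added_files": dict(added_abs_path_files)}
```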
turboFei edited a comment on issue #26086: [WIP][SPARK-29302] Make the file
name of a task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-540960138
But we should skip sending `new TaskCommitMessage(addedAbsPathFiles.toMap ->
turboFei removed a comment on issue #26086: [WIP][SPARK-29302] Make the file
name of a task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-540960138
But we should skip sending `new TaskCommitMessage(addedAbsPathFiles.toMap ->
turboFei commented on issue #26086: [WIP][SPARK-29302] Make the file name of a
task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-540964132
also cc @Clark
turboFei edited a comment on issue #26086: [SPARK-29302] Make the file name of
a task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-540964132
cc @viirya @cloud-fan
also cc @Clark
Mats-SX commented on a change in pull request #24851: [SPARK-27303][GRAPH] Add
Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r333893149
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrameBuilder.scala
##
@@
turboFei commented on issue #26086: [SPARK-29302] Make the file name of a task
for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-540982545
Oh, it seems that this issue is related to
https://github.com/apache/spark/pull/24142.
I
merrily01 opened a new pull request #26088: [SPARK-29436][K8S] Support executor
for selecting scheduler through scheduler name in the case of k8s
multi-scheduler scenario
URL: https://github.com/apache/spark/pull/26088
### What changes were proposed in this pull request?
Support
yaooqinn commented on a change in pull request #25977:
[SPARK-29268][SQL]isolationOn value is wrong in case of
spark.sql.hive.metastore.jars != builtin
URL: https://github.com/apache/spark/pull/25977#discussion_r333897470
##
File path:
cloud-fan commented on a change in pull request #25295: [SPARK-28560][SQL]
Optimize shuffle reader to local shuffle reader when smj converted to bhj in
adaptive execution
URL: https://github.com/apache/spark/pull/25295#discussion_r333900101
##
File path:
cloud-fan commented on a change in pull request #25295: [SPARK-28560][SQL]
Optimize shuffle reader to local shuffle reader when smj converted to bhj in
adaptive execution
URL: https://github.com/apache/spark/pull/25295#discussion_r333901952
##
File path:
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334031051
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherSession.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334033319
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/GraphElementFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334027537
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/CypherResult.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334038844
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/GraphElementFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334039032
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrame.scala
##
@@ -0,0
dongjoon-hyun commented on a change in pull request #24851:
[SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334039113
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/RelationshipFrame.scala
##
@@ -0,0
Mats-SX commented on a change in pull request #24851: [SPARK-27303][GRAPH] Add
Spark Graph API
URL: https://github.com/apache/spark/pull/24851#discussion_r334056874
##
File path:
graph/api/src/main/scala/org/apache/spark/graph/api/PropertyGraph.scala
##
@@ -0,0 +1,138 @@
igorcalabria opened a new pull request #26093: [SPARK-27812][K8s] Bump client
version
URL: https://github.com/apache/spark/pull/26093
### What changes were proposed in this pull request?
Updated kubernetes client.
### Why are the changes needed?
dvogelbacher commented on issue #25602: [SPARK-28613][SQL] Add config option
for limiting uncompressed result size in SQL
URL: https://github.com/apache/spark/pull/25602#issuecomment-541118690
@HyukjinKwon @maropu I don't think that `spark.driver.maxResultSize` works
here exactly for the
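The size-limit concept under discussion in this PR can be sketched as follows. This is a hypothetical Python illustration of the general idea (fail fast once accumulated uncompressed result bytes exceed a configured limit); the function name and error message are invented and do not reflect the proposed Spark config's actual behavior.

```python
def check_result_size(chunk_sizes, limit_bytes: int) -> int:
    """Hypothetical sketch: accumulate uncompressed result chunk sizes
    and raise as soon as the running total exceeds the configured limit,
    rather than collecting everything first and failing late."""
    total = 0
    for size in chunk_sizes:
        total += size
        if total > limit_bytes:
            raise RuntimeError(
                f"uncompressed result size {total} exceeds limit {limit_bytes}")
    return total
```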
viirya commented on issue #26087: [SPARK-29427][SQL] Create
KeyValueGroupedDataset from existing columns in DataFrame
URL: https://github.com/apache/spark/pull/26087#issuecomment-541122217
@hagerf Thanks for the PR. Your test cannot be compiled; I will make the needed
changes later.
rdblue commented on issue #26091: [SPARK-29439][SQL] DDL commands should not
use DataSourceV2Relation
URL: https://github.com/apache/spark/pull/26091#issuecomment-541127352
> DataSourceV2Relation is a scan node.
I disagree. It is a relation. It is converted to a scan node when we
MaxGekk commented on a change in pull request #26094: [SPARK-29442][SQL] Set
`default` mode should override the existing mode
URL: https://github.com/apache/spark/pull/26094#discussion_r334108082
##
File path: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
dongjoon-hyun commented on a change in pull request #26094: [SPARK-29442][SQL]
Set `default` mode should override the existing mode
URL: https://github.com/apache/spark/pull/26094#discussion_r334109659
##
File path: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
MaxGekk commented on issue #26094: [SPARK-29442][SQL] Set `default` mode should
override the existing mode
URL: https://github.com/apache/spark/pull/26094#issuecomment-541167531
I am looking at `DataStreamWriter`; is this `"default"` mode specific to
`DataFrameWriter`? `DataStreamWriter`
sandeep-katta opened a new pull request #26095: [SPARK-29435][Core]Shuffle is
not working when spark.shuffle.useOldFetchProtocol=true
URL: https://github.com/apache/spark/pull/26095
### What changes were proposed in this pull request?
Shuffle Block Construction during Shuffle Write
marmbrus commented on issue #24922: [SPARK-28120][SS] Rocksdb state storage
implementation
URL: https://github.com/apache/spark/pull/24922#issuecomment-541199399
First of all, I think this is great. Thanks for working on it!
I tend to agree with @gatorsmile that we should consider
igorcalabria commented on a change in pull request #26093: [SPARK-27812][K8s]
Bump client version
URL: https://github.com/apache/spark/pull/26093#discussion_r334148210
##
File path: resource-managers/kubernetes/core/pom.xml
##
@@ -29,7 +29,7 @@
Spark Project
dbtsai commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r334169263
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
dbtsai commented on a change in pull request #26085: [SPARK-29434] [Core]
Improve the MapStatuses Serialization Performance
URL: https://github.com/apache/spark/pull/26085#discussion_r334169659
##
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##
AmplabJenkins removed a comment on issue #24297: [SPARK-27299][GRAPH][WIP]
Spark Graph API design proposal
URL: https://github.com/apache/spark/pull/24297#issuecomment-541236362
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #26095: [SPARK-29435][Core]Shuffle is not
working when spark.shuffle.useOldFetchProtocol=true
URL: https://github.com/apache/spark/pull/26095#issuecomment-541236471
Can one of the admins verify this patch?
AmplabJenkins commented on issue #26090: [SPARK-29302]Fix writing file
collision in dynamic partition overwrite mode within speculative execution
URL: https://github.com/apache/spark/pull/26090#issuecomment-541236508
Can one of the admins verify this patch?
AmplabJenkins commented on issue #26086: [SPARK-29302] Make the file name of a
task for dynamic partition overwrite be unique
URL: https://github.com/apache/spark/pull/26086#issuecomment-541236527
Can one of the admins verify this patch?
AmplabJenkins commented on issue #26084: [SPARK-29433][WebUI] Fix tooltip
stages table
URL: https://github.com/apache/spark/pull/26084#issuecomment-541236544
Can one of the admins verify this patch?
AmplabJenkins commented on issue #26065: [SPARK-29404][DOCS] Add an explanation
about the executor color changed in WebUI documentation
URL: https://github.com/apache/spark/pull/26065#issuecomment-541236596
Can one of the admins verify this patch?
SparkQA removed a comment on issue #24297: [SPARK-27299][GRAPH][WIP] Spark
Graph API design proposal
URL: https://github.com/apache/spark/pull/24297#issuecomment-541236219
**[Test build #111934 has
AmplabJenkins commented on issue #26088: [SPARK-29436][K8S] Support executor
for selecting scheduler through scheduler name in the case of k8s
multi-scheduler scenario
URL: https://github.com/apache/spark/pull/26088#issuecomment-541236514
Can one of the admins verify this patch?
SparkQA commented on issue #24297: [SPARK-27299][GRAPH][WIP] Spark Graph API
design proposal
URL: https://github.com/apache/spark/pull/24297#issuecomment-541236219
**[Test build #111934 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111934/testReport)**
SparkQA commented on issue #25333: [SPARK-28597][SS] Add config to retry spark
streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#issuecomment-541236218
**[Test build #111930 has
AmplabJenkins commented on issue #26078: [SPARK-29151][CORE] Support fractional
resources for task resource scheduling
URL: https://github.com/apache/spark/pull/26078#issuecomment-541236575
Can one of the admins verify this patch?
AmplabJenkins commented on issue #26082: [SPARK-29431][WebUI] Improve Web UI /
Sql tab visualization with cached dataframes.
URL: https://github.com/apache/spark/pull/26082#issuecomment-541236558
Can one of the admins verify this patch?
AmplabJenkins commented on issue #26093: [SPARK-27812][K8s] Bump client version
URL: https://github.com/apache/spark/pull/26093#issuecomment-541236482
Can one of the admins verify this patch?
SparkQA commented on issue #24851: [SPARK-27303][GRAPH] Add Spark Graph API
URL: https://github.com/apache/spark/pull/24851#issuecomment-541236204
**[Test build #111932 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111932/testReport)**
for PR 24851 at
SparkQA commented on issue #22145: [SPARK-25152][K8S] Enable SparkR
Integration Tests for Kubernetes
URL: https://github.com/apache/spark/pull/22145#issuecomment-541236340
**[Test build #111935 has
SparkQA commented on issue #25018: [SPARK-26321][SQL] Port HIVE-15297: Hive
should not split semicolon within quoted string literals
URL: https://github.com/apache/spark/pull/25018#issuecomment-541236339
**[Test build #111931 has
AmplabJenkins commented on issue #24297: [SPARK-27299][GRAPH][WIP] Spark Graph
API design proposal
URL: https://github.com/apache/spark/pull/24297#issuecomment-541236366
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
SparkQA commented on issue #24297: [SPARK-27299][GRAPH][WIP] Spark Graph API
design proposal
URL: https://github.com/apache/spark/pull/24297#issuecomment-541236346
**[Test build #111934 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111934/testReport)**
AmplabJenkins commented on issue #24297: [SPARK-27299][GRAPH][WIP] Spark Graph
API design proposal
URL: https://github.com/apache/spark/pull/24297#issuecomment-541236362
Merged build finished. Test FAILed.
SparkQA commented on issue #24405: [SPARK-27506][SQL] Allow deserialization of
Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-541236285
**[Test build #111933 has
AmplabJenkins commented on issue #26091: [SPARK-29439][SQL] DDL commands should
not use DataSourceV2Relation
URL: https://github.com/apache/spark/pull/26091#issuecomment-541236613
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #26091: [SPARK-29439][SQL] DDL commands should
not use DataSourceV2Relation
URL: https://github.com/apache/spark/pull/26091#issuecomment-541236623
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #24405: [SPARK-27506][SQL] Allow
deserialization of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-541237857
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #25863:
[SPARK-28945][SPARK-29037][CORE][SQL] Fix the issue that spark gives duplicate
result and support concurrent file source write operations write to different
partitions in the same table.
URL:
AmplabJenkins removed a comment on issue #24851: [SPARK-27303][GRAPH] Add Spark
Graph API
URL: https://github.com/apache/spark/pull/24851#issuecomment-541237792
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24851: [SPARK-27303][GRAPH] Add Spark
Graph API
URL: https://github.com/apache/spark/pull/24851#issuecomment-541237799
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #25561: [SPARK-28810][DOC][SQL]
Document SHOW TABLES in SQL Reference.
URL: https://github.com/apache/spark/pull/25561#issuecomment-541237712
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #25333: [SPARK-28597][SS] Add config
to retry spark streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#issuecomment-541237756
Test PASSed.
Refer to this link for build results (access rights to CI server
dongjoon-hyun commented on issue #26094: [SPARK-29442][SQL] Set `default` mode
should override the existing mode
URL: https://github.com/apache/spark/pull/26094#issuecomment-541192672
This sounds like a different one. We need a different JIRA.
AmplabJenkins removed a comment on issue #25018: [SPARK-26321][SQL] Port
HIVE-15297: Hive should not split semicolon within quoted string literals
URL: https://github.com/apache/spark/pull/25018#issuecomment-541237820
Test PASSed.
Refer to this link for build results (access rights to
AmplabJenkins removed a comment on issue #25018: [SPARK-26321][SQL] Port
HIVE-15297: Hive should not split semicolon within quoted string literals
URL: https://github.com/apache/spark/pull/25018#issuecomment-541237812
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #24405: [SPARK-27506][SQL] Allow
deserialization of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-541237857
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #25561: [SPARK-28810][DOC][SQL]
Document SHOW TABLES in SQL Reference.
URL: https://github.com/apache/spark/pull/25561#issuecomment-541237708
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #25333: [SPARK-28597][SS] Add config
to retry spark streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#issuecomment-541237748
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24405: [SPARK-27506][SQL] Allow
deserialization of Avro data using compatible schemas
URL: https://github.com/apache/spark/pull/24405#issuecomment-541237851
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #25929: [SPARK-29116][PYTHON][ML]
Refactor py classes related to DecisionTree
URL: https://github.com/apache/spark/pull/25929#issuecomment-541237556
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #25914: [SPARK-29227][SS]Track rule info in
optimization phase
URL: https://github.com/apache/spark/pull/25914#issuecomment-541237652
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #25863: [SPARK-28945][SPARK-29037][CORE][SQL]
Fix the issue that spark gives duplicate result and support concurrent file
source write operations write to different partitions in the same table.
URL:
AmplabJenkins commented on issue #25333: [SPARK-28597][SS] Add config to retry
spark streaming's meta log when it met error
URL: https://github.com/apache/spark/pull/25333#issuecomment-541237756
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #24851: [SPARK-27303][GRAPH] Add Spark Graph
API
URL: https://github.com/apache/spark/pull/24851#issuecomment-541237792
Merged build finished. Test PASSed.
AmplabJenkins commented on issue #25561: [SPARK-28810][DOC][SQL] Document SHOW
TABLES in SQL Reference.
URL: https://github.com/apache/spark/pull/25561#issuecomment-541237708
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #25929: [SPARK-29116][PYTHON][ML]
Refactor py classes related to DecisionTree
URL: https://github.com/apache/spark/pull/25929#issuecomment-541237559
Test PASSed.
Refer to this link for build results (access rights to CI server needed):