AmplabJenkins removed a comment on issue #24019: [SPARK-27099][SQL] Add
'xxhash64' for hashing arbitrary columns to Long
URL: https://github.com/apache/spark/pull/24019#issuecomment-470789551
Can one of the admins verify this patch?
happyhua commented on issue #23827: [SPARK-26912][CORE][HISTORY] Allow setting
permission for event_log
URL: https://github.com/apache/spark/pull/23827#issuecomment-472411198
Permission of event_log should be configurable. Probably 775 permission as
default is better.
In our situation
AmplabJenkins commented on issue #20433: [SPARK-23264][SQL] Make INTERVAL
keyword optional in INTERVAL clauses when ANSI mode enabled
URL: https://github.com/apache/spark/pull/20433#issuecomment-472420986
Test PASSed.
Refer to this link for build results (access rights to CI server
AmplabJenkins commented on issue #20433: [SPARK-23264][SQL] Make INTERVAL
keyword optional in INTERVAL clauses when ANSI mode enabled
URL: https://github.com/apache/spark/pull/20433#issuecomment-472420976
Merged build finished. Test PASSed.
attilapiros commented on a change in pull request #24079:
[SPARK-27145][Minor]Close store in the SQLAppStatusListenerSuite after test
URL: https://github.com/apache/spark/pull/24079#discussion_r265162114
##
File path:
sujith71955 commented on issue #24075: [SPARK-26176][SQL] Invalid column names
validation is been added when we create a table using the Hive serde "STORED AS"
URL: https://github.com/apache/spark/pull/24075#issuecomment-472459765
retest this please
SparkQA removed a comment on issue #20793: [WIP][SPARK-23643] Shrinking the
buffer in hashSeed up to size of the seed parameter
URL: https://github.com/apache/spark/pull/20793#issuecomment-472341429
**[Test build #103430 has
AmplabJenkins commented on issue #20793: [WIP][SPARK-23643] Shrinking the
buffer in hashSeed up to size of the seed parameter
URL: https://github.com/apache/spark/pull/20793#issuecomment-472418864
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #20793: [WIP][SPARK-23643] Shrinking the
buffer in hashSeed up to size of the seed parameter
URL: https://github.com/apache/spark/pull/20793#issuecomment-472418893
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #20793: [WIP][SPARK-23643] Shrinking
the buffer in hashSeed up to size of the seed parameter
URL: https://github.com/apache/spark/pull/20793#issuecomment-472418864
Merged build finished. Test FAILed.
attilapiros commented on a change in pull request #23393:
[SPARK-26288][CORE]add initRegisteredExecutorsDB
URL: https://github.com/apache/spark/pull/23393#discussion_r265140605
##
File path:
core/src/main/scala/org/apache/spark/deploy/ExternalShuffleService.scala
##
@@
maropu commented on issue #23783: [SPARK-26854][SQL] Support ANY/SOME subquery
URL: https://github.com/apache/spark/pull/23783#issuecomment-472433715
kindly ping @cloud-fan
This is an automated message from the Apache Git
srowen commented on a change in pull request #23919: [MINOR][DOC] Documentation
improvement: More detailed explanation of possible "deploy-mode"s
URL: https://github.com/apache/spark/pull/23919#discussion_r265146573
##
File path: docs/submitting-applications.md
##
@@
srowen commented on a change in pull request #23919: [MINOR][DOC] Documentation
improvement: More detailed explanation of possible "deploy-mode"s
URL: https://github.com/apache/spark/pull/23919#discussion_r265146185
##
File path: docs/submitting-applications.md
##
@@
viirya commented on issue #24053: [SPARK-27126][SQL] Consolidate Scala and Java
type deserializerFor
URL: https://github.com/apache/spark/pull/24053#issuecomment-472440686
Yea, I think that's the right way to go. I realized we can move in that
direction during this refactoring and I
maropu commented on a change in pull request #20433: [SPARK-23264][SQL] Make
INTERVAL keyword optional in INTERVAL clauses when ANSI mode enabled
URL: https://github.com/apache/spark/pull/20433#discussion_r265112136
##
File path:
ajithme commented on issue #24076: [SPARK-27142] Provide REST API for SQL level
information
URL: https://github.com/apache/spark/pull/24076#issuecomment-472413717
> @ajithme Thanks for the work.
> To make it consistent with other API, I think we need to have at least two
APIs
>
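To make the quoted two-API suggestion concrete, here is a sketch of what such a pair of endpoints could look like, following the existing `/api/v1/applications/[app-id]/...` pattern of Spark's status REST API (the `sql` paths below are an assumption for illustration, not the PR's final design):

```
GET /api/v1/applications/[app-id]/sql        # list all SQL executions for an app
GET /api/v1/applications/[app-id]/sql/[id]   # details of a single SQL execution
```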
AmplabJenkins removed a comment on issue #20433: [SPARK-23264][SQL] Make
INTERVAL keyword optional in INTERVAL clauses when ANSI mode enabled
URL: https://github.com/apache/spark/pull/20433#issuecomment-472420986
Test PASSed.
Refer to this link for build results (access rights to CI
AmplabJenkins removed a comment on issue #20433: [SPARK-23264][SQL] Make
INTERVAL keyword optional in INTERVAL clauses when ANSI mode enabled
URL: https://github.com/apache/spark/pull/20433#issuecomment-472420976
Merged build finished. Test PASSed.
maropu commented on a change in pull request #24019: [SPARK-27099][SQL] Add
'xxhash64' for hashing arbitrary columns to Long
URL: https://github.com/apache/spark/pull/24019#discussion_r265134944
##
File path: sql/core/src/main/scala/org/apache/spark/sql/functions.scala
##
srowen commented on a change in pull request #24057: [SPARK-26839][SQL] Work
around classloader changes in Java 9 for Hive isolation
URL: https://github.com/apache/spark/pull/24057#discussion_r265142878
##
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
ajithme commented on issue #24076: [SPARK-27142] Provide REST API for SQL level
information
URL: https://github.com/apache/spark/pull/24076#issuecomment-472435640
Ok, I have updated the PR accordingly. Please review.
DaveDeCaprio commented on a change in pull request #24028: [SPARK-26917][SQL]
Further reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#discussion_r265142932
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala
DaveDeCaprio commented on a change in pull request #24028: [SPARK-26917][SQL]
Further reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#discussion_r265142809
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala
attilapiros commented on a change in pull request #23393:
[SPARK-26288][CORE]add initRegisteredExecutorsDB
URL: https://github.com/apache/spark/pull/23393#discussion_r265152639
##
File path:
core/src/main/scala/org/apache/spark/deploy/ExternalShuffleService.scala
##
@@
attilapiros commented on a change in pull request #23393:
[SPARK-26288][CORE]add initRegisteredExecutorsDB
URL: https://github.com/apache/spark/pull/23393#discussion_r265163265
##
File path: core/src/test/scala/org/apache/spark/deploy/worker/WorkerSuite.scala
##
@@ -245,4
shahidki31 commented on a change in pull request #24079:
[SPARK-27145][Minor]Close store in the SQLAppStatusListenerSuite after test
URL: https://github.com/apache/spark/pull/24079#discussion_r265166923
##
File path:
gengliangwang commented on a change in pull request #24076: [SPARK-27142]
Provide REST API for SQL level information
URL: https://github.com/apache/spark/pull/24076#discussion_r265178647
##
File path:
sql/core/src/main/scala/org/apache/spark/status/api/v1/SqlListResource.scala
gengliangwang commented on a change in pull request #24076: [SPARK-27142]
Provide REST API for SQL level information
URL: https://github.com/apache/spark/pull/24076#discussion_r265179697
##
File path:
sql/core/src/main/scala/org/apache/spark/status/api/v1/SqlListResource.scala
gengliangwang commented on a change in pull request #24076: [SPARK-27142]
Provide REST API for SQL level information
URL: https://github.com/apache/spark/pull/24076#discussion_r265179417
##
File path:
sql/core/src/main/scala/org/apache/spark/status/api/v1/SqlListResource.scala
gengliangwang commented on a change in pull request #24076: [SPARK-27142]
Provide REST API for SQL level information
URL: https://github.com/apache/spark/pull/24076#discussion_r265177205
##
File path:
sql/core/src/main/scala/org/apache/spark/status/api/v1/SqlListResource.scala
dongjoon-hyun commented on a change in pull request #24049: [SPARK-27123][SQL]
Improve CollapseProject to handle projects cross limit/repartition/sample
URL: https://github.com/apache/spark/pull/24049#discussion_r265189229
##
File path:
attilapiros commented on a change in pull request #23393:
[SPARK-26288][CORE]add initRegisteredExecutorsDB
URL: https://github.com/apache/spark/pull/23393#discussion_r265189239
##
File path: core/src/test/scala/org/apache/spark/deploy/worker/WorkerSuite.scala
##
@@ -245,4
HeartSaVioR edited a comment on issue #22138: [SPARK-25151][SS] Apply Apache
Commons Pool to KafkaDataConsumer
URL: https://github.com/apache/spark/pull/22138#issuecomment-472479384
UPDATE: just added log message to log when Kafka consumer is created.
* master:
justinuang commented on issue #20303: [SPARK-23128][SQL] A new approach to do
adaptive execution in Spark SQL
URL: https://github.com/apache/spark/pull/20303#issuecomment-472482313
@carsonwang What happens when we call df.repartition(500) on a 10MB with AQE
turned on? AQE will still
justinuang edited a comment on issue #20303: [SPARK-23128][SQL] A new approach
to do adaptive execution in Spark SQL
URL: https://github.com/apache/spark/pull/20303#issuecomment-472482313
@carsonwang What happens when we call df.repartition(500) on a 10MB with AQE
turned on? AQE will
cloud-fan commented on a change in pull request #24028: [SPARK-26917][SQL]
Further reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#discussion_r265207871
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala
##
vanzin commented on issue #23827: [SPARK-26912][CORE][HISTORY] Allow setting
permission for event_log
URL: https://github.com/apache/spark/pull/23827#issuecomment-472499554
> Probably 775 permission as default is better.
Absolutely not. That would be a security issue.
> It
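As background for the permission debate above, here is a minimal stdlib sketch of what a 775 default means in POSIX terms (illustrative only: the temp file stands in for an event log and is not how Spark actually writes them):

```python
import os
import stat
import tempfile

# 0o775 = rwxrwxr-x: owner AND group may write, everyone may read.
# A group-writable, world-readable default on event logs would let any
# group member tamper with history data and any local user read it,
# which is the security objection raised above.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o775)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(bool(mode & stat.S_IWGRP))  # group write bit is set
print(bool(mode & stat.S_IROTH))  # world read bit is set

os.remove(path)
```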
SparkQA removed a comment on issue #24056: [SPARK-26152] Synchronize Worker
Cleanup with Worker Shutdown
URL: https://github.com/apache/spark/pull/24056#issuecomment-472391094
**[Test build #4617 has
happyhua edited a comment on issue #23827: [SPARK-26912][CORE][HISTORY] Allow
setting permission for event_log
URL: https://github.com/apache/spark/pull/23827#issuecomment-472508692
First of all, not everyone uses Hadoop hdfs.
In our scenario, we submit every Spark job from
jzhuge commented on a change in pull request #23848: [SPARK-26946][SQL]
Identifiers for multi-catalog
URL: https://github.com/apache/spark/pull/23848#discussion_r265234026
##
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/identifiers.scala
##
@@
lwwmanning opened a new pull request #24083: [SPARK-24432] Support dynamic
allocation without external shuffle service
URL: https://github.com/apache/spark/pull/24083
## What changes were proposed in this pull request?
This PR adds a limited version of dynamic allocation that does
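For readers following along, a hedged sketch of how such a mode might be enabled once merged (the `shuffleTracking` property name is an assumption here, not confirmed by this truncated description):

```
spark.dynamicAllocation.enabled                  true
# Track shuffle files on executors instead of relying on the
# external shuffle service (assumed property name):
spark.dynamicAllocation.shuffleTracking.enabled  true
```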
cloud-fan commented on issue #24082: [SPARK-27123][SQL][FOLLOWUP] Use
isRenaming check for limit too.
URL: https://github.com/apache/spark/pull/24082#issuecomment-472526749
LGTM
AmplabJenkins removed a comment on issue #24051: [SPARK-26879][SQL] Standardize
one-based column indexing for stack and json_tuple function
URL: https://github.com/apache/spark/pull/24051#issuecomment-472529362
Test FAILed.
Refer to this link for build results (access rights to CI
dongjoon-hyun commented on issue #24082: [SPARK-27123][SQL][FOLLOWUP] Use
isRenaming check for limit too.
URL: https://github.com/apache/spark/pull/24082#issuecomment-472529890
Retest this please.
SparkQA commented on issue #24010: [SPARK-26439][CORE][WIP] Introduce
WorkerOffer reservation mechanism for Barrier TaskSet
URL: https://github.com/apache/spark/pull/24010#issuecomment-472535476
**[Test build #103444 has
SparkQA commented on issue #20433: [SPARK-23264][SQL] Make INTERVAL keyword
optional in INTERVAL clauses when ANSI mode enabled
URL: https://github.com/apache/spark/pull/20433#issuecomment-472535580
**[Test build #103445 has
AmplabJenkins removed a comment on issue #24010: [SPARK-26439][CORE][WIP]
Introduce WorkerOffer reservation mechanism for Barrier TaskSet
URL: https://github.com/apache/spark/pull/24010#issuecomment-472538357
Test PASSed.
Refer to this link for build results (access rights to CI server
AmplabJenkins removed a comment on issue #24044: [WIP][test-hadoop3.1] Test
Hadoop 3.1 on jenkins
URL: https://github.com/apache/spark/pull/24044#issuecomment-472538308
Build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24075: [SPARK-26176][SQL] Invalid
column names validation is been added when we create a table using the Hive
serde "STORED AS"
URL: https://github.com/apache/spark/pull/24075#issuecomment-472538240
Test PASSed.
Refer to this link for build
AmplabJenkins removed a comment on issue #24010: [SPARK-26439][CORE][WIP]
Introduce WorkerOffer reservation mechanism for Barrier TaskSet
URL: https://github.com/apache/spark/pull/24010#issuecomment-472538347
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24028: [SPARK-26917][SQL] Further
reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#issuecomment-472538268
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24044: [WIP][test-hadoop3.1] Test
Hadoop 3.1 on jenkins
URL: https://github.com/apache/spark/pull/24044#issuecomment-472538331
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #24075: [SPARK-26176][SQL] Invalid
column names validation is been added when we create a table using the Hive
serde "STORED AS"
URL: https://github.com/apache/spark/pull/24075#issuecomment-472538233
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24028: [SPARK-26917][SQL] Further
reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#issuecomment-472538273
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
mccheah commented on issue #24083: [SPARK-24432] Support dynamic allocation
without external shuffle service
URL: https://github.com/apache/spark/pull/24083#issuecomment-472539915
ok to test
AmplabJenkins removed a comment on issue #24083: [SPARK-24432] Support dynamic
allocation without external shuffle service
URL: https://github.com/apache/spark/pull/24083#issuecomment-472537944
Can one of the admins verify this patch?
AmplabJenkins removed a comment on issue #24083: [SPARK-24432] Support dynamic
allocation without external shuffle service
URL: https://github.com/apache/spark/pull/24083#issuecomment-472541293
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins removed a comment on issue #24083: [SPARK-24432] Support dynamic
allocation without external shuffle service
URL: https://github.com/apache/spark/pull/24083#issuecomment-472541264
Merged build finished. Test PASSed.
MaxGekk commented on issue #20793: [WIP][SPARK-23643] Shrinking the buffer in
hashSeed up to size of the seed parameter
URL: https://github.com/apache/spark/pull/20793#issuecomment-472483976
@yanboliang Could you help me regenerate the expected values for
`LogisticRegressionSuite`, please?
cloud-fan commented on a change in pull request #24049: [SPARK-27123][SQL]
Improve CollapseProject to handle projects cross limit/repartition/sample
URL: https://github.com/apache/spark/pull/24049#discussion_r265206012
##
File path:
cloud-fan commented on a change in pull request #24028: [SPARK-26917][SQL]
Further reduce locks in CacheManager
URL: https://github.com/apache/spark/pull/24028#discussion_r265208435
##
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala
##
squito commented on a change in pull request #24057: [SPARK-26839][SQL] Work
around classloader changes in Java 9 for Hive isolation
URL: https://github.com/apache/spark/pull/24057#discussion_r265213623
##
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
SparkQA commented on issue #24051: [SPARK-26879][SQL] Standardize one-based
column indexing for stack and json_tuple function
URL: https://github.com/apache/spark/pull/24051#issuecomment-472495422
**[Test build #103434 has
SparkQA removed a comment on issue #24051: [SPARK-26879][SQL] Standardize
one-based column indexing for stack and json_tuple function
URL: https://github.com/apache/spark/pull/24051#issuecomment-472399853
**[Test build #103434 has
gagafunctor commented on a change in pull request #23983: [SPARK-26881][mllib]
Heuristic for tree aggregate depth
URL: https://github.com/apache/spark/pull/23983#discussion_r265219047
##
File path:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
SparkQA commented on issue #24056: [SPARK-26152] Synchronize Worker Cleanup
with Worker Shutdown
URL: https://github.com/apache/spark/pull/24056#issuecomment-472500740
**[Test build #4617 has
ajithme commented on a change in pull request #24076: [SPARK-27142] Provide
REST API for SQL level information
URL: https://github.com/apache/spark/pull/24076#discussion_r265221881
##
File path:
sql/core/src/main/scala/org/apache/spark/status/api/v1/SqlListResource.scala
SparkQA removed a comment on issue #24024:
[MINOR][CORE]spark.diskStore.subDirectories <= 0 should throw Exception
URL: https://github.com/apache/spark/pull/24024#issuecomment-472396662
**[Test build #4618 has
SparkQA commented on issue #24024: [MINOR][CORE]spark.diskStore.subDirectories
<= 0 should throw Exception
URL: https://github.com/apache/spark/pull/24024#issuecomment-472512888
**[Test build #4618 has
happyhua commented on issue #23827: [SPARK-26912][CORE][HISTORY] Allow setting
permission for event_log
URL: https://github.com/apache/spark/pull/23827#issuecomment-472516443
I wonder if you have used the Spark history server.
So you want people to have a cron job to chmod the event files
AmplabJenkins removed a comment on issue #24019: [SPARK-27099][SQL] Add
'xxhash64' for hashing arbitrary columns to Long
URL: https://github.com/apache/spark/pull/24019#issuecomment-472516087
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #24025: [SPARK-27106][SQL] merge
CaseInsensitiveStringMap and DataSourceOptions
URL: https://github.com/apache/spark/pull/24025#issuecomment-472520302
Merged build finished. Test PASSed.
dongjoon-hyun commented on issue #24082: [SPARK-27123][SQL][FOLLOWUP] Use
isRenaming check for limit too.
URL: https://github.com/apache/spark/pull/24082#issuecomment-472529664
Thank you so much for review and approval, @cloud-fan .
AmplabJenkins commented on issue #24051: [SPARK-26879][SQL] Standardize
one-based column indexing for stack and json_tuple function
URL: https://github.com/apache/spark/pull/24051#issuecomment-472529352
Merged build finished. Test FAILed.
AmplabJenkins commented on issue #24051: [SPARK-26879][SQL] Standardize
one-based column indexing for stack and json_tuple function
URL: https://github.com/apache/spark/pull/24051#issuecomment-472529362
Test FAILed.
Refer to this link for build results (access rights to CI server
AmplabJenkins removed a comment on issue #24051: [SPARK-26879][SQL] Standardize
one-based column indexing for stack and json_tuple function
URL: https://github.com/apache/spark/pull/24051#issuecomment-472529352
Merged build finished. Test FAILed.
mcheah commented on issue #24083: [SPARK-24432] Support dynamic allocation
without external shuffle service
URL: https://github.com/apache/spark/pull/24083#issuecomment-472530317
@mccheah
cloud-fan commented on issue #24012: [SPARK-26811][SQL] Add capabilities to
v2.Table
URL: https://github.com/apache/spark/pull/24012#issuecomment-472530353
LGTM except https://github.com/apache/spark/pull/24012/files#r264765864
huaxingao opened a new pull request #24084: [SPARK-27153][PYTHON]add weightCol
in python RegressionEvaluator
URL: https://github.com/apache/spark/pull/24084
## What changes were proposed in this pull request?
add weightCol in python version of RegressionEvaluator and
SparkQA commented on issue #24079: [SPARK-27145][Minor]Close store in the
SQLAppStatusListenerSuite after test
URL: https://github.com/apache/spark/pull/24079#issuecomment-472535259
**[Test build #103440 has
SparkQA commented on issue #24075: [SPARK-26176][SQL] Invalid column names
validation is been added when we create a table using the Hive serde "STORED AS"
URL: https://github.com/apache/spark/pull/24075#issuecomment-472535333
**[Test build #103441 has
SparkQA commented on issue #24084: [SPARK-27153][PYTHON]add weightCol in python
RegressionEvaluator
URL: https://github.com/apache/spark/pull/24084#issuecomment-472535188
**[Test build #103437 has
SparkQA commented on issue #24082: [SPARK-27123][SQL][FOLLOWUP] Use isRenaming
check for limit too.
URL: https://github.com/apache/spark/pull/24082#issuecomment-472535221
**[Test build #103438 has
AmplabJenkins commented on issue #24083: [SPARK-24432] Support dynamic
allocation without external shuffle service
URL: https://github.com/apache/spark/pull/24083#issuecomment-472535078
Can one of the admins verify this patch?
SparkQA commented on issue #24044: [WIP][test-hadoop3.1] Test Hadoop 3.1 on
jenkins
URL: https://github.com/apache/spark/pull/24044#issuecomment-472535415
**[Test build #103442 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103442/testReport)**
for PR
SparkQA commented on issue #24081: [SPARK-27151][SQL] ClearCacheCommand should
be case-object to avoid copys
URL: https://github.com/apache/spark/pull/24081#issuecomment-472535253
**[Test build #103439 has
SparkQA commented on issue #24028: [SPARK-26917][SQL] Further reduce locks in
CacheManager
URL: https://github.com/apache/spark/pull/24028#issuecomment-472535407
**[Test build #103443 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103443/testReport)**
for
AmplabJenkins removed a comment on issue #24082: [SPARK-27123][SQL][FOLLOWUP]
Use isRenaming check for limit too.
URL: https://github.com/apache/spark/pull/24082#issuecomment-472538168
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #24075: [SPARK-26176][SQL] Invalid column
names validation is been added when we create a table using the Hive serde
"STORED AS"
URL: https://github.com/apache/spark/pull/24075#issuecomment-472538233
Merged build finished. Test PASSed.
AmplabJenkins removed a comment on issue #24079: [SPARK-27145][Minor]Close
store in the SQLAppStatusListenerSuite after test
URL: https://github.com/apache/spark/pull/24079#issuecomment-472538163
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
AmplabJenkins commented on issue #24075: [SPARK-26176][SQL] Invalid column
names validation is been added when we create a table using the Hive serde
"STORED AS"
URL: https://github.com/apache/spark/pull/24075#issuecomment-472538240
Test PASSed.
Refer to this link for build results
AmplabJenkins removed a comment on issue #24079: [SPARK-27145][Minor]Close
store in the SQLAppStatusListenerSuite after test
URL: https://github.com/apache/spark/pull/24079#issuecomment-472538157
Merged build finished. Test PASSed.