zhixingheyi-tian commented on PR #36659:
URL: https://github.com/apache/spark/pull/36659#issuecomment-1140226861
> The GA job didn't pass, can you check?
Hi @cloud-fan @srowen
All three GA jobs passed. Thanks
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
Yikun opened a new pull request, #36712:
URL: https://github.com/apache/spark/pull/36712
### What changes were proposed in this pull request?
Since pandas 1.4
https://github.com/pandas-dev/pandas/commit/aaba0efd630ed607c5aaaef7b5f43d2fe90ca81c
> Series.__repr__() and
zhixingheyi-tian commented on PR #36659:
URL: https://github.com/apache/spark/pull/36659#issuecomment-1140187223
> **continuous-integration/appveyor/pr** — AppVeyor build
Hi @cloud-fan @srowen
All three GA jobs passed. Thanks
cxzl25 opened a new pull request, #36710:
URL: https://github.com/apache/spark/pull/36710
### What changes were proposed in this pull request?
Use `java.nio.file.Files.delete` instead of
`org.apache.commons.io.FileUtils#delete`
### Why are the changes needed?
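The rationale is truncated here. As a rough Python analogue of the distinction (assuming the motivation is clearer error reporting when a delete fails), a deleter that raises a specific exception surfaces the failing path, while a hypothetical boolean-returning helper (`quiet_delete` below, not from the PR) swallows the cause:

```python
import os
import tempfile

def quiet_delete(path: str) -> bool:
    """Hypothetical helper that hides the failure cause: the caller
    only learns True/False, not why the delete failed."""
    try:
        os.remove(path)
        return True
    except OSError:
        return False

# os.remove raises a specific exception naming the path, loosely
# analogous to java.nio.file.Files.delete throwing NoSuchFileException.
with tempfile.TemporaryDirectory() as d:
    missing = os.path.join(d, "no-such-file")
    assert quiet_delete(missing) is False  # cause is lost
    try:
        os.remove(missing)
    except FileNotFoundError as e:
        print("delete failed for:", e.filename)
```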
sandeepvinayak commented on PR #36680:
URL: https://github.com/apache/spark/pull/36680#issuecomment-1140203155
@JoshRosen Just took another look at the code, the fix I made is for the
deadlock between `TaskMemoryManager` and `UnsafeExternalSorter.SplittableIterator`,
which is what we faced and
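The fix itself lives in Spark's memory-manager internals. As a generic sketch of the shape such deadlocks take (two components each holding one of a pair of locks while waiting on the other) and the usual remedy (a single global acquisition order), with all names hypothetical:

```python
import threading

memory_lock = threading.Lock()   # stands in for the manager-side lock
sorter_lock = threading.Lock()   # stands in for the iterator-side lock

def acquire_in_order(f):
    # Remedy: every code path takes the locks in one global order,
    # so no thread can hold one lock while waiting on the other.
    with memory_lock:
        with sorter_lock:
            return f()

results = []
t1 = threading.Thread(target=lambda: results.append(acquire_in_order(lambda: "spill")))
t2 = threading.Thread(target=lambda: results.append(acquire_in_order(lambda: "iterate")))
t1.start(); t2.start(); t1.join(); t2.join()
assert sorted(results) == ["iterate", "spill"]
```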
wankunde opened a new pull request, #36709:
URL: https://github.com/apache/spark/pull/36709
…mance
### What changes were proposed in this pull request?
Optimize `MapOutputTracker.convertMapStatuses()` method.
### Why are the changes needed?
Yikun opened a new pull request, #36711:
URL: https://github.com/apache/spark/pull/36711
### What changes were proposed in this pull request?
Respect ps.concat sort parameter to follow pandas behavior:
- Remove the multi-index special sort process case and add ut.
- Still keep
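For reference, this is the pandas behavior that `ps.concat` is being aligned with; a minimal pandas-only sketch of how `sort` affects the non-concatenation axis when the columns are not aligned:

```python
import pandas as pd

df1 = pd.DataFrame({"b": [1], "a": [2]})
df2 = pd.DataFrame({"a": [3], "c": [4]})

# sort=False (the default) keeps columns in order of appearance;
# sort=True sorts the non-concatenation axis.
unsorted_cols = list(pd.concat([df1, df2], sort=False).columns)
sorted_cols = list(pd.concat([df1, df2], sort=True).columns)
print(unsorted_cols)  # ['b', 'a', 'c']
print(sorted_cols)    # ['a', 'b', 'c']
```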
sunchao commented on code in PR #36697:
URL: https://github.com/apache/spark/pull/36697#discussion_r884084376
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/V2ScanPartitioning.scala:
##
@@ -32,15 +32,15 @@ import
Yikun commented on PR #36712:
URL: https://github.com/apache/spark/pull/36712#issuecomment-1140234328
We need to clean up all these doctests after bumping pandas to 1.4; will address
them together in SPARK-39150.
pan3793 commented on PR #36697:
URL: https://github.com/apache/spark/pull/36697#issuecomment-1140265452
CI is green, please take another look @dongjoon-hyun
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1140281363
So if I create a NUMBER in Teradata without a scale, then it uses a system
default scale. Do we know what that is?
I'm confused if Teradata doesn't record and return the actual scale
srowen commented on PR #36659:
URL: https://github.com/apache/spark/pull/36659#issuecomment-1140286823
I think the AppVeyor error is unrelated. I'll merge shortly
srowen commented on code in PR #3:
URL: https://github.com/apache/spark/pull/3#discussion_r884136877
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala:
##
@@ -54,31 +54,31 @@ private[sql] class JSONOptions(
val samplingRatio =
dongjoon-hyun commented on PR #36707:
URL: https://github.com/apache/spark/pull/36707#issuecomment-1140303016
Thank you so much, @wangyum . Merged to master.
dongjoon-hyun closed pull request #36707: [SPARK-39324][CORE] Log
`ExecutorDecommission` as INFO level in `TaskSchedulerImpl`
URL: https://github.com/apache/spark/pull/36707
ravwojdyla commented on code in PR #36430:
URL: https://github.com/apache/spark/pull/36430#discussion_r884187298
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -1593,6 +1593,35 @@ class Dataset[T] private[sql](
@scala.annotation.varargs
def
wangyum opened a new pull request, #36713:
URL: https://github.com/apache/spark/pull/36713
### What changes were proposed in this pull request?
Fix test failure when `SPARK_ANSI_SQL_MODE` is enabled:
```
2022-05-28T21:02:01.9025896Z - INSERT rows, ALTER TABLE ADD COLUMNS with
wangyum commented on PR #36713:
URL: https://github.com/apache/spark/pull/36713#issuecomment-1140354174
cc @dtenedor @gengliangwang
AmplabJenkins commented on PR #36701:
URL: https://github.com/apache/spark/pull/36701#issuecomment-1140376433
Can one of the admins verify this patch?
AmplabJenkins commented on PR #36709:
URL: https://github.com/apache/spark/pull/36709#issuecomment-1140324927
Can one of the admins verify this patch?
AmplabJenkins commented on PR #36710:
URL: https://github.com/apache/spark/pull/36710#issuecomment-1140324924
Can one of the admins verify this patch?
beliefer opened a new pull request, #36714:
URL: https://github.com/apache/spark/pull/36714
### What changes were proposed in this pull request?
Many mainstream databases support the aggregate function `MEDIAN`.
**Syntax:**
Aggregate function
`MEDIAN( )`
Window function
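The syntax sketch above is truncated. As a plain-Python reminder of the semantics such a function conventionally needs to match (not the proposed Spark implementation): the median of an even-sized group is the mean of the two middle values, which `statistics.median` also implements:

```python
from statistics import median

assert median([1, 3, 2]) == 2        # odd count: the middle value
assert median([1, 2, 3, 4]) == 2.5   # even count: mean of the two middle values

# Grouped median, i.e. what MEDIAN(col) ... GROUP BY key would compute:
rows = [("a", 10), ("a", 20), ("b", 5), ("b", 7), ("b", 100)]
groups = {}
for key, value in rows:
    groups.setdefault(key, []).append(value)
result = {key: median(vals) for key, vals in groups.items()}
print(result)  # {'a': 15.0, 'b': 7}
```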
srowen closed pull request #36659: [SPARK-39282][SQL] Replace If-Else branch
with bitwise operators in roundNumberOfBytesToNearestWord
URL: https://github.com/apache/spark/pull/36659
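The replaced branch rounds a byte count up to the next multiple of the 8-byte word size. A sketch of the branch-free identity in Python (the merged change itself is in Spark's JVM code):

```python
def round_up_if_else(num_bytes: int) -> int:
    # Branching version: pad by whatever is missing to the next word.
    remainder = num_bytes % 8
    return num_bytes if remainder == 0 else num_bytes + (8 - remainder)

def round_up_bitwise(num_bytes: int) -> int:
    # Branch-free version: adding 7 then clearing the low three bits
    # rounds any non-negative count up to the next multiple of 8.
    return (num_bytes + 7) & ~7

for n in range(65):
    assert round_up_if_else(n) == round_up_bitwise(n)
print(round_up_bitwise(9))  # 16
```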
srowen commented on PR #36659:
URL: https://github.com/apache/spark/pull/36659#issuecomment-1140316615
Merged to master
sunchao commented on PR #36697:
URL: https://github.com/apache/spark/pull/36697#issuecomment-1140327360
Hmm thinking more about this, I think maybe we should fail the analysis on
the write path, even if a V2 transform exists in the function catalog.
Otherwise, the write may fail at a later
dcoliversun commented on code in PR #3:
URL: https://github.com/apache/spark/pull/3#discussion_r884194187
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala:
##
@@ -54,31 +54,31 @@ private[sql] class JSONOptions(
val samplingRatio =
dcoliversun closed pull request #3: [SPARK-39289][CORE][SQL][SS] Replace
`map(_.toBoolean).getOrElse(false/true)` with `exists/forall(_.toBoolean)`
URL: https://github.com/apache/spark/pull/3
wangyum closed pull request #36710: [SPARK-39261][CORE][FOLLOWUP] Improve
newline formatting for error messages
URL: https://github.com/apache/spark/pull/36710
wangyum commented on PR #36710:
URL: https://github.com/apache/spark/pull/36710#issuecomment-114037
Merged to master.
github-actions[bot] commented on PR #35334:
URL: https://github.com/apache/spark/pull/35334#issuecomment-1140348353
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #35536: [SPARK-38222][SQL] Expose Node
Description attribute in SQL Rest API
URL: https://github.com/apache/spark/pull/35536
github-actions[bot] commented on PR #34453:
URL: https://github.com/apache/spark/pull/34453#issuecomment-1140348363
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #35278:
URL: https://github.com/apache/spark/pull/35278#issuecomment-1140348356
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
beliefer commented on PR #36708:
URL: https://github.com/apache/spark/pull/36708#issuecomment-1140376742
ping @MaxGekk cc @cloud-fan