tedyu commented on PR #39250:
URL: https://github.com/apache/spark/pull/39250#issuecomment-1367136917
@srowen
Please let me know what else should be done for this PR.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
MaxGekk commented on code in PR #39239:
URL: https://github.com/apache/spark/pull/39239#discussion_r1058794838
##
python/pyspark/pandas/tests/test_resample.py:
##
@@ -263,7 +263,7 @@ def test_dataframe_resample(self):
def test_series_resample(self):
self._test_resa
grundprinzip commented on code in PR #39283:
URL: https://github.com/apache/spark/pull/39283#discussion_r1058789739
##
connector/connect/common/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -241,6 +242,20 @@ message Expression {
Expression extraction = 2;
}
cloud-fan closed pull request #39269: [SPARK-41631][FOLLOWUP][SQL] Fix two
issues in implicit lateral column alias resolution on Aggregate
URL: https://github.com/apache/spark/pull/39269
cloud-fan commented on PR #39269:
URL: https://github.com/apache/spark/pull/39269#issuecomment-1367129829
thanks, merging to master!
cloud-fan commented on PR #39269:
URL: https://github.com/apache/spark/pull/39269#issuecomment-1367129489
The failure is unrelated: `python/pyspark/sql/connect/client.py:25: error:
Skipping analyzing "grpc_status": module is installed, but missing library
stubs or py.typed marker`
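For context on the mypy failure quoted above: mypy emits "missing library stubs or py.typed marker" for any installed third-party module that ships no type information, and the usual remedy is to tell mypy to skip that module. A hedged sketch of such a config fragment (the file name and section are illustrative; where Spark actually keeps its mypy configuration is not shown in this thread):

```ini
; mypy.ini (assumed location)
; Silence "module is installed, but missing library stubs or py.typed marker"
; for the untyped grpc_status package only.
[mypy-grpc_status.*]
ignore_missing_imports = True
```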
cloud-fan commented on code in PR #39239:
URL: https://github.com/apache/spark/pull/39239#discussion_r1058787040
##
python/pyspark/pandas/tests/test_resample.py:
##
@@ -263,7 +263,7 @@ def test_dataframe_resample(self):
def test_series_resample(self):
self._test_re
cloud-fan commented on code in PR #39239:
URL: https://github.com/apache/spark/pull/39239#discussion_r1058786637
##
python/pyspark/sql/types.py:
##
@@ -276,7 +276,18 @@ def toInternal(self, dt: datetime.datetime) -> int:
def fromInternal(self, ts: int) -> datetime.datetime:
zhengruifeng commented on PR #39283:
URL: https://github.com/apache/spark/pull/39283#issuecomment-1367128645
cc @HyukjinKwon @cloud-fan @grundprinzip
mridulm commented on PR #39275:
URL: https://github.com/apache/spark/pull/39275#issuecomment-1367126643
Primarily use weakIntern for cases where there are a large number of
duplicated strings with the same value (so app start won't qualify), i.e. for
the most common values.
Not for others.
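For readers unfamiliar with the pattern discussed above: weak interning deduplicates equal strings by caching one canonical copy behind weak references, so the cache never keeps a string alive on its own. A minimal Python sketch of the idea, with illustrative names (Spark's actual weakIntern is a JVM-side helper, not this code):

```python
import weakref


class _Internable(str):
    """A plain str cannot carry a weak reference in CPython; a subclass can."""


class WeakInterner:
    """Return one canonical copy per distinct string value.

    Entries vanish automatically once no caller holds the canonical copy,
    so interning rarely repeated strings only adds overhead -- which is why
    the advice above is to intern only the most common values.
    """

    def __init__(self) -> None:
        self._pool: "weakref.WeakValueDictionary[str, _Internable]" = (
            weakref.WeakValueDictionary()
        )

    def intern(self, s: str) -> str:
        cached = self._pool.get(s)
        if cached is None:
            cached = _Internable(s)
            self._pool[s] = cached
        return cached
```

Two calls with equal but distinct string objects then return the identical cached object, which is what saves memory when the same value occurs many times.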
zhengruifeng opened a new pull request, #39283:
URL: https://github.com/apache/spark/pull/39283
### What changes were proposed in this pull request?
Implement `Column.{withField, dropFields}`
### Why are the changes needed?
For API coverage
### Does this PR introdu
LuciferYang commented on PR #39255:
URL: https://github.com/apache/spark/pull/39255#issuecomment-1367117093
@bjornjorgensen The PR is still being tested. I find it strange that the yarn
module can pass the test with `-Phadoop-2`.
viirya commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058775725
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Expression.scala:
##
@@ -127,6 +125,54 @@ abstract class Expression extends TreeNode[Expression]
LuciferYang commented on PR #39215:
URL: https://github.com/apache/spark/pull/39215#issuecomment-1367116092
Thanks @srowen
LuciferYang commented on PR #39265:
URL: https://github.com/apache/spark/pull/39265#issuecomment-1367116153
Thanks @dongjoon-hyun
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058768775
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Expression.scala:
##
@@ -127,6 +125,54 @@ abstract class Expression extends TreeNode[Expressi
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058768172
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionsEvaluator.scala:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Found
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058767972
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Expression.scala:
##
@@ -127,6 +125,54 @@ abstract class Expression extends TreeNode[Expressi
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058767012
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionsEvaluator.scala:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Found
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058766773
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionsEvaluator.scala:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Found
itholic opened a new pull request, #39282:
URL: https://github.com/apache/spark/pull/39282
### What changes were proposed in this pull request?
This PR proposes to assign a name to _LEGACY_ERROR_TEMP_1230:
"NEGATIVE_SCALE_NOT_ALLOWED".
### Why are the changes needed?
LuciferYang commented on code in PR #39270:
URL: https://github.com/apache/spark/pull/39270#discussion_r1058762420
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -18,8 +18,15 @@
syntax = "proto3";
package org.apache.spark.status.protobuf;
LuciferYang commented on code in PR #39270:
URL: https://github.com/apache/spark/pull/39270#discussion_r1058761067
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -18,8 +18,15 @@
syntax = "proto3";
package org.apache.spark.status.protobuf;
viirya commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058759293
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Expression.scala:
##
@@ -127,6 +125,54 @@ abstract class Expression extends TreeNode[Expression]
LuciferYang commented on code in PR #39226:
URL: https://github.com/apache/spark/pull/39226#discussion_r1058754762
##
core/src/main/scala/org/apache/spark/status/AppStatusStore.scala:
##
@@ -733,6 +734,15 @@ private[spark] class AppStatusStore(
def close(): Unit = {
st
viirya commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058750103
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##
@@ -117,10 +111,6 @@ object InterpretedMutableProjection
viirya commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058750004
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionsEvaluator.scala:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundati
viirya commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058748428
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionsEvaluator.scala:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundati
itholic opened a new pull request, #39281:
URL: https://github.com/apache/spark/pull/39281
### What changes were proposed in this pull request?
This PR proposes to assign a name to _LEGACY_ERROR_TEMP_2051:
"DATA_SOURCE_NOT_FOUND".
### Why are the changes needed?
itholic commented on code in PR #39258:
URL: https://github.com/apache/spark/pull/39258#discussion_r1058741568
##
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala:
##
@@ -370,8 +370,11 @@ abstract class CSVSuite
.load(testFile(cars
itholic commented on code in PR #39258:
URL: https://github.com/apache/spark/pull/39258#discussion_r1058741274
##
core/src/main/resources/error/error-classes.json:
##
@@ -851,6 +851,11 @@
"Cannot name the managed table as , as its associated
location already exists. Ple
gengliangwang commented on code in PR #39192:
URL: https://github.com/apache/spark/pull/39192#discussion_r1058739922
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -390,3 +390,214 @@ message SQLExecutionUIData {
repeated int64 stages = 11;
gengliangwang commented on PR #39270:
URL: https://github.com/apache/spark/pull/39270#issuecomment-1367086148
This is for the issue in
https://github.com/apache/spark/pull/39192#discussion_r1058002256
cc @LuciferYang @panbingkun
gengliangwang closed pull request #39202: [SPARK-41685][UI] Support Protobuf
serializer for the KVStore in History server
URL: https://github.com/apache/spark/pull/39202
gengliangwang commented on PR #39202:
URL: https://github.com/apache/spark/pull/39202#issuecomment-1367085611
@techaddict @mridulm @LuciferYang @cloud-fan thanks for the review.
Merging to master
warrenzhu25 commented on PR #38852:
URL: https://github.com/apache/spark/pull/38852#issuecomment-1367081140
@holdenk @dongjoon-hyun @Ngone51 Help take a look?
warrenzhu25 commented on PR #39280:
URL: https://github.com/apache/spark/pull/39280#issuecomment-1367081016
@dongjoon-hyun @mridulm @Ngone51 Help take a look?
warrenzhu25 opened a new pull request, #39280:
URL: https://github.com/apache/spark/pull/39280
### What changes were proposed in this pull request?
Handle decommission request sent before executor registration
### Why are the changes needed?
Current behavior is such requests will
zhengruifeng commented on code in PR #39236:
URL: https://github.com/apache/spark/pull/39236#discussion_r1058734162
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -352,6 +353,16 @@ class SparkConnectPlanner(sessio
itholic opened a new pull request, #39279:
URL: https://github.com/apache/spark/pull/39279
### What changes were proposed in this pull request?
This PR proposes to assign a name to _LEGACY_ERROR_TEMP_2141:
"ENCODER_NOT_FOUND".
### Why are the changes needed?
beliefer commented on PR #39262:
URL: https://github.com/apache/spark/pull/39262#issuecomment-1367067211
ping @HyukjinKwon @zhengruifeng @grundprinzip @amaliujia
beliefer commented on code in PR #39236:
URL: https://github.com/apache/spark/pull/39236#discussion_r1058721777
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -352,6 +353,16 @@ class SparkConnectPlanner(session: S
zhengruifeng commented on code in PR #39236:
URL: https://github.com/apache/spark/pull/39236#discussion_r1058720184
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -352,6 +353,16 @@ class SparkConnectPlanner(sessio
zhengruifeng commented on code in PR #39236:
URL: https://github.com/apache/spark/pull/39236#discussion_r1058719947
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -352,6 +353,16 @@ class SparkConnectPlanner(sessio
anchovYu commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058719511
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -168,19 +168,18 @@ object ResolveLateralColumnAli
cloud-fan commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058718417
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -168,19 +168,18 @@ object ResolveLateralColumnAl
cloud-fan commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058718270
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -168,19 +168,18 @@ object ResolveLateralColumnAl
zhengruifeng commented on PR #39272:
URL: https://github.com/apache/spark/pull/39272#issuecomment-1367059119
merged into master
zhengruifeng closed pull request #39272: [SPARK-41751][CONNECT][PYTHON] Fix
`Column.{bitwiseAND, bitwiseOR, bitwiseXOR}`
URL: https://github.com/apache/spark/pull/39272
cloud-fan commented on PR #39268:
URL: https://github.com/apache/spark/pull/39268#issuecomment-1367058299
also cc @ulysses-you
anchovYu commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058716520
##
sql/core/src/test/scala/org/apache/spark/sql/LateralColumnAliasSuite.scala:
##
@@ -547,7 +547,8 @@ class LateralColumnAliasSuite extends
LateralColumnAliasSuiteBas
cloud-fan commented on code in PR #39268:
URL: https://github.com/apache/spark/pull/39268#discussion_r1058717124
##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala:
##
@@ -26,40 +26,65 @@ import scala.xml.{Node, NodeSeq}
import org.apache.spa
cloud-fan commented on code in PR #39268:
URL: https://github.com/apache/spark/pull/39268#discussion_r1058716626
##
core/src/main/scala/org/apache/spark/internal/config/UI.scala:
##
@@ -229,4 +229,11 @@ private[spark] object UI {
.stringConf
.transform(_.toUpperCase(Lo
zhengruifeng opened a new pull request, #39278:
URL: https://github.com/apache/spark/pull/39278
### What changes were proposed in this pull request?
1. Make the internal string op names `startswith`, `endswith` consistent
with FunctionRegistry.
2. Add tests for string ops.
### Why
cloud-fan commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058715468
##
sql/core/src/test/scala/org/apache/spark/sql/LateralColumnAliasSuite.scala:
##
@@ -547,7 +547,8 @@ class LateralColumnAliasSuite extends
LateralColumnAliasSuiteBa
anchovYu commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058714323
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -168,19 +168,18 @@ object ResolveLateralColumnAli
anchovYu commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058714106
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -168,19 +168,18 @@ object ResolveLateralColumnAli
ulysses-you commented on code in PR #39263:
URL: https://github.com/apache/spark/pull/39263#discussion_r1058714055
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala:
##
@@ -232,15 +233,35 @@ case class RelationConversions(
if DDLUtils.isHiveTab
cloud-fan commented on code in PR #39269:
URL: https://github.com/apache/spark/pull/39269#discussion_r1058713809
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -168,19 +168,18 @@ object ResolveLateralColumnAl
beliefer commented on PR #39236:
URL: https://github.com/apache/spark/pull/39236#issuecomment-1367052882
ping @HyukjinKwon @zhengruifeng @grundprinzip @amaliujia
ulysses-you opened a new pull request, #39277:
URL: https://github.com/apache/spark/pull/39277
### What changes were proposed in this pull request?
This pr aims to pull out the v1write information from `V1WriteCommand` to
`WriteFiles`:
```scala
case class WriteFiles(chil
zhengruifeng opened a new pull request, #39276:
URL: https://github.com/apache/spark/pull/39276
### What changes were proposed in this pull request?
Fix arithmetic ops: `__neg__`, `__pow__`:
1, `__neg__` fix `[UNRESOLVED_ROUTINE] Cannot resolve function `negate` on
search path [`s
panbingkun opened a new pull request, #39275:
URL: https://github.com/apache/spark/pull/39275
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch te
cloud-fan commented on code in PR #39202:
URL: https://github.com/apache/spark/pull/39202#discussion_r1058711270
##
core/src/main/scala/org/apache/spark/internal/config/History.scala:
##
@@ -79,6 +79,21 @@ private[spark] object History {
.stringConf
.createOptional
+
cloud-fan commented on code in PR #39263:
URL: https://github.com/apache/spark/pull/39263#discussion_r1058706279
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala:
##
@@ -232,15 +233,35 @@ case class RelationConversions(
if DDLUtils.isHiveTable
dengziming opened a new pull request, #39274:
URL: https://github.com/apache/spark/pull/39274
### What changes were proposed in this pull request?
1. This change enables enforcing `scalafmt` for the Connect client module
since it's a new module.
2. This change applies `scalafmt` o
mridulm commented on PR #39202:
URL: https://github.com/apache/spark/pull/39202#issuecomment-1367044296
+CC @thejdeep, @shardulm94
mridulm commented on code in PR #39226:
URL: https://github.com/apache/spark/pull/39226#discussion_r1058705480
##
core/src/main/scala/org/apache/spark/status/AppStatusStore.scala:
##
@@ -733,6 +734,15 @@ private[spark] class AppStatusStore(
def close(): Unit = {
store.
zhengruifeng commented on PR #39273:
URL: https://github.com/apache/spark/pull/39273#issuecomment-1367042645
@HyukjinKwon
zhengruifeng opened a new pull request, #39273:
URL: https://github.com/apache/spark/pull/39273
### What changes were proposed in this pull request?
Fix `Column.{isNull, isNotNull, eqNullSafe}`
### Why are the changes needed?
They were implemented incorrectly.
### Does
ulysses-you commented on code in PR #39263:
URL: https://github.com/apache/spark/pull/39263#discussion_r1058703839
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala:
##
@@ -232,15 +233,35 @@ case class RelationConversions(
if DDLUtils.isHiveTab
cloud-fan commented on code in PR #39099:
URL: https://github.com/apache/spark/pull/39099#discussion_r1058702415
##
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala:
##
@@ -374,7 +374,7 @@ final class Decimal extends Ordered[Decimal] with
Serializable {
cloud-fan commented on code in PR #39099:
URL: https://github.com/apache/spark/pull/39099#discussion_r1058702308
##
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala:
##
@@ -374,7 +374,7 @@ final class Decimal extends Ordered[Decimal] with
Serializable {
cloud-fan commented on code in PR #39263:
URL: https://github.com/apache/spark/pull/39263#discussion_r1058702056
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala:
##
@@ -232,15 +233,35 @@ case class RelationConversions(
if DDLUtils.isHiveTable
mridulm commented on code in PR #36165:
URL: https://github.com/apache/spark/pull/36165#discussion_r1058701274
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -100,11 +100,21 @@ message TaskDataWrapper {
int64 shuffle_remote_bytes_read_to_d
dengziming commented on PR #39158:
URL: https://github.com/apache/spark/pull/39158#issuecomment-1367036120
> adding the repartitionBy* APIs in Client?
Do you mean adding them to the Python client? Yes, I'm working on it.
zhengruifeng commented on PR #39272:
URL: https://github.com/apache/spark/pull/39272#issuecomment-1367036019
cc @HyukjinKwon
zhengruifeng opened a new pull request, #39272:
URL: https://github.com/apache/spark/pull/39272
### What changes were proposed in this pull request?
Implement `Column.{bitwiseAND, bitwiseOR, bitwiseXOR}`
### Why are the changes needed?
fix
### Does this PR introdu
cloud-fan closed pull request #39266: [SPARK-41753][SQL][TEST] Add tests for
ArrayZip to check the result size and nullability
URL: https://github.com/apache/spark/pull/39266
cloud-fan commented on PR #39266:
URL: https://github.com/apache/spark/pull/39266#issuecomment-1367035659
thanks, merging to master!
techaddict commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058694818
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None:
techaddict commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058693270
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None:
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058690914
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None
HyukjinKwon closed pull request #39271:
[SPARK-41747][SPARK-41744][SPARK-41748][SPARK-41749][CONNECT][TESTS] Reeanble
tests for multiple arguments in max, min, sum and avg in groupby
URL: https://github.com/apache/spark/pull/39271
HyukjinKwon commented on PR #39271:
URL: https://github.com/apache/spark/pull/39271#issuecomment-1367024253
Merged to master.
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058687939
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058687670
##
python/pyspark/sql/column.py:
##
@@ -200,17 +200,17 @@ class Column:
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
Select a column out of a D
techaddict commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058687601
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None:
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058687254
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058686957
##
python/pyspark/sql/connect/column.py:
##
@@ -388,5 +389,62 @@ def __nonzero__(self) -> None:
__bool__ = __nonzero__
Review Comment:
```suggestion
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058686880
##
python/pyspark/sql/column.py:
##
@@ -1258,8 +1258,7 @@ def over(self, window: "WindowSpec") -> "Column":
>>> from pyspark.sql import Window
>>>
HyukjinKwon commented on PR #39249:
URL: https://github.com/apache/spark/pull/39249#issuecomment-1367019627
Thanks for working on this @techaddict
HyukjinKwon commented on code in PR #39239:
URL: https://github.com/apache/spark/pull/39239#discussion_r1058683926
##
python/pyspark/sql/types.py:
##
@@ -276,7 +276,15 @@ def toInternal(self, dt: datetime.datetime) -> int:
def fromInternal(self, ts: int) -> datetime.datetim
HyukjinKwon commented on code in PR #39249:
URL: https://github.com/apache/spark/pull/39249#discussion_r1058682621
##
python/pyspark/sql/connect/column.py:
##
@@ -390,3 +391,61 @@ def __nonzero__(self) -> None:
Column.__doc__ = PySparkColumn.__doc__
+
+
+def _test() -> None
HyukjinKwon commented on PR #39271:
URL: https://github.com/apache/spark/pull/39271#issuecomment-1367016580
cc @zhengruifeng
techaddict commented on code in PR #39110:
URL: https://github.com/apache/spark/pull/39110#discussion_r1058682080
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -390,3 +390,38 @@ message SQLExecutionUIData {
repeated int64 stages = 11;
techaddict commented on PR #39110:
URL: https://github.com/apache/spark/pull/39110#issuecomment-1367016502
@gengliangwang updated the PR