attilapiros commented on a change in pull request #30788:
URL: https://github.com/apache/spark/pull/30788#discussion_r555257755
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala
##########
@@ -1079,6 +1079,18 @@ class DataFrameAggregateSuite extends QueryTest
assert(aggs.head.output.map(_.dataType.simpleString).head ===
aggs.last.output.map(_.dataType.simpleString).head)
}
+
+ test("SPARK-33726 Duplicate field name aggregation should not have null
values in dataframe") {
Review comment:
Use ":" to separate the Spark jira ticket name and the description.
The description should explain the test and not the assert.
So what about this?
```suggestion
test("SPARK-33726: Aggregation on a table where a column name is reused") {
```
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala
##########
@@ -1079,6 +1079,18 @@ class DataFrameAggregateSuite extends QueryTest
assert(aggs.head.output.map(_.dataType.simpleString).head ===
aggs.last.output.map(_.dataType.simpleString).head)
}
+
+ test("SPARK-33726 Duplicate field name aggregation should not have null
values in dataframe") {
+ val query =
+ """|with T as (select id as a, -id as x from range(3)), U as (select id
as b,
+ |cast(id as string) as x from range(3)) select T.x, U.x, min(a) as
ma, min(b) as mb
+ |from T join U on a=b group by U.x, T.x
+ """.stripMargin
+ val df = spark.sql(query)
+ val nullCount = df.filter($"ma".isNull ).count + df.filter($"mb".isNull
).count
+ + df.filter($"U.x".isNull ).count + df.filter($"T.x".isNull).count
+ assert(nullCount == 0)
Review comment:
I would suggest using the `checkAnswer` method to validate the
result, since the expected output is a fixed set of `Row`s. You do not
need to worry about ordering, because the expected and actual data are
compared as sets. And a nit: the space before ")" is not needed.
So I suggest this:
```suggestion
checkAnswer(df, Row(0, "0", 0, 0) :: Row(-1, "1", 1, 1) :: Row(-2, "2", 2, 2) :: Nil)
```
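For reference, here is how the whole test could read with both suggestions applied (a sketch assembled from the snippets above, assuming the suite's usual `QueryTest` helpers such as `checkAnswer` and `Row` are in scope):
```scala
test("SPARK-33726: Aggregation on a table where a column name is reused") {
  // Both T and U expose a column named "x"; the aggregation groups by both.
  val query =
    """|with T as (select id as a, -id as x from range(3)),
       |U as (select id as b, cast(id as string) as x from range(3))
       |select T.x, U.x, min(a) as ma, min(b) as mb
       |from T join U on a=b group by U.x, T.x
       |""".stripMargin
  val df = spark.sql(query)
  // checkAnswer compares expected and actual rows as sets, so no ordering is needed.
  checkAnswer(df, Row(0, "0", 0, 0) :: Row(-1, "1", 1, 1) :: Row(-2, "2", 2, 2) :: Nil)
}
```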
##########
File path:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/RowBasedKeyValueBatch.java
##########
@@ -79,13 +79,11 @@ public static RowBasedKeyValueBatch allocate(StructType keySchema, StructType va
boolean allFixedLength = true;
// checking if there is any variable length fields
// there is probably a more succinct impl of this
- for (String name : keySchema.fieldNames()) {
- allFixedLength = allFixedLength
- && UnsafeRow.isFixedLength(keySchema.apply(name).dataType());
+ for (StructField field: keySchema.fields()) {
Review comment:
Keep the space on both sides of the ":" in the for-each loop.
```suggestion
for (StructField field : keySchema.fields()) {
```
##########
File path:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/RowBasedKeyValueBatch.java
##########
@@ -79,13 +79,11 @@ public static RowBasedKeyValueBatch allocate(StructType keySchema, StructType va
boolean allFixedLength = true;
// checking if there is any variable length fields
// there is probably a more succinct impl of this
- for (String name : keySchema.fieldNames()) {
- allFixedLength = allFixedLength
- && UnsafeRow.isFixedLength(keySchema.apply(name).dataType());
+ for (StructField field: keySchema.fields()) {
+ allFixedLength = allFixedLength && UnsafeRow.isFixedLength(field.dataType());
}
- for (String name : valueSchema.fieldNames()) {
- allFixedLength = allFixedLength
- && UnsafeRow.isFixedLength(valueSchema.apply(name).dataType());
+ for (StructField field: valueSchema.fields()) {
Review comment:
Here too:
```suggestion
for (StructField field : valueSchema.fields()) {
```
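As a further aside, the quoted `// there is probably a more succinct impl of this` comment could be acted on here as well. A possible shorter form, sketched with `java.util.stream` (untested, and the helper name is hypothetical, not part of this PR):
```java
import java.util.Arrays;
import java.util.stream.Stream;

import org.apache.spark.sql.catalyst.expressions.UnsafeRow;
import org.apache.spark.sql.types.StructType;

// Hypothetical helper: true when every field of both schemas has a
// fixed-length data type, replacing the two accumulator loops above.
static boolean allFixedLength(StructType keySchema, StructType valueSchema) {
  return Stream.concat(
          Arrays.stream(keySchema.fields()),
          Arrays.stream(valueSchema.fields()))
      .allMatch(field -> UnsafeRow.isFixedLength(field.dataType()));
}
```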
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]