dongjoon-hyun commented on code in PR #38683:
URL: https://github.com/apache/spark/pull/38683#discussion_r1027524453
##########
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileMetadataStructSuite.scala:
##########
@@ -654,4 +654,19 @@ class FileMetadataStructSuite extends QueryTest with SharedSparkSession {
}
}
}
+
+ metadataColumnsTest("SPARK-41151: consistent _metadata nullability " +
+ "between analyzed and executed", schema) { (df, _, _) =>
Review Comment:
I'm curious why it's not a parameter. To me, the second line is a parameter
because it is **a part of the first parameter**. And Apache Spark usually
splits parameter definition sections from method definition sections, doesn't it?
> The second line is not a parameter but a continuation of the test name string.
BTW, one thing I agree with @HeartSaVioR on is that we respect the nearest
style in the surrounding code in general. So, I want to ask @Yaohua628 and
@HeartSaVioR explicitly: do you want to make this an official Apache Spark
coding style? What I'm asking about is `the indentation on test case name
splitting`.
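
For context, the disagreement is only about how the continuation line of a split test-name string should be indented, not about the resulting name. A minimal sketch of the two candidate styles (the object and value names here are illustrative, not from the PR):

```scala
object TestNameIndentStyle {
  // Style 1: two-space continuation indent, matching the nearest
  // surrounding test code in FileMetadataStructSuite (the style the PR uses).
  val nameTwoSpace = "SPARK-41151: consistent _metadata nullability " +
    "between analyzed and executed"

  // Style 2: four-space continuation indent, treating the second line like
  // a wrapped parameter in a method-definition section.
  val nameFourSpace = "SPARK-41151: consistent _metadata nullability " +
      "between analyzed and executed"

  def main(args: Array[String]): Unit = {
    // Both spellings concatenate to the identical test name; the question
    // raised in the review is purely one of source indentation.
    assert(nameTwoSpace == nameFourSpace)
    println(nameTwoSpace)
  }
}
```

Either way the compiled string is the same, which is why the comment frames this as a coding-style question rather than a correctness one.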
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]