stefankandic commented on code in PR #45436:
URL: https://github.com/apache/spark/pull/45436#discussion_r1518248868
##########
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##########
@@ -438,6 +438,39 @@ class CollationSuite extends DatasourceV2SQLBase with AdaptiveSparkPlanHelper {
}
}
+ test("test concurrently running aggregates") {
+ // generating ICU sort keys is not thread-safe by default so this should
fail
+ // if we don't handle the concurrency properly on Collator level
+
+ for (_ <- 1 to 100) {
+ Seq(
+ ("ucs_basic", Seq("AAA", "aaa"), Seq(Row(1, "AAA"), Row(1, "aaa"))),
+ ("ucs_basic", Seq("aaa", "aaa"), Seq(Row(2, "aaa"))),
+ ("ucs_basic", Seq("aaa", "bbb"), Seq(Row(1, "aaa"), Row(1, "bbb"))),
+ ("ucs_basic_lcase", Seq("aaa", "aaa"), Seq(Row(2, "aaa"))),
+ ("ucs_basic_lcase", Seq("AAA", "aaa"), Seq(Row(2, "AAA"))),
+ ("ucs_basic_lcase", Seq("aaa", "bbb"), Seq(Row(1, "aaa"), Row(1,
"bbb"))),
+ ("unicode", Seq("AAA", "aaa"), Seq(Row(1, "AAA"), Row(1, "aaa"))),
+ ("unicode", Seq("aaa", "aaa"), Seq(Row(2, "aaa"))),
+ ("unicode", Seq("aaa", "bbb"), Seq(Row(1, "aaa"), Row(1, "bbb"))),
+ ("unicode_CI", Seq("aaa", "aaa"), Seq(Row(2, "aaa"))),
+ ("unicode_CI", Seq("AAA", "aaa"), Seq(Row(2, "AAA"))),
+ ("unicode_CI", Seq("aaa", "bbb"), Seq(Row(1, "aaa"), Row(1, "bbb")))
+ ).foreach {
Review Comment:
That's a very good point. The test would fail simply because it would be run 100 times and at least one of those executions would hit a data race. I improved it now to just call `getCollationKey` in a parallel foreach, instead of relying on Spark's execution of the aggregate query.
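
To make the concern concrete, here is a minimal standalone sketch of the thread-safety issue, written directly against ICU4J rather than against the PR's actual test or Spark's `CollationFactory`; the object name and input strings are illustrative only. An unfrozen ICU `Collator` is mutable and not safe to share across threads, while `freeze()` makes it immutable and shareable, so concurrent `getCollationKey` calls on a shared frozen instance are safe:

```scala
import java.util.concurrent.{Executors, TimeUnit}

import com.ibm.icu.text.Collator
import com.ibm.icu.util.ULocale

object CollatorConcurrencySketch {
  def main(args: Array[String]): Unit = {
    val collator = Collator.getInstance(ULocale.ROOT)
    // freeze() makes the ICU Collator immutable and safe to share across
    // threads; without it, concurrent getCollationKey calls on a shared
    // instance can race.
    collator.freeze()

    val pool = Executors.newFixedThreadPool(8)
    try {
      // Hammer the shared collator from many threads, mirroring the
      // "parallel foreach over getCollationKey" shape of the reworked test.
      (1 to 1000).foreach { i =>
        pool.submit(new Runnable {
          override def run(): Unit = {
            collator.getCollationKey(s"value-$i")
          }
        })
      }
    } finally {
      pool.shutdown()
      pool.awaitTermination(10, TimeUnit.SECONDS)
    }
  }
}
```

Exercising `getCollationKey` directly like this keeps the test deterministic about what it is stressing, whereas going through Spark's aggregate execution only hits the race indirectly and needs many repetitions to surface it.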