agrawaldevesh commented on a change in pull request #29304:
URL: https://github.com/apache/spark/pull/29304#discussion_r464135838
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala
##########
@@ -327,11 +327,27 @@ private[joins] object UnsafeHashedRelation {
// Create a mapping of buildKeys -> rows
val keyGenerator = UnsafeProjection.create(key)
var numFields = 0
+ val nullPaddingCombinations: Seq[UnsafeProjection] = if (isNullAware) {
+ // C(numKeys, 0), C(numKeys, 1) ... C(numKeys, numKeys - 1)
+ // In total 2^numKeys - 1 records will be appended.
+ key.indices.flatMap { n =>
+ key.indices.combinations(n).map { combination =>
+ // combination is a Seq[Int] indicating which keys should be replaced with null padding
+ val exprs = key.indices.map { index =>
+ if (combination.contains(index)) {
+ Literal.create(null, key(index).dataType)
Review comment:
It should be easy to fold this optimization into this PR ... all we need
to do is create the combinations only for the (truly) nullable keys (see the
sketch below). I think this is an important performance optimization: most keys
are non-nullable (in a well-formed schema), so restricting the enumeration to
the nullable ones would significantly reduce the memory blow-up.
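Something along these lines (an untested sketch, not the actual patch; I'm
assuming the `else` branch of the snippet above keeps the original key
expression unchanged, I'm reading nullability off each key expression's
`nullable` flag, and the exact upper bound on `n` should mirror whatever this
PR settles on):

```scala
import org.apache.spark.sql.catalyst.expressions.{Expression, Literal, UnsafeProjection}

// Sketch: enumerate null paddings over only the nullable key indices, so the
// number of generated projections grows as 2^numNullableKeys instead of
// 2^numKeys. Non-nullable keys are never replaced by a null literal.
def nullPaddingCombinationsForNullableKeys(key: Seq[Expression]): Seq[UnsafeProjection] = {
  val nullableIndices = key.indices.filter(i => key(i).nullable)
  nullableIndices.indices.flatMap { n =>
    nullableIndices.combinations(n).map { combination =>
      val exprs = key.indices.map { index =>
        if (combination.contains(index)) {
          Literal.create(null, key(index).dataType)
        } else {
          key(index) // assumption: keep the original key expression untouched
        }
      }
      UnsafeProjection.create(exprs)
    }
  }
}
```

With, say, 6 join keys of which only 2 are nullable, this drops the number of
appended padding projections from 2^6 - 1 = 63 down to at most 2^2 - 1 = 3
(or 4 if the all-nullable-keys combination turns out to be needed).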