GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15480#discussion_r89513885
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateOrdering.scala ---
    @@ -118,7 +118,42 @@ object GenerateOrdering extends CodeGenerator[Seq[SortOrder], Ordering[InternalR
               }
           """
         }.mkString("\n")
    -    comparisons
    +
     +    /*
     +     * 40 = 7000 bytes / 170 (around 170 bytes per ordering comparison).
     +     * HotSpot refuses to JIT-compile methods larger than 8000 bytes of
     +     * bytecode, so we should stay below that limit.
     +     */
    +    val numberOfComparisonsThreshold = 40
    +
    +    if (ordering.size <= numberOfComparisonsThreshold) {
    +      comparisons(ordering)
    +    } else {
     +      val groupedOrderingItr = ordering.grouped(numberOfComparisonsThreshold)
    +      val funcNamePrefix = ctx.freshName("compare")
     +      val funcNames = groupedOrderingItr.zipWithIndex.map { case (orderingGroup, i) =>
    +        val funcName = s"${funcNamePrefix}_$i"
    +        val funcCode =
    +          s"""
    +             |private int $funcName(InternalRow a, InternalRow b) {
     +             |  InternalRow ${ctx.INPUT_ROW} = null;  // Holds current row being evaluated.
    +             |  ${comparisons(orderingGroup)}
    +             |  return 0;
    --- End diff --
    
    For performance reasons, we should avoid using member variables. If there is no easy way to reuse `splitExpressions`, I'm OK with the current approach.
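    The chunking strategy in the diff (group the orderings, emit one small compare function per group, and chain them) can be sketched in plain Java. This is an illustrative sketch, not Spark's actual generated code; `ChunkedOrdering`, `THRESHOLD`, and the `Object[]` row type are hypothetical stand-ins:

    ```java
    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch (not Spark's codegen): split a long chain of field
    // comparisons into fixed-size groups so that each method stays small
    // enough for HotSpot to JIT-compile (by default HotSpot skips methods
    // over 8000 bytes of bytecode).
    public class ChunkedOrdering {
        static final int THRESHOLD = 40;  // comparisons per generated method

        // Top-level compare: delegate to one small method per group of
        // comparators, stopping at the first group that finds a difference.
        static int compare(List<Comparator<Object[]>> comparators, Object[] a, Object[] b) {
            for (int start = 0; start < comparators.size(); start += THRESHOLD) {
                int end = Math.min(start + THRESHOLD, comparators.size());
                int result = compareGroup(comparators.subList(start, end), a, b);
                if (result != 0) {
                    return result;
                }
            }
            return 0;  // all fields compared equal
        }

        // Stands in for one generated compare_<i> method: at most THRESHOLD
        // comparisons, returning 0 only if its whole group ties.
        static int compareGroup(List<Comparator<Object[]>> group, Object[] a, Object[] b) {
            for (Comparator<Object[]> c : group) {
                int r = c.compare(a, b);
                if (r != 0) {
                    return r;
                }
            }
            return 0;
        }
    }
    ```

    The real codegen emits one `compare_<i>` Java method per group instead of the inner loop above; the review question is whether that grouping could reuse the existing `splitExpressions` helper rather than being hand-rolled here.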


