Github user ravipesala commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/1672#discussion_r157738771
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala ---
    @@ -130,6 +130,44 @@ private[sql] class CarbonLateDecodeStrategy extends SparkStrategy {
           table.carbonTable.getTableInfo.serialize())
       }
     
    +  /**
    +   * Converts to physical RDD of carbon after pushing down applicable filters.
    +   * @param relation
    +   * @param projects
    +   * @param filterPredicates
    +   * @param scanBuilder
    +   * @return
    +   */
    +  private def pruneFilterProject(
    +      relation: LogicalRelation,
    +      projects: Seq[NamedExpression],
    +      filterPredicates: Seq[Expression],
    +      scanBuilder: (Seq[Attribute], Array[Filter],
    +        ArrayBuffer[AttributeReference], Seq[String]) => RDD[InternalRow]) = {
    --- End diff ---
    
    Yes, you are right. It is difficult to track what the parameters mean. I also found it difficult to pass a new parameter, `partitions`, but this is old code added as part of the Spark 2.0 support; maybe we can refactor it later.

