VenuReddy2103 commented on a change in pull request #3953:
URL: https://github.com/apache/carbondata/pull/3953#discussion_r496212659



##########
File path: 
core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecutorImpl.java
##########
@@ -106,6 +108,11 @@
    */
   boolean isNaturalSorted;
 
+  /**
+   * date direct dictionary generator
+   */
+  private DirectDictionaryGenerator dateDictionaryGenerator;
+
   public RowLevelFilterExecutorImpl(List<DimColumnResolvedFilterInfo> 
dimColEvaluatorInfoList,

Review comment:
       Have checked this. We use `IncludeFilterExecutorImpl` as long as there is no cast expression involved. A cast is involved only when the data type of the column being compared is not the same as that of the literal values it is compared with.
         <p>Each literal value in the IN filter can be of a different type; Spark chooses one data type for the entire list of literal values. For example, with `i` as an int column:
   `i IN (3, 2, 1.0)` -> all values are treated as decimal
   `i IN (3, 2.0, '1.0')` -> all values are treated as string. Note: `2.0` is not the same as `2` when cast to string.
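   The effect of that coercion can be sketched as plain Java (hypothetical helper names; Spark performs this coercion in its analyzer, not via these methods):

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

public class InCoercionSketch {
  // i IN (3, 2, 1.0): all literals coerced to decimal, so int 2 matches literal 2
  static boolean inAsDecimal(int columnValue, List<BigDecimal> literals) {
    BigDecimal cast = BigDecimal.valueOf(columnValue);
    return literals.stream().anyMatch(l -> l.compareTo(cast) == 0);
  }

  // i IN (3, 2.0, '1.0'): all literals coerced to string, so int 2 becomes "2",
  // which does NOT equal the literal "2.0"
  static boolean inAsString(int columnValue, List<String> literals) {
    String cast = String.valueOf(columnValue);
    return literals.contains(cast);
  }

  public static void main(String[] args) {
    List<BigDecimal> decimalLiterals =
        Arrays.asList(new BigDecimal("3"), new BigDecimal("2"), new BigDecimal("1.0"));
    System.out.println(inAsDecimal(2, decimalLiterals)); // true

    List<String> stringLiterals = Arrays.asList("3", "2.0", "1.0");
    System.out.println(inAsString(2, stringLiterals));   // false
  }
}
```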
   <p>When a cast operation is present, `SparkUnknownExpression` wrapping the Spark cast (`SparkExpression`) is used. `SparkUnknownExpression.evaluate()` takes the row's column value, uses the Spark cast expression to validate it and convert it to the data type chosen for the literals, and then compares the converted column value against the literal values. So validation and type conversion are handled by the Spark cast expression.
   IMO, `IncludeFilterExecutorImpl` is already used wherever possible.
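   A rough sketch of that cast-then-compare flow (hypothetical names; the real path goes through `SparkUnknownExpression.evaluate()` and Spark's cast expression, which also returns null for values that fail validation):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CastThenCompareSketch {
  // Stand-in for the cast step: validate the row's column value and convert it
  // to the type chosen for the literals (string here). A failed cast yields null.
  static String castToString(Object columnValue) {
    return columnValue == null ? null : String.valueOf(columnValue);
  }

  // Cast the column value first, then compare the converted value
  // against the literal set; null (failed cast / null column) never matches.
  static boolean evaluate(Object columnValue, Set<String> literals) {
    String cast = castToString(columnValue);
    return cast != null && literals.contains(cast);
  }

  public static void main(String[] args) {
    Set<String> literals = new HashSet<>(Arrays.asList("3", "2.0", "1.0"));
    System.out.println(evaluate(2, literals));    // int 2 casts to "2": false
    System.out.println(evaluate(null, literals)); // null never matches: false
  }
}
```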
   
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

