Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/carbondata/pull/2311#discussion_r189417575
  
    --- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala ---
    @@ -479,6 +479,24 @@ private[sql] class CarbonLateDecodeStrategy extends SparkStrategy {
           case a: Attribute if isComplexAttribute(a) => a
         }.size == 0 )
     
    +    // block filters for lucene with more than one text_match udf
    +    // Todo: handle when lucene and normal query filter is supported
    +    var count: Int = 0
    +    if (predicates.nonEmpty) {
    +      predicates.foreach(predicate => {
    +        if (predicate.isInstanceOf[ScalaUDF]) {
    +          predicate match {
    +            case u: ScalaUDF if u.function.isInstanceOf[TextMatchUDF] ||
    --- End diff ---
    
    Better to check this in the `for..yield...` loop below instead of adding another loop.
    Also, there is no need to count the number of UDFs; just throw an exception as soon as a
    second text_match UDF is encountered.
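    For illustration, a minimal, self-contained sketch of that suggestion: a single pass
    that selects the predicates and fails fast as soon as a second text_match UDF shows up.
    The `ScalaUDF`, `TextMatchUDF`, and `EqualTo` types below are simplified stand-ins,
    and the exception type and message are placeholders, not the actual
    CarbonLateDecodeStrategy code.
    
    ```scala
    object TextMatchCheckSketch {
    
      // Simplified stand-ins for the Spark/CarbonData expression types (placeholders).
      trait Expression
      case class ScalaUDF(function: AnyRef) extends Expression
      class TextMatchUDF
      case class EqualTo(column: String, value: String) extends Expression
    
      // Single pass over the predicates: the same loop that selects the filters
      // also rejects a second text_match() UDF immediately, so no separate
      // counting loop is needed.
      def selectPredicates(predicates: Seq[Expression]): Seq[Expression] = {
        var textMatchSeen = false
        for (predicate <- predicates) yield predicate match {
          case u: ScalaUDF if u.function.isInstanceOf[TextMatchUDF] =>
            if (textMatchSeen) {
              // Placeholder exception; the real code would raise a CarbonData-specific error.
              throw new UnsupportedOperationException(
                "Only one text_match() UDF is allowed per query")
            }
            textMatchSeen = true
            u
          case other => other
        }
      }
    
      def main(args: Array[String]): Unit = {
        // One text_match UDF plus a normal filter: accepted.
        val ok = selectPredicates(Seq(ScalaUDF(new TextMatchUDF), EqualTo("c1", "x")))
        println(s"accepted ${ok.size} predicates")
    
        // Two text_match UDFs: fails fast on the second one.
        // selectPredicates(Seq(ScalaUDF(new TextMatchUDF), ScalaUDF(new TextMatchUDF)))
      }
    }
    ```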


---
