Github user WeichenXu123 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19588#discussion_r150760733
  
    --- Diff: 
mllib/src/main/scala/org/apache/spark/ml/feature/VectorIndexer.scala ---
    @@ -311,22 +346,39 @@ class VectorIndexerModel private[ml] (
       // TODO: Check more carefully about whether this whole class will be 
included in a closure.
     
       /** Per-vector transform function */
    -  private val transformFunc: Vector => Vector = {
    +  private lazy val transformFunc: Vector => Vector = {
         val sortedCatFeatureIndices = categoryMaps.keys.toArray.sorted
         val localVectorMap = categoryMaps
         val localNumFeatures = numFeatures
    +    val localHandleInvalid = getHandleInvalid
         val f: Vector => Vector = { (v: Vector) =>
           assert(v.size == localNumFeatures, "VectorIndexerModel expected 
vector of length" +
             s" $numFeatures but found length ${v.size}")
           v match {
             case dv: DenseVector =>
    +          var hasInvalid = false
               val tmpv = dv.copy
               localVectorMap.foreach { case (featureIndex: Int, categoryMap: 
Map[Double, Int]) =>
    -            tmpv.values(featureIndex) = categoryMap(tmpv(featureIndex))
    +            try {
    --- End diff --
    
    But I don't think so. JVM exception handling is very efficient when no 
exception is actually thrown: on the happy path, code wrapped in `try ... 
catch` has the same performance as the equivalent code without it; the cost 
is only paid when an exception is raised.
    Here is some explanation: 
https://www.quora.com/How-expensive-is-the-try-catch-block-in-Java-in-terms-of-performance
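    To illustrate the pattern being discussed (not Spark's actual code): a 
hedged Java sketch of a per-feature category lookup where the `try ... catch` 
only does work on the invalid-value path. The map contents, the 
`handleInvalid` values, and the `indexOf` helper are all hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class HandleInvalidSketch {
    // Hypothetical per-feature category map: raw feature value -> category index.
    static final Map<Double, Integer> categoryMap = new HashMap<>();
    static {
        categoryMap.put(0.0, 0);
        categoryMap.put(1.0, 1);
    }

    // Sketch of the try/catch pattern: attempt the lookup directly, so the
    // common (valid) case pays no exception-handling cost at all.
    static double indexOf(double raw, String handleInvalid) {
        try {
            // Unboxing a null Integer throws NPE when `raw` was never seen.
            return categoryMap.get(raw);
        } catch (NullPointerException e) {
            switch (handleInvalid) {
                case "keep":  return categoryMap.size(); // extra bucket for invalid values
                case "error": throw new IllegalArgumentException("Unseen value: " + raw);
                default:      return Double.NaN;         // e.g. "skip" handled upstream
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(indexOf(1.0, "keep")); // valid value: fast path, no exception
        System.out.println(indexOf(5.0, "keep")); // invalid value: caught, mapped to extra bucket
    }
}
```

    On the valid path the JIT-compiled code is essentially identical to a bare 
map lookup, which is the point of the comment above.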


---
