Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20377#discussion_r164032685
  
    --- Diff: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
 ---
    @@ -251,10 +251,8 @@ private[hive] class IsolatedClientLoader(
     
       /** The isolated client interface to Hive. */
       private[hive] def createClient(): HiveClient = synchronized {
    -    val warehouseDir = 
Option(hadoopConf.get(ConfVars.METASTOREWAREHOUSE.varname))
         if (!isolationOn) {
    -      return new HiveClientImpl(version, warehouseDir, sparkConf, 
hadoopConf, config,
    -        baseClassLoader, this)
    +      return new HiveClientImpl(version, sparkConf, hadoopConf, config, 
baseClassLoader, this)
    --- End diff --
    
    I think many people may get confused when they see this `JIterable[JMap.Entry[String, String]]` and the extra `warehouse` parameter. I don't agree that we should sacrifice code quality just because it's only called twice.
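    As a rough illustration of the wrapper idea being discussed, one could hide the raw `JIterable[JMap.Entry[String, String]]` behind a small Scala type instead of threading it (plus a separate `warehouse` argument) through constructors. The names below (`HiveConfSnapshot`, `fromEntries`) are hypothetical, not from the PR; this is only a sketch of the shape such a wrapper might take:

    ```scala
    import java.lang.{Iterable => JIterable}
    import java.util.{Map => JMap}
    import scala.collection.JavaConverters._

    // Hypothetical wrapper: captures the Hive settings once as an
    // idiomatic Scala Map, so callers never touch the Java iterable
    // of map entries directly.
    final case class HiveConfSnapshot(settings: Map[String, String]) {
      // Derived lookup instead of a separate constructor parameter.
      def warehouseDir: Option[String] =
        settings.get("hive.metastore.warehouse.dir")
    }

    object HiveConfSnapshot {
      def fromEntries(entries: JIterable[JMap.Entry[String, String]]): HiveConfSnapshot =
        HiveConfSnapshot(entries.asScala.map(e => e.getKey -> e.getValue).toMap)
    }
    ```

    With something like this, `HiveClientImpl` could take one typed value rather than the iterable plus an extra `warehouse` parameter; whether that is actually cleaner here is exactly the question under debate.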
    
    So the question is: is this wrapper solution cleaner than the current one? If it is, and it fixes the issue, let's go for it.
    
    I think refactoring should be encouraged; it happens a lot in the Spark code base: people see hacky code and they fix it.


---
