Github user OopsOutOfMemory commented on the pull request:

    https://github.com/apache/spark/pull/4387#issuecomment-73094427
  
    @pwendell @rxin   
    Here is my concern, sorry if I'm wrong.
    For the usage level:
    Should we avoid `automatically` providing a `hiveContext` in the Spark shell, even when Spark is built with Hive support? We don't know which SQL dialect the user needs. I think automatically providing only `sqlContext` is fine; if a user needs Hive, they can create a `HiveContext` manually (see the sketch below).
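
    A rough sketch of what I mean by "manually" (assuming a build with Hive support, so `HiveContext` is on the classpath):
    ```
    // In spark-shell, `sc` is the SparkContext the shell already provides
    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)
    hiveContext.sql("SHOW TABLES").collect()
    ```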
    
    For the code level:
    I can use reflection to create a new `HiveContext` instance, but if we return `sqlContext` bound to one of the two dialects, a user who needs the other one cannot switch back. This may also cause ambiguity.
    ```
    def createSQLContext(): SQLContext = {
      // ......
      val loader = Utils.getContextOrSparkClassLoader
      try {
        val clazz: Class[_] = loader.loadClass("org.apache.spark.sql.hive.HiveContext")
        clazz.getConstructor(classOf[SparkContext]).newInstance(sparkContext).asInstanceOf[SQLContext]
      } catch {
        case cnf: java.lang.ClassNotFoundException =>
          // Here we need to handle the two kinds of dialect, and using
          // `asInstanceOf[HiveContext]` does not work (i.e. we don't know whether
          // `HiveContext` is available or not).
          // And here we return a `SQLContext`; even though a HiveContext can be
          // returned as a SQLContext, the user still needs to convert it from
          // SQLContext to HiveContext.
          new SQLContext(sparkContext)
      }
    }
    ```
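
    And to illustrate the conversion I mentioned (just a sketch): if the shell only exposes the value typed as `SQLContext`, a user who wants Hive features has to downcast, and the cast only succeeds when the Hive classes are actually on the classpath:
    ```
    // sqlContext is typed as SQLContext even when it is backed by a HiveContext
    val hc = sqlContext.asInstanceOf[org.apache.spark.sql.hive.HiveContext]
    ```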
    If you have a better solution, please point it out or give an example, I'd be very pleased : )


