Github user jayadevanmurali commented on the pull request:

    https://github.com/apache/spark/pull/10983#issuecomment-177392588
  
    Thanks @hvanhovell, got your point. I updated my code, repeated the 
steps, and was able to replicate this. Please check the steps below:
    
    jayadevan@Satellite-L640:~/spark$ ./bin/spark-shell
    NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes 
ahead of assembly.
    Using Spark's default log4j profile: 
org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel).
    16/01/31 09:27:57 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
    16/01/31 09:27:57 WARN Utils: Your hostname, Satellite-L640 resolves to a 
loopback address: 127.0.1.1, but we couldn't find any external IP address!
    16/01/31 09:27:57 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to 
another address
    Spark context available as sc (master = local[*], app id = 
local-1454212680541).
    SQL context available as sqlContext.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.0.0-SNAPSHOT
          /_/
             
    Using Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 
1.7.0_80)
    Type in expressions to have them evaluated.
    Type :help for more information.
    
    scala> import org.apache.spark.sql.types.{StringType, StructField, 
StructType}
    import org.apache.spark.sql.types.{StringType, StructField, StructType}
    
    scala> import org.apache.spark.sql.{DataFrame, Row, SQLContext}
    import org.apache.spark.sql.{DataFrame, Row, SQLContext}
    
    scala> import org.apache.spark.{SparkContext, SparkConf}
    import org.apache.spark.{SparkContext, SparkConf}
    
    scala> val rows = List(Row("foo"), Row("bar"));
    rows: List[org.apache.spark.sql.Row] = List([foo], [bar])
    
    scala> val schema = StructType(Seq(StructField("col", StringType)));
    schema: org.apache.spark.sql.types.StructType = 
StructType(StructField(col,StringType,true))
    
    scala> val rdd = sc.parallelize(rows);
    rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = 
ParallelCollectionRDD[0] at parallelize at <console>:29
    
    scala> val df = sqlContext.createDataFrame(rdd, schema)
    df: org.apache.spark.sql.DataFrame = [col: string]
    
    scala> df.registerTempTable("t~")
    
    scala> df.sqlContext.dropTempTable("t~")
    org.apache.spark.sql.AnalysisException: NoViableAltException(327@[209:20: ( 
DOT id2= identifier )?])
    ; line 1 pos 1
      at 
org.apache.spark.sql.catalyst.parser.ParseErrorReporter.throwError(ParseDriver.scala:158)
      at 
org.apache.spark.sql.catalyst.parser.ParseErrorReporter.throwError(ParseDriver.scala:147)
      at 
org.apache.spark.sql.catalyst.parser.ParseDriver$.parse(ParseDriver.scala:95)
      at 
org.apache.spark.sql.catalyst.parser.ParseDriver$.parseTableName(ParseDriver.scala:42)
      at 
org.apache.spark.sql.catalyst.CatalystQl.parseTableIdentifier(CatalystQl.scala:81)
      at org.apache.spark.sql.SQLContext.table(SQLContext.scala:811)
      at org.apache.spark.sql.SQLContext.dropTempTable(SQLContext.scala:738)
      ... 49 elided
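
    A possible workaround sketch (untested against this snapshot, and 
assuming Spark SQL's usual backtick-quoting for identifiers also applies 
to the name passed to `dropTempTable`): escaping the table name may let 
the parser accept the `~` character.
    
    ```scala
    // Hypothetical workaround: quote the identifier with backticks so the
    // Catalyst parser treats "t~" as a single quoted identifier rather than
    // failing on the unexpected '~' token.
    df.registerTempTable("t~")
    df.sqlContext.dropTempTable("`t~`")
    ```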
    
    So I will close this pull request and raise a new one.
    


