[ https://issues.apache.org/jira/browse/SPARK-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust resolved SPARK-1427.
-------------------------------------

    Resolution: Fixed

Fixed the toString issue here: https://github.com/apache/spark/pull/343

Per the stack trace, the assertion fired when the REPL called toString on the returned SchemaRDD, which forced query planning for a NativeCommand that no strategy matched.

I could not reproduce the PermGen problem, but I did run the examples by hand successfully.
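For anyone who does hit the PermGen error above before picking up the fix, a common workaround on JDK 7-era JVMs is simply to raise the permanent-generation cap before launching the shell. A minimal sketch, assuming a Spark 1.0-era deployment where the SPARK_JAVA_OPTS environment variable is still honored:

```shell
# Hypothetical workaround, not part of the fix in PR 343:
# raise the JVM PermGen limit so Hive's class-heavy metastore code fits.
# (PermGen was removed entirely in JDK 8, where this flag is a no-op.)
export SPARK_JAVA_OPTS="-XX:MaxPermSize=256m"
./bin/spark-shell
```

The exact value (256m here) is a guess; larger values may be needed depending on how many Hive/DataNucleus classes get loaded.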

> HQL Examples Don't Work
> -----------------------
>
>                 Key: SPARK-1427
>                 URL: https://issues.apache.org/jira/browse/SPARK-1427
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Patrick Wendell
>            Assignee: Michael Armbrust
>             Fix For: 1.0.0
>
>
> {code}
> scala> hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
> 14/04/05 22:40:29 INFO ParseDriver: Parsing command: CREATE TABLE IF NOT 
> EXISTS src (key INT, value STRING)
> 14/04/05 22:40:30 INFO ParseDriver: Parse Completed
> 14/04/05 22:40:30 INFO Driver: <PERFLOG method=Driver.run>
> 14/04/05 22:40:30 INFO Driver: <PERFLOG method=TimeToSubmit>
> 14/04/05 22:40:30 INFO Driver: <PERFLOG method=compile>
> 14/04/05 22:40:30 INFO Driver: <PERFLOG method=parse>
> 14/04/05 22:40:30 INFO ParseDriver: Parsing command: CREATE TABLE IF NOT 
> EXISTS src (key INT, value STRING)
> 14/04/05 22:40:30 INFO ParseDriver: Parse Completed
> 14/04/05 22:40:30 INFO Driver: </PERFLOG method=parse start=1396762830162 
> end=1396762830163 duration=1>
> 14/04/05 22:40:30 INFO Driver: <PERFLOG method=semanticAnalyze>
> 14/04/05 22:40:30 INFO SemanticAnalyzer: Starting Semantic Analysis
> 14/04/05 22:40:30 INFO SemanticAnalyzer: Creating table src position=27
> 14/04/05 22:40:30 INFO HiveMetaStore: 0: Opening raw store with implemenation 
> class:org.apache.hadoop.hive.metastore.ObjectStore
> 14/04/05 22:40:30 INFO ObjectStore: ObjectStore, initialize called
> 14/04/05 22:40:30 INFO Persistence: Property datanucleus.cache.level2 unknown 
> - will be ignored
> 14/04/05 22:40:30 WARN BoneCPConfig: Max Connections < 1. Setting to 20
> 14/04/05 22:40:32 INFO ObjectStore: Setting MetaStore object pin classes with 
> hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 14/04/05 22:40:32 INFO ObjectStore: Initialized ObjectStore
> 14/04/05 22:40:33 WARN BoneCPConfig: Max Connections < 1. Setting to 20
> 14/04/05 22:40:33 INFO HiveMetaStore: 0: get_table : db=default tbl=src
> 14/04/05 22:40:33 INFO audit: ugi=patrick     ip=unknown-ip-addr      
> cmd=get_table : db=default tbl=src      
> 14/04/05 22:40:33 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
> "embedded-only" so does not have its own datastore table.
> 14/04/05 22:40:33 INFO Datastore: The class 
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" 
> so does not have its own datastore table.
> 14/04/05 22:40:34 INFO Driver: Semantic Analysis Completed
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=semanticAnalyze 
> start=1396762830163 end=1396762834001 duration=3838>
> 14/04/05 22:40:34 INFO Driver: Returning Hive schema: 
> Schema(fieldSchemas:null, properties:null)
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=compile start=1396762830146 
> end=1396762834006 duration=3860>
> 14/04/05 22:40:34 INFO Driver: <PERFLOG method=Driver.execute>
> 14/04/05 22:40:34 INFO Driver: Starting command: CREATE TABLE IF NOT EXISTS 
> src (key INT, value STRING)
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=TimeToSubmit 
> start=1396762830146 end=1396762834016 duration=3870>
> 14/04/05 22:40:34 INFO Driver: <PERFLOG method=runTasks>
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=runTasks start=1396762834016 
> end=1396762834016 duration=0>
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=Driver.execute 
> start=1396762834006 end=1396762834017 duration=11>
> 14/04/05 22:40:34 INFO Driver: OK
> 14/04/05 22:40:34 INFO Driver: <PERFLOG method=releaseLocks>
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=releaseLocks 
> start=1396762834019 end=1396762834019 duration=0>
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=Driver.run 
> start=1396762830146 end=1396762834019 duration=3873>
> 14/04/05 22:40:34 INFO Driver: <PERFLOG method=releaseLocks>
> 14/04/05 22:40:34 INFO Driver: </PERFLOG method=releaseLocks 
> start=1396762834019 end=1396762834020 duration=1>
> java.lang.AssertionError: assertion failed: No plan for NativeCommand CREATE 
> TABLE IF NOT EXISTS src (key INT, value STRING)
>       at scala.Predef$.assert(Predef.scala:179)
>       at 
> org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>       at 
> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:218)
>       at 
> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:218)
>       at 
> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:219)
>       at 
> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:219)
>       at 
> org.apache.spark.sql.SchemaRDDLike$class.toString(SchemaRDDLike.scala:44)
>       at org.apache.spark.sql.SchemaRDD.toString(SchemaRDD.scala:93)
>       at java.lang.String.valueOf(String.java:2854)
>       at scala.runtime.ScalaRunTime$.stringOf(ScalaRunTime.scala:331)
>       at scala.runtime.ScalaRunTime$.replStringOf(ScalaRunTime.scala:337)
>       at .<init>(<console>:10)
>       at .<clinit>(<console>)
>       at $print(<console>)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
> {code}
> {code}
> scala> hql("select count(*) from src")
> 14/04/05 22:47:13 INFO ParseDriver: Parsing command: select count(*) from src
> 14/04/05 22:47:13 INFO ParseDriver: Parse Completed
> 14/04/05 22:47:13 INFO HiveMetaStore: 0: get_table : db=default tbl=src
> 14/04/05 22:47:13 INFO audit: ugi=patrick     ip=unknown-ip-addr      
> cmd=get_table : db=default tbl=src      
> 14/04/05 22:47:13 INFO MemoryStore: ensureFreeSpace(147107) called with 
> curMem=0, maxMem=308713881
> 14/04/05 22:47:13 INFO MemoryStore: Block broadcast_0 stored as values to 
> memory (estimated size 143.7 KB, free 294.3 MB)
> 14/04/05 22:47:13 WARN NativeCodeLoader: Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> 14/04/05 22:47:13 WARN LoadSnappy: Snappy native library not loaded
> 14/04/05 22:47:13 INFO FileInputFormat: Total input paths to process : 1
> 14/04/05 22:47:13 INFO SparkContext: Starting job: count at 
> aggregates.scala:107
> 14/04/05 22:47:13 INFO DAGScheduler: Got job 0 (count at 
> aggregates.scala:107) with 2 output partitions (allowLocal=false)
> 14/04/05 22:47:13 INFO DAGScheduler: Final stage: Stage 0 (count at 
> aggregates.scala:107)
> 14/04/05 22:47:13 INFO DAGScheduler: Parents of final stage: List()
> 14/04/05 22:47:13 INFO DAGScheduler: Missing parents: List()
> 14/04/05 22:47:13 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[9] at map 
> at aggregates.scala:94), which has no missing parents
> 14/04/05 22:47:13 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 
> (MappedRDD[9] at map at aggregates.scala:94)
> 14/04/05 22:47:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
> 14/04/05 22:47:13 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on 
> executor localhost: localhost (PROCESS_LOCAL)
> 14/04/05 22:47:14 INFO TaskSetManager: Serialized task 0.0:0 as 3919 bytes in 
> 323 ms
> 14/04/05 22:47:14 INFO Executor: Running task ID 0
> Exception in thread "Executor task launch worker-0" 
> java.lang.OutOfMemoryError: PermGen space
>       at 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:271)
>       at 
> org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:46)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:724)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
