Hi, is this fixed in master?

Grega
On Thu, May 14, 2015 at 7:50 PM, Michael Armbrust <[email protected]> wrote:

> End of the month is the target:
> https://cwiki.apache.org/confluence/display/SPARK/Wiki+Homepage
>
> On Thu, May 14, 2015 at 3:45 AM, Ishwardeep Singh <[email protected]> wrote:
>
>> Hi Michael & Ayan,
>>
>> Thank you for your responses to my problem.
>>
>> Michael, do we have a tentative release date for Spark version 1.4?
>>
>> Regards,
>> Ishwardeep
>>
>> *From:* Michael Armbrust [mailto:[email protected]]
>> *Sent:* Wednesday, May 13, 2015 10:54 PM
>> *To:* ayan guha
>> *Cc:* Ishwardeep Singh; user
>> *Subject:* Re: [Spark SQL 1.3.1] data frame saveAsTable returns exception
>>
>> I think this is a bug in our date handling that should be fixed in Spark 1.4.
>>
>> On Wed, May 13, 2015 at 8:23 AM, ayan guha <[email protected]> wrote:
>>
>> Your stack trace says it can't convert a date to an integer. Are you sure about the column positions?
>>
>> On 13 May 2015 21:32, "Ishwardeep Singh" <[email protected]> wrote:
>>
>> Hi,
>>
>> I am using Spark SQL 1.3.1. I created a DataFrame using the jdbc data source and called its saveAsTable() method, but got the following two exceptions:
>>
>> java.lang.RuntimeException: Unsupported datatype DecimalType()
>>     at scala.sys.package$.error(package.scala:27)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:372)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:316)
>>     at scala.Option.getOrElse(Option.scala:120)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetTypes.scala:315)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$4.apply(ParquetTypes.scala:395)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$4.apply(ParquetTypes.scala:394)
>>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>     at scala.collection.immutable.List.foreach(List.scala:318)
>>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>>     at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetTypes.scala:393)
>>     at org.apache.spark.sql.parquet.ParquetTypesConverter$.writeMetaData(ParquetTypes.scala:440)
>>     at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache.prepareMetadata(newParquet.scala:260)
>>     at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:276)
>>     at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:269)
>>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>     at scala.collection.immutable.List.foreach(List.scala:318)
>>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>>     at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>>     at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache.refresh(newParquet.scala:269)
>>     at org.apache.spark.sql.parquet.ParquetRelation2.<init>(newParquet.scala:391)
>>     at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:98)
>>     at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:128)
>>     at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:240)
>>     at org.apache.spark.sql.hive.execution.CreateMetastoreDataSourceAsSelect.run(commands.scala:218)
>>     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:54)
>>     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:54)
>>     at org.apache.spark.sql.execution.ExecutedCommand.execute(commands.scala:64)
>>     at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:1099)
>>     at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:1099)
>>     at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1121)
>>     at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1071)
>>     at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1037)
>>     at org.apache.spark.sql.DataFrame.saveAsTable(DataFrame.scala:1015)
>>
>> java.lang.ClassCastException: java.sql.Date cannot be cast to java.lang.Integer
>>     at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
>>     at org.apache.spark.sql.parquet.RowWriteSupport.writePrimitive(ParquetTableSupport.scala:215)
>>     at org.apache.spark.sql.parquet.RowWriteSupport.writeValue(ParquetTableSupport.scala:192)
>>     at org.apache.spark.sql.parquet.RowWriteSupport.write(ParquetTableSupport.scala:171)
>>     at org.apache.spark.sql.parquet.RowWriteSupport.write(ParquetTableSupport.scala:134)
>>     at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:120)
>>     at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
>>     at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
>>     at org.apache.spark.sql.parquet.ParquetRelation2.org$apache$spark$sql$parquet$ParquetRelation2$$writeShard$1(newParquet.scala:671)
>>     at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:689)
>>     at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:689)
>>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>>     at org.apache.spark.scheduler.Task.run(Task.scala:64)
>>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:722)
>>
>> Earlier I was using Spark SQL 1.3.0 and was getting some other exception, so I upgraded to 1.3.1, but got a different exception.
>>
>> Any help would be appreciated.
>>
>> Regards,
>> Ishwardeep
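For anyone who wants to reproduce this, here is a minimal sketch of the failing pattern against the Spark 1.3.x API. The JDBC URL, credentials, and table names are placeholders, not values from the original report; any source table with a DATE or unbounded DECIMAL column should trigger the same errors.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object SaveAsTableRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("saveAsTable-repro"))
        // saveAsTable needs a metastore, hence HiveContext rather than plain SQLContext.
        val sqlContext = new HiveContext(sc)

        // Load a table over JDBC (Spark 1.3.x load() API; url/dbtable are placeholders).
        val df = sqlContext.load("jdbc", Map(
          "url"     -> "jdbc:postgresql://host:5432/db?user=user&password=pass",
          "dbtable" -> "source_table"))

        // On 1.3.x this writes Parquet by default and fails when the schema
        // contains DecimalType() (unbounded precision) or DateType columns.
        df.saveAsTable("source_table_copy")
      }
    }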
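Until 1.4 ships, one possible workaround (my own suggestion, not something confirmed in this thread) is to cast the offending columns to types the 1.3.x Parquet writer does handle before saving. The column names below are hypothetical; substitute the real DATE and DECIMAL columns from your schema.

    // Hypothetical schema: (id INT, order_date DATE, amount DECIMAL).
    val safe = df.selectExpr(
      "id",
      "CAST(order_date AS STRING) AS order_date", // DATE    -> STRING
      "CAST(amount AS DOUBLE) AS amount")         // DECIMAL -> DOUBLE
    safe.saveAsTable("source_table_copy")

Casting DECIMAL to DOUBLE trades exactness for compatibility, so it may not be acceptable for monetary data; casting to STRING keeps the value intact but loses the type.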
