Spiro Michaylov created SPARK-6587:
--------------------------------------
Summary: Inferring schema for case class hierarchy fails with mysterious message
Key: SPARK-6587
URL: https://issues.apache.org/jira/browse/SPARK-6587
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 1.3.0
Environment: At least Windows 8, Scala 2.11.2.
Reporter: Spiro Michaylov
(I don't know whether this is a functionality bug, an error-reporting bug, or an RFE.)
I define the following hierarchy:
{code}
private abstract class MyHolder
private case class StringHolder(s: String) extends MyHolder
private case class IntHolder(i: Int) extends MyHolder
private case class BooleanHolder(b: Boolean) extends MyHolder
{code}
and a top-level case class that uses it:
{code}
private case class Thing(key: Integer, foo: MyHolder)
{code}
When I try to convert a sequence of these to a DataFrame and query it:
{code}
val things = Seq(
  Thing(1, IntHolder(42)),
  Thing(2, StringHolder("hello")),
  Thing(3, BooleanHolder(false))
)
val thingsDF = sc.parallelize(things, 4).toDF()
thingsDF.registerTempTable("things")
val all = sqlContext.sql("SELECT * from things")
{code}
I get the following stack trace:
{quote}
Exception in thread "main" scala.MatchError: sql.CaseClassSchemaProblem.MyHolder (of class scala.reflect.internal.Types$ClassNoArgsTypeRef)
    at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:112)
    at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
    at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$schemaFor$1.apply(ScalaReflection.scala:159)
    at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$schemaFor$1.apply(ScalaReflection.scala:157)
    at scala.collection.immutable.List.map(List.scala:276)
    at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:157)
    at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
    at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:107)
    at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
    at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:312)
    at org.apache.spark.sql.SQLContext$implicits$.rddToDataFrameHolder(SQLContext.scala:250)
    at sql.CaseClassSchemaProblem$.main(CaseClassSchemaProblem.scala:35)
    at sql.CaseClassSchemaProblem.main(CaseClassSchemaProblem.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
{quote}
I wrote this while answering [a question on StackOverflow|http://stackoverflow.com/questions/29310405/what-is-the-right-way-to-represent-an-any-type-in-spark-sql], which uses a much simpler approach and hits the same problem.
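For reference, the simpler shape from that question boils down to something like the following (my paraphrase, names made up; it assumes the same imports as the code above), which I believe trips over the same MatchError in {{ScalaReflection.schemaFor}}:
{code}
// Hypothetical reduction of the StackOverflow case: a field whose static type
// is Any (or any other type schemaFor has no case for).
case class Record(key: Int, value: Any)

val records = Seq(Record(1, "hello"), Record(2, 42))
// Fails during schema inference, before any data is examined:
val recordsDF = sc.parallelize(records, 4).toDF()
{code}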
Looking at what seems to me to be the [relevant unit test suite|https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/ScalaReflectionRelationSuite.scala], I see that this case is not covered.
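If it helps, a covering test might look roughly like the following. This is an untested sketch in what I take to be the suite's style (I'm guessing at its plumbing for {{sparkContext}}, {{sql}} and {{toDF}}), with {{HolderThing}} standing in for the {{Thing}} class above:
{code}
// Untested sketch of a possible test for ScalaReflectionRelationSuite.
// Only the shape matters: a case class whose field is an abstract superclass
// of several case classes, as in the MyHolder hierarchy above.
case class HolderThing(key: Int, foo: MyHolder)

test("query case class with a case-class-hierarchy field") {
  val rdd = sparkContext.parallelize(
    Seq(HolderThing(1, IntHolder(42)), HolderThing(2, StringHolder("hello"))))
  rdd.toDF().registerTempTable("holderThings")
  assert(sql("SELECT * FROM holderThings").collect().length === 2)
}
{code}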