[ https://issues.apache.org/jira/browse/SPARK-18055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15646477#comment-15646477 ]
Song Jun edited comment on SPARK-18055 at 11/8/16 5:04 AM:
-----------------------------------------------------------

[~davies] I can't reproduce this on the master branch or on Databricks (Spark 2.0.1-db1, Scala 2.11). My code:

{code}
import scala.collection.mutable.ArrayBuffer

case class MyData(id: String, arr: Seq[String])

val myarr = ArrayBuffer[MyData]()
for (i <- 20 to 30) {
  val arr = ArrayBuffer[String]()
  for (j <- 1 to 10) {
    arr += (i + j).toString
  }
  val mydata = new MyData(i.toString, arr)
  myarr += mydata
}

val rdd = spark.sparkContext.makeRDD(myarr)
val ds = rdd.toDS

ds.rdd.flatMap(_.arr)
ds.flatMap(_.arr)
{code}

There is no exception. Has this been fixed, or is my code wrong? I also tested with the attached test-jar_2.11-1.0.jar in spark-shell:

{code}
spark-shell --jars test-jar_2.11-1.0.jar
{code}

and there is no exception there either.
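For context on why the repro may hinge on the class coming from a separate jar: the reported exception is thrown by Scala runtime reflection when the mirror's classloader cannot resolve com.A.B while the encoder is being derived. Below is a minimal, Spark-free sketch of that mechanism; the empty isolated classloader is only a stand-in for a loader that is missing the --jars entries, not Spark's actual loader arrangement.

{code}
import java.net.{URL, URLClassLoader}
import scala.reflect.runtime.{universe => ru}

object MirrorLookupDemo {
  def main(args: Array[String]): Unit = {
    // A mirror backed by the application classloader resolves classes
    // that are on its classpath.
    val appMirror = ru.runtimeMirror(getClass.getClassLoader)
    println(appMirror.staticClass("scala.Option").fullName)

    // A mirror backed by a classloader that cannot see the class throws
    // scala.ScalaReflectionException ("... in JavaMirror ... not found"),
    // the same failure as in this issue's stack trace. Here an empty
    // loader with no parent simulates the missing-jar situation.
    val isolatedMirror = ru.runtimeMirror(new URLClassLoader(Array.empty[URL], null))
    try isolatedMirror.staticClass("scala.Option")
    catch { case e: ScalaReflectionException => println("lookup failed: " + e.msg) }
  }
}
{code}

If this classloader-visibility explanation holds, defining the case class in the REPL (as in the code above) would never trigger the bug, which would explain why it only reproduces with a custom jar.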
> Dataset.flatMap can't work with types from customized jar
> ---------------------------------------------------------
>
>                 Key: SPARK-18055
>                 URL: https://issues.apache.org/jira/browse/SPARK-18055
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.1
>            Reporter: Davies Liu
>         Attachments: test-jar_2.11-1.0.jar
>
>
> Try to apply flatMap() on a Dataset column which is of type com.A.B.
> Here's the schema of the dataset:
> {code}
> root
>  |-- id: string (nullable = true)
>  |-- outputs: array (nullable = true)
>  |    |-- element: string
> {code}
> flatMap works on the RDD:
> {code}
> ds.rdd.flatMap(_.outputs)
> {code}
> flatMap doesn't work on the Dataset and gives the following error:
> {code}
> ds.flatMap(_.outputs)
> {code}
> The exception:
> {code}
> scala.ScalaReflectionException: class com.A.B in JavaMirror … not found
>   at scala.reflect.internal.Mirrors$RootsBase.staticClass(Mirrors.scala:123)
>   at scala.reflect.internal.Mirrors$RootsBase.staticClass(Mirrors.scala:22)
>   at line189424fbb8cd47b3b62dc41e417841c159.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$typecreator3$1.apply(<console>:51)
>   at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe$lzycompute(TypeTags.scala:232)
>   at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe(TypeTags.scala:232)
>   at org.apache.spark.sql.SQLImplicits$$typecreator9$1.apply(SQLImplicits.scala:125)
>   at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe$lzycompute(TypeTags.scala:232)
>   at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe(TypeTags.scala:232)
>   at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:49)
>   at org.apache.spark.sql.SQLImplicits.newProductSeqEncoder(SQLImplicits.scala:125)
> {code}
> Spoke to Michael Armbrust and he confirmed it as a Dataset bug.
> There is a workaround using explode():
> {code}
> ds.select(explode(col("outputs")))
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org