[ https://issues.apache.org/jira/browse/SPARK-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15677396#comment-15677396 ]

Cheng Lian edited comment on SPARK-18251 at 11/18/16 6:38 PM:
--------------------------------------------------------------

I'd prefer option 1 for the sake of semantic consistency, and I don't think 
this is really a bug, since {{Option\[T\]}} shouldn't be used as a top-level 
{{Dataset}} type anyway.

While doing schema inference, Catalyst always translates {{Option\[T\]}} to the 
nullable version of {{T'}}, where {{T'}} is the inferred data type of {{T}}. 
Take {{case class A(i: Option\[Int\])}} as an example: if we go for option 2, 
what should the inferred schema of {{A}} be? To keep the original semantics, 
it should be
{noformat}
new StructType()
  .add("i", IntegerType, nullable = true)
{noformat}
while option 2 requires
{noformat}
new StructType()
  .add("i", new StructType()
    .add("value", IntegerType, nullable = true), nullable = true)
{noformat}
since now {{Option\[T\]}} is treated as a single field struct.
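To make the contrast concrete, the two candidate schemas can be sketched in plain Scala. The {{DataType}}/{{StructField}}/{{StructType}} definitions below are toy stand-ins for illustration, not the real {{org.apache.spark.sql.types}} classes:

```scala
// Toy model of Spark SQL's schema types -- NOT the real
// org.apache.spark.sql.types classes, just enough to compare the two options.
sealed trait DataType
case object IntegerType extends DataType
final case class StructField(name: String, dataType: DataType, nullable: Boolean)
final case class StructType(fields: Seq[StructField]) extends DataType

// Option 1 (current semantics): Option[Int] flattens to a nullable Int field.
val option1Schema = StructType(Seq(StructField("i", IntegerType, nullable = true)))

// Option 2: Option[Int] becomes a nullable single-field struct wrapping the value.
val option2Schema = StructType(Seq(
  StructField(
    "i",
    StructType(Seq(StructField("value", IntegerType, nullable = true))),
    nullable = true)))
```

Under option 1 the field count and nesting depth match the plain {{Int}} case; option 2 adds an extra level of nesting for every {{Option}} field.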

Option 1 keeps the current semantics, which are pretty clear and easy to reason 
about, while option 2 either introduces an inconsistency or requires us to further 
special-case schema inference for top-level {{Dataset}} types.
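At the value level, the same flattening rule can be illustrated with a toy encoder in plain Scala (for illustration only, not Catalyst's actual code path): {{Option\[Int\]}} maps to a nullable boxed integer, so {{None}} surfaces as a plain null, which is exactly what trips the non-nullable check described in this ticket when {{Option}} sits at the top level.

```scala
// Toy illustration of the Option[T] -> nullable T' flattening rule:
// Some(v) encodes to the value itself, None encodes to null.
def encodeOptionInt(v: Option[Int]): java.lang.Integer =
  v.map(Int.box).orNull

val some = encodeOptionInt(Some(3)) // the boxed value 3
val none = encodeOptionInt(None)    // null -- the None case is only
                                    // representable if the slot is nullable
```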



> DataSet API | RuntimeException: Null value appeared in non-nullable field 
> when holding Option Case Class
> --------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18251
>                 URL: https://issues.apache.org/jira/browse/SPARK-18251
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.1
>         Environment: OS X
>            Reporter: Aniket Bhatnagar
>
> I am running into a runtime exception when a DataSet holds a None instance 
> for an Option type whose value class has a non-nullable field. For 
> instance, if we have the following case class:
> case class DataRow(id: Int, value: String)
> then a DataSet[Option[DataRow]] can only hold Some(DataRow) objects and cannot 
> hold None. If it does, the following exception is thrown:
> {noformat}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 6 in stage 0.0 failed 1 times, most recent failure: 
> Lost task 6.0 in stage 0.0 (TID 6, localhost): java.lang.RuntimeException: 
> Null value appeared in non-nullable field:
> - field (class: "scala.Int", name: "id")
> - option value class: "DataSetOptBug.DataRow"
> - root class: "scala.Option"
> If the schema is inferred from a Scala tuple/case class, or a Java bean, 
> please try to use scala.Option[_] or other nullable types (e.g. 
> java.lang.Integer instead of int/scala.Int).
>       at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown
>  Source)
>       at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown
>  Source)
>       at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>       at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>       at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>       at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>       at org.apache.spark.scheduler.Task.run(Task.scala:86)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The bug can be reproduced using this program: 
> https://gist.github.com/aniketbhatnagar/2ed74613f70d2defe999c18afaa4816e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
