[ 
https://issues.apache.org/jira/browse/SPARK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005391#comment-15005391
 ] 

Herman van Hovell commented on SPARK-11725:
-------------------------------------------

I'd rather add a warning than prevent this from happening.

I cannot reproduce the {{-1}} default values on Spark 1.5.2. For example:
{noformat}
val id = udf((x: Int) => {
    x
})
val q = sqlContext
  .range(1 << 10)
  .select($"id", when(($"id" mod 2) === 1, $"id").as("val1"))
  .select($"id", $"val1", id($"val1").as("val2"))
q.show

// Result:
id: org.apache.spark.sql.UserDefinedFunction = 
UserDefinedFunction(<function1>,IntegerType,List(IntegerType))
q: org.apache.spark.sql.DataFrame = [id: bigint, val1: bigint, val2: int]
+---+----+----+
| id|val1|val2|
+---+----+----+
|  0|null|   0|
|  1|   1|   1|
|  2|null|   0|
|  3|   3|   3|
|  4|null|   0|
|  5|   5|   5|
|  6|null|   0|
|  7|   7|   7|
|  8|null|   0|
|  9|   9|   9|
| 10|null|   0|
| 11|  11|  11|
| 12|null|   0|
| 13|  13|  13|
| 14|null|   0|
| 15|  15|  15|
| 16|null|   0|
| 17|  17|  17|
| 18|null|   0|
| 19|  19|  19|
+---+----+----+
only showing top 20 rows
{noformat}
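That said, with the current behavior the null handling can already be pushed into the query itself, either by guarding the call or by substituting a default before the UDF runs. A sketch using the same {{q}} and {{id}} as above (untested, written against the 1.5 DataFrame API):

{noformat}
import org.apache.spark.sql.functions.{when, coalesce, lit}

// Guard the call: the UDF only sees non-null input, and null rows
// stay null ({{when}} without {{otherwise}} yields null).
val guarded = q.select($"id", when($"val1".isNotNull, id($"val1")).as("val2"))

// Or substitute a default first, so the UDF always sees a real value.
val defaulted = q.select($"id", id(coalesce($"val1", lit(0L))).as("val2"))
{noformat}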

What version of Spark are you using?




> Let UDF to handle null value
> ----------------------------
>
>                 Key: SPARK-11725
>                 URL: https://issues.apache.org/jira/browse/SPARK-11725
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Jeff Zhang
>
> I notice that currently Spark treats a long field as -1 if it is null.
> Here's the sample code.
> {code}
> sqlContext.udf.register("f", (x:Int)=>x+1)
> df.withColumn("age2", expr("f(age)")).show()
> //////////////// Output ///////////////////////
> +----+-------+----+
> | age|   name|age2|
> +----+-------+----+
> |null|Michael|   0|
> |  30|   Andy|  31|
> |  19| Justin|  20|
> +----+-------+----+
> {code}
> I think for the null value we have 3 options:
> * Use a special value to represent it (what Spark does now)
> * Always return null if any UDF input argument is null
> * Let the UDF itself handle null
> I would prefer the third option.
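> For the third option, the function itself would carry the null logic, e.g. by working over a boxed type. A plain Scala sketch of the idea (not the current UDF API):
> {code}
> // The function decides what null maps to, instead of Spark
> // substituting a primitive default value.
> val f: java.lang.Integer => java.lang.Integer =
>   x => if (x == null) null else x + 1
> {code}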



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
