GitHub user mengxr commented on the pull request:
https://github.com/apache/spark/pull/9705#issuecomment-157184519
@JihongMA Sorry for moving back and forth; we need to compromise between
SQL behavior and R/Python/others. Basically, we all agree that aggregation
should ignore NULL values, so `sum([NULL]) == sum([])` and `sum([NULL, 1.0]) ==
sum([1.0])`. The question is then what to return when the collection is empty.
In R/Python/Scala, the result is 0.0, but it is NULL in SQL engines. I don't
think this is critical because most people won't care about an empty
collection. Though I prefer returning `0.0`, it is more important to be
consistent across Spark SQL. `stddev([1.0])` is different here because we do
see a value, and the sample standard deviation divides by `n - 1 = 0`, so by
definition `stddev = 0.0 / 0 = NaN`.
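
For concreteness, here is a minimal, self-contained Scala sketch of the two
semantics discussed above. It is plain Scala with no Spark dependency, and
`sumSql` / `stddevSamp` are hypothetical helper names for illustration, not
Spark APIs:

```scala
object AggSemantics {
  // SQL-style sum: NULLs (None) are ignored; an all-NULL or empty
  // input yields NULL (None) rather than 0.0.
  def sumSql(xs: Seq[Option[Double]]): Option[Double] = {
    val vs = xs.flatten
    if (vs.isEmpty) None else Some(vs.sum)
  }

  // Sample standard deviation: divides by n - 1, so a single value
  // gives sqrt(0.0 / 0) = NaN, as noted above.
  def stddevSamp(xs: Seq[Double]): Double = {
    val n = xs.length
    val mean = xs.sum / n
    val ssq = xs.map(x => (x - mean) * (x - mean)).sum
    math.sqrt(ssq / (n - 1))
  }

  def main(args: Array[String]): Unit = {
    println(sumSql(Seq(None)))             // None, same as sumSql(Seq())
    println(sumSql(Seq(None, Some(1.0))))  // Some(1.0)
    println(stddevSamp(Seq(1.0)))          // NaN
  }
}
```

Returning `Some(0.0)` instead of `None` for the empty case in `sumSql` is
exactly the R/Python/Scala convention; the sketch picks the SQL convention to
match what the comment argues Spark SQL should do.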