srowen commented on a change in pull request #25998: [SPARK-29328][SQL] Fix calculation of mean seconds per month
URL: https://github.com/apache/spark/pull/25998#discussion_r330876188
##########
File path: python/pyspark/sql/functions.py
##########
@@ -1122,9 +1122,9 @@ def months_between(date1, date2, roundOff=True):
>>> df = spark.createDataFrame([('1997-02-28 10:30:00', '1996-10-30')], ['date1', 'date2'])
>>> df.select(months_between(df.date1, df.date2).alias('months')).collect()
Review comment:
As an aside, I would have expected `months_between` to return an integer,
i.e. just the difference in months ignoring the day, but that's not what other
DBs do.
However, browsing links like
https://www.ibm.com/support/knowledgecenter/SSCRJT_5.0.1/com.ibm.swg.im.bigsql.commsql.doc/doc/r0053631.html
and
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/Functions/Date-Time/MONTHS_BETWEEN.htm
I see that some implementations just assume all months, including February,
have 31 days (!?).
I agree that this is more accurate, but is it less consistent with Hive or
other DBs? Maybe it's already inconsistent.
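For concreteness, here's a minimal Python sketch (outside Spark) of the
31-day-month convention those docs describe: count whole months from the year
and month fields, then express the leftover day and time-of-day difference as a
fraction of a 31-day month. `months_between_31day` is a hypothetical helper for
illustration only, not Spark's or any DB's actual implementation.

```python
from datetime import datetime

def months_between_31day(ts1, ts2):
    # Hypothetical sketch of the "every month has 31 days" convention from
    # the DB2/Vertica docs above; NOT Spark's actual implementation.
    whole_months = (ts1.year - ts2.year) * 12 + (ts1.month - ts2.month)
    secs1 = ts1.hour * 3600 + ts1.minute * 60 + ts1.second
    secs2 = ts2.hour * 3600 + ts2.minute * 60 + ts2.second
    # Leftover days plus time of day, as a fraction of a 31-day month.
    fraction = ((ts1.day - ts2.day) + (secs1 - secs2) / 86400.0) / 31.0
    return whole_months + fraction

# The dates from the docstring example above:
print(months_between_31day(datetime(1997, 2, 28, 10, 30, 0),
                           datetime(1996, 10, 30)))  # 3.9495967741935485
```

Under that convention the example dates give ~3.9496 months; the consistency
question is whether this PR's mean-month-length calculation should diverge
from it.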
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]