The problem is that -(1 << 63) equals -(sys.maxsize + 1), so code that
used to work before is now off by one.
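A quick sketch of the off-by-one in plain Python (assuming a 64-bit
CPython build, where sys.maxsize == 2**63 - 1):

```python
import sys

# On 64-bit CPython, sys.maxsize == 2**63 - 1, so -sys.maxsize is one
# greater than -(1 << 63): it does not fall below the threshold that
# 2.1.0 treats as unbounded preceding.
assert sys.maxsize == (1 << 63) - 1
assert -(1 << 63) == -(sys.maxsize + 1)
# Off by one: -sys.maxsize is NOT less than -(1 << 63).
assert -sys.maxsize > -(1 << 63)
```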

On 11/30/2016 06:43 PM, Reynold Xin wrote:
> Can you give a repro? Anything less than -(1 << 63) is considered
> negative infinity (i.e. unbounded preceding).
>
> On Wed, Nov 30, 2016 at 8:27 AM, Maciej Szymkiewicz
> <mszymkiew...@gmail.com <mailto:mszymkiew...@gmail.com>> wrote:
>
>     Hi,
>
>     I've been looking at the SPARK-17845 and I am curious if there is any
>     reason to make it a breaking change. In Spark 2.0 and below we
>     could use:
>
>
>     Window().partitionBy("foo").orderBy("bar").rowsBetween(-sys.maxsize,
>     sys.maxsize)
>
>     In 2.1.0 this code will silently produce incorrect results (ROWS
>     BETWEEN -1 PRECEDING AND UNBOUNDED FOLLOWING). Couldn't we set
>     Window.unboundedPreceding equal to -sys.maxsize to ensure backward
>     compatibility?
>
>     --
>
>     Maciej Szymkiewicz
>
>

-- 
Maciej Szymkiewicz
