It is platform-specific, so in theory it can be larger, but 2**63 - 1 is
the standard on 64-bit platforms and 2**31 - 1 on 32-bit platforms. I can
submit a patch, but I am not sure how to proceed. Personally, I would set

unboundedPreceding = -sys.maxsize

unboundedFollowing = sys.maxsize

to keep backwards compatibility.
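
For reference, here is a quick sanity check of the values involved (a
minimal sketch, assuming 64-bit CPython):

    import sys

    # On 64-bit CPython, sys.maxsize is 2**63 - 1.
    assert sys.maxsize == (1 << 63) - 1

    # The pre-2.1 idiom -sys.maxsize is therefore -(1 << 63) + 1, one
    # above the unbounded-preceding sentinel -(1 << 63), so it is no
    # longer recognized as unbounded.
    assert -sys.maxsize == -(1 << 63) + 1
    assert -sys.maxsize > -(1 << 63)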

On 11/30/2016 06:52 PM, Reynold Xin wrote:
> Ah, OK, for some reason when I did the pull request sys.maxsize was much
> larger than 2^63. Do you want to submit a patch to fix this?
>
>
> On Wed, Nov 30, 2016 at 9:48 AM, Maciej Szymkiewicz
> <mszymkiew...@gmail.com> wrote:
>
>     The problem is that -(1 << 63) is -(sys.maxsize + 1), so code that
>     used to work before is now off by one.
>
>     On 11/30/2016 06:43 PM, Reynold Xin wrote:
>>     Can you give a repro? Anything less than -(1 << 63) is considered
>>     negative infinity (i.e. unbounded preceding).
>>
>>     On Wed, Nov 30, 2016 at 8:27 AM, Maciej Szymkiewicz
>>     <mszymkiew...@gmail.com> wrote:
>>
>>         Hi,
>>
>>         I've been looking at SPARK-17845, and I am curious whether
>>         there is any reason to make it a breaking change. In Spark 2.0
>>         and below we could use:
>>
>>             (Window().partitionBy("foo").orderBy("bar")
>>                  .rowsBetween(-sys.maxsize, sys.maxsize))
>>
>>         In 2.1.0 this code will silently produce incorrect results
>>         (ROWS BETWEEN -1 PRECEDING AND UNBOUNDED FOLLOWING). Couldn't
>>         we set Window.unboundedPreceding equal to -sys.maxsize to
>>         ensure backward compatibility?
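>>
>>         For comparison, a sketch of what I understand to be the 2.1.0
>>         spelling of the same frame, using the new
>>         Window.unboundedPreceding and Window.unboundedFollowing
>>         constants:
>>
>>             from pyspark.sql.window import Window
>>
>>             w = (Window.partitionBy("foo").orderBy("bar")
>>                  .rowsBetween(Window.unboundedPreceding,
>>                               Window.unboundedFollowing))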
>>
>>         --
>>
>>         Maciej Szymkiewicz
>>
>
>     -- 
>     Maciej Szymkiewicz
>
>

-- 
Maciej Szymkiewicz
