Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14568#discussion_r74145557
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
    @@ -993,12 +993,31 @@ object functions {
        * This expression would return the following IDs:
        * 0, 1, 2, 8589934592 (1L << 33), 8589934593, 8589934594.
        *
    +   * @group normal_funcs
    +   * @since 1.6.0
    +   */
    +  def monotonically_increasing_id(): Column = withExpr { MonotonicallyIncreasingID() }
    +
    +  /**
    +   * A column expression that generates monotonically increasing 64-bit integers.
    +   *
    +   * The generated ID is guaranteed to be monotonically increasing and unique, but not consecutive.
    +   * The current implementation puts the partition ID in the upper 31 bits, and the record number
    +   * within each partition in the lower 33 bits. The assumption is that the data frame has
    +   * less than 1 billion partitions, and each partition has less than 8 billion records.
    +   *
        * Optionally, you can specify the offset where the Id starts
        *
    +   * As an example, consider a [[DataFrame]] with two partitions, each with 3 records.
    +   * This expression would return the following IDs:
    +   * 0, 1, 2, 8589934592 (1L << 33), 8589934593, 8589934594.
    +   *
        * @group normal_funcs
    -   * @since 1.6.0
    +   * @since 2.0.1
    --- End diff --
    
    Let's make this `2.1.0`. Could you also add a little bit of documentation on the offset? One line suffices.
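
    For readers of the thread: the bit layout described in the quoted doc comment can be sketched as below. `monotonicId` is an illustrative helper, not Spark's internal implementation.

    ```scala
    object MonotonicIdSketch {
      // Sketch of the ID scheme: partition ID in the upper 31 bits,
      // per-partition record number in the lower 33 bits.
      def monotonicId(partitionId: Int, recordNumber: Long): Long =
        (partitionId.toLong << 33) | recordNumber

      def main(args: Array[String]): Unit = {
        // Two partitions with 3 records each, as in the doc-comment example:
        val ids = for (p <- 0 to 1; r <- 0L until 3L) yield monotonicId(p, r)
        // ids: 0, 1, 2, 8589934592, 8589934593, 8589934594
        println(ids.mkString(", "))
      }
    }
    ```

    An added offset parameter would presumably just shift these values, which is why a one-line doc addition is enough.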

