itholic opened a new pull request, #40507:

   ### What changes were proposed in this pull request?
   This PR proposes adding `_distributed_sequence_id` to support the pandas API 
on Spark in Spark Connect. `_distributed_sequence_id` creates the 
distributed-sequence column, which is used to generate the default index 
for the pandas API on Spark.
   >>> import pyspark.sql.connect.functions as CF
   >>> data = [("Alice", 1), ("Bob", 2), ("Charlie", 3)]
   >>> sdf = spark.createDataFrame(data, ["name", "age"])
   >>> sdf.show()
   +-------+---+
   |   name|age|
   +-------+---+
   |  Alice|  1|
   |    Bob|  2|
   |Charlie|  3|
   +-------+---+

   With the distributed-sequence column attached:

   +--------------+-------+---+
   |sequence-index|   name|age|
   +--------------+-------+---+
   |             0|  Alice|  1|
   |             1|    Bob|  2|
   |             2|Charlie|  3|
   +--------------+-------+---+
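   As a rough sketch of the idea (not the PR's actual implementation, and 
using hypothetical names), a distributed-sequence index can be computed by 
counting the rows in each partition, turning those counts into cumulative 
offsets, and then adding each row's local position to its partition's offset:

   ```python
   def distributed_sequence_ids(partitions):
       """Assign a global 0, 1, 2, ... index across ordered partitions.

       Illustrative sketch of the distributed-sequence idea: count rows
       per partition, derive each partition's starting offset as the
       cumulative count of the preceding partitions, then add each row's
       local position. Names here are hypothetical, not Spark's API.
       """
       offsets, total = [], 0
       for part in partitions:
           offsets.append(total)
           total += len(part)
       return [[offset + i for i in range(len(part))]
               for offset, part in zip(offsets, partitions)]


   # Two uneven "partitions" of rows, echoing the example above.
   parts = [[("Alice", 1), ("Bob", 2)], [("Charlie", 3)]]
   print(distributed_sequence_ids(parts))  # [[0, 1], [2]]
   ```

   Only the per-partition counts need to be shared between partitions, which 
is why this style of index can be generated without a global sort.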
   ### Why are the changes needed?
   Spark Connect cannot reuse the existing logic for the pandas API on Spark, 
because that logic relies on Py4J to call functions in the JVM.
   ### Does this PR introduce _any_ user-facing change?
   No, this is an internal function.
   ### How was this patch tested?
   The patch was tested by adding unit tests and manually verifying the results.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
