[ https://issues.apache.org/jira/browse/SPARK-37348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Schwab updated SPARK-37348:
-------------------------------
    Description: 
Because Spark inherits the JVM's remainder semantics, in PySpark F.lit(-1) % F.lit(2) returns -1. However, the non-negative modulus is often desired instead of the remainder.

 

There is a [PMOD() function in Spark SQL|https://spark.apache.org/docs/latest/api/sql/#pmod], but [not in PySpark|https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html#functions]. So at the moment, the two options for getting the modulus are to use F.expr("pmod(A, B)") or to create a helper function such as:
 
{code:python}
def pmod(dividend, divisor):
    # Condition on the remainder, not the dividend, so that cases like
    # pmod(-4, 2) correctly yield 0 rather than 2.
    remainder = dividend % divisor
    return F.when(remainder < 0, remainder + divisor).otherwise(remainder)
{code}
 
Neither is ideal - pmod should be native to PySpark, as it is in Spark SQL.
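For context, the intended pmod semantics can be sketched in plain Python (the function names here are illustrative, not part of any Spark API; math.fmod is used because, like the JVM's %, it takes the sign of the dividend):

```python
import math

def jvm_remainder(a, b):
    # The JVM's % truncates toward zero, so the result takes the
    # dividend's sign: -1 % 2 == -1, which is what Spark returns today.
    return int(math.fmod(a, b))

def pmod(a, b):
    # Normalize the remainder into [0, b) for positive b, which is the
    # behavior Spark SQL's pmod() provides.
    return (jvm_remainder(a, b) + b) % b

print(jvm_remainder(-1, 2))  # -1
print(pmod(-1, 2))           # 1
print(pmod(-4, 2))           # 0
```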


> PySpark pmod function
> ---------------------
>
>                 Key: SPARK-37348
>                 URL: https://issues.apache.org/jira/browse/SPARK-37348
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 3.2.0
>            Reporter: Tim Schwab
>            Priority: Minor
>              Labels: newbie
>


--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
