[ https://issues.apache.org/jira/browse/SPARK-24946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16561784#comment-16561784 ]
Paul Westenthanner commented on SPARK-24946:
--------------------------------------------

Yes, I agree that it's syntactic sugar rather than a necessity; however, I'd be happy to implement it. I also agree that we should just check whether the argument is iterable. Since a plain list is passed to Py4J anyway, we might just do something like {{all(is_valid(v) for v in input_iterable)}}, where {{is_valid}} checks that each value is a float between 0 and 1. If you agree that this could be useful, I'll be happy to create a pull request.

> PySpark - Allow np.Arrays and pd.Series in df.approxQuantile
> ------------------------------------------------------------
>
>                 Key: SPARK-24946
>                 URL: https://issues.apache.org/jira/browse/SPARK-24946
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 2.3.1
>            Reporter: Paul Westenthanner
>            Priority: Minor
>              Labels: DataFrame, beginner, pyspark
>
> As a Python user it is convenient to pass a numpy array or pandas series to
> {{approxQuantile(col, probabilities, relativeError)}} for the
> probabilities parameter.
>
> Especially for creating cumulative plots (say in 1% steps) it is handy to use
> {{approxQuantile(col, np.arange(0, 1.0, 0.01), relativeError)}}.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)