Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18999#discussion_r134131441

    --- Diff: python/pyspark/sql/dataframe.py ---
    @@ -659,19 +659,77 @@ def distinct(self):
             return DataFrame(self._jdf.distinct(), self.sql_ctx)

         @since(1.3)
    -    def sample(self, withReplacement, fraction, seed=None):
    +    def sample(self, withReplacement=None, fraction=None, seed=None):
             """Returns a sampled subset of this :class:`DataFrame`.

    +        :param withReplacement: Sample with replacement or not (default False).
    +        :param fraction: Fraction of rows to generate, range [0.0, 1.0].
    +        :param seed: Seed for sampling (default a random seed).
    +
             .. note:: This is not guaranteed to provide exactly the fraction specified of the total
                 count of the given :class:`DataFrame`.

    -        >>> df.sample(False, 0.5, 42).count()
    -        2
    -        """
    -        assert fraction >= 0.0, "Negative fraction value: %s" % fraction
    --- End diff --

    Hm.. wouldn't it be better to avoid duplicating the validation logic? It looks like I would have to reimplement this on the Python side:

    https://github.com/apache/spark/blob/5ad1796b9fd6bce31bbc1cdc2f607115d2dd0e7d/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala#L714-L722

    I have been thinking of avoiding that as long as the JVM's error message makes sense to Python users (but not where it exposes non-Pythonic details, for example, Java types such as `java.lang.Long` in the error message), although I understand it is better to throw an exception up front, before going to the JVM.
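For context, the Python-side pre-validation being discussed could look something like the sketch below. `validate_sample_args` is a hypothetical helper, not the actual PySpark code; the checks loosely mirror the argument requirements enforced on the Scala side, so that invalid inputs fail fast with Pythonic error messages instead of surfacing a JVM exception:

```python
def validate_sample_args(withReplacement=None, fraction=None, seed=None):
    """Hypothetical helper: raise TypeError/ValueError for invalid
    sample() arguments before any call crosses into the JVM."""
    if withReplacement is not None and not isinstance(withReplacement, bool):
        raise TypeError(
            "withReplacement should be a bool, got %s"
            % type(withReplacement).__name__)
    if fraction is not None:
        # bool is a subclass of int, so exclude it explicitly.
        if not isinstance(fraction, (int, float)) or isinstance(fraction, bool):
            raise TypeError(
                "fraction should be a float, got %s" % type(fraction).__name__)
        if fraction < 0.0:
            raise ValueError("Negative fraction value: %s" % fraction)
    if seed is not None and not isinstance(seed, int):
        raise TypeError("seed should be an int, got %s" % type(seed).__name__)
```

The trade-off raised in the comment still applies: this duplicates validation that the JVM already performs, in exchange for error messages that never mention Java types.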