Github user GregBowyer commented on a diff in the pull request:
https://github.com/apache/spark/pull/14517#discussion_r75011758
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None,
compression=None):
This will override ``orc.compress``. If None
is set, it uses the
default value, ``snappy``.
- >>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --
Ah sorry, I was going to look into making the test do the Lucene-style
random-testing thing of randomly switching between the data formats provided
for `df`. I was going to change the runner to use `random.choice` to pick
between orc and parquet (and, you know, one day arrow, hdf5, whatever).
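Something along these lines is what I had in mind -- a minimal sketch, where the `FORMATS` list and the `load_df` helper are illustrative assumptions, not code from this PR:

```python
import random

# Lucene-style randomized testing: pick the storage format at random so
# the suite exercises different readers across runs. FORMATS and load_df
# are hypothetical names for this sketch, not part of the PR.
FORMATS = ["orc", "parquet"]  # one day: arrow, hdf5, whatever

def pick_format():
    return random.choice(FORMATS)

def load_df(spark, base_path):
    # DataFrameReader exposes .orc(path) and .parquet(path), so we can
    # dispatch on the chosen format name. The test data at base_path
    # would need to exist in each format for this to work.
    return getattr(spark.read, pick_format())(base_path)
```

The doctest above would then read the same partitioned test data regardless of which format the run happened to draw.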
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]