HyukjinKwon commented on a change in pull request #32204:
URL: https://github.com/apache/spark/pull/32204#discussion_r635791028
##########
File path: python/pyspark/sql/readwriter.py
##########
@@ -110,18 +110,6 @@ def option(self, key, value):
"""Adds an input option for the underlying data source.
You can set the following option(s) for reading files:
- * ``timeZone``: sets the string that indicates a time zone ID to
be used to parse
- timestamps in the JSON/CSV datasources or partition values.
The following
- formats of `timeZone` are supported:
-
- * Region-based zone ID: It should have the form 'area/city',
such as \
- 'America/Los_Angeles'.
- * Zone offset: It should be in the format '(+|-)HH:mm', for
example '-08:00' or \
- '+01:00'. Also 'UTC' and 'Z' are supported as aliases of
'+00:00'.
-
- Other short names like 'CST' are not recommended to use
because they can be
- ambiguous. If it isn't set, the current value of the SQL config
- ``spark.sql.session.timeZone`` is used by default.
* ``pathGlobFilter``: an optional glob pattern to only include
files with paths matching
Review comment:
I think you can remove these too and link the Generic options page.
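For context, the two `timeZone` ID formats described in the removed docstring (region-based IDs and fixed offsets) can be illustrated with Python's standard `zoneinfo` and `datetime` modules. This is a hedged sketch of the ID formats only, not Spark code; the timestamp value is made up for illustration:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Region-based zone ID: 'area/city' form, e.g. 'America/Los_Angeles'.
la = ZoneInfo("America/Los_Angeles")

# Zone offset: '(+|-)HH:mm' form, e.g. '-08:00'; 'UTC' and 'Z' alias '+00:00'.
fixed = timezone(timedelta(hours=-8))

# A region-based ID is DST-aware: in May, Los Angeles is at -07:00 (PDT),
# while the fixed offset stays at -08:00 year-round.
ts = datetime(2021, 5, 20, 12, 0, tzinfo=la)
print(ts.utcoffset())                              # -7:00 in May (PDT)
print(ts.astimezone(fixed).utcoffset())            # always -8:00
```

This DST awareness is why the docstring steers users toward region IDs or explicit offsets rather than ambiguous short names like 'CST'.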
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]