Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r120021921
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -191,10 +191,13 @@ def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
         :param mode: allows a mode for dealing with corrupt records during parsing. If None is
                      set, it uses the default value, ``PERMISSIVE``.
-                * ``PERMISSIVE`` : sets other fields to ``null`` when it meets a corrupted \
-                  record and puts the malformed string into a new field configured by \
-                  ``columnNameOfCorruptRecord``. When a schema is set by user, it sets \
-                  ``null`` for extra fields.
+                * ``PERMISSIVE`` : sets other fields to ``null`` when it meets a corrupted \
+                  record, and puts the malformed string into a field configured by \
+                  ``columnNameOfCorruptRecord``. To keep corrupt records, a user can set \
+                  a string type field named ``columnNameOfCorruptRecord`` in a user-defined \
+                  schema. If the schema does not have the field, it drops corrupt records during \
+                  parsing. When inferring a schema, it implicitly adds a \
+                  ``columnNameOfCorruptRecord`` field to the output schema.
--- End diff --
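
The new docstring text describes several behaviours at once, so here is a minimal, hedged sketch of the documented ``PERMISSIVE`` path: keeping corrupt records by declaring the string-typed corrupt-record field in a user-defined schema. The local SparkSession, file path, and sample data below are illustrative assumptions, not part of the diff.

```python
# A minimal sketch, assuming a local SparkSession; the path and data are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.master("local[1]").appName("corrupt-demo").getOrCreate()

# One valid record and one malformed line.
path = "/tmp/corrupt_demo.json"
with open(path, "w") as f:
    f.write('{"a": 1}\n{"a": broken\n')

# To keep corrupt records, include a string-typed field whose name matches
# ``columnNameOfCorruptRecord`` (``_corrupt_record`` by default) in the schema.
schema = StructType([
    StructField("a", LongType(), True),
    StructField("_corrupt_record", StringType(), True),
])

df = spark.read.schema(schema).json(path, mode="PERMISSIVE")
df.show(truncate=False)
# Expected shape: the valid row has _corrupt_record == null; the malformed row
# has a == null and the raw line preserved in _corrupt_record.
```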
@maropu For JSON, we implicitly add the ``columnNameOfCorruptRecord`` field
during schema inference. What is the reason we are not doing the same thing
for CSV?
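
For reference, a short sketch of the JSON inference behaviour in question, reusing ``spark`` and ``path`` from the sketch above; the default column name ``_corrupt_record`` and the printed column order are assumptions about the default configuration.

```python
# With no user-supplied schema, JSON schema inference that encounters a
# malformed record implicitly adds the ``columnNameOfCorruptRecord`` column
# (``_corrupt_record`` by default) to the inferred schema.
inferred = spark.read.json(path)
inferred.printSchema()
# root
#  |-- _corrupt_record: string (nullable = true)
#  |-- a: long (nullable = true)
```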