HyukjinKwon commented on a change in pull request #24894: [SPARK-28058][DOC] Add a note to DROPMALFORMED mode of CSV for column pruning
URL: https://github.com/apache/spark/pull/24894#discussion_r294572685
 
 

 ##########
 File path: python/pyspark/sql/readwriter.py
 ##########
 @@ -441,7 +441,12 @@ def csv(self, path, schema=None, sep=None, encoding=None, quote=None, escape=Non
                   When it meets a record having fewer tokens than the length of the schema, \
                   sets ``null`` to extra fields. When the record has more tokens than the \
                   length of the schema, it drops extra tokens.
 -                * ``DROPMALFORMED`` : ignores the whole corrupted records.
 +                * ``DROPMALFORMED`` : ignores the whole corrupted records. Note that when CSV \
 +                  parser column pruning (``spark.sql.csv.parser.columnPruning.enabled``) is \
 
 Review comment:
   Actually, @viirya, I think this note applies to all the other modes too. Would it be better to leave a short note under the `mode` parameter? I think we can describe it, for instance, as follows:
   
   Note that Spark tries to parse only the required columns in CSV. Therefore, malformed records can differ depending on the required set of fields. This behavior can be controlled by ``spark.sql.csv.parser.columnPruning.enabled`` (enabled by default).
   
   Feel free to reword this .. 
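   To make the behavior concrete, here is a minimal, hypothetical PySpark sketch (not part of this patch; the temp file, data, and schema are invented for illustration) showing how column pruning changes which records ``DROPMALFORMED`` drops:

```python
# Hypothetical sketch: how column pruning interacts with DROPMALFORMED.
# The file path, data, and schema below are invented for illustration.
import os
import tempfile

from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

# Row 2 has a non-integer `age`, so it is malformed w.r.t. the schema below.
path = os.path.join(tempfile.mkdtemp(), "demo.csv")
with open(path, "w") as f:
    f.write("a,1\nb,not_an_int\n")

schema = StructType([
    StructField("name", StringType()),
    StructField("age", IntegerType()),
])

df = spark.read.csv(path, schema=schema, mode="DROPMALFORMED")

# With spark.sql.csv.parser.columnPruning.enabled=true (the default), only
# the required columns are parsed. Selecting just `name` never parses `age`,
# so nothing is detected as malformed and both rows come back.
df.select("name").show()

# Selecting all columns forces `age` to be parsed; row 2 is then detected
# as malformed and dropped, so only one row comes back.
df.show()
```

   A short note under `mode` pointing at this config would help users understand why the two calls above return different numbers of rows.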
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
