Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16204#discussion_r93535132
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -466,6 +466,19 @@ object SQLConf {
         .longConf
         .createWithDefault(4 * 1024 * 1024)
     
    +  val IGNORE_CORRUPT_FILES = SQLConfigBuilder("spark.sql.files.ignoreCorruptFiles")
    +    .doc("Whether to ignore corrupt files. If true, the Spark jobs will continue to run " +
    +      "when encountering corrupted or non-existing files, and the contents that have " +
    +      "been read will still be returned.")
    +    .booleanConf
    +    .createWithDefault(false)
    +
    +  val MAX_RECORDS_PER_FILE = SQLConfigBuilder("spark.sql.files.maxRecordsPerFile")
    --- End diff --
    
    the limit, realistically speaking, is so high that I doubt it'd matter unless this value is set to 1.
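    For context, both options in this hunk are plain runtime configs. A minimal usage sketch, assuming an existing `SparkSession` named `spark`, a DataFrame `df`, and a hypothetical output path (not taken from the PR):
    
    ```scala
    // Sketch only: `spark`, `df`, and the output path are assumed, not from the diff.
    
    // Skip corrupted or missing files instead of failing the job; contents
    // already read from a bad file are still returned.
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
    
    // Cap each written file at one million records; the writer rolls over to
    // a new file once the limit is reached.
    spark.conf.set("spark.sql.files.maxRecordsPerFile", "1000000")
    
    df.write.parquet("/tmp/output")
    ```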


