GitHub user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16835#discussion_r99847415
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -1026,25 +1026,27 @@ object StaticSQLConf {
       // value of this property). We will split the JSON string of a schema to its length exceeds the
       // threshold. Note that, this conf is only read in HiveExternalCatalog which is cross-session,
       // that's why this conf has to be a static SQL conf.
    -  val SCHEMA_STRING_LENGTH_THRESHOLD = buildConf("spark.sql.sources.schemaStringLengthThreshold")
    -    .doc("The maximum length allowed in a single cell when " +
    -      "storing additional schema information in Hive's metastore.")
    -    .internal()
    -    .intConf
    -    .createWithDefault(4000)
    +  val SCHEMA_STRING_LENGTH_THRESHOLD =
    +  buildStaticConf("spark.sql.sources.schemaStringLengthThreshold")
    --- End diff --
    
    Nit: indentation.
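    
    For reference, a minimal sketch of the layout this nit suggests, assuming the
    rest of the builder chain from the removed lines above is carried over unchanged:
    
        val SCHEMA_STRING_LENGTH_THRESHOLD =
          buildStaticConf("spark.sql.sources.schemaStringLengthThreshold")
            .doc("The maximum length allowed in a single cell when " +
              "storing additional schema information in Hive's metastore.")
            .internal()
            .intConf
            .createWithDefault(4000)
    
    i.e. the continuation line is indented one level under the `val`, and the
    chained calls one level further.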

