GitHub user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3820#discussion_r22360301
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTableSupport.scala ---
    @@ -84,7 +86,8 @@ private[parquet] class RowReadSupport extends ReadSupport[Row] with Logging {
         // TODO: Why it can be null?
         if (schema == null)  {
           log.debug("falling back to Parquet read schema")
    -      schema = ParquetTypesConverter.convertToAttributes(parquetSchema, false)
    +      schema = ParquetTypesConverter.convertToAttributes(
    +        parquetSchema, new SQLContext(new SparkContext))
    --- End diff ---
    
    I don't think it's safe to instantiate a SparkContext here, as that's a pretty expensive operation and it will throw an exception if there is more than one in a single JVM. We can try to refactor this in the future, but I'd just pass the two options here (using named parameters for the booleans), as in the sketch below.
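
    For illustration, a minimal sketch of what that might look like (the option
    names `isBinaryAsString` and `isInt96AsTimestamp` are assumptions for the
    example, not taken from this PR):

        // Hypothetical signature: accept the two boolean settings directly,
        // so the caller never has to construct a SQLContext or SparkContext.
        def convertToAttributes(
            parquetSchema: MessageType,
            isBinaryAsString: Boolean,
            isInt96AsTimestamp: Boolean): Seq[Attribute] = ???

        // Call site, with named parameters for the booleans:
        schema = ParquetTypesConverter.convertToAttributes(
          parquetSchema,
          isBinaryAsString = false,
          isInt96AsTimestamp = true)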

