GitHub user chutium commented on a diff in the pull request:
https://github.com/apache/spark/pull/195#discussion_r15936076
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetRelation.scala ---
@@ -72,16 +71,56 @@ case class ParquetRelation(val tableName: String, val path: String) extends Base
/** Output **/
override val output = attributes
+ /** Name (dummy value) */
+ // TODO: rethink whether ParquetRelation should inherit from BaseRelation
+ // (currently required to re-use HiveStrategies but should be removed)
+ override def tableName = "parquet"
+
// Parquet files have no concepts of keys, therefore no Partitioner
// Note: we could allow Block level access; needs to be thought through
override def isPartitioned = false
}
-object ParquetRelation {
+private[sql] object ParquetRelation {
+ // change this to enable/disable Parquet logging
+ var DEBUG: Boolean = false
+
+ def setParquetLogLevel() {
+ // Note: Parquet does not use forwarding to parent loggers which
+ // is required for the JUL-SLF4J bridge to work. Also there is
+ // a default logger that appends to Console which needs to be
+ // reset.
+ import org.slf4j.bridge.SLF4JBridgeHandler
+ import java.util.logging.Logger
+ import java.util.logging.LogManager
+
+ val loggerNames = Seq(
+ "parquet.hadoop.ColumnChunkPageWriteStore",
+ "parquet.hadoop.InternalParquetRecordWriter",
+ "parquet.hadoop.ParquetRecordReader",
+ "parquet.hadoop.ParquetInputFormat",
+ "parquet.hadoop.ParquetOutputFormat",
+ "parquet.hadoop.ParquetFileReader",
+ "parquet.hadoop.InternalParquetRecordReader",
+ "parquet.hadoop.codec.CodecConfig")
+ LogManager.getLogManager.reset()
+ SLF4JBridgeHandler.install()
+ for (name <- loggerNames) {
+ val logger = Logger.getLogger(name)
+ logger.setParent(Logger.getGlobal)
+ logger.setUseParentHandlers(true)
+ }
+ }
// The element type for the RDDs that this relation maps to.
type RowType = org.apache.spark.sql.catalyst.expressions.GenericMutableRow
+ // The compression type
+ type CompressionType = parquet.hadoop.metadata.CompressionCodecName
+
+ // The default compression
+ val defaultCompression = CompressionCodecName.GZIP
--- End diff --
Hi @AndreSchumacher and @marmbrus, it seems we can use a Hadoop config
property to change this ```defaultCompression```. In the ```createEmpty```
method there is a check:
```
if (conf.get(ParquetOutputFormat.COMPRESSION) == null) {
  conf.set(ParquetOutputFormat.COMPRESSION, ParquetRelation.defaultCompression.name())
}
```
But since it is a Hadoop config property, not a Spark config property, we
cannot simply set it in conf/spark-defaults.conf.
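For reference, here is a minimal sketch of a workaround (assuming a local
SparkContext named ```sc```; "parquet.compression" is the value of
```ParquetOutputFormat.COMPRESSION```): if the property is set on the
SparkContext's Hadoop configuration before writing, the null check above
keeps our codec instead of falling back to GZIP.
```
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Sketch of a workaround: set the Hadoop property before writing Parquet.
object ParquetCompressionWorkaround {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("parquet-compression").setMaster("local"))
    // "parquet.compression" == ParquetOutputFormat.COMPRESSION; the value
    // must name a CompressionCodecName: UNCOMPRESSED, SNAPPY, GZIP or LZO.
    sc.hadoopConfiguration.set("parquet.compression", "SNAPPY")

    val sqlContext = new SQLContext(sc)
    // ... create a SchemaRDD and call saveAsParquetFile as usual;
    // createEmpty will now see the property as non-null and keep SNAPPY ...
    sc.stop()
  }
}
```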