Hi,
Is there a way to turn off logging for Parquet?
Our unit test output is filled with Parquet logging, which makes reading the
logs really difficult.

I tried using log4j.xml to turn it off, but no luck. We are using slf4j in
our project for all logging.
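
The timestamp format in the example below looks like java.util.logging
output rather than log4j, so I suspect the messages have to be silenced at
the JUL level. Here is a minimal sketch of the workaround I am about to try,
assuming Parquet logs through a JUL logger named "parquet" and that
parquet.Log installs its own console handler on it (the ParquetLogMuter
class name is just mine):

import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public final class ParquetLogMuter {

    // Hold a strong reference: java.util.logging keeps loggers weakly
    // referenced, so a logger configured only in passing can be garbage
    // collected and its level reset.
    private static final Logger PARQUET_JUL_LOGGER = Logger.getLogger("parquet");

    /** Call once before any Parquet code runs, e.g. from a @BeforeClass method. */
    public static void muteParquetLogging() throws ClassNotFoundException {
        // Assumption: parquet.Log's static initializer attaches a console
        // handler to the "parquet" logger. Load the class first so that
        // handler exists and can be removed below.
        Class.forName("parquet.Log");

        for (Handler handler : PARQUET_JUL_LOGGER.getHandlers()) {
            PARQUET_JUL_LOGGER.removeHandler(handler);
        }
        PARQUET_JUL_LOGGER.setLevel(Level.OFF);
        PARQUET_JUL_LOGGER.setUseParentHandlers(false);
    }

    private ParquetLogMuter() {
    }
}

Does that look like the right approach, or is there a supported
configuration knob for this?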


Example output:
NT64 {
   r:0
   d:0
   data: DictionaryValuesWriter{
   data: plain: PLAIN CapacityByteArrayOutputStream 1 slabs, 65,536 bytes
   data: dict:8
   data: values:4
   data:}

   pages: ColumnChunkPageWriter CapacityByteArrayOutputStream 1 slabs,
4,668,442 bytes
   total: 8/4,733,990
 }
}

Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 69B for [timestamp] INT64: 1 values, 8B raw, 28B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 61B for [rid] BINARY: 1 values, 8B raw, 28B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 155B for [qi] BINARY: 1 values, 40B raw, 58B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 48B for [hasError] BOOLEAN: 1 values, 1B raw, 21B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 48B for [isNullAndLow] BOOLEAN: 1 values, 1B raw, 21B comp, 1
pages, encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 92,485B for [outputTables] BINARY: 1 values, 44,628B raw, 3,205B
comp, 1 pages, encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 31,380B for [queryBlobs] BINARY: 1 values, 14,197B raw, 2,964B
comp, 1 pages, encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 64B for [recallSize] INT64: 1 values, 8B raw, 23B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 64B for [totalTQ] INT64: 1 values, 8B raw, 23B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 65B for [execTime] INT64: 1 values, 8B raw, 24B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 58B for [profile] BINARY: 1 values, 7B raw, 27B comp, 1 pages,
encodings: [BIT_PACKED, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 79B for [SlbRecord, array, timestamp] INT64: 1 values, 20B raw, 38B
comp, 1 pages, encodings: [RLE, PLAIN]
Aug 25, 2014 10:29:44 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore:
written 66B for [SlbRecord, array, host] BINARY: 1 values, 19B raw, 35B
comp, 1 pages, encodings: [RLE, PLAIN]

Thanks,
Mohnish
