[
https://issues.apache.org/jira/browse/PARQUET-1066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
hadu updated PARQUET-1066:
--------------------------
Description: (was: I ran a very simple Hive SQL task from the Hive console
client. The source table tmp_hadu_crowd_dev has two partitions with a total
size of over 220 GB.
I have tried many times, but every run ends with a "GC overhead limit
exceeded" error like this:
hive> dfs -du -h /user/hadu/data/tmp_hadu_crowd_dev;
186.5 G /user/hadu/data/tmp_hadu_crowd_dev/day=1970-01-02
39.0 G /user/hadu/data/tmp_hadu_crowd_dev/day=2017-07-20
hive> select
> user_id,
> concat_ws(',',collect_set(crowd_id))
> from
> haitao_open.tmp_hadu_crowd_dev
> where
> (day='2017-07-20' or day='1970-01-02')
> group by user_id;
Query ID = da_20170724095641_d52f8717-d1c0-4445-863a-7669090e9781
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 300
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
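For reference, the three knobs Hive names above can be set per session before running the query. This is only an illustrative sketch; the values are assumptions for a large GROUP BY over ~220 GB, not recommendations from this report:

```sql
-- Illustrative only: tune reducer parallelism before the GROUP BY query.
-- Target roughly 256 MB of input per reducer (value is an assumption):
set hive.exec.reducers.bytes.per.reducer=268435456;
-- Cap the total number of reducers:
set hive.exec.reducers.max=500;
-- Or pin an exact reducer count, overriding Hive's estimate:
set mapreduce.job.reduces=300;
```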
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:56:55 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
(The block of parquet.hadoop.codec.CodecConfig / parquet.hadoop.ParquetOutputFormat /
parquet.hadoop.InternalParquetRecordWriter INFO messages above repeats continuously,
with identical settings, from 9:56:55 through 9:57:02; the remaining output is truncated.)
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:02 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:03 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:14 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:15 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:16 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:17 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:18 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:19 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:20 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:21 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.codec.CodecConfig: Compression set
to false
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.codec.CodecConfig: Compression:
UNCOMPRESSED
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet block
size to 134217728
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Dictionary is
on
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Validation is
off
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.ParquetOutputFormat: Writer
version is: PARQUET_1_0
Jul 24, 2017 9:57:22 AM INFO: parquet.hadoop.InternalParquetRecordWriter:
Flushing mem columnStore to file. allocated memory: 0
[... the same block of CodecConfig / ParquetOutputFormat / InternalParquetRecordWriter INFO lines repeats verbatim, once per writer initialization, until the error below ...]
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:595)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:855)
at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:877)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1097)
at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:310)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:157)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:242)
at org.apache.hadoop.hive.shims.Hadoop23Shims$1.listStatus(Hadoop23Shims.java:146)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:309)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:470)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
FAILED: Execution Error, return code -101 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask. GC overhead limit exceeded
and sometimes it fails like this:
Jul 24, 2017 10:40:05 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet page
size to 1048576
Jul 24, 2017 10:40:05 AM INFO: parquet.hadoop.ParquetOutputFormat: Parquet
dictionary page size to 1048576
java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:478)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:242)
at org.apache.hadoop.hive.shims.Hadoop23Shims$1.listStatus(Hadoop23Shims.java:146)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:309)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:470)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor36.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:461)
... 52 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Hashtable$Entry.clone(Hashtable.java:1052)
at java.util.Hashtable.clone(Hashtable.java:613)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:666)
at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:70)
at sun.reflect.GeneratedConstructorAccessor36.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:461)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:365)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:242)
at org.apache.hadoop.hive.shims.Hadoop23Shims$1.listStatus(Hadoop23Shims.java:146)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:309)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:470)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
Job Submission failed with exception 'java.io.IOException(Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider)'
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
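Both stack traces end inside JobSubmitter.writeSplits, which runs in the local Hive CLI JVM before any map task starts, so the heap that is being exhausted is the client's, not the cluster's. A common first step (a sketch only; the 4g figure is an illustrative assumption, not a value tuned for this cluster) is to enlarge the client JVM heap before launching hive:

```shell
# Sketch of a common client-side workaround: HADOOP_CLIENT_OPTS is passed
# to the JVM that the hive/hadoop launcher scripts start, which is where
# split computation (and this OutOfMemoryError) happens.
# -Xmx4g below is an assumed, illustrative heap size.
export HADOOP_CLIENT_OPTS="-Xmx4g ${HADOOP_CLIENT_OPTS:-}"
echo "client opts: $HADOOP_CLIENT_OPTS"
# then start the session as usual, e.g.:  hive
```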
Can someone help to solve this problem?)
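The inner "Caused by" frames show a fresh Configuration being cloned for each DFSClient constructed during split listing, so a very large number of files under the two partitions could by itself exhaust the client heap. A hedged diagnostic sketch (assuming a gateway host with the hdfs client on PATH) is to count the files before tuning anything:

```shell
# Hedged diagnostic sketch: report the file count per partition.
# Output columns of `hdfs dfs -count`: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
TABLE_DIR=/user/hadu/data/tmp_hadu_crowd_dev
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -count "$TABLE_DIR/day=1970-01-02" "$TABLE_DIR/day=2017-07-20" || true
  result="counted"
else
  result="hdfs client not found; run this on the Hive gateway host"
fi
echo "$result"
```

If FILE_COUNT is in the tens of thousands, compacting the partitions into fewer, larger Parquet files should shrink the memory needed at job submission as well.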
> GC overhead limit exceeded while run a hive task with parquet format hdfs file
> ------------------------------------------------------------------------------
>
> Key: PARQUET-1066
> URL: https://issues.apache.org/jira/browse/PARQUET-1066
> Project: Parquet
> Issue Type: Bug
> Reporter: hadu
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)