See <https://builds.apache.org/job/Tajo-master-CODEGEN-build/425/changes>
Changes:
[hyunsik] TAJO-1721: Separate routine for CREATE TABLE from DDLExecutor.
------------------------------------------
[...truncated 1549 lines...]
Running org.apache.tajo.storage.avro.TestAvroUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec - in
org.apache.tajo.storage.avro.TestAvroUtil
Running org.apache.tajo.storage.TestCompressionStorages
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.954 sec - in
org.apache.tajo.storage.TestCompressionStorages
Running org.apache.tajo.storage.index.TestSingleCSVFileBSTIndex
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.654 sec - in
org.apache.tajo.storage.index.TestSingleCSVFileBSTIndex
Running org.apache.tajo.storage.index.TestBSTIndex
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.495 sec -
in org.apache.tajo.storage.index.TestBSTIndex
Running org.apache.tajo.storage.TestSplitProcessor
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec - in
org.apache.tajo.storage.TestSplitProcessor
Running org.apache.tajo.storage.TestDelimitedTextFile
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.234 sec - in
org.apache.tajo.storage.TestDelimitedTextFile
Running org.apache.tajo.storage.TestFileTablespace
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.789 sec - in
org.apache.tajo.storage.TestFileTablespace
Running org.apache.tajo.storage.TestStorages
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 65,659
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 34B for
[myboolean] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [RLE,
BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [mybit]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 38B for [mychar]
BINARY: 1 values, 11B raw, 11B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myint2]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myint4]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [myint8]
INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myfloat4]
FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [myfloat8]
DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [mytext]
BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:10:46 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [myblob]
BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:10:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:10:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 3, 2015 8:10:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 200,029
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id]
INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE,
BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [file]
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [name]
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 10B raw, 1B comp}
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [age]
INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 200,029
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id]
INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE,
BIT_PACKED, PLAIN]
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [file]
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [name]
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 10B raw, 1B comp}
Aug 3, 2015 8:10:55 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [age]
INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Aug 3, 2015 8:10:55 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:10:55 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 3, 2015 8:10:55 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 65,690
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 34B for [col1]
BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 56B for [col2]
BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col3]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col4]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col5]
INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col6]
FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col7]
DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 56B for [col8]
BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 56B for [col9]
BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [col10]
BINARY: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [col12]
BINARY: 1 values, 19B raw, 19B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 3, 2015 8:11:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 48
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col1]
FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col2]
DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col3]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col4]
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Tests run: 98, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 27.133 sec <<< FAILURE! -
in org.apache.tajo.storage.TestStorages
testVariousTypes[2](org.apache.tajo.storage.TestStorages) Time elapsed: 0.055
sec <<< ERROR!
java.io.IOException: Could not read footer: java.lang.NoSuchMethodError:
java.lang.Integer.compare(II)I
at
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:248)
at
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:189)
at
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:115)
at org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:65)
at
org.apache.tajo.storage.parquet.TajoParquetReader.<init>(TajoParquetReader.java:54)
at
org.apache.tajo.storage.parquet.ParquetScanner.init(ParquetScanner.java:60)
at
org.apache.tajo.storage.TestStorages.testVariousTypes(TestStorages.java:383)
Caused by: java.lang.NoSuchMethodError: java.lang.Integer.compare(II)I
at org.apache.parquet.SemanticVersion.compareTo(SemanticVersion.java:99)
at
org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:74)
at
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
at
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
at
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
at
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
at
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:238)
at
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:234)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
testNullHandlingTypes[2](org.apache.tajo.storage.TestStorages) Time elapsed:
0.074 sec <<< ERROR!
java.io.IOException: Could not read footer: java.lang.NoSuchMethodError:
java.lang.Integer.compare(II)I
at
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:248)
at
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:189)
at
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:115)
at org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:65)
at
org.apache.tajo.storage.parquet.TajoParquetReader.<init>(TajoParquetReader.java:54)
at
org.apache.tajo.storage.parquet.ParquetScanner.init(ParquetScanner.java:60)
at
org.apache.tajo.storage.TestStorages.testNullHandlingTypes(TestStorages.java:472)
Caused by: java.lang.NoSuchMethodError: java.lang.Integer.compare(II)I
at org.apache.parquet.SemanticVersion.compareTo(SemanticVersion.java:99)
at
org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:74)
at
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
at
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
at
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
at
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
at
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:238)
at
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:234)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
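Both stack traces above fail on the same missing method: `java.lang.Integer.compare(II)I` was only added in Java 7, and the `Thread.java:662` frame suggests this build ran on a JDK 6 runtime, so Parquet classes compiled against Java 7 hit a `NoSuchMethodError`. As a sketch of what the runtime is missing (a hypothetical helper, not Tajo or Parquet code), the documented behavior of `Integer.compare` can be reproduced on Java 6 like this:

```java
// Demonstrates the method missing from the JDK 6 runtime and a
// backward-compatible equivalent that also works on Java 6.
public class CompareCompat {

    // Same contract as Integer.compare(int, int), which exists only in JDK >= 7:
    // returns a negative value, zero, or a positive value.
    static int compare(int x, int y) {
        return (x < y) ? -1 : ((x == y) ? 0 : 1);
    }

    public static void main(String[] args) {
        System.out.println(compare(1, 2));  // -1
        System.out.println(compare(2, 2));  // 0
        System.out.println(compare(3, 2));  // 1
    }
}
```

The proper fix is not a shim but running the build on a JDK that matches what the dependencies were compiled for (Java 7 or later).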
Running org.apache.tajo.storage.orc.TestORCScanner
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.013 sec <<<
FAILURE! - in org.apache.tajo.storage.orc.TestORCScanner
testReadTuple(org.apache.tajo.storage.orc.TestORCScanner) Time elapsed: 0.013
sec <<< ERROR!
java.lang.UnsupportedClassVersionError: com/facebook/presto/orc/OrcDataSource :
Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at
org.apache.tajo.storage.orc.TestORCScanner.setup(TestORCScanner.java:73)
testReadTuple(org.apache.tajo.storage.orc.TestORCScanner) Time elapsed: 0.013
sec <<< ERROR!
java.lang.NullPointerException: null
at
org.apache.tajo.storage.orc.TestORCScanner.end(TestORCScanner.java:102)
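The `UnsupportedClassVersionError: ... major.minor version 51.0` above is the same JDK mismatch in another form: class-file major version 51 is Java 7 bytecode, and a Java 6 JVM (which accepts at most major version 50) refuses to load it, so the presto-orc dependency needs Java 7+. A minimal sketch of how the version is encoded in a class-file header (the byte array below stands in for the first 8 bytes of a class such as `OrcDataSource.class`; it is illustrative, not read from the real jar):

```java
// A .class file starts with the 4-byte magic 0xCAFEBABE, then a 2-byte
// minor version and a 2-byte major version, all big-endian.
// Major 50 = Java 6, 51 = Java 7, 52 = Java 8.
public class ClassVersionCheck {

    static int majorVersion(byte[] header) {
        // Bytes 6-7 hold the big-endian major version.
        return ((header[6] & 0xFF) << 8) | (header[7] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] java7Header = {
            (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 51
        };
        System.out.println(majorVersion(java7Header));  // 51 -> compiled for Java 7
    }
}
```

On a real jar, the same check can be done with `javap -verbose` on the class in question and looking at the reported "major version" line.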
Running org.apache.tajo.storage.TestByteBufLineReader
Formatting using clusterid: testClusterID
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.236 sec - in
org.apache.tajo.storage.TestByteBufLineReader
Running org.apache.tajo.storage.raw.TestDirectRawFile
Formatting using clusterid: testClusterID
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.35 sec - in
org.apache.tajo.storage.raw.TestDirectRawFile
Running org.apache.tajo.storage.TestFileSystems
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.027 sec - in
org.apache.tajo.storage.TestFileSystems
Running org.apache.tajo.storage.json.TestJsonSerDe
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.023 sec - in
org.apache.tajo.storage.json.TestJsonSerDe
Running org.apache.tajo.storage.TestStorageUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.11 sec - in
org.apache.tajo.storage.TestStorageUtil
Running org.apache.tajo.storage.TestLineReader
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.373 sec - in
org.apache.tajo.storage.TestLineReader
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col5]
INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 3, 2015 8:11:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records.
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 3, 2015 8:11:31 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
12 ms. row count = 1
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 280,000
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id]
INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE,
BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 80,055B for [age]
INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, encodings: [RLE,
BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for
[score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings:
[RLE, BIT_PACKED, PLAIN]
Aug 3, 2015 8:11:32 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:32 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 3, 2015 8:11:32 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized
will read a total of 10000 records.
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next
block
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in
0 ms. row count = 10000
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore
to file. allocated memory: 66,794
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [col1]
BOOLEAN: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN]
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col2]
BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col3]
INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col4]
INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col5]
INT64: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col6]
FLOAT: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col7]
DOUBLE: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col8]
BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 49B for [col9]
BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col10]
BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO:
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [col12]
BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED,
PLAIN_DICTIONARY], dic { 1 entries, 13B raw, 1B comp}
Aug 3, 2015 8:11:32 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Aug 3, 2015 8:11:32 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
reading another 1 footers
Aug 3, 2015 8:11:32 AM INFO: org.apache.parquet.hadoop.ParquetFileReader:
Initiating action with parallelism: 5
Results :
Tests in error:
TestReadWrite.testAll:93 » IO Could not read footer:
java.lang.NoSuchMethodErr...
TestMergeScanner.testMultipleFiles:168 » IO Could not read footer:
java.lang.N...
TestStorages.testVariousTypes:383 » IO Could not read footer:
java.lang.NoSuch...
TestStorages.testNullHandlingTypes:472 » IO Could not read footer:
java.lang.N...
TestORCScanner.setup:73 » UnsupportedClassVersion
com/facebook/presto/orc/OrcD...
TestORCScanner.end:102 NullPointer
Tests run: 178, Failures: 0, Errors: 6, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Tajo Main ......................................... SUCCESS [ 2.247 s]
[INFO] Tajo Project POM .................................. SUCCESS [ 2.540 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [ 4.471 s]
[INFO] Tajo Common ....................................... SUCCESS [ 29.702 s]
[INFO] Tajo Algebra ...................................... SUCCESS [ 1.964 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [ 7.328 s]
[INFO] Tajo Plan ......................................... SUCCESS [ 8.121 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [ 0.592 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [ 44.794 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [ 2.216 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 11.964 s]
[INFO] Tajo Storage Common ............................... SUCCESS [ 13.719 s]
[INFO] Tajo HDFS Storage ................................. FAILURE [01:19 min]
[INFO] Tajo HBase Storage ................................ SKIPPED
[INFO] Tajo PullServer ................................... SKIPPED
[INFO] Tajo Client ....................................... SKIPPED
[INFO] Tajo CLI tools .................................... SKIPPED
[INFO] Tajo JDBC Driver .................................. SKIPPED
[INFO] ASM (thirdparty) .................................. SKIPPED
[INFO] Tajo RESTful Container ............................ SKIPPED
[INFO] Tajo Metrics ...................................... SKIPPED
[INFO] Tajo Core ......................................... SKIPPED
[INFO] Tajo RPC .......................................... SKIPPED
[INFO] Tajo Catalog Drivers Hive ......................... SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Storage ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:30 min
[INFO] Finished at: 2015-08-03T08:12:00+00:00
[INFO] Final Memory: 65M/709M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on
project tajo-storage-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to
<https://builds.apache.org/job/Tajo-master-CODEGEN-build/ws/tajo-storage/tajo-storage-hdfs/target/surefire-reports>
for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :tajo-storage-hdfs
Build step 'Execute shell' marked build as failure
Updating TAJO-1721