[ https://issues.apache.org/jira/browse/BAHIR-84?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831099#comment-15831099 ]

ASF subversion and git services commented on BAHIR-84:
------------------------------------------------------

Commit bce9cd15989e648be03468c6a5b848ee8193df4d in bahir's branch 
refs/heads/master from [~ckadner]
[ https://git-wip-us.apache.org/repos/asf?p=bahir.git;h=bce9cd1 ]

[BAHIR-84] Suppress Parquet-MR build log messages

Since Parquet-MR (1.7.0) uses Java Util Logging (java.util.logging, not
Log4j), we need to add a logging.properties file and register it in the
configuration of both the maven-surefire-plugin and the
scalatest-maven-plugin. Since Parquet-MR logs everything to System.out
regardless of the log file handler settings, we raise the threshold to
ERROR.

https://github.com/Parquet/parquet-mr/issues/390
https://github.com/Parquet/parquet-mr/issues/425

Closes #33
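
The change described above can be sketched as follows. The file path, property values, and plugin wiring here are assumptions for illustration, not taken from the commit; note that java.util.logging has no ERROR level, so SEVERE is its closest equivalent.

{code:title=src/test/resources/logging.properties (hypothetical sketch)}
# Route everything through a single ConsoleHandler raised to SEVERE,
# so Parquet-MR's INFO chatter is dropped from the build output.
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = SEVERE

# Raise the threshold for the noisy Parquet packages specifically.
org.apache.parquet.level = SEVERE
{code}

The file then has to be handed to the JVM that runs the tests, e.g. via the standard {{java.util.logging.config.file}} system property in the surefire configuration (and analogously for the scalatest-maven-plugin):

{code:title=pom.xml excerpt (hypothetical sketch)}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- point java.util.logging at the properties file above -->
      <java.util.logging.config.file>src/test/resources/logging.properties</java.util.logging.config.file>
    </systemPropertyVariables>
  </configuration>
</plugin>
{code}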


> Build log flooded with test log messages
> ----------------------------------------
>
>                 Key: BAHIR-84
>                 URL: https://issues.apache.org/jira/browse/BAHIR-84
>             Project: Bahir
>          Issue Type: Test
>          Components: Spark Structured Streaming Connectors
>         Environment: Mac OS X
>            Reporter: Christian Kadner
>            Assignee: Christian Kadner
>            Priority: Minor
>
> The Maven build log/console gets flooded with INFO messages from 
> {{org.apache.parquet.hadoop.*}} during the {{test}} phase of the 
> {{sql-streaming-mqtt}} module. This makes it hard to find actual problems 
> and test results, especially when the log messages interleave with build 
> and test status messages, throwing off line breaks etc.
> *Excerpt of build log:*
> {code:title=$ mvn clean package}
> ...
> Discovery completed in 293 milliseconds.
> Run starting. Expected test count is: 7
> BasicMQTTSourceSuite:
> - basic usage
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet dictionary page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Dictionary is on
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet dictionary page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> ...
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet dictionary page size to 1048576
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Dictionary is on
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Validation is o- Send and receive 100 messages.
> - no server up
> - params not provided.
> - Recovering offset from the last processed offset. !!! IGNORED !!!
> StressTestMQTTSource:
> - Send and receive messages of size 250MB. !!! IGNORED !!!
> LocalMessageStoreSuite:
> - serialize and deserialize
> - Store and retreive
> - Max offset stored
> MQTTStreamSourceSuite:
> Run completed in 20 seconds, 622 milliseconds.
> Total number of tests run: 7
> Suites: completed 5, aborted 0
> Tests: succeeded 7, failed 0, canceled 0, ignored 2, pending 0
> All tests passed.
> ff
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Writer version is: PARQUET_1_0
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem 
> columnStore to file. allocated memory: 48
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem 
> columnStore to file. allocated memory: 48
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 109B for [value] 
> BINARY: 1 values, 34B raw, 36B comp, 1 pages, encodings: [RLE, PLAIN, 
> BIT_PACKED]
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 59B for 
> [timestamp] INT96: 1 values, 8B raw, 10B comp, 1 pages, encodings: [RLE, 
> PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries,...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
