[ https://issues.apache.org/jira/browse/BAHIR-84?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820507#comment-15820507 ]

ASF GitHub Bot commented on BAHIR-84:
-------------------------------------

GitHub user ckadner opened a pull request:

    https://github.com/apache/bahir/pull/33

    [BAHIR-84] suppress "...INFO: org.apache.parquet.hadoop..." log messages 
from build log

    [BAHIR-84: Build log flooded with test log 
messages](https://issues.apache.org/jira/browse/BAHIR-84)
    
    **Proposed Changes**
    - [Parquet-MR 
(1.7.0)](https://github.com/apache/parquet-mr/blob/parquet-1.7.0/parquet-common/src/main/java/org/apache/parquet/Log.java)
 uses Java Logging, not Log4j, so we ...
    - add a new properties file: 
`sql-streaming-mqtt/src/test/resources/logging.properties`
    - increase the logger threshold for `org.apache.parquet.hadoop.*` to 
`SEVERE` 
    - pass the new `logging.properties` file to the _maven-surefire-plugin_ and 
_scalatest-maven-plugin_ via the `java.util.logging.config.file` system property 
(see the sketch below)
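    
    For illustration only, a `java.util.logging` configuration along these lines 
raises the threshold for the Parquet loggers; the handler setup and exact file 
contents below are assumptions, not a copy of the file added by this PR:
    
    ```
    # sketch of sql-streaming-mqtt/src/test/resources/logging.properties (assumed content)
    # route java.util.logging output through the console handler
    handlers = java.util.logging.ConsoleHandler
    java.util.logging.ConsoleHandler.level = INFO
    # only let SEVERE (and worse) messages from the Parquet writers through
    org.apache.parquet.hadoop.level = SEVERE
    ```
    
    The system property then points both test plugins in the module's `pom.xml` 
at that file; a rough sketch of the wiring (plugin versions and unrelated 
configuration omitted, exact element usage assumed):
    
    ```
    <!-- sketch: point surefire and scalatest at the JUL config file (assumed wiring) -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <systemPropertyVariables>
          <java.util.logging.config.file>src/test/resources/logging.properties</java.util.logging.config.file>
        </systemPropertyVariables>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.scalatest</groupId>
      <artifactId>scalatest-maven-plugin</artifactId>
      <configuration>
        <systemProperties>
          <java.util.logging.config.file>src/test/resources/logging.properties</java.util.logging.config.file>
        </systemProperties>
      </configuration>
    </plugin>
    ```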
    
    **How was this change tested?**
    - ran `mvn clean package`
    - quick test with `mvn clean test -pl sql-streaming-mqtt -q`
    
    **Result**
    ```
    $ mvn clean test -pl sql-streaming-mqtt -q
    
    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Java HotSpot(TM) 64-Bit Server VM warning: ignoring option 
MaxPermSize=512m; support was removed in 8.0
    
    Results :
    
    Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
    
    Java HotSpot(TM) 64-Bit Server VM warning: ignoring option 
MaxPermSize=512m; support was removed in 8.0
    Discovery starting.
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in 
[jar:file:/Users/ckadner/.m2/repository/org/slf4j/slf4j-log4j12/1.7.16/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in 
[jar:file:/Users/ckadner/.m2/repository/org/apache/activemq/activemq-all/5.13.3/activemq-all-5.13.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
    Discovery completed in 323 milliseconds.
    Run starting. Expected test count is: 7
    BasicMQTTSourceSuite:
    - basic usage
    - Send and receive 100 messages.
    - no server up
    - params not provided.
    - Recovering offset from the last processed offset. !!! IGNORED !!!
    StressTestMQTTSource:
    - Send and receive messages of size 250MB. !!! IGNORED !!!
    LocalMessageStoreSuite:
    - serialize and deserialize
    - Store and retreive
    - Max offset stored
    MQTTStreamSourceSuite:
    Run completed in 21 seconds, 346 milliseconds.
    Total number of tests run: 7
    Suites: completed 5, aborted 0
    Tests: succeeded 7, failed 0, canceled 0, ignored 2, pending 0
    All tests passed.
    ```

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ckadner/bahir 
BAHIR-84_silence-parquet-hadoop-log-messages

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/bahir/pull/33.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #33
    
----
commit dceefb80b5f3c4947d63a6b075eda1ef402fdbc6
Author: Christian Kadner <ckad...@us.ibm.com>
Date:   2017-01-12T08:07:02Z

    [BAHIR-84] suppress "... INFO: org.apache.parquet.hadoop..." log messages 
from maven build log

----


> Build log flooded with test log messages
> ----------------------------------------
>
>                 Key: BAHIR-84
>                 URL: https://issues.apache.org/jira/browse/BAHIR-84
>             Project: Bahir
>          Issue Type: Test
>          Components: Spark Structured Streaming Connectors
>         Environment: Mac OS X
>            Reporter: Christian Kadner
>            Assignee: Christian Kadner
>            Priority: Minor
>
> The maven build log/console gets flooded with INFO messages from 
> {{org.apache.parquet.hadoop.*}} during the {{test}} phase of the module 
> {{sql-streaming-mqtt}}. This makes it hard to find actual problems and test 
> results, especially when the log messages are interleaved with build and test 
> status messages, throwing off line breaks etc.
> *Excerpt of build log:*
> {code:title=$ mvn clean package}
> ...
> Discovery completed in 293 milliseconds.
> Run starting. Expected test count is: 7
> BasicMQTTSourceSuite:
> - basic usage
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: 
> Compression: SNAPPY
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet dictionary page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Dictionary is on
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet dictionary page size to 1048576
> Jan 11, 2017 11:05:54 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> ...
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet block size to 134217728
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet page size to 1048576
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Parquet dictionary page size to 1048576
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Dictionary is on
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Validation is o- Send and receive 100 messages.
> - no server up
> - params not provided.
> - Recovering offset from the last processed offset. !!! IGNORED !!!
> StressTestMQTTSource:
> - Send and receive messages of size 250MB. !!! IGNORED !!!
> LocalMessageStoreSuite:
> - serialize and deserialize
> - Store and retreive
> - Max offset stored
> MQTTStreamSourceSuite:
> Run completed in 20 seconds, 622 milliseconds.
> Total number of tests run: 7
> Suites: completed 5, aborted 0
> Tests: succeeded 7, failed 0, canceled 0, ignored 2, pending 0
> All tests passed.
> ff
> Jan 11, 2017 11:06:03 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: 
> Writer version is: PARQUET_1_0
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem 
> columnStore to file. allocated memory: 48
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem 
> columnStore to file. allocated memory: 48
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 109B for [value] 
> BINARY: 1 values, 34B raw, 36B comp, 1 pages, encodings: [RLE, PLAIN, 
> BIT_PACKED]
> Jan 11, 2017 11:06:03 PM INFO: 
> org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 59B for 
> [timestamp] INT96: 1 values, 8B raw, 10B comp, 1 pages, encodings: [RLE, 
> PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries,...
> {code}



