Thanks Prasanth.

I ran the same test you did, and I found the following sizes:

BEFORE HIVE-10166
*13M* Aug  5 11:57 ./hive-unit/target/tmp/log/hive.log

WITH HIVE-10166
*2.4G* Aug  5 12:07 ./hive-unit/target/tmp/log/hive.log

CURRENT HEAD
*3.2G* Aug  5 12:36 ./hive-unit/target/tmp/log/hive.log

HIVE-10166 adds a significant amount of size to the file, but there are other
commits that add even more to it, and we should investigate those as well.
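
To make the comparison easier to repeat per commit, something like the sketch
below could be used to flag oversized hive.log files under a test output
directory. It is only a rough sketch; the scan root and the 1 GiB cutoff are
placeholders I picked, not values from the ptest setup.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindBigHiveLogs {
    // 1 GiB cutoff; just a placeholder for this sketch.
    private static final long THRESHOLD_BYTES = 1L << 30;

    public static void main(String[] args) throws IOException {
        // Directory to scan; "." is a placeholder, point it at the test
        // output of the commit being measured.
        Path root = Paths.get(args.length > 0 ? args[0] : ".");

        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(p -> p.getFileName() != null
                           && p.getFileName().toString().equals("hive.log"))
                 .forEach(p -> {
                     try {
                         long bytes = Files.size(p);
                         if (bytes > THRESHOLD_BYTES) {
                             // Report the size in GiB next to the offending file.
                             System.out.printf("%.1fG  %s%n",
                                 bytes / (double) (1L << 30), p);
                         }
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
    }
}

Pointing it at the target directory of each checkout should make the
per-commit growth easier to compare.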

I created a JIRA to track this issue.
https://issues.apache.org/jira/browse/HIVE-11466
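
As a stopgap until the noisy commits are fixed, one option could be to raise
the logging threshold for the Thrift loggers in the test logging setup. Here
is a minimal programmatic sketch, assuming the branch is already on the Log4j2
API from HIVE-11304; the logger names and the ERROR level are my guesses, not
values from the current Hive configuration.

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class QuietThriftWarn {
    public static void main(String[] args) {
        // Raise the threshold for the Thrift loggers so the repeated
        // "No underlying server socket" WARN stops flooding hive.log while
        // the root cause is investigated. These logger names and the ERROR
        // level are guesses, not values from the current Hive config.
        Configurator.setLevel("org.apache.thrift.transport", Level.ERROR);
        Configurator.setLevel("org.apache.thrift.server", Level.ERROR);
    }
}

The same effect could be achieved by tightening the corresponding logger
entries in the test Log4j2 configuration instead of doing it in code.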


On Mon, Aug 3, 2015 at 3:29 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

> Hi Sergio
>
> This seems to be related to the recent merge of HIVE-10166. I checked out a
> commit prior to HIVE-10166, ran TestJdbcWithMiniHS2, and the log seems
> reasonable.
> After HIVE-10166, the log file is getting flooded with WARN messages (this
> message repeats and fills up the disk):
>
> org.apache.thrift.transport.TTransportException: No underlying server socket.
>    at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:126)
>    at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:35)
>    at org.apache.thrift.transport.TServerTransport.accept(TServerTransport.java:60)
>    at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:161)
>    at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:100)
>
> Can someone take a look?
>
> Thanks
> Prasanth
>
> > On Aug 2, 2015, at 9:29 PM, Prasanth Jayachandran <pjayachand...@hortonworks.com> wrote:
> >
> > Hi Sergio
> >
> > Thanks for looking into this. It could be related to my patch HIVE-11304
> > (Log4j2 migration). I might have mistakenly set the log4j2 threshold to
> > ALL somewhere, resulting in DEBUG level logging. I will look into it.
> >
> > Thanks
> > Prasanth
> >
> >> On Aug 2, 2015, at 9:21 PM, Sergio Pena <sergio.p...@cloudera.com> wrote:
> >>
> >> Hi Prasanth,
> >>
> >> I see there are some logs in the system that are too big and are using a
> >> lot of space. Jenkins will delete those logs eventually.
> >> These are some of the logs bigger than 1G that I found:
> >>
> >> *13G  ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestJdbcWithMiniHS2/hive.log*
> >> *9.9G ./logs/PreCommit-HIVE-TRUNK-Build-4790/succeeded/TestJdbcWithMiniHS2/hive.log  <<< HIVE-11416*
> >> *5.5G ./logs/PreCommit-HIVE-TRUNK-Build-4790/succeeded/TestSchedulerQueue/hive.log*
> >> *4.9G ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestSchedulerQueue/hive.log*
> >> *4.6G ./logs/PreCommit-HIVE-TRUNK-Build-4792/succeeded/TestSchedulerQueue/hive.log*
> >> *4.1G ./logs/PreCommit-HIVE-TRUNK-Build-Upload-10/succeeded/TestSchedulerQueue/hive.log*
> >> 2.0G ./logs/PreCommit-HIVE-TRUNK-Build-4792/succeeded/TestSSL/hive.log
> >> 1.9G ./logs/PreCommit-HIVE-TRUNK-Build-4790/failed/TestSSL/hive.log
> >> 1.8G ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestSSL/hive.log
> >> 1.8G ./logs/PreCommit-HIVE-TRUNK-Build-Upload-10/succeeded/TestJdbcWithMiniHS2/hive.log
> >> 1.7G ./logs/HIVE-TRUNK-HADOOP-2-1/succeeded/TestSparkCliDriver-date_udf.q-join23.q-auto_join4.q-and-12-more/spark.log
> >> 1.7G ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more/spark.log
> >> 1.7G ./logs/PreCommit-HIVE-TRUNK-Build-4790/succeeded/TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more/spark.log
> >> 1.7G ./logs/PreCommit-HIVE-TRUNK-Build-4792/succeeded/TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more/spark.log
> >>
> >> *TestJdbcWithMiniHS2* is one of the tests causing this issue. Is debug
> >> logging enabled for this log?
> >>
> >> - Sergio
> >>
> >>
> >>
> >> On Sun, Aug 2, 2015 at 7:01 PM, Prasanth Jayachandran <
> >> pjayachand...@hortonworks.com> wrote:
> >>
> >>> Looks like there is something wrong with the precommit tests.
> >>> The tests run through but throw an IOException or run out of disk space.
> >>> https://issues.apache.org/jira/browse/HIVE-11416
> >>> https://issues.apache.org/jira/browse/HIVE-11304
> >>>
> >>> Can someone take a look at what's going on?
> >>>
> >>> Thanks
> >>> Prasanth
> >>>
> >
> >
>
>
