[
https://issues.apache.org/jira/browse/HIVE-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16124310#comment-16124310
]
Hive QA commented on HIVE-16873:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12881476/HIVE-16873.3.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 11003 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_merge_move] (batchId=243)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_merge_only] (batchId=243)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_move_only] (batchId=243)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_mapjoin_only] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] (batchId=99)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=235)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=180)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=180)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=180)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpRetryOnServerIdleTimeout (batchId=228)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6359/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6359/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6359/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12881476 - PreCommit-HIVE-Build
> Remove Thread Cache From Logging
> --------------------------------
>
> Key: HIVE-16873
> URL: https://issues.apache.org/jira/browse/HIVE-16873
> Project: Hive
> Issue Type: Improvement
> Components: Metastore
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Minor
> Attachments: HIVE-16873.1.patch, HIVE-16873.2.patch, HIVE-16873.3.patch
>
>
> In {{org.apache.hadoop.hive.metastore.HiveMetaStore}} we have a {{Formatter}} class (and its buffer) tied to every thread.
> This {{Formatter}} exists only for logging purposes. I would suggest that we simply let the logging framework itself handle these kinds of details and ditch the per-thread buffer.
> {code}
>   public static final String AUDIT_FORMAT =
>       "ugi=%s\t" +  // ugi
>       "ip=%s\t" +   // remote IP
>       "cmd=%s\t";   // command
>   public static final Logger auditLog = LoggerFactory.getLogger(
>       HiveMetaStore.class.getName() + ".audit");
>   private static final ThreadLocal<Formatter> auditFormatter =
>       new ThreadLocal<Formatter>() {
>         @Override
>         protected Formatter initialValue() {
>           return new Formatter(new StringBuilder(AUDIT_FORMAT.length() * 4));
>         }
>       };
>   ...
>   private static final void logAuditEvent(String cmd) {
>     final Formatter fmt = auditFormatter.get();
>     ((StringBuilder) fmt.out()).setLength(0);
>     String address = getIPAddress();
>     if (address == null) {
>       address = "unknown-ip-addr";
>     }
>     auditLog.info(fmt.format(AUDIT_FORMAT, ugi.getUserName(),
>         address, cmd).toString());
>   }
> {code}
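> For illustration only, a minimal sketch of the kind of simplification being suggested here: drop the {{ThreadLocal<Formatter>}} and let SLF4J's parameterized logging build the message. The sketch reuses the {{auditLog}}, {{getIPAddress()}} and {{ugi}} references from the snippet above; the actual patch may of course look different.
> {code}
>   // Sketch only: no per-thread buffer; the logging framework formats the
>   // message, and skips the work entirely if the audit logger is disabled.
>   private static void logAuditEvent(String cmd) {
>     String address = getIPAddress();
>     if (address == null) {
>       address = "unknown-ip-addr";
>     }
>     auditLog.info("ugi={}\tip={}\tcmd={}\t", ugi.getUserName(), address, cmd);
>   }
> {code}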
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)