[ https://issues.apache.org/jira/browse/HIVE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14289984#comment-14289984 ]

Sushanth Sowmyan commented on HIVE-9436:
----------------------------------------

As to the precommit tests, it looks like the majority of those tests were 
already failing before this build:

{noformat}
Test Result (66 failures / +3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testMetastoreProxyUser
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
...
{noformat}

This indicates that the tests were already flaky (probably due to an earlier 
commit?) and that there were a total of 3 reported regressions. Diffing this 
build (2495) against the previous build where tests ran (2493) to find the 
changes, I get:

{noformat}
< org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5
---
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric
> org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
> org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testMetastoreProxyUser
> org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

For 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric:

{noformat}
Running: diff -a /home/hiveptest/54.205.215.38-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../itests/qtest/target/qfile-results/clientpositive/udaf_histogram_numeric.q.out /home/hiveptest/54.205.215.38-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../ql/src/test/results/clientpositive/udaf_histogram_numeric.q.out
9c9
< [{"x":139.16078431372554,"y":255.0},{"x":386.1428571428572,"y":245.0}]
---
> [{"x":135.0284552845532,"y":246.0},{"x":381.39370078740143,"y":254.0}]
{noformat}

This does not look connected to this issue.

As for 
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore 
and 
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testMetastoreProxyUser, 
the failures are:
{noformat}
java.lang.NullPointerException: null
        at org.apache.hadoop.hive.metastore.HiveMetaStore.getDelegationToken(HiveMetaStore.java:5596)
        at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.getDelegationTokenStr(TestHadoop20SAuthBridge.java:318)
        at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.obtainTokenAndAddIntoUGI(TestHadoop20SAuthBridge.java:339)
        at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore(TestHadoop20SAuthBridge.java:231)
{noformat}
{noformat}
java.lang.NullPointerException: null
        at org.apache.hadoop.hive.metastore.HiveMetaStore.getDelegationToken(HiveMetaStore.java:5596)
        at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.getDelegationTokenStr(TestHadoop20SAuthBridge.java:318)
        at org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.access$100(TestHadoop20SAuthBridge.java:62)
{noformat}

Both of these look like failures while obtaining a delegation token, which again 
is not connected to this connection retry issue.

As to the last one, 
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection, that 
looks like an issue with the Derby setup? I'm not certain. It does look like a 
valid issue in its own right, but again, it is not related to this patch.

{noformat}
java.sql.SQLException: Table/View 'TXNS' already exists in Schema 'APP'.
        at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
        at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
        at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.duplicateDescriptorException(Unknown Source)
        at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.addDescriptor(Unknown Source)
        at org.apache.derby.impl.sql.execute.CreateTableConstantAction.executeConstantAction(Unknown Source)
        at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
        at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
        at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
        at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
        at org.apache.hadoop.hive.metastore.txn.TxnDbUtil.prepDb(TxnDbUtil.java:72)
        at org.apache.hadoop.hive.metastore.txn.TxnDbUtil.prepDb(TxnDbUtil.java:131)
        at org.apache.hive.hcatalog.streaming.TestStreaming.<init>(TestStreaming.java:157)
{noformat}


> RetryingMetaStoreClient does not retry JDOExceptions
> ----------------------------------------------------
>
>                 Key: HIVE-9436
>                 URL: https://issues.apache.org/jira/browse/HIVE-9436
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.14.0, 0.13.1
>            Reporter: Sushanth Sowmyan
>            Assignee: Sushanth Sowmyan
>         Attachments: HIVE-9436.2.patch, HIVE-9436.patch
>
>
> RetryingMetaStoreClient has a bug in the following bit of code:
> {code}
>         } else if ((e.getCause() instanceof MetaException) &&
>             e.getCause().getMessage().matches("JDO[a-zA-Z]*Exception")) {
>           caughtException = (MetaException) e.getCause();
>         } else {
>           throw e.getCause();
>         }
> {code}
> The bug here is that Java's String.matches() matches the entire string 
> against the regex, and thus the match will fail if the message contains 
> anything before or after JDO[a-zA-Z]\*Exception. The solution, however, is 
> very simple: we should match .\*JDO[a-zA-Z]\*Exception.\*
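
For reference, a minimal, self-contained sketch of the String.matches() behaviour described above (the exception message is made up for illustration; this is not the actual RetryingMetaStoreClient code):

{code}
public class JdoRegexDemo {
    public static void main(String[] args) {
        // Hypothetical MetaException message wrapping a JDO exception.
        String msg = "MetaException(message:JDODataStoreException: Communications link failure)";

        // Original check: String.matches() requires the regex to cover the
        // entire string, so any surrounding text makes the match fail.
        System.out.println(msg.matches("JDO[a-zA-Z]*Exception"));      // prints false

        // Proposed fix: allow arbitrary text before and after the token.
        System.out.println(msg.matches(".*JDO[a-zA-Z]*Exception.*"));  // prints true
    }
}
{code}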



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
