[ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371673#comment-15371673
 ] 

Vadim Spector commented on SENTRY-1377:
---------------------------------------

Patch 002:
1) Added a safeCloseResultSet() method for consistency; in three places the
code did not guarantee calling ResultSet.close() in case of a failure (see the
safeCloseResultSet() sketch below).
2) Subtle change in verifyQuery(): the original code placed the
Assert.assertEquals(n, numRows) check inside the retry loop. That implies that
even after a successful executeQuery() the number of rows may not immediately
equal the expected number, and more executeQuery() attempts may be necessary
for the row count to settle. Cannot say whether that was the actual intent,
but that is what the code effectively does. Patch 002 unintentionally moved
the assertion outside the retry loop, which implies that once executeQuery()
succeeds the number of returned rows is final and no further retries can
change it. Reverting to the original logic, though it may not matter in
practice (see the verifyQuery() sketch below).
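
For reference, a minimal sketch of what such a helper and its call pattern
could look like (illustrative only, not the exact contents of the patch;
java.sql.ResultSet and a stmt from the surrounding test method are assumed):

private static void safeCloseResultSet(ResultSet rs) {
  // best-attempt close: a close() failure must not mask the actual test failure
  if (rs != null) {
    try {
      rs.close();
    } catch (Exception ignore) {
      // ignored by design; exercising the JDBC driver is out of scope here
    }
  }
}

// call pattern inside a test method:
ResultSet rs = null;
try {
  rs = stmt.executeQuery("SHOW TABLES");   // hypothetical query
  // ... assertions on rs ...
} finally {
  safeCloseResultSet(rs);                  // runs even when the assertions fail
}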
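
To make the assertion-placement point concrete, a loop-based verifyQuery()
with the row-count check kept inside the retry loop, as in the original code
(the signature, RETRY_WAIT, and org.junit.Assert are assumptions about the
test class, not the exact patch contents):

private void verifyQuery(Statement stmt, String table, int expectedRows)
    throws Exception {
  Throwable lastFailure = null;
  for (int attempt = 0; attempt < NUM_RETRIES; attempt++) {  // NUM_RETRIES == 10
    ResultSet rs = null;
    try {
      rs = stmt.executeQuery("SELECT * FROM " + table);
      int numRows = 0;
      while (rs.next()) {
        numRows++;
      }
      // Kept inside the loop: a row-count mismatch triggers another
      // executeQuery() attempt instead of failing the test immediately.
      Assert.assertEquals(expectedRows, numRows);
      return;                                // query succeeded and count matched
    } catch (AssertionError | Exception e) {
      lastFailure = e;
      Thread.sleep(RETRY_WAIT);              // illustrative pause between attempts
    } finally {
      safeCloseResultSet(rs);
    }
  }
  throw new AssertionError(
      "verifyQuery() did not succeed after " + NUM_RETRIES + " attempts",
      lastFailure);
}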

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> -----------------------------------------------------------------------------------------------
>
>                 Key: SENTRY-1377
>                 URL: https://issues.apache.org/jira/browse/SENTRY-1377
>             Project: Sentry
>          Issue Type: Test
>          Components: Sentry
>    Affects Versions: 1.8.0
>            Reporter: Vadim Spector
>            Priority: Minor
>             Fix For: 1.8.0
>
>         Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in the
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest
> of the cleanup code is not executed. Cleanup logic would not normally fail if
> the corresponding test succeeds, but when some tests do fail, short-circuiting
> the remaining cleanup logic tends to cascade secondary failures, which
> complicates troubleshooting (see the cleanAfterTest() sketch at the end of
> this description).
> 2. Test methods short-circuit closing database Connections and Statements if
> the test logic fails. Moreover, some test methods have multiple
> initializations of Connection and Statement, but only one stmt.close();
> conn.close(); pair at the end. The correct pattern for each distinct
> {Connection, Statement} pair would be:
> Connection conn = null;
> Statement stmt = null;
> try {
>   ... test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static void safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing the JDBC driver implementation is not in the scope of the HDFS
> integration tests, so ignoring Connection.close() and Statement.close()
> failures should not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(),
> verifyQuery(), and verifyOnAllSubDirs(). It is better to implement it as a
> straightforward retry loop: the exception stack trace is more confusing than
> it needs to be with recursive calls, and with NUM_RETRIES == 10 recursion at
> least theoretically adds running out of stack as an unnecessary failure mode.
> 4. startHiveServer2() ignores a failure of the hiveServer2.start() call, only
> logging an info message.
> 5. Starting hiveServer2 and the Hive metastore in separate threads and then
> keeping those threads alive seems unnecessary, since both servers' start()
> methods create servers running on their own threads anyway. It effectively
> leads to ignoring start() failures for both servers. It also leaves no
> guarantee that hiveServer2 starts after the Hive metastore: both start from
> independent threads with no cross-thread coordination mechanism in place.
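>
> A best-attempt version of cleanAfterTest() could look roughly like the
> sketch below (the SQL statements, JDBC URL, and slf4j LOGGER are
> illustrative; the actual cleanup steps in the test differ). Each step runs
> in its own try/catch, so one failure cannot short-circuit the rest:
>
> // uses java.sql.Connection, java.sql.DriverManager, java.sql.Statement
> private void cleanAfterTest() {
>   String[] cleanupSql = {
>       "DROP TABLE IF EXISTS tbl1",    // hypothetical table name
>       "DROP ROLE test_role"           // hypothetical role name
>   };
>   try (Connection conn = DriverManager.getConnection(
>            "jdbc:hive2://localhost:10000/default", "hive", "");  // hypothetical URL
>        Statement stmt = conn.createStatement()) {
>     for (String sql : cleanupSql) {
>       try {
>         stmt.execute(sql);
>       } catch (Exception e) {
>         // best attempt: log and continue with the remaining cleanup steps
>         LOGGER.warn("Cleanup step failed: " + sql, e);
>       }
>     }
>   } catch (Exception e) {
>     LOGGER.warn("Could not obtain a cleanup connection", e);
>   }
> }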



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
