[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-13 Thread Hao Hao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hao Hao updated SENTRY-1377:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resource is used, so tests can skip close() 
> calls if fail in the middle.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-13 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.002.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resource is used, so tests can skip close() 
> calls if fail in the middle.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-13 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.002.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resource is used, so tests can skip close() 
> calls if fail in the middle.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-12 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.002.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resource is used, so tests can skip close() 
> calls if fail in the middle.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-09 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.002.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resource is used, so tests can skip close() 
> calls if fail in the middle.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> a single close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so tests can skip close() calls 
if fail in the middle.
b) many methods re-open Connection and Statement multiple times, yet provide 
asingle close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.

  was:
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so test can cause skipping the 
close() calls.
b) many methods re-open Connection and Statement multiple times, yet provide 
asingle close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so tests can skip close() calls 
if fail in the middle.
b) many methods re-open Connection and Statement multiple times, yet provide a 
single close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.

  was:
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so tests can skip close() calls 
if fail in the middle.
b) many methods re-open Connection and Statement multiple times, yet provide 
asingle close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.001.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> There are multiple issues making HDFS sync tests flaky or sometimes failing.
> 1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. TestHDFSIntegration*.java classes do not guarantee calling close() method 
> on Connection and Statement objects. It happens because 
> a) no try-finally or try-with-resource is used, so test can cause skipping 
> the close() calls.
> b) many methods re-open Connection and Statement multiple times, yet provide 
> asingle close() at the end. 
> 3. Retry logic uses recursion in some places, as in startHiveServer2() and 
> verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
> Exception stack trace is more confusing than it needs to be in case of 
> recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
> creates running out of stack as an unnecessary failure mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
There are multiple issues making HDFS sync tests flaky or sometimes failing.

1. TestHDFSIntegrationBase.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. TestHDFSIntegration*.java classes do not guarantee calling close() method on 
Connection and Statement objects. It happens because 
a) no try-finally or try-with-resource is used, so test can cause skipping the 
close() calls.
b) many methods re-open Connection and Statement multiple times, yet provide 
asingle close() at the end. 

3. Retry logic uses recursion in some places, as in startHiveServer2() and 
verifyOnAllSubDirs. Better to implement it via straightforward retry loop. 
Exception stack trace is more confusing than it needs to be in case of 
recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally { safeClose(conn, stmt); }

private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.001.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-12-02 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-10-28 Thread Sravya Tirukkovalur (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sravya Tirukkovalur updated SENTRY-1377:

Status: Patch Available  (was: Open)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-10-28 Thread Sravya Tirukkovalur (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sravya Tirukkovalur updated SENTRY-1377:

Status: Open  (was: Patch Available)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Sub-task
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-08-05 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.003.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-08-05 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally { safeClose(conn, stmt); }
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.
> 6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places
> 7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
> changing permissions and verifyOnAllSubDirs() calls verifying that those 
> changes took effect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally { safeClose(conn, stmt); }

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and verifyOnAllSubDirs() calls verifying that those 
changes took effect.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

6. miniDFS.getFileSystem().mkdirs(tmpHDFSDir) call missing in several places

7. Thread.sleep() missing in multiple places between HiveServer2 SQL calls 
changing permissions and 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

5. 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
>  

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

5. 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.003.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.003.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.003.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-29 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-28 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.003.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-28 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: (was: SENTRY-1377.003.patch)

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Assignee: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch, 
> SENTRY-1377.003.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-11 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.002.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch, SENTRY-1377.002.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers. Also, it 
> leaves no guarantee that hiveServer2 will be started after the Hive metastore 
> - both start from independent threads with no cross-thread coordination 
> mechanism in place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-11 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers. Also, it leaves no 
guarantee that hiveServer2 will be started after the Hive metastore - both 
start from independent threads with no cross-thread coordination mechanism in 
place.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> 1. 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Attachment: SENTRY-1377.001.patch

> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Affects Versions: 1.8.0
>Reporter: Vadim Spector
>Priority: Minor
> Fix For: 1.8.0
>
> Attachments: SENTRY-1377.001.patch
>
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Retry logic uses recursion in some places, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. Exception stack trace is more confusing than it needs to be in 
> case of recursive calls. Plus, with NUM_RETRIES == 10, at least 
> theoretically, it creates running out of stack as an unnecessary failure 
> mechanism.
> 4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
> info message.
> 5. Starting hiveServer2 and Hive metastore in separate threads and then 
> keeping those threads alive seems unnecessary, since both servers' start() 
> methods create servers running on their own threads anyway. It effectively 
> leads to ignoring the start() method failure for both servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unmnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Retry logic uses recursion in some places, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unmnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unmnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. Starting hiveServer2 and Hive metastore in separate threads and then keeping 
those threads alive seems unmnecessary, since both servers' start() methods 
create servers running on their own threads anyway. It effectively leads to 
ignoring the start() method failure for both servers.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() ignores hiveServer2.start() call failure, only logging 
info message.

5. 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.

4. startHiveServer2() 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.



4. 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls. Plus, with NUM_RETRIES == 10, at least theoretically, it 
creates running out of stack as an unnecessary failure mechanism.



4. 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls.

4. 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls.

4. 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. Exception stack trace is more confusing than it needs to be in case 
of recursive calls.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Initialization retry logic uses recursion, as in startHiveServer2(), 
> verifyQuery(), 

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
retry loop. 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in startHiveServer2(), 
verifyQuery(), verifyOnAllSubDirs. It may not necessarily lead 


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Initialization retry logic uses recursion, as in startHiveServer2(), 
> verifyQuery(), verifyOnAllSubDirs. Better to implement it via straightforward 
> retry loop. 



--
This message was sent by Atlassian JIRA

[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-07-08 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

3. Initialization retry logic uses recursion, as in 

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

where
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> where
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.
> 3. Initialization retry logic uses recursion, as in 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-06-30 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. Cleanup logic would not normally fail if 
corresponding test succeeds. But if some do not, short-circuiting some cleanup 
logic tends to cascade secondary failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

were
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. It does not matter only if all tests succeed. If 
some do not, short-circuiting some cleanup logic tends to cascade secondary 
failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

were
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. Cleanup logic would not normally fail if 
> corresponding test succeeds. But if some do not, short-circuiting some 
> cleanup logic tends to cascade secondary failures which complicate 
> troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> were
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SENTRY-1377) improve handling of failures, both in tests and after-test cleanup, in TestHDFSIntegration.java

2016-06-30 Thread Vadim Spector (JIRA)

 [ 
https://issues.apache.org/jira/browse/SENTRY-1377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Spector updated SENTRY-1377:
--
Description: 
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. It does not matter only if all tests succeed. If 
some do not, short-circuiting some cleanup logic tends to cascade secondary 
failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

were
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.

  was:
1. TestHDFSIntegration.java should provide best-attempt cleanup in 
cleanAfterTest() method. Currently, if any cleanup operation fails, the rest of 
cleanup code is not executed. It does not matter only if all tests succeed. If 
some do not, short-circuiting some cleanup logic tends to cascade secondary 
failures which complicate troubleshooting.

2. Test methods would short-circuit closing database Connection's and 
Statement's if the test logic fails. Moreover, some test methods have multiple 
initializations of Connection ans Statement, but only one stmt.close(); 
conn.close(); pair at the end. Correct pattern would be for each distinct 
{Connection,Statement} pair would be:

conn = null;
stmt = null;
try {
. test logic here, including conn and stmt initialization ...
} finally {
  safeClose(conn, stmt);
}

were
private static safeClose(Connection conn, Statement stmt) {
  if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
  if (conn != null) try { conn.close(); } catch (Exception ignore) {}
}

Testing jdbc driver implementation is not in the scope of HDFS integration 
tests, so ignoring Connection.close() and Statement.close() failures should not 
be a concern for this test class.


> improve handling of failures, both in tests and after-test cleanup, in 
> TestHDFSIntegration.java
> ---
>
> Key: SENTRY-1377
> URL: https://issues.apache.org/jira/browse/SENTRY-1377
> Project: Sentry
>  Issue Type: Test
>  Components: Sentry
>Reporter: Vadim Spector
>Priority: Minor
>
> 1. TestHDFSIntegration.java should provide best-attempt cleanup in 
> cleanAfterTest() method. Currently, if any cleanup operation fails, the rest 
> of cleanup code is not executed. It does not matter only if all tests 
> succeed. If some do not, short-circuiting some cleanup logic tends to cascade 
> secondary failures which complicate troubleshooting.
> 2. Test methods would short-circuit closing database Connection's and 
> Statement's if the test logic fails. Moreover, some test methods have 
> multiple initializations of Connection ans Statement, but only one 
> stmt.close(); conn.close(); pair at the end. Correct pattern for each 
> distinct {Connection,Statement} pair would be:
> conn = null;
> stmt = null;
> try {
> . test logic here, including conn and stmt initialization ...
> } finally {
>   safeClose(conn, stmt);
> }
> were
> private static safeClose(Connection conn, Statement stmt) {
>   if (stmt != null) try { stmt.close(); } catch (Exception ignore) {}
>   if (conn != null) try { conn.close(); } catch (Exception ignore) {}
> }
> Testing jdbc driver implementation is not in the scope of HDFS integration 
> tests, so ignoring Connection.close() and Statement.close() failures should 
> not be a concern for this test class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)