[ https://issues.apache.org/jira/browse/IMPALA-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
radford nguyen resolved IMPALA-8389.
------------------------------------
Resolution: Fixed
> e2e custom cluster testsuite does not respect cluster_size when
> impala_log_dir present
> --------------------------------------------------------------------------------------
>
> Key: IMPALA-8389
> URL: https://issues.apache.org/jira/browse/IMPALA-8389
> Project: IMPALA
> Issue Type: Bug
> Components: Infrastructure
> Affects Versions: Impala 3.2.0
> Reporter: radford nguyen
> Assignee: radford nguyen
> Priority: Minor
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> h3. Brief
> CustomClusterTestSuite always waits for 3 daemons on startup instead of
> {{cluster_size}} daemons when {{impala_log_dir}} is specified.
> h3. Description
> The {{@CustomClusterTestSuite.with_args}} decorator allows a user to specify a
> custom cluster size for the decorated test case. However, when this option is
> combined with {{impala_log_dir}}, the suite fails to wait for the requested
> number of daemons whenever that number differs from {{DEFAULT_CLUSTER_SIZE}}.
> The root cause is the difference in how the cluster is started with and
> without {{impala_log_dir}}:
> [https://github.com/apache/impala/blob/3.2.0/tests/common/custom_cluster_test_suite.py#L147]
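> A minimal sketch of that divergence, assuming a simplified {{setup_method}} /
> {{_start_impala_cluster}} pair (names and defaults only approximate the linked
> file and are not verbatim):
> {code:python}
> # Hypothetical, simplified sketch of the branching described above.
> DEFAULT_CLUSTER_SIZE = 3
>
> def _start_impala_cluster(cluster_args, impala_log_dir=None,
>                           cluster_size=DEFAULT_CLUSTER_SIZE):
>   # Stub standing in for the real helper: it waits for `cluster_size` daemons.
>   print("Waiting for num_known_live_backends=%d" % cluster_size)
>
> def setup_method_sketch(cluster_args, cluster_size, impala_log_dir=None):
>   if impala_log_dir is not None:
>     # Buggy branch: cluster_size is not forwarded, so the helper falls back
>     # to DEFAULT_CLUSTER_SIZE and waits for 3 daemons even if 5 were started.
>     _start_impala_cluster(cluster_args, impala_log_dir=impala_log_dir)
>   else:
>     # Branch taken when impala_log_dir is absent: the requested size is used.
>     _start_impala_cluster(cluster_args, cluster_size=cluster_size)
>
> setup_method_sketch([], cluster_size=5, impala_log_dir="whatev")
> # prints: Waiting for num_known_live_backends=3   (instead of 5)
> {code}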
> h3. To Reproduce
> * add {{cluster_size=5}} to the decorator of {{test_grant_revoke}} in
> tests/authorization/test_ranger.py
> * $ impala-py.test tests/authorization/test_ranger.py
> * observe that the test passes
> * additionally add {{impala_log_dir=whatev}} to the same decorator (see the
> sketch after this list)
> * $ impala-py.test tests/authorization/test_ranger.py
> * observe that the run fails during cluster startup:
> ** 2019-04-04 14:25:54,140 INFO MainThread: Waiting for
> num_known_live_backends=3. Current value: 5
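> For illustration only, a hedged sketch of the decorated test after both
> changes (the real {{test_grant_revoke}} carries other {{with_args}} arguments
> and fixtures that are omitted here; the signature is approximate):
> {code:python}
> from tests.common.custom_cluster_test_suite import CustomClusterTestSuite
>
> class TestRangerRepro(CustomClusterTestSuite):
>   # With only cluster_size=5, startup waits for 5 live backends and passes.
>   # Adding impala_log_dir switches to the other startup path, which waits
>   # for DEFAULT_CLUSTER_SIZE (3) backends while 5 come up, and fails.
>   @CustomClusterTestSuite.with_args(cluster_size=5, impala_log_dir="whatev")
>   def test_grant_revoke(self):
>     pass  # body elided; the failure occurs during cluster startup
> {code}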
>