clintropolis opened a new pull request #10680:
URL: https://github.com/apache/druid/pull/10680


   ### Description
This PR adds integration tests that cover coordinator and overlord leadership changes: the tests issue queries against system tables while cycling which containers are running, in order to force leadership changes. The goal is to get some coverage of `DruidLeaderClient` in the integration tests and to guard against regressions. Once Kubernetes integration tests are in place, I imagine we could also run these tests against K8s-based discovery instead of Curator-based discovery, since #10544 has now gone in.
   
To preserve my sanity while testing this, I have added an `is_leader` column to `sys.servers`. It is a long column that returns `1` if the server is the leader and `0` if it is not (for coordinators and overlords); for services that have no concept of leadership, it returns the default long value (`0` in default mode, `null` if `druid.generic.useDefaultValueForNull=false`).
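   For example (the non-`is_leader` columns here are from the existing `sys.servers` schema):

```sql
-- is_leader is 1 only on the current coordinator and overlord leaders;
-- other server types report the default long value.
SELECT server, server_type, is_leader
FROM sys.servers
WHERE server_type IN ('coordinator', 'overlord')
```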
   
The integration tests add a new test group, 'high-availability', with a special docker-compose file that brings up a cluster with 1 router, 1 broker, 2 overlords, and 2 coordinators (plus ZooKeeper, Kafka, and a metadata store). The tests determine which containers are the current leaders, issue some system-table queries (which should exercise the leader clients against both the current overlord leader and the current coordinator leader), and then restart the leader containers to force a leadership change, repeating this process a few times, roughly as sketched below.
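   One pass of that flow, as a simplified sketch (the helper methods here are illustrative stand-ins, not the actual test utilities):

```java
// Illustrative sketch of one iteration of the high-availability test;
// queryLeader, verifySystemTableQueries, restartContainer, and
// awaitLeaderChange are hypothetical names, not the real test API.
private void runLeadershipCycle() throws Exception
{
  // Find the current leaders by querying sys.servers for is_leader = 1.
  String coordinatorLeader = queryLeader("coordinator");
  String overlordLeader = queryLeader("overlord");

  // Issue system-table queries; serving these should exercise the leader
  // clients against both the current coordinator and overlord leaders.
  verifySystemTableQueries();

  // Restart the leader containers so the standby instances must take over.
  restartContainer(coordinatorLeader);
  restartContainer(overlordLeader);

  // Wait until new leaders are elected and confirm they actually changed.
  awaitLeaderChange("coordinator", coordinatorLeader);
  awaitLeaderChange("overlord", overlordLeader);
}
```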
   
I removed many of the `links` sections from the integration-test docker-compose files, since as far as I can tell `links` is deprecated and unnecessary. I also modified the base docker-compose file to set each container's hostname to its container name, and to set `druid.host` to the same value, so that the tests can refer to hosts by hostname instead of by container IP address (which is what `druid.host` defaults to when not otherwise specified).
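   Concretely, the pattern looks something like this (a sketch only; the service name `druid-coordinator-two` and the image name are assumptions, and the real compose files may differ in details):

```yaml
# Sketch: container name, hostname, and druid.host all use the same value,
# so tests can address the service by name rather than by container IP.
druid-coordinator-two:
  image: druid/cluster                  # assumption: the integration-test image
  container_name: druid-coordinator-two
  hostname: druid-coordinator-two
  environment:
    - druid_host=druid-coordinator-two  # surfaced as druid.host in the service
```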
   
Finally, I fixed an amusing race condition that I think can realistically only happen when doing something like this in Docker and starting multiple coordinators at the same time. When initializing the basic auth extension's default auth configuration, both containers would detect that it had not yet been initialized; one would lose the race and fail out of lifecycle start, killing the service before it finished starting. This probably isn't a big deal in a real system, because a restarted process would succeed on the second pass once initialization had completed, but our integration-test configs do not auto-restart (which I think is intentional, so that failures stay visible). So instead I wrapped the initialization in a retry that skips the step and continues startup if the duplicate-initialization failure is detected, roughly as sketched below.
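   The fix amounts to something like the following (a simplified sketch of the idea rather than the exact patch; `initializeDefaultUsersAndRoles` and `isDuplicateInitializationFailure` are illustrative names):

```java
// Sketch: retry default auth initialization, treating "another node already
// initialized it" as success so startup can continue.
private void initializeWithRetry() throws Exception
{
  final int maxRetries = 5; // illustrative bound
  for (int attempt = 1; ; attempt++) {
    try {
      initializeDefaultUsersAndRoles(); // hypothetical: the racy init step
      return;
    }
    catch (Exception e) {
      if (isDuplicateInitializationFailure(e)) { // hypothetical check
        // Another coordinator won the race; the defaults exist, so skip.
        return;
      }
      if (attempt >= maxRetries) {
        throw e;
      }
      Thread.sleep(1000L * attempt); // simple backoff before retrying
    }
  }
}
```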
   
   <hr>
   
   This PR has:
   - [ ] been self-reviewed.
      - [ ] using the [concurrency checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md) (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/licenses.yaml)
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
   - [x] added integration tests.
   - [ ] been tested in a test Druid cluster.
   
   

