[ https://issues.apache.org/jira/browse/IMPALA-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17537327#comment-17537327 ]

Quanlong Huang commented on IMPALA-11290:
-----------------------------------------

I think this means the two Impala versions would share the same catalogd and
statestored. Should we also consider cases of upgrading catalogd?

> Add a mechanism to allow Impala to maintain 2 clusters during a rolling 
> restart.
> --------------------------------------------------------------------------------
>
>                 Key: IMPALA-11290
>                 URL: https://issues.apache.org/jira/browse/IMPALA-11290
>             Project: IMPALA
>          Issue Type: Bug
>            Reporter: Andrew Sherman
>            Priority: Major
>
> A rolling restart is where we restart Impala daemons one by one. This can be 
> used to restart an Impala cluster while continuing to run queries. While this 
> is happening we want to prevent different versions of Impala daemons from 
> communicating.
> One way to do this would be to publish each Impala daemon's version in a 
> statestore topic, so a coordinator could filter to only use executors with 
> its own version. This would also give all daemons a global picture of the 
> rolling upgrade, so they could act on it.
> In practice, two sub-clusters would coexist for some time. We could detect 
> when the new version reaches a practical size (e.g. a configurable number of 
> coordinators and executors), at which point the old impalads could blacklist 
> themselves to make killing them faster.
> Maybe we would have options to use the Impala version from version.cc, or to 
> allow the version to be specified as a command line flag.
> Care would be needed when enabling this feature to avoid unintended 
> consequences.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
