keith-turner commented on PR #3262:
URL: https://github.com/apache/accumulo/pull/3262#issuecomment-1836734617

   > Feedback requested on https://cwiki.apache.org/confluence/display/ACCUMULO/Using+Resource+Groups+as+an+implementation+of+Multiple+Managers. Are there other options to consider, what other information should I add?
   
   I updated the above document with another possible solution. I think this PR and #3964 are already heading in the direction of that solution. I still have a lot of uncertainty, and I was thinking about how to reduce it. I came up with the following.
   
    1. We can start running scale tests now that abuse the current code. By doing this we may learn new things that help us make more informed decisions. I opened #4006 for this and created other items for scale tests as TODOs on the elasticity board.
    2. We can reorganize the manager code to make the functional services in the manager more explicit. I opened #4005 for this. I am going to take a shot at reorganizing just one thing in the manager, as described in that issue, to see what it looks like. A rough sketch of what an explicit service could look like is included after this list.
    3. It would be good to chat sometime, as mentioned in Slack.
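
   To make item 2 a little more concrete, here is a rough, hypothetical sketch of what an explicit functional service could look like. None of these names exist in the Accumulo code base; the sketch only illustrates the general idea of putting each of the manager's responsibilities behind its own component with a shared lifecycle, so a service could later be hosted outside the manager process.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not actual Accumulo code: one way to make the manager's
// functional services explicit as separate components with a shared lifecycle.
interface ManagerService {
  String name(); // human-readable service name, e.g. "tablet-management"

  void start();  // begin this service's background work

  void stop();   // halt background work and release resources
}

// A trivial registry that starts and stops whatever services are plugged in.
// In a refactor along these lines, each responsibility (tablet management,
// FATE, balancing, ...) would live behind its own ManagerService implementation.
class ManagerServiceRegistry {
  private final List<ManagerService> services = new ArrayList<>();

  void register(ManagerService service) {
    services.add(service);
  }

  void startAll() {
    services.forEach(ManagerService::start);
  }

  void stopAll() {
    services.forEach(ManagerService::stop);
  }
}
```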
   
   Warning, this is not a fully formed thought. #3964 took a bottom-up approach to scaling the manager and this PR is taking a top-down approach. I was wondering about taking some of the things in this PR and creating something more focused on just distributing tablet management, like #3964 does for just distributing FATE. However, tablet management is not as cleanly self contained in the code as FATE is, so it's harder to do that. That is one reason I opened #4005. It would be nice to have an IT that creates multiple tablet management objects, each with a different partition, and verifies the behavior. #3694 has tests like this for FATE. A sketch of the kind of check such an IT could make is below.
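
   Here is a hypothetical, self-contained sketch of that kind of check. The TabletManagement class and the hash partitioning below are stand-ins, not existing Accumulo classes; in the real code tablets would be identified by a KeyExtent and the partitioning would come from whatever scheme the tablet management work settles on.

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch only; TabletManagement and the hash partitioning below are
// illustrative stand-ins, not existing Accumulo classes. It shows the kind of check
// an IT could make: build several tablet management objects, give each a partition
// of the tablet space, and verify every tablet falls in exactly one partition.
public class PartitionedTabletManagementSketch {

  // Stand-in for a per-partition tablet management object.
  static class TabletManagement {
    private final int partition;
    private final int numPartitions;

    TabletManagement(int partition, int numPartitions) {
      this.partition = partition;
      this.numPartitions = numPartitions;
    }

    boolean owns(String tabletId) {
      // simple deterministic hash partitioning; the real scheme could differ
      return Math.floorMod(tabletId.hashCode(), numPartitions) == partition;
    }
  }

  public static void main(String[] args) {
    int numPartitions = 3;
    List<TabletManagement> managers = List.of(new TabletManagement(0, numPartitions),
        new TabletManagement(1, numPartitions), new TabletManagement(2, numPartitions));

    // Tablets identified here by simple strings; Accumulo would use KeyExtent.
    List<String> tablets = List.of("table1;row_m", "table1;<", "table2;row_g", "table2;<");

    // Every tablet must be owned by exactly one tablet management object.
    for (String tablet : tablets) {
      Set<Integer> owners = new TreeSet<>();
      for (int p = 0; p < managers.size(); p++) {
        if (managers.get(p).owns(tablet)) {
          owners.add(p);
        }
      }
      if (owners.size() != 1) {
        throw new AssertionError("Tablet " + tablet + " owned by partitions " + owners);
      }
    }
    System.out.println("Each tablet is owned by exactly one partition");
  }
}
```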
   
   

