Comments below

-- Rick Weber

 

 

From: Todd Lipcon <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, February 10, 2017 at 4:12 PM
To: "[email protected]" <[email protected]>
Subject: Re: Feature request for Kudu 1.3.0

 

On Fri, Feb 10, 2017 at 10:32 AM, Weber, Richard <[email protected]> wrote:

I definitely would push for prioritization on this.

 

Our main use case is less about multiple racks and failures, and more about functionality during the install process. Our clusters are installed in logical regions, and we install 1/3 of a region at a time. That means 1/3 of the cluster can be down for the software install, a reboot, or something else. Allowing rack locality to be logically defined would let the data remain available during normal maintenance operations.

 

That's an interesting use case. How long is that 1/3 of the cluster typically down for? I'd be afraid that, if it's down for more than a couple of minutes, there's a decent chance of also losing a server in the other 2/3 of the cluster, which would leave a tablet at 1/3 replication and unavailable for writes or consistent reads. Is that acceptable for your target use cases?
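
A rough sketch of the arithmetic behind that, in illustrative Python rather than Kudu code: a tablet with 3 replicas needs a Raft majority of 2 live replicas to elect a leader, so writes and consistent reads stop once a second replica is lost.

    # Illustrative only: the Raft majority arithmetic a 3-replica tablet relies on.
    def majority(replication_factor):
        """Smallest number of live replicas that still forms a quorum."""
        return replication_factor // 2 + 1

    def tablet_available(live_replicas, replication_factor=3):
        """Writes and consistent reads need a quorum of live replicas."""
        return live_replicas >= majority(replication_factor)

    print(tablet_available(2))  # True: one replica down for planned maintenance
    print(tablet_available(1))  # False: a second, unplanned failure in that window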

 

Nodes would typically be down for 5-15 minutes or so. Are you saying that if one node goes down, there's an increased chance of one of the other two going down as well? That doesn't sound good if losing a node increases the instability of the system. Additionally, wouldn't the tablets start re-replicating the data once the remaining 2/3 of the replicas detect that the node has been down for too long?

 

How does the system typically handle a node failing?  Is re-replication of data 
not automatic?  (I haven't experimented with this enough)

 

Our install process is along the lines of:

1) copy software to target machine
2) shut down services on machine
3) expand software to final location
4) reboot (if new kernel)
5) restart services

 

 

There are certain things we could consider doing to allow a tablet to fall to 1/3 replication while still remaining online, but I don't think we've considered doing them any time particularly soon. It would be good to get data on how important that is.

 

I don't think it's too critical for a tablet that has fallen to 1/3 replication to still be available, or at least to accept writes. It'd be great to have it serve reads at least. But in our use case, we're looking at (relatively) quick bounces for installs and potential reboots.

 

-Todd


-- Rick Weber

 

 

From: Todd Lipcon <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, February 10, 2017 at 12:45 PM
To: "[email protected]" <[email protected]>
Subject: Re: Feature request for Kudu 1.3.0

 

Hi Jeff, 

 

Thanks for the input on prioritization.

 

I'm curious: do you have more than two racks in your cluster? With Kudu's replication strategy, we need at least three racks to be able to survive a full rack outage. (With just two racks it's impossible to distinguish the loss of a rack from a partition between the two racks.)
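
One way to see this concretely, as an illustrative Python sketch (rack names are made up; this is not Kudu code): with 3 replicas on only 2 racks, some rack necessarily holds 2 of them, so losing or being cut off from that rack leaves the tablet without a majority; with one replica per rack across 3 racks, any single-rack outage still leaves 2 of 3.

    # Illustrative only: which placements keep a Raft majority (2 of 3 replicas)
    # through the loss of any single rack? Rack names are made up.
    def survives_any_single_rack_outage(replica_racks):
        return all(
            sum(1 for rack in replica_racks if rack != down) >= 2
            for down in set(replica_racks)
        )

    print(survives_any_single_rack_outage(["rack1", "rack1", "rack2"]))  # False
    print(survives_any_single_rack_outage(["rack1", "rack2", "rack3"]))  # True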

 

-Todd

 

On Fri, Feb 10, 2017 at 7:27 AM, Jeff Dasch <[email protected]> wrote:

Any chance we can get a fix for KUDU-1535 "Add rack awareness" added to the 
1.3.0 release? 

 

While I appreciate the need for Kerberos and TLS for some production systems, for my use case, data availability really takes priority.

 

I looked at your scoping document, and for what it's worth, I'm fine with a 
shell script that is similar to what Hadoop uses.
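
For reference, Hadoop's topology script is just an executable that takes host names or IP addresses as arguments and prints one rack path per argument. A minimal sketch along those lines (the host-to-rack table below is made up, and this assumes nothing about how Kudu itself would invoke such a script):

    #!/usr/bin/env python
    # Hadoop-style topology script sketch: print one rack path per host argument.
    # The mapping is made up; a real script might derive racks from hostnames or a config file.
    import sys

    RACK_OF = {
        "10.0.1.11": "/region1/rack1",
        "10.0.1.12": "/region1/rack2",
        "10.0.2.11": "/region2/rack1",
    }

    for host in sys.argv[1:]:
        print(RACK_OF.get(host, "/default-rack"))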

 

thanks,

-jeff


-- 

Todd Lipcon
Software Engineer, Cloudera