> on Saturday the 30th of January I am scheduled to give a presentation
> titled "Gluster roadmap, recent improvements and upcoming features":
> 
>   https://fosdem.org/2016/schedule/event/gluster_roadmap/
> 
> I would like to ask all feature owners/developers to reply to this
> email with a short description and a few keywords about their features.
> My plan is to have at most one slide for each feature, so keep it short.

=== NSR

* journal- and server-based (vs. client-based AFR)

* better throughput for many workloads

* faster, more precise repair

* SSD-friendly

* (some day) internal snapshot capability

Some explanation, so you're not just reading the slides, or in case
you're asked.
                                                                
* On throughput, NSR does not split each client's bandwidth among N
servers, and it generates a nice sequential I/O pattern on each server
(into the journals).  These effects tend to outweigh any theoretical
increase in latency due to the extra server-to-server "hop" - as is
clearly demonstrated by other systems already using this approach.
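To make the bandwidth half of that argument concrete, here is a
back-of-envelope sketch (illustrative numbers only, not measurements of
NSR or AFR): with client-side replication the client's uplink is divided
among the N replicas it writes to, while with server-side replication the
client sends one copy and the servers forward it among themselves.

```python
def client_side_write_throughput(client_uplink_mbps: float, replicas: int) -> float:
    """Client writes all N copies itself, so usable throughput is uplink / N."""
    return client_uplink_mbps / replicas

def server_side_write_throughput(client_uplink_mbps: float, server_link_mbps: float) -> float:
    """Client sends one copy; servers replicate among themselves, so the
    limit is the slower of the client uplink and the server-to-server link."""
    return min(client_uplink_mbps, server_link_mbps)

# 3-way replication over hypothetical 10,000 Mb/s links:
print(client_side_write_throughput(10_000, 3))       # ~3333 Mb/s
print(server_side_write_throughput(10_000, 10_000))  # 10000 Mb/s
```

The extra hop adds latency to each write, but as the paragraph above
notes, it does not reduce the client's usable write bandwidth.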
                                                                
* WTF does "SSD-friendly" mean?  It means that NSR can trivially be
configured to put journals on a separate device from the main store.
Since we do full data journaling, that means we can serve newly written
data from that separate device, which can be of a faster type.  This
gives us a simple form of tiering, independently of that implemented in
DHT.  However, unlike Ceph, we do not *require* the journal to be on a
separate device.
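A minimal sketch of why full data journaling gives you tiering for free
(this is an illustration of the idea, not NSR's actual code; the class
and method names are made up): writes land in the journal first, reads
of recently written data are served from the journal's device, and a
background flush later moves data to the main store.

```python
class JournaledStore:
    """Toy model: journal on a fast device (e.g. SSD), main store on a slow one."""

    def __init__(self):
        self.journal = {}     # stands in for the fast journal device
        self.main_store = {}  # stands in for the slower main store

    def write(self, key, data):
        # Full data journaling: the data itself goes into the journal,
        # not just metadata about the operation.
        self.journal[key] = data

    def read(self, key):
        # Newly written data is still in the journal, so it is served
        # from the fast device - the "simple form of tiering".
        if key in self.journal:
            return self.journal[key]
        return self.main_store[key]

    def flush(self):
        # Background step: migrate journaled data to the main store.
        self.main_store.update(self.journal)
        self.journal.clear()

s = JournaledStore()
s.write("a", b"hot data")
print(s.read("a"))  # served from the journal (fast tier)
s.flush()
print(s.read("a"))  # now served from the main store
```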
                                                                
* Similarly, because journals are time-based and separate from the
main store, simply skipping the most recent journal segments on reads
gives us a kind of snapshot.  This is a feature of the *design* that we
might exploit some day, but certainly not of the 4.0 *implementation*.
The nice thing about it is that it's completely independent of the
underlying local filesystem or volume manager, so (unlike our current
LVM-biased approach) it can work on any platform.
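The snapshot idea can be sketched in a few lines (again, a toy model of
the design, not anything in the 4.0 implementation): because journal
entries are time-ordered, replaying only the entries at or before some
time T - i.e. skipping the most recent segments - reconstructs the state
as of T, with no help from the filesystem or volume manager.

```python
def state_as_of(journal, t):
    """journal: list of (timestamp, key, value) entries in time order.
    Returns the key/value state as of time t."""
    state = {}
    for ts, key, value in journal:
        if ts > t:
            break  # skip the most recent entries: everything after t
        state[key] = value
    return state

journal = [
    (1, "x", "old"),
    (2, "y", "kept"),
    (5, "x", "new"),  # written after the snapshot point below
]
print(state_as_of(journal, t=3))  # {'x': 'old', 'y': 'kept'}
print(state_as_of(journal, t=9))  # {'x': 'new', 'y': 'kept'}
```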
_______________________________________________
Gluster-devel mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-devel