Would someone from Gluster like to contact me with a "reasonable" offer for sponsoring some kind of "optimistic cache" feature, with a specific view to optimising the NUFA server-side replication architecture?

I would specifically like to optimise the case where you have a flat namespace on the server side (master/master file sharing), but the applications running on each brick (NUFA) are arranged so that, in general, they only touch a subset of all files. For example: a mailserver with a flat filesystem where users are proxied so that each one generally touches only a specific server, or a webserver with a flat namespace where a proxy directs specific domains to be served by specific servers.

In this case I would like a specific brick to realise that it is predominantly the reader/writer for a subset of all files and to optimise its access at the expense of other bricks which need to access the same files (i.e. I don't just want to turn up the writeback cache, I want cache coherency across the entire cluster). I would accept that random reads/writes to random bricks would be slower, in return for reads/writes being faster *if* the clients optimise themselves to *prefer* to touch specific bricks (i.e. NUFA). Such an optimisation should not be set in stone, of course: if the activity on a subdirectory generally seems to move across to another brick, then that brick should eventually optimise its read/write performance (at the expense that other bricks' access to that same subset of files now becomes slower).
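
To make the behaviour concrete, here is a minimal, purely illustrative Python sketch of the kind of policy I have in mind. None of this is GlusterFS code; the names (CacheLeaseTracker, record_access, flush_and_invalidate) and the numbers are hypothetical placeholders. The idea is simply that the brick generating most of the recent traffic for a path holds the write-back lease, and the lease migrates (after a flush/invalidate on the old owner, to keep the cluster coherent) once another brick clearly dominates the access pattern.

# Illustrative sketch only -- not GlusterFS code. All names are hypothetical.
# Models the policy above: the brick producing most of the recent accesses
# to a path holds the write-back lease; when the pattern shifts, the old
# owner flushes and invalidates, and the lease moves to the new brick.

from collections import Counter, deque

class CacheLeaseTracker:
    def __init__(self, window=100, takeover_share=0.7):
        self.window = window                  # recent accesses remembered per path
        self.takeover_share = takeover_share  # share needed to take over the lease
        self.history = {}                     # path -> deque of brick ids
        self.owner = {}                       # path -> brick currently holding the lease

    def record_access(self, path, brick):
        h = self.history.setdefault(path, deque(maxlen=self.window))
        h.append(brick)
        counts = Counter(h)
        top_brick, top_hits = counts.most_common(1)[0]
        share = top_hits / len(h)
        current = self.owner.get(path)
        if current is None:
            # first accessor becomes the optimistic owner
            self.owner[path] = brick
        elif top_brick != current and share >= self.takeover_share:
            # activity has clearly moved: old owner writes back and drops
            # its cached copy, new owner gets the fast local path
            self.flush_and_invalidate(current, path)
            self.owner[path] = top_brick

    def current_owner(self, path):
        return self.owner.get(path)

    def flush_and_invalidate(self, brick, path):
        # placeholder for the coherency step: write back dirty data held on
        # `brick` for `path` and invalidate its cached copy
        pass

The point is only that the preferred brick is discovered from the measured access pattern rather than configured statically, which is the "not set in stone" behaviour described above.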

Would anyone care to quote on this? It seems to be a popular performance issue on the mailing list, and with some further optimisation it also looks like the basis for cross-datacenter replication.


Thanks

Ed W
