Any news? Wed latest seems to have been and gone?
I think some kind of optimistic locking is the kind of thing which
pushes Gluster into the high performance bracket (without needing 40Gb
cards, which, when you think about it, really just ends up creating one
big NUMA machine rather than a cluster setup).
If the kernel NLM can do this satisfactorily then it seems like things
get even better?
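To make the "optimistic locking" idea concrete, here is a minimal sketch
(illustrative Python only, not GlusterFS code or APIs): the writer takes
no lock while it works on a cached copy, and the server rejects the
write only if the file's version has moved on in the meantime.

class VersionMismatch(Exception):
    pass

class Brick:
    def __init__(self):
        self.store = {}          # path -> (version, data)

    def read(self, path):
        return self.store.get(path, (0, b""))

    def write(self, path, data, expected_version):
        current_version, _ = self.store.get(path, (0, b""))
        if current_version != expected_version:
            raise VersionMismatch(path)      # another writer got there first
        self.store[path] = (current_version + 1, data)

def optimistic_update(brick, path, mutate):
    # Retry loop: no lock is held while mutate() runs on the cached copy.
    while True:
        version, data = brick.read(path)
        try:
            brick.write(path, mutate(data), version)
            return
        except VersionMismatch:
            continue                         # rare when access is mostly local

brick = Brick()
optimistic_update(brick, "/mail/user1/cur/msg", lambda d: d + b"X-Flag: seen\n")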
Cheers
Ed W
On 26/09/2010 03:02, Craig Carl wrote:
Ed -
I'll follow up on your request with engineering and professional
services, can we get back to you Wednesday latest?
Thanks,
Craig
--
Craig Carl
Sales Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - [email protected]
Twitter - @gluster
Installing Gluster Storage Platform, the movie!
<http://www.youtube.com/user/GlusterStorage>
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/
------------------------------------------------------------------------
From: "Ed W" <[email protected]>
To: [email protected]
Sent: Saturday, September 25, 2010 5:35:21 PM
Subject: Re: [Gluster-devel] Can I bring a development idea to Dev's attention?
Would someone from Gluster like to contact me with a "reasonable" offer
for sponsoring some kind of "optimistic cache" feature, with a specific
view to optimising the NUFA server-side replication architecture?
I would specifically like to optimise the case where you have a flat
namespace on the server (master/master filesharing), but the
applications are arranged so that those running on each brick (NUFA)
generally touch only a subset of all files. E.g. a mailserver with a
flat filesystem, where users are proxied so that each generally touches
only a specific server, or a webserver with a flat namespace, where a
proxy directs specific domains to be served by specific servers.
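For illustration, a rough sketch of the kind of client pinning described
above; the brick names and the hashing rule are invented, not anything
GlusterFS provides:

import hashlib

# Invented brick names; any stable mapping from user/vhost to brick would do.
BRICKS = ["brick1.example.com", "brick2.example.com", "brick3.example.com"]

def home_brick(key):
    # Deterministically map a mailbox or vhost to its preferred brick.
    digest = hashlib.md5(key.encode()).hexdigest()
    return BRICKS[int(digest, 16) % len(BRICKS)]

print(home_brick("[email protected]"))   # this user's mail always lands on one brick
print(home_brick("www.example.org"))     # this vhost is always served from one brick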
In this case I would like a specific brick to realise that it is
predominantly the reader/writer for a subset of all files and optimise
its access at the expense of other bricks which need to access the same
files (i.e. I don't just want to turn up the writeback cache, I want
cache coherency across the entire cluster). I would accept that random
reads/writes to random bricks would be slower, in return for reads and
writes being faster *if* the clients optimise themselves to *prefer*
specific bricks (i.e. NUFA). Such an optimisation should not be set in
stone, of course: if the activity on a subdirectory generally moves
across to another brick, then that brick should eventually gain the
optimised read/write performance (at the expense of another brick's
access to that same subset of files becoming slower).
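A very rough sketch of the ownership/lease behaviour described above
(illustrative Python with invented names and thresholds, not a GlusterFS
design): the brick doing most of the I/O on a path holds the lease and
may cache aggressively; access from any other brick is slower but stays
coherent, and sustained access from elsewhere migrates ownership.

from collections import Counter

MIGRATE_AFTER = 16            # remote accesses before ownership moves (made up)

class LeaseTable:
    def __init__(self):
        self.owner = {}        # path -> brick currently allowed to cache aggressively
        self.remote = {}       # path -> Counter of accesses by non-owner bricks

    def access(self, path, brick):
        owner = self.owner.setdefault(path, brick)
        if brick == owner:
            return "fast"                    # local, cached, cheap
        # Remote access: the owner's cached/dirty state must be recalled first,
        # so coherency is preserved but this path pays the recall cost.
        counts = self.remote.setdefault(path, Counter())
        counts[brick] += 1
        if counts[brick] >= MIGRATE_AFTER:   # activity has clearly moved
            self.owner[path] = brick
            counts.clear()
        return "slow"

table = LeaseTable()
table.access("/mail/user1/", "brick1")       # brick1 becomes the owner
table.access("/mail/user1/", "brick2")       # slower, but still coherent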
Anyone care to quote on this? It seems to be a popular performance
issue on the mailing list, and with some further optimisation it also
looks like the basis for cross-datacenter replication?
Thanks
Ed W
_______________________________________________
Gluster-devel mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/gluster-devel