On 05/11/2011 10:47 AM, Burnash, James wrote:
> Hi Joe.
>
> Your remarks are always useful and informative - thanks.

Thanks for the compliment!

> As for our slow network throughput - what I didn't put in our
> configuration is that our servers are on 10GbE, but all of our
> clients are on 1GbE because our core network can't (yet) handle the
> load of 10GbE clients - just to fill in that data point.

Got it, that makes sense. We typically see 30-80% of wire speed on connections (depending on contention and other factors). Your numbers fall into that range.
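To put rough numbers on that band, here's a quick back-of-the-envelope sketch in Python (the 30-80% figures are just the range quoted above, not measurements):

    # Rough expected payload throughput for a link running at 30-80% of
    # wire speed. 1 Gb/s is about 125 MB/s before protocol overhead.
    def mb_per_s(link_gbps, efficiency):
        return link_gbps * 1000 / 8 * efficiency

    for link in (1, 10):  # 1GbE clients, 10GbE servers
        print("%2dGbE: ~%.0f-%.0f MB/s"
              % (link, mb_per_s(link, 0.3), mb_per_s(link, 0.8)))

So a 1GbE client topping out somewhere around 38-100 MB/s is right in line with what you'd expect.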

> I find your remark about the stability of 3.1.3 reassuring -
> considering the painful struggle to get there from the 3.0.4
> versions, and the stability issues that I noted in the list over the
> course of my migration.

3.0.5 has been remarkably stable at customer sites ... no complaints. 3.1.x was a struggle until 3.1.2 and 3.1.3. We ran headfirst into some 3.1.4 issues, and I still cannot tell whether they were migration issues or real bugs. 3.2.0 was a test effort that did not succeed internally, so we backed off for the moment.

> Your problems with 3.1.4 and 3.2 are enough reason to not do any
> more upgrades to the production systems yet.

We always recommend staging upgrades on test machines if possible. Sometimes you get bitten by nasty bits you were not expecting (not with Gluster per se, but from an odd interaction).
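For what it's worth, here's a minimal sketch of the kind of smoke test we'd run on a staging pair right after an upgrade - the test1/test2 hostnames, the /export/brick paths, and the volume name are all placeholders for your own test machines; only the gluster volume create/start/info subcommands are the standard CLI:

    # Post-upgrade smoke test: bring up a throwaway replicated volume on a
    # two-node staging pair and verify it starts cleanly.
    import subprocess

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.run(cmd, check=True)  # stop at the first failing step

    run(["gluster", "volume", "create", "scratchvol", "replica", "2",
         "test1:/export/brick", "test2:/export/brick"])
    run(["gluster", "volume", "start", "scratchvol"])
    run(["gluster", "volume", "info", "scratchvol"])

If that much works, mount the scratch volume and push some real I/O at it before you let the new version anywhere near production.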

> As a rule of thumb, I never implement X.0 releases into production
> anyhow - even from Red Hat ... I still have the arrows sticking out
> of my back from doing so in the past :-)

Heh ...

I am pretty happy so far with CentOS/RHEL 5.6. We haven't tested 6.0 much yet. Will do that soon.


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: [email protected]
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615