02.09.2014, 07:18, "Joe Julian" <[email protected]>:
> My first suggestion, not phrased in the very carefully chosen words I 
> usually use in order to be sure to be read as "nice", would be to stop 
> screwing up the system design by randomly doing things to the bricks that you 
> clearly don't understand and instead let the software do what it was designed 
> to do.

Mostly I followed the Gluster documentation and your blog ;). Be assured that 
I'm not into doing "random" things; so far that was Gluster's part, but it 
wasn't designed as a PRNG, right?

> I phrase it harshly because you started your email with hostility, blaming 
> developers that demonstrate more talent before breakfast than you enumerate 
> in your description below over, apparently, your three-year-long experience 
> (based on the version history).
Ever thought about the fact that one has to test EVERY CASE before deploying 
it at customer sites?

> Most of the problems you describe can be caused directly by your "fixes" and 
> cannot be caused by resizing xfs.
> Mismatching layouts come from having two fully populated servers with no 
> xattrs, as you describe creating later.
The mentioned volume was freshly created and completely intact until I resized 
it (yep, inode numbers don't change).
It was fully populated via a FUSE mount.
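
For what it's worth, this is roughly how I inspected the bricks before and 
after the resize. A minimal Python sketch of what "getfattr -d -m trusted -e 
hex" shows; the brick path is made up, and it has to run as root, since 
trusted.* xattrs are root-only on Linux:

    # dump_xattrs.py - print the Gluster trusted.* xattrs of a brick directory
    import binascii
    import os

    BRICK_DIR = "/export/brick1"  # assumption: replace with your brick path

    for name in os.listxattr(BRICK_DIR):
        if name.startswith("trusted."):
            value = os.getxattr(BRICK_DIR, name)
            # e.g. trusted.gfid, trusted.glusterfs.dht, trusted.glusterfs.volume-id
            print("%s = 0x%s" % (name, binascii.hexlify(value).decode()))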

>> server1 was up and happy when you mount from it, but server2 spewed 
>> input/output errors on several directories (for now just in that volume),
> Illogical as there should be no difference in operation regardless of which 
> server provides the client configuration. Did you somehow get the vols tree 
> out of sync between servers?

(If you mount it via FUSE on server2, some dirs fail; FUSE on server1 will 
provide the full data. I also checked with "volume heal" and md5sum that they 
are in sync.)
The funny thing is that it also seems to be a hostname resolution issue (e.g. 
their OS system names are different from "glustN.somedomain.com", but they try 
to connect via those hostnames when utilising FUSE mounts, so I put everything 
in /etc/hosts; I guess it was too late).
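
To rule out name resolution for good, I now run a quick check like this on 
every server; it should print identical addresses everywhere (a sketch, the 
peer names are just placeholders following the pattern above):

    # check_names.py - verify that every peer hostname resolves, and to what
    import socket

    PEERS = ["glust1.somedomain.com", "glust2.somedomain.com"]  # placeholders

    for host in PEERS:
        try:
            print("%s -> %s" % (host, socket.gethostbyname(host)))
        except socket.gaierror as err:
            print("%s -> FAILED (%s)" % (host, err))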

> I'm sure you can get a refund for the software.
I just want it to work!
But yeah, for that amount of pain I would pay a personal trainer...

> Perhaps you're getting "tons" because you've already broken it. I don't 
> get too much in my logs.


>> really, even your worst *insertBigCorp* DB server will spit out fewer logs,
> Absolutely. In fact I think the bigger the Corp, the smaller and more 
> obfuscated the logs. I guess that's a good thing?
^^ For sure they want you to pay for SLAs; funding FOSS software nowadays is 
also "pay for support" to a certain extent.

> BTW... You never backed up your accusation that there is a "data black hole".
Remember when shd died? The glusterfs process also ate too much RAM at random 
times. I just killed the volume every time.

>
> This can be solved. It's a complete mess though, by this account, and won't 
> be easy. Your gfids and dht mappings may be mismatched, you may have 
> directories and files with the same path on different bricks, and who knows 
> what state your .glusterfs directory is in. 
^^ On the mentioned volume, as I told you, there is no .glusterfs anymore; 
they had to die (really, I was in the mood for destruction; I didn't put 
"TRIGGER WARNING" in the subject, sorry for that). All xattrs were cleared out.

To make it clear: I just need some way to make it reliable and working, and if 
it's necessary I don't give a f*** about syncing for a week, but in fact the 
data are the same on both sides.
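
For anyone who wants to verify that claim: a minimal sketch of the md5 walk I 
did on each brick (the brick path is an example); pipe the output through sort 
and diff it between the two servers:

    # md5_walk.py - print "md5  relative/path" for every file under a brick,
    # skipping .glusterfs; sort and diff the output of both servers
    import hashlib
    import os

    BRICK_DIR = "/export/brick1"  # example path

    for root, dirs, files in os.walk(BRICK_DIR):
        dirs[:] = [d for d in dirs if d != ".glusterfs"]
        for fname in sorted(files):
            path = os.path.join(root, fname)
            md5 = hashlib.md5()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    md5.update(chunk)
            print("%s  %s" % (md5.hexdigest(),
                              os.path.relpath(path, BRICK_DIR)))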
I'll try a fresh installation on a third server; maybe I'll find differences 
in the configs...
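
The idea is simply to pull /var/lib/glusterd from each node onto one box and 
diff the trees; a sketch, assuming the copies were already fetched to /tmp 
(e.g. via rsync):

    # diff_configs.py - compare two copied glusterd config trees side by side
    import filecmp

    OLD = "/tmp/glusterd-server1"  # assumption: local copies of the configs
    NEW = "/tmp/glusterd-server3"

    cmp = filecmp.dircmp(OLD, NEW)
    cmp.report_full_closure()  # recursively lists differing and unique files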

Maybe see you on IRC.
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
