I believe it is more consistent and repeatable to just use the gluster 
command to set this. Example from this page: 
http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options

gluster volume set VOLNAME performance.io-thread-count 64

This also means that your changes will persist across any other "gluster volume 
set" commands. Generally speaking, hand-editing the volume config files is a 
bad idea, IMHO -- glusterd regenerates them, so your edits can be silently 
overwritten.
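For the record, the full sequence looks something like this (VOLNAME is a 
placeholder for your actual volume name; this assumes glusterd is running and 
the volume already exists):

```shell
# Set the io-thread count on the volume; glusterd pushes the change to
# all bricks and rewrites the generated volfiles for you:
gluster volume set VOLNAME performance.io-thread-count 64

# Verify it took effect -- non-default options show up under the
# "Options Reconfigured" section of the volume info output:
gluster volume info VOLNAME

# And to go back to the default later:
gluster volume reset VOLNAME performance.io-thread-count
```

No restart of the brick processes is needed this way, which is another point 
in its favor over hand-editing.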

James

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Justice London
Sent: Wednesday, May 18, 2011 3:49 PM
To: 'Tomasz Chmielewski'; 'Anthony J. Biacco'
Cc: [email protected]
Subject: Re: [Gluster-users] gluster 3.2.0 - totally broken?

Whoops, I forgot the thread-count edit for the brick instance config:

volume <volname>-io-threads
    type performance/io-threads
    option thread-count 64
    subvolumes <volname>-locks
end-volume

Justice London
Systems Administrator

phone 800-397-3743 ext. 7005
fax 760-510-0299
web www.lawinfo.com
e-mail [email protected]


-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Tomasz Chmielewski
Sent: Wednesday, May 18, 2011 10:05 AM
To: Anthony J. Biacco
Cc: [email protected]
Subject: Re: [Gluster-users] gluster 3.2.0 - totally broken?

On 18.05.2011 18:56, Anthony J. Biacco wrote:
> I'm using it in real-world production, lot of small files (apache 
> webroot mounts mostly). I've seen a bunch of split-brain and self-heal 
> failing when I first did the switch. After I removed and recreated the 
> dirs it seemed to be fine for about a week now; yeah not long, I know.
>
> I second the notion that it'd be nice to see a list of what files/dirs 
> gluster thinks are out of sync or can't heal. Right now you gotta go 
> diving into the logs.
>
> I'm actually thinking of downgrading to 3.1.3 from 3.2.0. Wonder if 
> I'd have any ill-effects on the volume with a simple rpm downgrade and 
> daemon restart.

I've been using 3.2.0 for a while, but I had a problem with userspace programs 
"hanging" on accessing some files on the gluster mount (described here on the 
list).

I downgraded to 3.1.4 (remove 3.2.0 rpm and config, install 3.1.4 rpm, add 
nodes) and it works fine for me.

3.0.x was also crashing for me when an SSHFS-like mount was made to the server 
with the gluster mount (and reads/writes were made from the gluster mount 
through it).



--
Tomasz Chmielewski
http://wpkg.org
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

