Look at this guide:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html-single/Administration_Guide/index.html#chap-User_Guide-Setting_Volumes
I noticed this: you must increase the inode size to 512 bytes from
the default 256 bytes, with an example mkfs.xfs invocation like: mkfs.xfs -i
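The command in the snippet is cut off; based on the requirement stated just above (512-byte inodes instead of the 256-byte default), a typical invocation would look like the following. The device path here is only a placeholder:

```shell
# Format a brick device with 512-byte inodes (Red Hat guide requirement).
# /dev/sdb1 is an example device; this destroys any existing data on it.
mkfs.xfs -i size=512 /dev/sdb1
```

You can confirm the inode size on an existing filesystem with `xfs_info <mountpoint>` and checking the `isize=` field.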
Hi Everyone,
I have 2 machines, each with 3x2TB disks, which are almost full of files and
already in sync (e.g. sdb1 on machine1 is packed with files that are identical
to those on machine2's sdb1, and the same holds for the other two disks).
What is the best practice for setting up Gluster on these machines, so that I
could
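The question is cut off, but for a layout like this (two machines with matching, pre-synced disks) the usual pattern is a 2-way replicated, distributed volume with one brick per disk. A rough sketch, with the hostnames, mount points, and volume name all assumed for illustration:

```shell
# Hypothetical layout: each disk mounted under /export on both machines.
# "replica 2" pairs each machine1 brick with its machine2 counterpart.
gluster volume create myvol replica 2 \
  machine1:/export/sdb1 machine2:/export/sdb1 \
  machine1:/export/sdc1 machine2:/export/sdc1 \
  machine1:/export/sdd1 machine2:/export/sdd1
gluster volume start myvol
```

Note that pointing Gluster at bricks that already contain data is not a supported starting point in every release, so check the documentation for your version before trying this on populated disks.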
Sorry for the slow reply.
3) gluster volume set syncdata performance.write-behind off
I applied this setting and the problem went away.
Thank you !
2012/9/21 Anand Avati anand.av...@gmail.com:
On Wed, Sep 12, 2012 at 12:05 AM, siga hiro hirokis...@gmail.com wrote:
Hi.
I make the data area of
On 10/03/2012 02:36 AM, Bryan Whitehead wrote:
[quoted guide link and inode-size note, same as above, snipped]
On 10/03/2012 07:22 AM, Kaleb S. KEITHLEY wrote:
On 10/03/2012 02:36 AM, Bryan Whitehead wrote:
[quoted guide link and inode-size note, same as above, snipped]
On 10/02/2012 10:51 PM, harry mangalam wrote:
Hi All,
Well, it (http://goo.gl/hzxyw) was too good to be true. Under extreme,
extended IO on a 48-core node, some part of the NFS stack collapses and
leads to an IO lockup through NFS. We've replicated it on 48-core and 64-core
nodes, but
I agree on using fewer bricks. Also see if you can use InfiniBand.
Best Regards
Ivan
On 10/2/12 11:29 PM, Robert Hajime Lanning wrote:
On 10/02/12 13:01, ja...@combatyoga.net wrote:
Basically, I'm trying to figure out if Gluster will perform better with
more storage nodes in the storage block
Sorry, couldn't reply earlier, was indisposed the last few days.
Thanks for the input Brian, especially on the 'map' translator.
It led me to another one called the 'switch' scheduler that seems to do
exactly what I want,
i.e. distribute files onto selected bricks, based on file extension.
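For what it's worth, the switch scheduler was configured through a volfile. The fragment below is only an approximate sketch from memory; the option names, patterns, and brick names are all assumptions to be checked against the documentation for your GlusterFS version:

```
# Approximate volfile fragment: route files to bricks by filename pattern.
# Option syntax varies between releases; verify before use.
volume unify0
  type cluster/unify
  option scheduler switch
  # e.g. send *.jpg to brick1 and *.mp3 to brick2
  option switch.case "*.jpg:brick1;*.mp3:brick2"
  subvolumes brick1 brick2
end-volume
```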
Hello all
I'm testing glusterfs 3.3.0-1 on a couple of CentOS 6.3 servers
that run KVM.
After inserting a new empty brick due to a simulated failure, the
self-healing process kicked in, as expected.
After that, however, the VMs became mostly unusable due to IO delay;
it looks like the
On Wed, Oct 03, 2012 at 08:37:55PM +0530, Indivar Nair wrote:
[quoted text, same as above, snipped]