I've had to put this cluster into production already, but I'll have
hardware for my lab by the end of this month or the beginning of April.

On Thu, Mar 21, 2013 at 5:09 AM, Jason Davis <scr...@gmail.com> wrote:
> Are you planning on including observations on IOPS and latency? Would be
> curious to see what performance penalty is incurred when you have a brick
> failure.
>
> I agree, having a writeup will be awesome. Thanks for your hard work!
> On Mar 21, 2013 1:03 AM, "Ahmad Emneina" <aemne...@gmail.com> wrote:
>
>> On Mar 20, 2013, at 4:31 PM, Bryan Whitehead <dri...@megahappy.net> wrote:
>>
>> > I've gotten some requests to give some idea of how to set up
>> > CloudStack with GlusterFS and what kind of numbers can be expected.
>> > I'm working on a more complete writeup, but thought I'd send
>> > something to the mailing list so I can get an understanding of what
>> > questions people have.
>> >
>> > Since I'm adding another (small) cluster to my zone, I wanted to get
>> > some hardware numbers and disk access speeds out there.
>> >
>> > Hardware consists of two servers with the following config:
>> > 1 6-core E5-1650 @ 3.2GHz (hyper-threading makes it look like 12 in
>> > /proc/cpuinfo)
>> > 64GB RAM
>> > RAID-10, 4 SAS disks @ 3TB each
>> > InfiniBand Mellanox MT26428 @ 40Gb/sec
>> >
>> > I get ~300MB/sec disk write speeds on the raw XFS-backed filesystem.
>> > Command used: dd if=/dev/zero of=/gluster/qcow/temp.$SIZE count=$SIZE
>> > bs=1M oflag=sync
>> > SIZE is usually 20000 to 40000 (MB, since bs=1M) when I run my tests.
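>> >
>> > A minimal wrapper around that test (just a sketch, paths as above):
>> >
>> > #!/bin/bash
>> > # Sequential write test against a given mountpoint.
>> > TARGET=$1                  # e.g. /gluster/0 (raw) or the FUSE mount
>> > SIZE=${2:-20000}           # MB to write (bs=1M)
>> > dd if=/dev/zero of=$TARGET/temp.$SIZE count=$SIZE bs=1M oflag=sync
>> > rm -f $TARGET/temp.$SIZE   # clean up the test file
>> >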
>> > My XFS filesystem was built with these options:
>> > mkfs.xfs -i size=512 /dev/vg_kvm/glust0
>> >
>> > I mount the XFS volume with this /etc/fstab entry:
>> > /dev/vg_kvm/glust0 /gluster/0 xfs defaults,inode64 0 0
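>> >
>> > (Equivalently, mounting by hand for a quick test - just an
>> > illustration: mount -t xfs -o inode64 /dev/vg_kvm/glust0 /gluster/0)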
>> >
>> > Here is the output of my gluster volume:
>> > Volume Name: custqcow
>> > Type: Replicate
>> > Volume ID: d8d8570c-73ba-4b06-811e-2030d601cfaa
>> > Status: Started
>> > Number of Bricks: 1 x 2 = 2
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: 172.16.2.13:/gluster/0
>> > Brick2: 172.16.2.14:/gluster/0
>> > Options Reconfigured:
>> > performance.io-thread-count: 64
>> > nfs.disable: on
>> > performance.least-prio-threads: 8
>> > performance.normal-prio-threads: 32
>> > performance.high-prio-threads: 64
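>> >
>> > For reference, a volume with this profile would be created and tuned
>> > roughly like this (just a sketch reconstructed from the output
>> > above):
>> >
>> > gluster peer probe 172.16.2.14        # run once from 172.16.2.13
>> > gluster volume create custqcow replica 2 transport tcp \
>> >     172.16.2.13:/gluster/0 172.16.2.14:/gluster/0
>> > gluster volume set custqcow nfs.disable on
>> > gluster volume set custqcow performance.io-thread-count 64
>> > gluster volume set custqcow performance.least-prio-threads 8
>> > gluster volume set custqcow performance.normal-prio-threads 32
>> > gluster volume set custqcow performance.high-prio-threads 64
>> > gluster volume start custqcow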
>> >
>> > here is my mount entry in /etc/fstab:
>> > 172.16.2.13:custqcow /gluster/qcow2 glusterfs defaults,_netdev 0 0
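>> >
>> > (Or by hand for testing: mount -t glusterfs 172.16.2.13:custqcow
>> > /gluster/qcow2 - the _netdev option only matters at boot, to delay
>> > the mount until the network is up.)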
>> >
>> > After adding the gluster layer (FUSE mount), write speeds per process
>> > drop to ~150MB/sec.
>> > If I run three of the above dd commands simultaneously I get
>> > ~100MB/sec per dd. Adding more reduces the rate proportionally and
>> > evenly as the dd's compete for IO over the GlusterFS FUSE mountpoint.
>> > So while one process with one filehandle cannot max out the
>> > underlying disks, many processes collectively get the same speed out
>> > of the gluster layer as the filesystem delivers directly. With many
>> > VMs running I can easily get full IO out of my underlying disks.
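>> >
>> > The "three at once" test is just N copies of the same dd running in
>> > the background against the FUSE mount - a sketch:
>> >
>> > N=3; SIZE=20000
>> > for i in $(seq 1 $N); do
>> >   dd if=/dev/zero of=/gluster/qcow2/temp.$i count=$SIZE bs=1M oflag=sync &
>> > done
>> > wait    # each dd prints its own throughput as it finishes
>> > rm -f /gluster/qcow2/temp.*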
>> >
>> > here is output from mount on 1 of the boxes:
>> > /dev/mapper/system-root on / type ext4 (rw)
>> > proc on /proc type proc (rw)
>> > sysfs on /sys type sysfs (rw)
>> > devpts on /dev/pts type devpts (rw,gid=5,mode=620)
>> > tmpfs on /dev/shm type tmpfs (rw)
>> > /dev/sda1 on /boot type ext4 (rw)
>> > none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
>> > /dev/mapper/vg_kvm-glust0 on /gluster/0 type xfs (rw,inode64)
>> > 172.16.2.13:custqcow on /gluster/qcow2 type fuse.glusterfs
>> > (rw,default_permissions,allow_other,max_read=131072)
>> >
>> > here is a df:
>> > Filesystem            Size  Used Avail Use% Mounted on
>> > /dev/mapper/system-root
>> >                       81G  1.6G   76G   3% /
>> > tmpfs                  32G     0   32G   0% /dev/shm
>> > /dev/sda1             485M   52M  408M  12% /boot
>> > /dev/mapper/vg_kvm-glust0
>> >                      4.0T   33M  4.0T   1% /gluster/0
>> > 172.16.2.13:custqcow  4.0T   33M  4.0T   1% /gluster/qcow2
>> >
>> > NOTES: I have larger CloudStack clusters in production with similar
>> > setups, but they are Distributed-Replicate (6 bricks with replica 2).
>> > Native InfiniBand/RDMA is currently extremely crappy in gluster - at
>> > best I've been able to get 45MB/sec per process, and with higher
>> > load. Everything above is IPoIB. GlusterFS version is 3.3.1.
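>> >
>> > (For reference, a 6-brick replica-2 volume like that would be created
>> > along these lines - the hostnames here are made up:
>> >
>> > gluster volume create prodqcow replica 2 transport tcp \
>> >     host1:/gluster/0 host2:/gluster/0 host3:/gluster/0 \
>> >     host4:/gluster/0 host5:/gluster/0 host6:/gluster/0
>> >
>> > Bricks are grouped into replica pairs in the order listed, so this
>> > gives 3 x 2 = 6.)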
>> >
>> > I run the cloud-agent and qemu-kvm on CentOS 6.3 (old cluster); this
>> > cluster is qemu-kvm on CentOS 6.4. Primary storage is a
>> > SharedMountPoint at /gluster/qcow2/images.
>> >
>> > -Bryan
>>
>> No real questions here, just eager to check out the writeups. This
>> seems insanely valuable to have out there for CloudStack users.
