Howdy,

I'm looking at using Gluster for one of our storage projects and decided to 
start by trying out the Gluster 3.1 beta on a couple of test virtual machines, 
creating a single 32 GB volume. The volume is a distributed volume made up of 
four 8 GB bricks.

As a test I ran 3 separate copies to /mnt/glusterfs, each copying a different 
ISO file of approximately 3 GB from my client (Fedora 13 x86_64, mounted using 
the Gluster native client rather than NFS: "mount -t glusterfs 10.0.0.1:vol1 
/mnt/glusterfs").

cp ~/1.iso /mnt/glusterfs/iso/
cp ~/2.iso /mnt/glusterfs/iso/
cp ~/3.iso /mnt/glusterfs/iso/

I expected each copy to result in the ISO file ending up on a different brick; 
instead, they all went to the same brick. Since each test brick is only 8 GB, 
the final copy failed when that brick reached full capacity.
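
To see which brick each file landed on, the backend export directories can be 
listed directly on each server, e.g. (brick paths taken from the 'gluster 
volume info' output below):

  # on srv-01 (10.0.0.1)
  ls -lh /export/lun0/iso/ /export/lun1/iso/
  # on srv-02 (10.0.0.2)
  ls -lh /export/lun2/iso/ /export/lun3/iso/

In my case all three ISOs ended up under the same brick's iso/ directory.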

Questions:
 1. Shouldn't GlusterFS distribute the separate files across the bricks 
(e.g. file-01 -> brick1, file-02 -> brick3, file-03 -> brickXX)?
 2. Shouldn't GlusterFS select a destination brick with enough available 
capacity to store the file being transferred?
 3. Is there a gluster command that will display the current capacity and 
utilization of each brick in a volume? For example, in Lustre you can do an 
'lfs df'. I initially expected 'gluster volume info' to provide that 
information. (A possible workaround is sketched after this list.)
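
Regarding question 3, the only workaround I can think of is checking each 
brick's backend filesystem directly on the servers, e.g.:

  # on srv-01 (10.0.0.1)
  df -h /export/lun0 /export/lun1
  # on srv-02 (10.0.0.2)
  df -h /export/lun2 /export/lun3

That's fine for a two-node test setup, but it won't scale, hence the question 
about a built-in equivalent of 'lfs df'.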

I've read through the Beta doc, but didn't find the above info:
http://www.gluster.com/community/documentation/index.php/GlusterFS_3.1beta#Step_3:_Mounting_a_volume

Configuration for the test servers:
Server/Brick     Size   FS
srv-01
 /export/lun0 -> 8G     ext4
 /export/lun1 -> 8G     ext4
srv-02
 /export/lun2 -> 8G     ext4
 /export/lun3 -> 8G     ext4

gluster volume info

Volume Name: vol1
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/export/lun1
Brick2: 10.0.0.1:/export/lun0
Brick3: 10.0.0.2:/export/lun2
Brick4: 10.0.0.2:/export/lun3
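
For anyone reproducing the setup, a volume like this is created with the 
standard 3.1 CLI, something along the lines of (brick order as in the listing 
above):

  gluster volume create vol1 transport tcp \
      10.0.0.1:/export/lun1 10.0.0.1:/export/lun0 \
      10.0.0.2:/export/lun2 10.0.0.2:/export/lun3
  gluster volume start vol1

with the client mount as shown earlier.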

=================================
Mike Hanby
[email protected]
UAB School of Engineering
Information Systems Specialist II
IT HPCS / Research Computing


