Greg,

I'll admit to not understanding your response and would really 
appreciate a little more explanation.  I only have two servers
with 8 x 2TB each in AFR-DHT so far, but we are growing and will
continue to do so basically forever.

Q: If you are putting all your bricks into a single AFR-DHT volume
does any of this matter?
A:

Maybe I'm confused, but it seems that by keeping the drives as individual
bricks and using Gluster AFR-DHT to consolidate them into a single
volume you are:

1) Maximizing your disk storage (i.e. no disks lost to RAID5 or
RAID6 overhead)

2) Limiting rebuilds after a disk failure to a single disk pair,
thus shortening rebuild times and keeping the scope of a rebuild
clearly defined.

3) Making it easier to grow the volume, because growth can happen
just two drives/bricks at a time (which isn't really possible if you
consolidate via RAID5/RAID6 first).
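To make point 1 concrete, here's the back-of-the-envelope arithmetic for my two 8 x 2TB servers (a rough sketch under my own assumptions; the function names are just illustrative, and the RAID6 figure assumes two drives per node go to parity):

```python
# Rough usable-capacity comparison for a 2-node, 8 x 2TB layout with
# replica 2 across the nodes. Illustrative assumptions, not measurements.
DRIVES_PER_NODE = 8
DRIVE_TB = 2

def usable_jbod_replica2(drives, size_tb):
    # Each drive is its own brick; replica 2 keeps one full copy per
    # node, so usable capacity is one node's worth of raw disk.
    return drives * size_tb

def usable_raid6_replica2(drives, size_tb):
    # Each node consolidates its drives into one RAID6 brick first,
    # giving up two drives' worth of capacity to parity.
    return (drives - 2) * size_tb

print(usable_jbod_replica2(DRIVES_PER_NODE, DRIVE_TB))   # 16 TB usable
print(usable_raid6_replica2(DRIVES_PER_NODE, DRIVE_TB))  # 12 TB usable
```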

Thanks in advance,

Larry Bates
vitalEsafe, Inc.


Message: 1
Date: Mon, 23 Jan 2012 15:54:45 -0600
From: [email protected]
Subject: Re: [Gluster-users] Best practices?
To: Brian Candler <[email protected]>
Cc: [email protected], [email protected]
Message-ID: <ofbc7c5446.87ee4a00-on8625798e.00500253-8625798e.00785...@uscmail.uscourts.gov>
Content-Type: text/plain; charset=US-ASCII

[email protected] wrote on 01/22/2012 04:17:02 PM:

>
> Suppose I start building nodes with (say) 24 drives each in them.
>
> Would the standard/recommended approach be to make each drive its own
> filesystem, and export 24 separate bricks, server1:/data1 ..
> server1:/data24 ?  Making a distributed replicated volume between this
> and another server would then have to list all 48 drives individually.
>
> At the other extreme, I could put all 24 drives into some flavour of
> stripe or RAID and export a single filesystem out of that.
>
> It seems to me that having separate filesystems per disk could be the
> easiest to understand and to recover data from, and allow volume 
> 'hot spots' to be measured and controlled, at the expense of having 
> to add each brick separately into a volume.
>
> I was trying to find some current best-practices or system design
> guidelines on the wiki, but unfortunately a lot of what I find is 
> marked "out of date",
> e.g.
> http://gluster.org/community/documentation/index.php/
> Guide_to_Optimizing_GlusterFS
> http://gluster.org/community/documentation/index.php/Best_Practices_v1.3
> [the latter is not marked out of date, but links to pages which are]
>
> Also the glusterfs3.2 admin guide seems to dodge this issue, assuming you
> already have your bricks prepared before telling you how to add them into
> a volume.
>
> But if you can point me at some recommended reading, I'd be more than
> happy to read it :-)

It's been talked about a few times on the list in abstract but I can give
you one lesson learned from our environment.

The volume-to-brick ratio is a sliding scale: you can have more of
one, but then you need to have less of the other.

So taking your example above:

2 nodes
24 disks per node

Let's lay that out as possible configurations:

2 nodes
24 bricks per node per volume
1 volume
---------
= 24 running processes and 24 ports per node

2 nodes
24 bricks per node per volume
100 volumes
---------
= 2400 running processes and 2400 ports per node

2 nodes
1 brick per node per volume
24 volumes
---------
= 24 running processes and 24 ports per node

2 nodes
1 brick per node per volume
2400 volumes
---------
= 2400 running processes and 2400 ports per node
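The arithmetic above can be sketched in a few lines (assuming, as in my numbers, one brick daemon and one listening port per brick per node):

```python
# Per-node process/port counts for the layouts above, assuming one
# brick daemon (glusterfsd) and one port per brick per node.
def per_node_processes(bricks_per_node_per_volume, volumes):
    """Brick processes (and ports) one node must run for a layout."""
    return bricks_per_node_per_volume * volumes

layouts = [(24, 1), (24, 100), (1, 24), (1, 2400)]
for bricks, volumes in layouts:
    print(f"{bricks} bricks/volume x {volumes} volumes = "
          f"{per_node_processes(bricks, volumes)} processes/ports per node")
```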


More processes and ports mean more potential for port conflicts,
connectivity issues, open-file limits (ulimit), etc.
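A quick way to see the per-process open-file ceiling your brick daemons will inherit (a generic check, nothing Gluster-specific):

```python
# Print the soft and hard open-file limits (what `ulimit -n` reports);
# every brick connection and open file descriptor counts against them.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")
```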

That's not the only thing to keep in mind, but it's a poorly documented
one that burned me, so :)

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
