Norberto Meijome wrote:
> indeed. Well, as I said in OP, similar to what Lustre offers.
I don't know what Lustre has, so my response might or might not be what
you're looking for.
> Let's see : what I am after is a way to hook up a few computers ( say, 6) with
> a few HD (say, 6 x 400 GB), and setup across the whole thing a RAID that
> would let me :
> - keep running with no loss of data in case of a disk or node failure (or
> minimal in case of a node failure)
> - can be accessed by clients over standard network protocols (NFS, CIFS, etc)
> - the clients see volumes of x GB (maybe 1 of the total size, maybe 3 of
> different sizes...doesn't matter)
> All this up to here seems doable with FBSD + GEOM architecture. Now the tricky
> ones:
It may or may not be. Do you want to have the drives attached to individual
machines, exported via ggate, mounted on each machine, and then RAID-ed
together? That looks like a large administration overhead. If instead you
have the drives attached to individual machines and one machine that
imports them all in some way (ggate) and exports them via NFS and CIFS,
then you have a single point of failure.
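As a rough sketch of that second layout (host names, device names and the
network are all made up here, and this assumes GEOM_GATE is loaded and
ggated(8) runs on each storage node):

```shell
# On each storage node: export the raw disk over the network.
# /etc/gg.exports grants the aggregator host read-write access.
echo "aggregator RW /dev/da0" >> /etc/gg.exports
ggated

# On the aggregator host: import each remote disk as a local ggate device...
ggatec create -o rw node1 /dev/da0   # shows up as /dev/ggate0
ggatec create -o rw node2 /dev/da0   # shows up as /dev/ggate1

# ...mirror them, put UFS on top, and re-export the result via NFS.
gmirror label -v data /dev/ggate0 /dev/ggate1
newfs /dev/mirror/data
mount /dev/mirror/data /export/data
echo "/export/data -network 10.0.0.0/24" >> /etc/exports
```

Note that every piece of this (gg.exports entries, the gmirror, the NFS
exports) is configured by hand, per node, which is exactly the
administration overhead I mean.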
In truth, there appears to be only one volume manager that's close to
being usable, and that's ZFS in 7. But there's nothing automatic in the
way it can use the drives and (re)export them via NFS, etc.
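For illustration, pooling imported devices under ZFS might look like this
(pool and device names are hypothetical):

```shell
# Pool the imported devices as a single raidz vdev
# (survives the loss of one device)...
zpool create tank raidz ggate0 ggate1 ggate2

# ...and let ZFS manage the NFS export itself.
zfs set sharenfs=on tank
```

But again: if a node behind one of those ggate devices goes away, nothing
here reacts on its own.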
There are no distributed file systems for FreeBSD.
> - scales linearly - if I add 90 storage hosts, the system doesn't get bogged
> down with the management of the disks / stripe distribution
The only things you get with GEOM are low-level building blocks: RAID
transformations and network devices. If you work out for yourself that
(to take a silly example) you can survive a RAID1 built out of 90 hosts,
with the data explicitly sent to each of the hosts individually, then
go ahead. There is no magic or smart behaviour involved anywhere.
> - I can add new hosts (from 6 to 10) and the new storage is available for the
> cluster. If I had LVM, I could simply add the new disks to the physical group,
> and then grow the logical volume... I don't see how 7.0's recursive
> partitioning can help me here.... Can you please explain ?
Only ZFS can do that on 7.
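Growing a ZFS pool is at least a one-liner, comparable to LVM's
pvcreate/vgextend/lvextend sequence (a sketch; pool and device names are
made up):

```shell
# Add four more devices as a second raidz vdev; the extra capacity
# becomes available to every file system in the pool immediately,
# with no volume resize or growfs step.
zpool add tank raidz ggate3 ggate4 ggate5 ggate6
```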
In truth, FreeBSD is really bad for storage work of this kind, and
probably no one has ever done what you need (it's just not supported).
The reasons are:
- there are no distributed file systems for FreeBSD (meaning ones that
have built-in support for operation on multiple nodes)
- UFS panics on the slightest I/O error and is not mountable by multiple
hosts at the same time *at all*, even over gmirror
- gmirror over ggate sort-of works, but ggate network and I/O errors will
(at best) disconnect one of the drives, and you'll need to manually
reconnect it to the gmirror. After that you'll need to rebuild the
whole drive (which is slow over the network). I.e. there's no automatic
recovery.
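The manual recovery described above looks roughly like this (device and
mirror names are hypothetical):

```shell
# Re-import the remote disk that dropped out...
ggatec create -o rw node2 /dev/da0   # comes back as /dev/ggate1

# ...drop the stale record of the disconnected component, re-add it,
# and watch gmirror resynchronize the entire device over the network.
gmirror forget data
gmirror insert data /dev/ggate1
gmirror status data
```

Until someone does those steps by hand, the mirror runs degraded.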