On Fri, Apr 24, 2009 at 01:41:45PM -0500, Zhengquan Zhang wrote:
> On Fri, Apr 24, 2009 at 02:10:41PM -0400, Douglas A. Tutty wrote:
> 
> Thank you very much for your reply!
> 
> > Well, Debian has different requirements re licensing of modules.  Your
> > guess may be wrong if HP has provided a proprietary module for the
> > kernel that e.g. SUSE has included in its kernel but Debian can't
> > include.  For some things (e.g. the nVidia driver), you can still get an
> > install done and add a module later; for the boot drive that becomes a
> > bit of a problem :)
> 
> My plan is to use the 250 GB hard drive as the system drive and use the
> 2x1 TB hard drives to do RAID 1 backup for several other servers.  So
> what do you mean by 'for the boot drive'? 

The boot drive is whatever drive the system boots from.  In the setup
you're describing, it would be the 250 GB hard drive.  Note that for a
simple backup server, any drive over 1 GB (say 6 GB) will hold all of
the system you need.

For extra redundancy, you may want to experiment and try installing a
system onto a 6 GB partition on one of the drives.  I bet you'll find it
more than big enough.  You could then reinstall, but put a 6 GB (or 10 or
12, whatever) partition at the beginning of each drive, in a raid1
fashion.  In this way, if any drive fails, you'll still be able to boot
the system.
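
If you try that, a rough sketch of the mdadm side might look like the
following.  The device names are just placeholders for whatever your
small partitions end up being, and the Debian installer's partitioner
can do all of this for you; you'd only do it by hand if you rebuild
later:

    # mirror the two small system partitions as a RAID 1 array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # record the array so the initramfs can assemble it at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u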

> > So-called 'fake' raid is, as I understand it, hardware that allows you
> > to configure the raid in the BIOS, but the actual raid happens in
> > Windows software rather than in the hardware.
> > 
> 
> Could you please explain 'in Windows software' a little bit?  Does that
> mean the processing for raid is done in the CPU rather than in the RAID
> controller?

Yes, with fake raid, the hardware can't do any raid processing.  It is
done by the main CPU just as in normal software raid.
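
For plain Linux software raid you can see that for yourself: the raid
work shows up as ordinary kernel threads running on the main CPU.  The
names below are examples from a raid1 array; yours will differ:

    # show the state of any md (software raid) arrays
    cat /proc/mdstat

    # the per-array raid threads, e.g. [md0_raid1], run on the main CPU
    ps ax | grep '\[md'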

> > > And the server will merely be used for backup.
> >  
> > A couple of issues then.  
> > 
> > 1.  Performance may or may not be an issue, depending on how many
> >     other computers will be using the server for data backup at the
> >     same time.  
> 
> three Linux servers (1 development server, 1 mail and web server, and 1
> misc server) will be backing up their data to the backup server I
> purchase.

Will they be backing up all at once or one at a time?

If one at a time, then unless those servers are using raid striping, the
throughput of each server's hard disk will be similar to the throughput
of the backup server's hard disk.  However, if all three boxes will be
spitting data at the backup server as fast as their hard drives (and
network) can move it, then the backup server will need to be of higher
performance if it is to avoid being a bottleneck.
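
If you want a rough number before buying anything, measure what one of
the existing servers' disks can actually push.  Something like this
writes a 1 GB test file (so run it wherever you have the space), and dd
prints the throughput when it finishes:

    # rough sequential write speed; fdatasync makes dd wait for the disk
    dd if=/dev/zero of=./ddtest bs=1M count=1024 conv=fdatasync
    rm ./ddtest

Compare that against the network: 100 Mb/s ethernet tops out around
11-12 MB/s, gigabit around 110 MB/s, so you can see quickly whether the
wire or the disk would be the bottleneck.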

> > 2.  With hardware raid, unless the raid card can save the
> >     configuration to each disk in the array, if something happens to
> >     the card (which could happen if a drive fails and takes down the
> >     controller), then the whole array could be kaput if you put in a
> >     new controller card.
> 
> So this must be the downside of using hardware RAID?
 
Yes, unless the card can save the config to the actual disks.

Another downside is that raid cards have their own cache.  If you don't
have a UPS, then you'll want to set up the cache so that the card doesn't
tell the OS that the data is on disk until it really is on disk (not just
in the card's cache), unless you get a raid card with a battery backup
for its cache.
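
Exactly how you set that depends on the card's own management tool, so I
can't give you the commands.  For plain disks under software raid, the
analogous knob is the drive's own write cache, which you can check (and,
if you have no UPS and want to be careful, turn off) with hdparm:

    # show whether the drive's write cache is enabled
    hdparm -W /dev/sda

    # disable it: slower writes, but less data at risk on power loss
    hdparm -W 0 /dev/sda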

> > 3.  With software raid, the configuration is on the disk itself.
> >     Pop those disks in a new box and they should work (assuming that
> >     the new box's hardware can be booted by the old box's initrd).
> 
> This is great!
> 
> > 
> > 4.  Hardware raid comes into its own with exotic raid types (e.g.
> >     raid50 or raid60), with hot spares, hot swap, auto rebuild, etc.
> 
> I will only use raid 1, because it appears to be simple and effective.

However, re performance (above), if you have three boxes streaming data
to the backup server, you may want raid10 (which you can do with
software raid), or raid50 or raid60 (which can handle multiple drive
failures).  It's always a trade-off.
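
For what it's worth, raid10 under the kernel's md driver is just another
mdadm invocation, and, as in point 3 above, the metadata lives on the
member disks themselves, so the array can be reassembled in a different
box.  A sketch with placeholder device names (it needs four partitions,
so more drives than you listed; illustration only):

    # create a 4-device raid10 array
    mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    # the superblocks on the disks describe the array; a replacement box
    # can discover and assemble it from those alone
    mdadm --examine --scan
    mdadm --assemble --scan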

> > 2.  The importance of the data
> 
> mailing list archives, web data, databases, svn repos, home
> directories... They are very important and I cannot afford to lose any
> of them.
> 
> > 
> > 3.  The number of locations of the backup data.
> > 
> 
> What do you mean by this?  Is it OK to put all of the backups on one
> backup server?

That means a single point of failure.  What happens if the power supply
fries the whole backup server?

Think of it as several steps:

1.      Back up the data somewhere on the same box (optional).

2.      Copy the backup data to the backup server.

3.      Copy the data from the backup server to some remote location,
        either on removable media or to a second backup server.  A lot
        of this depends on the size of the backup set and your options
        for a remote location.  I keep a backup in the bank's safety
        deposit box.
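
For step 2, something as simple as rsync over ssh, run from cron on the
backup server, is usually enough.  A minimal sketch, with made-up host
and path names:

    # pull each server's backup area onto the backup box, preserving
    # permissions, ownership and timestamps
    rsync -a --delete webserver:/srv/backup/ /backup/webserver/

Step 3 can be the same sort of command pointed at removable media or at
the remote box.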


