That Dell PERC/2 is really an LSI MegaRAID. Each of those channels will
handle a dozen or so drives (SCSI IDs 1-6 and 8-14 for drives, with ID 7
reserved for the controller). I do see an issue here: the PERC/2 is out of
production, and you may have trouble finding a replacement if your card
fails. If that card dies, you will lose everything, because you will likely
not be able to replace the RAID card with another that can read the old
array. That is especially true with Dell, since they sometimes used
different brands for different controllers and also shipped modified
firmware on their controllers.
I would suggest that you use Linux software RAID, since you are running
Gutsy and have 4 CPUs. Linux software RAID is very portable: if your
motherboard or RAID card fries, you can hook the drives up with USB
adapters and put the RAID back together, or move the RAID between machines.
With the hardware RAID, a failing card will be a mess.
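As a rough sketch of how portable md RAID is, reassembling an array after
moving the drives to another Linux box usually boils down to something like
this (the device names are just examples, not your actual layout):

    # inspect the md superblocks on the moved drives to see what arrays they belong to
    mdadm --examine /dev/sd[b-e]1

    # assemble every array mdadm can detect from those superblocks
    mdadm --assemble --scan

Depending on the mdadm version you may need ARRAY lines in /etc/mdadm.conf
for --scan to pick things up, but the point stands: the array metadata
lives on the disks, not on a controller.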
The hardware RAID would likely be somewhat faster, but not fast enough to
be worth the risk. Alternatively, you could hunt down a second PERC/2 so
you have one on hand if this one fails. I am a big fan of Linux software
RAID because it is so flexible and portable. Also, if you make your root
and /boot filesystems mirrored, you can still boot if one drive fails, on
any Linux system, without relying on that card.
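For example, setting up that mirror with md is roughly the following (a
sketch only; the partition names are assumptions):

    # mirror two partitions into a RAID1 device for /boot (or root)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext3 /dev/md0

Just remember to install the bootloader on both drives so either one can
boot the machine on its own.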
Use those drives however you like, but I do highly recommend putting LVM on
them for manageability. With LVM on your system drives, you can take a
snapshot before system upgrades and revert if the upgrade breaks something.
Very nice for upgrading BackupPC systems :)
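A minimal sketch of that snapshot trick, assuming a volume group named vg0
with a logical volume named root (both names are examples):

    # take a snapshot of the root LV before upgrading
    lvcreate --snapshot --size 5G --name root-preupgrade /dev/vg0/root

    # ...run the upgrade...

    # if the upgrade breaks, merge the snapshot back to roll root to its pre-upgrade state
    lvconvert --merge /dev/vg0/root-preupgrade

Note that lvconvert --merge needs a reasonably recent LVM2; on older setups
you would instead mount the snapshot and copy files back by hand.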
1GB of RAM will be fine as long as you don't have any clients with more
than about 2 million files. That seems like a lot, but a Linux system can
hit 500,000 files pretty easily, and with a bunch of images or something
that number can go up fast. rsync has some heavy memory usage when the file
count gets really high: it needs roughly 100 bytes per file, so 500,000
files is about 50MB. Then multiply that by the number of clients being
backed up at a time; 4 concurrent clients with 1 million files each is
400MB or so.
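The estimate is just multiplication; a quick back-of-the-envelope check
(the 100 bytes/file figure is a rule of thumb, not an exact number):

    # rough rsync memory estimate: ~100 bytes of bookkeeping per file
    files_per_client=1000000
    concurrent_clients=4
    echo "$(( files_per_client * 100 * concurrent_clients / 1024 / 1024 )) MB"   # prints 381 MB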
One nice thing about that PERC/2 is that you get to take advantage of the
cache memory even if you don't use the hardware RAID :) You probably have
32MB of cache or so? That will make BackupPC run much better with multiple
clients backing up.
I run a PERC/4 with 256MB of cache and 10 concurrent clients, and the
drives never even hiccup because of that fat cache. One server, which just
backs up servers, runs 36GB Seagate 15k RPM drives on Ubuntu Gutsy, 10 of
those drives + 1 hot spare on software RAID5 for just over 300GB, and the
drives make a very consistent write sound every 10 seconds or so as that
cache gets flushed to disk. My other server runs 4 250GB SATA disks backing
up XP desktops and laptops; it has no extra cache, and the disks get worked
a lot harder during backups.
On Feb 5, 2008 7:36 AM, Jonathan Dumaresq <[EMAIL PROTECTED]> wrote:
> Wow, good explanation here. I will try to answer some of the questions.
>
> 1- The OS I plan to use is Ubuntu (Gutsy) Server Edition.
>
> 2- The expansion card that I have is a PERC/II, which I think is a RAID
> controller. I have 4 channels on it; I use 3 of them.
>
> If I understand my RAID card correctly, I don't think I will be able to do
> a 16-HDD array, since they are not all on the same channel. But I could be
> wrong on that; this is nearly the first time I have played with hardware
> RAID. I use software RAID on Linux for mirroring.
>
> I have 1 GB of RAM on the server.
>
> The array setup is as follows:
>
> 1 array of 2 HDD
> 1 array of 6 HDD
> 1 array of 10 HDD
>
> Jonathan
>
> ------------------------------
>
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of dan
> Sent: February 4, 2008 22:15
> To: Justin Best
> Cc: backuppc-users
> Subject: Re: [BackupPC-users] Information needed
>
> My suggestion is:
>
> For the OS: 2 disks in RAID1, which is a mirror. You will likely have no
> use for high performance here; all the work will be done on the other
> array(s).
>
> For Samba: have you considered putting all 16 drives into one array, a
> RAID6 + hot spare? RAID6 is RAID with 2 redundant disks. I suggest this
> because you will likely be doing backups at night and Samba sharing during
> the day, so you would get better performance and more flexible file
> storage.
>
> On top of that large 16-drive RAID6+HS you would then run LVM, and you
> could separate out the volumes to your liking. This would give you roughly
> 500GB unformatted, with redundancy.
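> A rough sketch of that layout with md and LVM (the device names, VG name,
> and LV name are all examples):
>
>     # 16 drives: RAID6 across 15 of them, with 1 hot spare
>     mdadm --create /dev/md1 --level=6 --raid-devices=15 --spare-devices=1 /dev/sd[b-q]1
>
>     # put LVM on top so the space can be carved up and resized later
>     pvcreate /dev/md1
>     vgcreate backupvg /dev/md1
>     lvcreate --name pool -l 100%FREE backupvg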
>
> As for the filesystem, I would recommend EITHER ext3, as it is so standard
> and very reliable, OR XFS. XFS is a very good filesystem that is very LVM
> friendly and can be grown without unmounting. It is also very fast at
> creating and deleting files and hardlinks, which is exactly what BackupPC
> does. XFS *CAN* be significantly faster than ext3 for BackupPC. I
> personally have XFS on one BackupPC server and ext3 on another; they are
> both exceptional.
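> Growing an XFS filesystem on LVM while it stays mounted looks roughly like
> this (the LV path and mount point are examples):
>
>     # extend the logical volume, then grow the mounted filesystem to fill it
>     lvextend --size +50G /dev/backupvg/pool
>     xfs_growfs /var/lib/backuppc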
>
> You did not mention the OS you will run here; I assumed Linux. You do
> have some very good choices in Linux, FreeBSD, or Nexenta/OpenSolaris. I
> run Linux servers for production but am testing a FreeBSD and a Nexenta
> server, specifically for network performance and the ZFS filesystem.
> FreeBSD and Nexenta both have a faster network stack than Linux, which I
> have noticed, but only slightly (2-5% or so?). ZFS is pretty awesome
> though; it is like software RAID + LVM + a 6-pack of Red Bull. They both
> also have the classic Unix filesystem UFS, which is pretty good. You could
> compare its performance and reliability to ext3, though they are not very
> similar under the hood.
>
> Also, you didn't mention whether you are using a hardware RAID card or
> just a SCSI card. If you need to run software RAID, then any OS is
> suitable. If you choose Linux, you can use the md system and have a pretty
> fast softraid, though it will likely consume one of those processors when
> you hit the disks hard. If you go with Nexenta or FreeBSD, you will use
> ZFS, *BUT* you will need 1+ GB of RAM. ZFS uses a lot of RAM, especially
> if you use filesystem-level compression (a nice benefit: you can turn off
> compression in BackupPC and let the filesystem do the work). Compressed
> ZFS is faster than BackupPC's compression, although it will tax a CPU
> pretty hard. ZFS is multithreaded while BackupPC's compression is not, so
> you would likely see faster compression with ZFS with 4 CPUs doing the
> work. But you probably only have 1 PCI bus, which means you will have a
> hard limit of about 66MB/s in practice on the 132MB/s PCI bus.
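> A minimal ZFS sketch of that idea (the pool name and FreeBSD-style device
> names are examples):
>
>     # raidz2 is ZFS's RAID6 analogue: two drives' worth of parity
>     zpool create tank raidz2 da0 da1 da2 da3 da4 da5
>
>     # compress at the filesystem so BackupPC's own compression can be disabled
>     zfs set compression=on tank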
>
> Hope I could help.
>
> On Feb 4, 2008 3:40 PM, Justin Best <[EMAIL PROTECTED]> wrote:
>
> > BackupPC uses *hardlinks* for pooling.
> >
>
> DOH! You are so right.
>
> Being mainly a Windows admin, I don't think I was ever completely clear
> on the difference between hardlinks and symlinks until a few minutes ago,
> when I looked it up. For anyone else who is confused about hardlinks vs.
> softlinks, I would recommend the following page:
>
> http://linuxgazette.net/105/pitcher.html
>
-------------------------------------------------------------------------
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
_______________________________________________
BackupPC-users mailing list
[email protected]
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/