Re: Storage server

2012-09-17 Thread Veljko
On Fri, Sep 14, 2012 at 10:48:54AM +0200, Denis Witt wrote:
 I'm currently testing obnam on our external Backup-Server together with
 6 clients. It's very easy to set up. Restore could be nicer if you need
 an older version of some file: it's rather fast, and it is possible
 to restore single files only, but you might have to look at several
 versions to find the right one. Still, this shouldn't take too much time.

I just installed obnam and find its tutorial very scarce. I'm planning
to run obnam from the server only and to pull backups from several clients.

I guess I need to upload a public key to the client machine. Does obnam
have to be installed on both machines, or does it use rsync like rsnapshot?

Regards,
Veljko





Re: Storage server

2012-09-17 Thread Veljko
On Thu, Sep 13, 2012 at 06:24:45PM -0500, Stan Hoeppner wrote:
 Due to its allocation group design, continually growing an XFS
 filesystem in such small increments, with this metadata heavy backup
 workload, will yield very poor performance.  Additionally, putting an
 XFS filesystem atop an LV is not recommended as it cannot properly align
 journal write out to the underlying RAID stripe width.  While this is
 more critical with parity arrays, it also affects non-parity striped arrays.
 
 Thus my advice to you is:
 
 Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
 defaults.  mkfs.xfs will read the md configuration and automatically
 align the filesystem to the stripe width.
 
 When the filesystem reaches 85% capacity, add 4 more drives and create
 another RAID10 array.  At that point we'll teach you how to create a
 linear device of the two arrays and grow XFS across the 2nd array.

I did what you advised and formatted the RAID10 using the XFS defaults.

Thanks for your help, Stan.

Regards,
Veljko





Re: Storage server

2012-09-16 Thread Martin Steigerwald
Am Samstag, 15. September 2012 schrieb Bob Proulx:
 Martin Steigerwald wrote:
  Am Freitag, 7. September 2012 schrieb Bob Proulx:
   Unfortunately I have some recent FUD concerning xfs.  I have had
   some recent small idle xfs filesystems trigger kernel watchdog
   timer ...
   due to these lockups.  Squeeze.  Everything current.  But when idle
   it would periodically lock up and the only messages in the syslog
   and on
  
  Squeeze and everything current?
  No way. At least when using the 2.6.32 default squeeze kernel. It's really
  old. Did you try with the latest 3.2 squeeze-backports kernel?
 
 But in the future when Debian Jessie is being released I am going
 to be reading then on the mailing list about how old and bad Linux 3.2
 is and how it should not be used because it is too old.  How can it be
 really good now when it is going to be really bad in the future when
 supposedly we know more then than we do now?  :-)

I read a complaint about the very nature of software development out of
your statement. Developers and testers improve software and sometimes
accidentally introduce regressions. That's the very nature of the process,
it seems to me.

Yes, by now 2.6.32 is old. It wasn't exactly fresh when Debian Squeeze was
released, but now it's really old. And regarding XFS, 3.2 contains a big
load of improvements in metadata performance, like delayed logging and
more, plus other performance and bug fixes. Some bug fixes might have been
backported by the stable maintainers, but not the improvements that might
play an important role for a storage server setup.

 For my needs Debian Stable is a really very good fit.  Much better
 than Testing or Unstable or Backports.

So by all means, use it!

Actually I didn't even recommend upgrading to Sid. If you read my post
carefully you can easily see that. I specifically recommended just upgrading
to a squeeze-backports kernel.

But still, if you do not use XFS, or use XFS and do not have any issues, you
may well decide to stick with 2.6.32. Your choice.

 Meanwhile I am running Sid on my main desktop machine.  I upgrade it
 daily.  I report bugs as I find them.  I am doing so specifically so I
 can test and find and report bugs.  I am very familiar with living on
 Unstable.  Good for developers.  Not good for production systems.

Then tell that to my production laptop here. It obviously didn't hear
about Debian Sid being unfit for production usage.

My virtual server still runs Squeeze, but I am considering upgrading it to
Wheezy. Partly because, by the time I upgrade customer systems, I want to
have seen Wheezy working nicely for a while ;).

Sure, not the way for everyone. Sure, when using Sid / Wheezy the
occasional bug can happen, and I recommend using apt-listbugs and apt-
listchanges on those systems.

But I won't sign an all-inclusive Sid is unfit for production statement. If
I know how to look up the bug database and how to downgrade packages, possibly
also by using snapshot.debian.org, then I might decide to use Sid or Wheezy
on some machines – preferably in the desktop usage area – and be just fine
with it. On servers I am rather more reluctant, unless it's my own virtual
server, but even there I am not running Sid.

For people new to Debian or people unwilling to deal with an occasional
bug I recommend stable. Possibly with a backports kernel in some cases.

Well, so I think we are basically saying almost the same thing, but with
different wording and emphasis. ;)

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-16 Thread Martin Steigerwald
Am Freitag, 14. September 2012 schrieb Stan Hoeppner:
 On 9/14/2012 7:57 AM, Martin Steigerwald wrote:
  Am Freitag, 14. September 2012 schrieb Stan Hoeppner:
  Thus my advice to you is:
  
  Do not use LVM.  Directly format the RAID10 device using the
  mkfs.xfs defaults.  mkfs.xfs will read the md configuration and
  automatically align the filesystem to the stripe width.
 
  
 
  Just for completeness:
  
 
  It is possible to manually align XFS via mkfs.xfs / mount options.
  But  then thats an extra step thats unnecessary when creating XFS
  directly on MD.
 
 And not optimal for XFS beginners.  But the main reason for avoiding
 LVM is that LVM creates a slice and dice mentality among its users,
 and many become too liberal with the carving knife, ending up with a
 filesystem made of sometimes a dozen LVM slivers.  Then XFS
 performance suffers due to the resulting inode/extent/free space
 layout.

Agreed.

I have seen VMs with separate /usr and minimal / and mis-estimated sizing.
There was perfectly enough space in the VMDK, but just in the wrong
partition. I fixed it back then by adding another VMDK file. (So even with
partitions I found those setups.)

Another case is splitting up /var/log or /var.

But then we are talking about user and not system data here anyway.

I have always recommended leaving at least 10-15% free, but from a
discussion on the XFS mailing list in which you took part, I learned that,
depending on the use case, large volumes might need even more free space
for performant long-term operation.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-16 Thread Martin Steigerwald
Hi Kelly,

Am Samstag, 15. September 2012 schrieb Kelly Clowers:
 On Fri, Sep 14, 2012 at 2:51 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
  On 9/14/2012 11:29 AM, Kelly Clowers wrote:
  On Thu, Sep 13, 2012 at 4:45 PM, Stan Hoeppner s...@hardwarefreak.com 
  wrote:
  On 9/13/2012 5:20 AM, Veljko wrote:
  On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
  One of the big reasons (other than cost) that I mentioned this
  card is that Adaptec tends to be more forgiving with non RAID
  specific (ERC/TLER) drives, and lists your Seagate 3TB drives as
  compatible.  LSI and other controllers will not work with these
  drives due to lack of RAID specific ERC/TLER.
  
  Those are really valuable informations. I wasn't aware that not
  all drives works with RAID cards.
  
  Consumer hard drives will not work with most RAID cards.  As a
  general rule, RAID cards require enterprise SATA drives or SAS
  drives.
  
  They don't work with real hardware RAID? How weird! Why is that?
  
  Surely you're pulling my leg Kelly, and already know the answer.
  
  If not, the answer is the ERC/TLER timeout period.  Nearly all
  hardware RAID controllers expect a drive to respond to a command
  within 10 seconds or less.  If the drive must perform error recovery
  on a sector or group of sectors it must do so within this time
  limit.  If the drive takes longer than this period the controller
  will flag it as bad and kick it out of the array.  The assumption
  here is that a drive taking that long to respond has a problem and
  should be replaced.
  
  Most consumer drives have no such timeout limit.  They will churn
  forever attempting to recover an unreadable sector.  Thus routine
  errors on consumer drives often get them kicked instantly when used
  on real RAID controllers.
 
 Why would I be pulling your leg? I have never had opportunity to work
 with real raid cards. Nor have I ever heard anyone say that before.
 The highest end I have used was I believe a Highpoint card, about
  ~$150 range, which was fakeRAID (and I believe the drives
 attached to that were enterprise drives anyway)
 
 Thanks for the info.

Read the stuff that was linked from another article posted here.

Especially:

What makes a hard drive enterprise class?
Posted on 05-11-2010 23:19:18 UTC | Updated on 05-11-2010 23:43:48 UTC
Section: /hardware/disks/ | Permanent Link
http://www.pantz.org/hardware/disks/what_makes_a_hard_drive_enterprise_class.html


But also

Everything You Know About Disks Is Wrong
by ROBIN HARRIS on TUESDAY, 20 FEBRUARY, 2007
Update II: NetApp has responded. I’m hoping other vendors will as well.
http://storagemojo.com/2007/02/20/everything-you-know-about-disks-is-wrong/


Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun
by ROBIN HARRIS on THURSDAY, 22 FEBRUARY, 2007
http://storagemojo.com/2007/02/22/open-letter-to-seagate-hitachi-gst-emc-hp-netapp-ibm-and-sun/


Google’s Disk Failure Experience
by ROBIN HARRIS on MONDAY, 19 FEBRUARY, 2007
http://storagemojo.com/2007/02/19/googles-disk-failure-experience/


is quite interesting.

So enterprise-class drives have this configurable error correction timeout.
That said, if you move away from traditional RAID setups you may still very
well get away with using consumer drives, like Google did.

Now all of that from StorageMojo is 2007 material. I don't know how much
has changed in the meantime.

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-16 Thread Stan Hoeppner
On 9/16/2012 7:38 AM, Martin Steigerwald wrote:

 I have always recommended to leave at least 10-15% free, but from a 
 discussion on XFS mailinglist where you took part, I learned that 
 depending on use case for large volumes even more free space might be 
 necessary for performant long term operation.

And this is due to the allocation group design of XFS.  When the filesystem
is used properly, its performance with parallel workloads simply runs
away from all other filesystems.  When using LVM in the manner I've been
discussing, the way the OP of this thread wants to use it, you end up
with the following situation and problem:

1.  Create a 1TB LV and format it with XFS.
2.  XFS creates 4 allocation groups
3.  XFS spreads directories and files fairly evenly over all AGs
4.  When the XFS gets full, you end up with inodes/files/free space
badly fragmented over the 4 AGs, and performance suffers when reading
these back, when writing new files, or when modifying existing ones
5.  So you expand the LV by 1TB and then grow the XFS over the new space
6.  This operation simply creates 4 new AGs in the new space
7.  New inode/extent creation in these new AGs is fast and reading back
is also fast.
8.  But, here's the kicker, reading the fragmented files from the first
4 AGs is still dog slow, as well as modifying metadata in those AGs

Thus, the moral of the story is that adding more space to an XFS via LVM
can't fix performance problems that one has created while reaching the
tank full marker on the original XFS.  The result is fast access to
the new AGs in the new LVM sliver, but slow access to the original 4 AGs
in the first LVM sliver.  So as one does the LVM rinse/repeat growth
strategy, one ends up with slow access to all of their AGs in the entire
filesystem.  Thus, this method of slice/dice expansion for XFS is insane.

This is why XFS subject matter experts and power users do our best to
educate beginners about the aging behavior of XFS.  This is why we
strongly recommend that users create one large XFS of the maximum size
they foresee needing in the long term instead of doing the expand/grow
dance with LVM or doing multiple md/RAID reshape operations.

Depending on the nature of the workload, and careful, considerate,
judicious use of XFS grow operations, it is safe to grow an XFS without
the performance problems.  This should be done long before one hits the
~90% full mark.  Growing before it hits ~70% is much better.  But one
should still never grow an XFS more than a couple of times, as a general
rule, if one wishes to maintain relatively equal performance amongst all
AGs.
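
As a quick way to see how far this has gone on an existing filesystem, the
free space fragmentation can be inspected with xfs_db (read-only; the
device name below is only an example):

xfs_db -r -c "freesp -s" /dev/md0    # histogram of free extent sizes, with a summary

A filesystem in the state described above tends to show mostly small free
extents, while a young or properly sized XFS shows large ones.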

-- 
Stan





Re: Storage server

2012-09-16 Thread Martin Steigerwald
Am Sonntag, 16. September 2012 schrieb Stan Hoeppner:
 On 9/16/2012 7:38 AM, Martin Steigerwald wrote:
  I have always recommended to leave at least 10-15% free, but from a
  discussion on XFS mailinglist where you took part, I learned that
  depending on use case for large volumes even more free space might be
  necessary for performant long term operation.
 
 And this is due to the allocation group design of XFS.  When the
 filesystem is used properly, its performance with parallel workloads
 simply runs away from all other filesystems.  When using LVM in the
 manner I've been discussing, the way the OP of this thread wants to
 use it, you end up with the following situation and problem:
 
 1.  Create 1TB LVM and format with XFS.
 2.  XFS creates 4 allocation groups
 3.  XFS spreads directories and files fairly evenly over all AGs
 4.  When the XFS gets full, you end up with inodes/files/free space
 badly fragmented over the 4 AGs, and performance suffers when reading
 these back, when writing new files, or when modifying existing ones
 5.  So you expand the LV by 1TB and then grow the XFS over the new space
 6.  This operation simply creates 4 new AGs in the new space
 7.  New inode/extent creation to these new AGs is fast and reading back
 is also fast.
 8.  But, here's the kicker, reading the fragmented files from the first
 4 AGs is still dog slow, as well as modifying metadata in those AGs
 
 Thus, the moral of the story is that adding more space to an XFS via
 LVM can't fix performance problems that one has created while reaching
 the tank full marker on the original XFS.  The result is fast access
 to the new AGs in the new LVM sliver, but slow access to the original
 4 AGs in the first LVM sliver.  So as one does the LVM rinse/repeat
 growth strategy, one ends up with slow access to all of their AGs in
 the entire filesystem.  Thus, this method of slice/dice expansion
 for XFS is insane.
 
 This is why XFS subject matter experts and power users do our best to
 educate beginners about the aging behavior of XFS.  This is why we
 strongly recommend that users create one large XFS of the maximum size
 they foresee needing in the long term instead of doing the expand/grow
 dance with LVM or doing multiple md/RAID reshape operations.
 
 Depending on the nature of the workload, and careful, considerate,
 judicious use of XFS grow operations, it is safe to grow an XFS without
 the performance problems.  This should be done long before one hits the
 ~90% full mark.  Growing before it hits ~70% is much better.  But one
 should still never grow an XFS more than a couple of times, as a
 general rule, if one wishes to maintain relatively equal performance
 amongst all AGs.

Thanks for your elaborate explanation.

I took note of this for my Linux performance analysis & tuning trainings.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-15 Thread Bob Proulx
Martin Steigerwald wrote:
 Am Freitag, 7. September 2012 schrieb Bob Proulx:
  Unfortunately I have some recent FUD concerning xfs.  I have had some
  recent small idle xfs filesystems trigger kernel watchdog timer
  ...
  due to these lockups.  Squeeze.  Everything current.  But when idle it
  would periodically lock up and the only messages in the syslog and on
 
 Squeeze and everything current?
 No way. At least when using the 2.6.32 default squeeze kernel. It's really old.
 Did you try with the latest 3.2 squeeze-backports kernel?

But in the future when Debian Jessie is being released I am going
to be reading then on the mailing list about how old and bad Linux 3.2
is and how it should not be used because it is too old.  How can it be
really good now when it is going to be really bad in the future when
supposedly we know more then than we do now?  :-)

For my needs Debian Stable is a really very good fit.  Much better
than Testing or Unstable or Backports.

Meanwhile I am running Sid on my main desktop machine.  I upgrade it
daily.  I report bugs as I find them.  I am doing so specifically so I
can test and find and report bugs.  I am very familiar with living on
Unstable.  Good for developers.  Not good for production systems.

Bob




Re: Storage server

2012-09-15 Thread Kelly Clowers
On Sat, Sep 15, 2012 at 1:36 AM, Bob Proulx b...@proulx.com wrote:

 Meanwhile I am running Sid on my main desktop machine.  I upgrade it
 daily.  I report bugs as I find them.  I am doing so specifically so I
 can test and find and report bugs.

Wow, impressive. I run unstable+experimental, but I think I have
reported maybe two or three bugs against it in the years I have
used it. I don't know if I even reported it when there were those
really nasty hard system lockups in X. Yes, I am a terrible person.

Cheers,
Kelly Clowers





Re: Storage server

2012-09-15 Thread Stan Hoeppner
On 9/15/2012 3:36 AM, Bob Proulx wrote:

 But in the future when Debian Jessie is being released I am going
 to be reading then on the mailing list about how old and bad Linux 3.2
 is and how it should not be used because it is too old.

So what you're saying here is that Jessie should be released with the
2.6.32 kernel of Squeeze because we already know how bad it is.  Thus
the Jessie kernel won't go from good to bad causing widespread
depression and suicide amongst Debian users.

-- 
Stan





Re: Storage server

2012-09-15 Thread Bob Proulx
Stan Hoeppner wrote:
 Bob Proulx wrote:
  But in the future when Debian Jessie is being released I am going
  to be reading then on the mailing list about how old and bad Linux 3.2
  is and how it should not be used because it is too old.
 
 So what you're saying here is that Jessie should be released with the
 2.6.32 kernel of Squeeze because we already know how bad it is.

No.  I am saying that Jessie should release with the Linux 4.1 kernel
because I am sure we will all agree that a Linux 4.1 kernel would be
awesome and so much better than the 3.2 kernel.

 Thus the Jessie kernel won't go from good to bad causing
 widespread depression and suicide amongst Debian users.

Hopefully we haven't had widespread suicide amongst users.  Instead
perhaps more like a diabetic coma due to depression causing an
increased consumption of chocolate.  Massive amounts of chocolate!
The true cure for depression.

Isn't it depressing how a kernel goes from high praise to low disdain
in only a very short time.  Is the Linux kernel made from bananas?  I
think it might be.  It seems to turn brown so very quickly.  I wonder
if it is mildly radioactive too?  That might explain a lot.  It does
seem to have a half life.

Bob




Re: Storage server

2012-09-14 Thread Denis Witt
On Thu, 13 Sep 2012 12:21:44 +0200
Veljko velj...@gmail.com wrote:

 I've heard of it, but don't know anyone who uses it. Any experience
 with it?

Our former hosting provider used Amanda; I never liked it (but maybe
because of the interface the provider used for it). I think for Veljko's
needs it is much too complex. Also it lacks some modern features (AFAIK)
like de-duplication, etc.

Best regards
Denis Witt





Re: Storage server

2012-09-14 Thread Denis Witt
On Thu, 13 Sep 2012 12:22:45 +0200
Veljko velj...@gmail.com wrote:

 obnam and rdiff-backup seem to use less space, but I also like the very
 clear representation of backups in rsnapshot. But during a few days of
 testing each of them I'll know what to use.

I think rdiff-backup is a good choice for your needs. It has (for the
latest backup) a similar concept to rsnapshot, so you can access the
files easily.

If you ever move on to a dedicated backup server I think obnam will be
interesting again, mainly because of the Repository-Concept.

I'm currently testing obnam on our external Backup-Server together with
6 clients. It's very easy to set up. Restore could be nicer if you need
an older version of some file: it's rather fast, and it is possible
to restore single files only, but you might have to look at several
versions to find the right one. Still, this shouldn't take too much time.

Also it doesn't matter much where you want to restore the file; any
machine which can access (or can be accessed by) the Backup-Server via
SSH will do.

Best regards
Denis Witt





Re: Storage server

2012-09-14 Thread Pertti Kosunen

On 14.9.2012 2:45, Stan Hoeppner wrote:

Consumer hard drives will not work with most RAID cards.  As a general
rule, RAID cards require enterprise SATA drives or SAS drives.


http://wdc.com/en/products/products.aspx?id=810
http://www.anandtech.com/show/6157/

Western Digital's new Red series is RAID-compatible.





Re: Storage server

2012-09-14 Thread Jon Dowland
On Thu, Sep 13, 2012 at 12:20:55PM +0200, Veljko wrote:
 Can you please explain what design flaw is that? Isn't directory with
 complete backup (but not occupying that much space due to hard links
 usage) very usable for backup? If slow work can be avoided by the use of
 XFS, what would be wrong about rsnapshot?

Read my prior posts about it in this thread. It's fine for backup; the problem
is when you try to remove old snapshots, or perform a restore, or otherwise
manipulate the backup trees.

Compare it with a CPU-intensive program: it doesn't matter how fast your
CPU is if the program is doing a busy-wait. It will consume 100% of whatever
CPU you throw at it. Program design is important.





Re: Storage server

2012-09-14 Thread Stan Hoeppner
On 9/14/2012 4:48 AM, Pertti Kosunen wrote:
 On 14.9.2012 2:45, Stan Hoeppner wrote:
 Consumer hard drives will not work with most RAID cards.  As a general
 rule, RAID cards require enterprise SATA drives or SAS drives.
 
 http://wdc.com/en/products/products.aspx?id=810
 http://www.anandtech.com/show/6157/
 
  Western Digital's new Red series is RAID-compatible.

Yes, and as such these drives do not fall into the consumer category.
Note they are marketed specifically for SOHO NAS boxen.  While they do
offer programmable TLER/ERC timeout with a suitable default of 7
seconds, they're not a good fit for anyone who desires performance along
with capacity, due to the slow 5400 RPM spindle speed.  In such a case
one's money is probably better spent buying a smaller quantity of 7.2K
RE4 or other enterprise SATA drives for about the same $$.

-- 
Stan





Re: Storage server

2012-09-14 Thread Martin Steigerwald
Am Freitag, 14. September 2012 schrieb Stan Hoeppner:
 Thus my advice to you is:
 
 Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
 defaults.  mkfs.xfs will read the md configuration and automatically
 align the filesystem to the stripe width.

Just for completeness:

It is possible to manually align XFS via mkfs.xfs / mount options. But
then that's an extra step that's unnecessary when creating XFS directly on
MD.
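
For reference, a sketch of the manual variant (the numbers are only an
example for a 4-disk RAID10 with a 512KiB chunk, i.e. two data-bearing
stripe units, and the device name is hypothetical):

mkfs.xfs -d su=512k,sw=2 /dev/vg0/backup   # su = stripe unit (chunk size), sw = stripe width in units

The same geometry can also be given at mount time via the sunit/swidth
mount options (in 512-byte sectors).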

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-14 Thread Kelly Clowers
On Thu, Sep 13, 2012 at 4:45 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/13/2012 5:20 AM, Veljko wrote:
 On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
 One of the big reasons (other than cost) that I mentioned this card is
 that Adaptec tends to be more forgiving with non RAID specific
 (ERC/TLER) drives, and lists your Seagate 3TB drives as compatible.  LSI
 and other controllers will not work with these drives due to lack of
 RAID specific ERC/TLER.

 Those are really valuable informations. I wasn't aware that not all
 drives works with RAID cards.

 Consumer hard drives will not work with most RAID cards.  As a general
 rule, RAID cards require enterprise SATA drives or SAS drives.

They don't work with real hardware RAID? How weird! Why is that?


Thanks,
Kelly Clowers





Re: Storage server

2012-09-14 Thread Paul E Condon
On 20120910_053746, Stan Hoeppner wrote:
 On 9/9/2012 3:25 PM, Paul E Condon wrote:
 
  I've been following this thread from its beginning. My initial reading
  of OP's post was to marvel at the thought that so many things/tasks
  could be done with a single box in a single geek's cubicle. 
 
 One consumer quad core AMD Linux box of today can do a whole lot more
 than what has been mentioned.
 
  I resolved
  to follow the thread that would surely follow closely. I think you,
  Stan, did OP an enormous service with your list of questions to be
  answered. 
 
 I try to prevent others from shooting themselves in the foot when I see
 the loaded gun in their hand.
 
  This thread drifted onto the topic of XFS. I first learned of the
  existence of XFS from earlier post by you, and I have ever since been
  curious about it. But I am retired, and live at home in an environment
  where there is very little opportunity to make use of its features.
 
 You might be surprised.  The AG design and xfs_fsr make it useful for
 home users.
 
  Perhaps you could take OP's original specification as a user wish list
  and sketch a design that would fulfill the wishlist and list how XFS
  would change or resolve issues that were/are troubling him. 
 
 The OP's issues don't revolve around filesystem choice, but basic system
 administration concepts.
 
  In particular, the typical answers to questions about backup on this list
  involve rsync, or packages that depend on rsync, and on having a file
  system that uses inodes and supports hard links. 
 
 rsync works with any filesystem, but some work better with rsync
 workloads.  If one has concurrent rsync jobs running XFS is usually best.

Rsync features that invoke hard links are commonly used to do
de-duplication in backup systems that are designed with the extX file
systems in mind. Other parts of rsync work without hard links in the
file system. But I think a common desire of people seeking advice
here is that there be some sort of automatic, easy-to-administer
de-duplication.
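
The rsync feature usually meant here is --link-dest; a minimal sketch with
hypothetical paths:

rsync -a --link-dest=/backup/daily.1/ client:/data/ /backup/daily.0/
# unchanged files become hard links to yesterday's copy; only changed files take new space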

 
  How would an XFS design
  handle de-duplication? 
 
 Deduplication isn't an appropriate function of a filesystem.

The wording of this question was too terse. I should have said
something like:

How would a backup system design that uses XFS implement
de-duplication?

I know that file systems don't do de-duplication, but the rsync
program does do de-duplication in the case of the extX file systems. What
alternative method for achieving de-duplication might be substituted
for rsync?

 
  Or is de-duplication simply a bad idea in very
  large systems?
 
 That's simply a bad, very overly broad question. 

Yes, but de-duplication is a feature that is highly touted as a good
thing. Is there some easy way to have de-duplication *and* the
benefits of XFS in a single, optimized design of a backup system?


-- 
Paul E Condon   
pecon...@mesanetworks.net





Re: Storage server

2012-09-14 Thread Stan Hoeppner
On 9/14/2012 7:57 AM, Martin Steigerwald wrote:
 Am Freitag, 14. September 2012 schrieb Stan Hoeppner:
 Thus my advice to you is:

 Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
 defaults.  mkfs.xfs will read the md configuration and automatically
 align the filesystem to the stripe width.
 
 Just for completeness:
 
 It is possible to manually align XFS via mkfs.xfs / mount options. But 
 then thats an extra step thats unnecessary when creating XFS directly on 
 MD.

And not optimal for XFS beginners.  But the main reason for avoiding LVM
is that LVM creates a slice and dice mentality among its users, and
many become too liberal with the carving knife, ending up with a
filesystem made of sometimes a dozen LVM slivers.  Then XFS performance
suffers due to the resulting inode/extent/free space layout.

Of course, this is fine when a user knows the impact up front and can
live with a 10+ fold decrease in performance when the FS starts filling
up.  Once this gets bad enough the only fix is to dump, format, and restore
the filesystem.  And that gets expensive when we're talking many TB.
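
For completeness, that dump/format/restore cycle sketched with
xfsdump/xfsrestore (device, mount point and dump file are hypothetical):

xfsdump -l 0 -f /mnt/scratch/data.dump /mnt/data   # level-0 dump of the mounted filesystem
umount /mnt/data
mkfs.xfs -f /dev/md0                               # recreate the filesystem
mount /dev/md0 /mnt/data
xfsrestore -f /mnt/scratch/data.dump /mnt/data     # restore into the fresh filesystem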

-- 
Stan





Re: Storage server

2012-09-14 Thread Stan Hoeppner
On 9/14/2012 11:29 AM, Kelly Clowers wrote:
 On Thu, Sep 13, 2012 at 4:45 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/13/2012 5:20 AM, Veljko wrote:
 On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
 One of the big reasons (other than cost) that I mentioned this card is
 that Adaptec tends to be more forgiving with non RAID specific
 (ERC/TLER) drives, and lists your Seagate 3TB drives as compatible.  LSI
 and other controllers will not work with these drives due to lack of
 RAID specific ERC/TLER.

 That is really valuable information. I wasn't aware that not all
 drives work with RAID cards.

 Consumer hard drives will not work with most RAID cards.  As a general
 rule, RAID cards require enterprise SATA drives or SAS drives.
 
 They don't work with real hardware RAID? How weird! Why is that?

Surely you're pulling my leg Kelly, and already know the answer.

If not, the answer is the ERC/TLER timeout period.  Nearly all hardware
RAID controllers expect a drive to respond to a command within 10
seconds or less.  If the drive must perform error recovery on a sector
or group of sectors it must do so within this time limit.  If the drive
takes longer than this period the controller will flag it as bad and
kick it out of the array.  The assumption here is that a drive taking
that long to respond has a problem and should be replaced.

Most consumer drives have no such timeout limit.  They will churn
forever attempting to recover an unreadable sector.  Thus routine errors
on consumer drives often get them kicked instantly when used on real
RAID controllers.
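
Where a drive supports SCT ERC, that timeout can be queried and set with
smartmontools (device name hypothetical; the values are in tenths of a
second):

smartctl -l scterc /dev/sda        # show current read/write ERC timeouts
smartctl -l scterc,70,70 /dev/sda  # set both to 7.0 seconds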

-- 
Stan





Re: Storage server

2012-09-14 Thread Kelly Clowers
On Fri, Sep 14, 2012 at 2:51 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/14/2012 11:29 AM, Kelly Clowers wrote:
 On Thu, Sep 13, 2012 at 4:45 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 On 9/13/2012 5:20 AM, Veljko wrote:
 On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
 One of the big reasons (other than cost) that I mentioned this card is
 that Adaptec tends to be more forgiving with non RAID specific
 (ERC/TLER) drives, and lists your Seagate 3TB drives as compatible.  LSI
 and other controllers will not work with these drives due to lack of
 RAID specific ERC/TLER.

 Those are really valuable informations. I wasn't aware that not all
 drives works with RAID cards.

 Consumer hard drives will not work with most RAID cards.  As a general
 rule, RAID cards require enterprise SATA drives or SAS drives.

 They don't work with real hardware RAID? How weird! Why is that?

 Surely you're pulling my leg Kelly, and already know the answer.

 If not, the answer is the ERC/TLER timeout period.  Nearly all hardware
 RAID controllers expect a drive to respond to a command within 10
 seconds or less.  If the drive must perform error recovery on a sector
 or group of sectors it must do so within this time limit.  If the drive
 takes longer than this period the controller will flag it as bad and
 kick it out of the array.  The assumption here is that a drive taking
 that long to respond has a problem and should be replaced.

 Most consumer drives have no such timeout limit.  They will churn
 forever attempting to recover an unreadable sector.  Thus routine errors
 on consumer drives often get them kicked instantly when used on real
 RAID controllers.

Why would I be pulling your leg? I have never had the opportunity to work
with real RAID cards. Nor have I ever heard anyone say that before.
The highest-end card I have used was, I believe, a Highpoint in the
~$150 range, which was fakeRAID (and I believe the drives
attached to it were enterprise drives anyway).

Thanks for the info.

Kelly Clowers





Re: Storage server [solved???]

2012-09-14 Thread Paul E Condon
'Solved' is not a proper description. Better to say that I have
discovered some serious misunderstanding on my part. It would be
a serious waste of other people's time to extend this sub-thread
with a detailed explanation.

Sorry.
-- 
Paul E Condon   
pecon...@mesanetworks.net





Re: Storage server

2012-09-13 Thread Veljko
On Tue, Sep 11, 2012 at 04:04:16PM +0200, Ralf Mardorf wrote:
 The cheapest, but anyway reliable German retailer for all kinds of
 electronic gear: 
 http://www.reichelt.de/index.html?;ACTION=103;LA=2;MANUFACTURER=adaptec;SID=12UE9B@H8AAAIAAEcGSWU702e805c66e3a1b7cce75cd098027793
 Perhaps you'll have good luck and find what you need.
 
 Regards,
 Ralf

Thanks, Ralf. 

If I don't find some hardware at local suppliers, I'll try at Reichelt.


Regards,
Veljko





Re: Storage server

2012-09-13 Thread Veljko
On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
 One of the big reasons (other than cost) that I mentioned this card is
 that Adaptec tends to be more forgiving with non RAID specific
 (ERC/TLER) drives, and lists your Seagate 3TB drives as compatible.  LSI
 and other controllers will not work with these drives due to lack of
 RAID specific ERC/TLER.

That is really valuable information. I wasn't aware that not all
drives work with RAID cards.

 So now you've run into one of the problems I mentioned that is avoided
 by booting from a real RAID controller.

Yes, but the problem was easily solvable.


Regards,
Veljko





Re: Storage server

2012-09-13 Thread Veljko
On Wed, Sep 12, 2012 at 01:50:04PM +0100, Jon Dowland wrote:
 On Tue, Sep 11, 2012 at 05:44:46PM -0500, Stan Hoeppner wrote:
  Which is why I recommend XFS.  It is exceptionally fast at
  traversing large btrees.  You'll need the 3.2 bpo kernel for
  Squeeze.  The old as dirt 2.6.32 kernel doesn't contain any of the
  recent (last 3 years) metadata optimizations.
 
 Yes. You are becoming a bit of a broken record on that front :)
 
 I have not performed any such timings but I am willing to believe you
 that XFS will be faster.  I'd still not recommend rsnapshot, because
 faster merely mitigates the big design flaw, it does not remove it,
 and rdiff-backup is virtually a drop-in replacement.
 
Can you please explain what design flaw that is? Isn't a directory with
a complete backup (but not occupying that much space thanks to hard link
usage) very usable for backup? If the slowness can be avoided by using
XFS, what would be wrong with rsnapshot?

Regards,
Veljko





Re: Storage server

2012-09-13 Thread Veljko
On Wed, Sep 12, 2012 at 01:54:18PM +0100, Jon Dowland wrote:
 On Mon, Sep 10, 2012 at 08:03:43PM +0300, Andrei POPESCU wrote:
  http://www.taobackup.com/
 
 Yes indeed, great read.
 
 Also this: http://www.jwz.org/doc/backups.html
 
 A single external drive, normally stored away from the server, would be enough
 to have a backup that would survive the host going up in flames.

I'm using this concept for my private machines. I'm always backing up to
a remote location.

Regards,
Veljko





Re: Storage server

2012-09-13 Thread Veljko
On Tue, Sep 11, 2012 at 05:44:46PM -0500, Stan Hoeppner wrote:
 On 9/11/2012 10:29 AM, Jon Dowland wrote:
 
  Actually, lots and lots of small files is the worst use-case for rsnapshot, 
  and
  the reason I believe it should be avoided. It creates large hard-link trees 
  and
  with lots and lots of small files, the filesystem metadata for the trees can
  consume more space than the files themselves. Also performing operations 
  that
  need to recurse over large link trees (such as simply removing an old
  increment) can be very slow in that case.
 
 Which is why I recommend XFS.  It is exceptionally fast at traversing
 large btrees.  You'll need the 3.2 bpo kernel for Squeeze.  The old as
 dirt 2.6.32 kernel doesn't contain any of the recent (last 3 years)
 metadata optimizations.
 
 -- 
 Stan

Unlike my boss, whom I failed to persuade to buy a RAID card, you convinced
me to use XFS. I created a 1TB LV for backup (and will resize it when
necessary). Will the default XFS be OK in my case?

Regards,
Veljko





Re: Storage server

2012-09-13 Thread Veljko
On Wed, Sep 12, 2012 at 06:49:22PM +0200, lee wrote:
 Denis Witt denis.w...@concepts-and-training.de writes:
 
  Anyway, I have some comparison data. I have a backup server that saves
  data from 5 other server at our hosting company using rsnapshot. The
  backups are kept for 14 days.
 
  rsnapshot:
  bup:
  obnam:
  rdiff-backup:
 
 How about amanda? It hasn't been mentioned yet and might be an
 interesting alternative.

I've heard of it, but don't know anyone who uses it. Any experience with
it?

Regards,
Veljko





Re: Storage server

2012-09-13 Thread Veljko
On Wed, Sep 12, 2012 at 10:46:37AM +0200, Denis Witt wrote:
 On Tue, 11 Sep 2012 16:29:22 +0100
 Jon Dowland j...@debian.org wrote:
 
  Denis' answer is very good, I won't re-iterate his points.
 
 Thanks. And also thanks for pointing out the Hardlinks thing, I
 over-read the lots of small files part in Velkjos Mail.
 
 Anyway, I have some comparison data. I have a backup server that saves
 data from 5 other server at our hosting company using rsnapshot. The
 backups are kept for 14 days.
 
 rsnapshot:
 
 The Backup has 186GB. 51 GB for the full backup (daily.0) and
 about 11GB for each incremental backup (daily.1 - daily.13). The backup
 includes typical small webserver files but also big logfiles and two
 ZOPE Databases (ZEOs) with about 5GB each.
 
 bup:
 
 I imported them with bup import-rsnapshot, the overall size is 15GB
 (for all 14 days) which is quite amazing. Anyway the lack of a
 possibility to delete old backup versions is (for me) a major drawback.
 What I liked was the possibility to mount the Backup with FUSE. After
 mounting the Backup one can access every backup generation as normal
 files. Each generation has its own folder with a timestamp. The files
 inside the backup have no metadata (timestamp is always 1.1.1970, etc.).
 
 The only way I can think of at the moment to get rid of old backup
 generations using bup is to mount (FUSE) the backup restore all backup
 generations you want to keep to an additional drive, delete the bup git
 repository, create a new one and backup the restore again. Of course
 this might take a lot of additional space on you disk for the
 (temporary) restore which might not be available. If any has some
 better approach I would love to hear.
 
 obnam:
 
 With obnam I made a backup of daily.0 (51GB). There was nearly no
 reduction in the size for the first backup run (47GB). The next backup
 run (one day later, which creates 11GB new data with rsnapshot) has only
 added a few MB and therefore was pretty fast.
 
 The repository approach of obnam comes very handy. You can pull or push
 backups to the repository server and can access the backups from any
 other machine (if you have SSH access). Configuration is not necessary
 but a small config containing some default parameters comes in handy:
 
 [config]
 repository = sftp://192.168.1.10/backup/obnam/
 log = /var/log/obnam.log
 log-level = warning
 client-name = dx
 
 If you now run obnam backup /var/www the backup of /var/www will be
 pushed to the repository. obnam locks the repository for the client so
 one cannot accidentally run two backups of the same host (client) at
 the same time. Running several backups of different hosts is no problem.
 
 During a backup run obnam makes snapshots every few 100MB so if the
 backup fails (e.g. disconnect from the repository server) the backup
 can be resumed from the last snapshot.
 
 A nice feature is some kind of built in nagios plugin:
 
 obnam nagios-last-backup-age --warn-age=1 --client=dx
 OK: backup is recent.  last backup was 2012-09-12 10:05:47.
 
 obnam nagios-last-backup-age --warn-age=1 --client=backup 
 WARNING: backup is old.  last backup was 2012-09-11 18:03:43.
 
 obnam nagios-last-backup-age --client=cat --critical-age=1
 CRITICAL: backup is old.  last backup was 2012-09-11 17:01:23.
 
 The restore is a bit more complex, as there is (at the moment) no FUSE
 filesystem available for obnam. Instead you need to know the name of
 the file/folder and in which backup generation your file/folder exists.
 
 obnam generations shows all available backups:
 
 101   2012-09-11 18:01:07 .. 2012-09-11 18:02:10 (26474 files,
 8598496965 bytes) 
 108   2012-09-12 10:05:47 .. 2012-09-12 10:06:36 (26474 files,
 8598500897 bytes) 
 
 Then you can use obnam ls --generation=101 to show the files.
 
 rdiff-backup:
 
 If have no real comparison data for rdiff-backup but I expect similar
 results as with obnam (about 50GB for the first backup, only several MB
 for each following daily backup).
 
 rdiff-backup can (like bup) mount the backup (all generations) using
 FUSE.
 
 Best regards
 Denis Witt

This is an excellent comparison. Taking everything into consideration, I
think I have narrowed my choice down to obnam, rdiff-backup and rsnapshot.
I'll try them all and see how they behave on my machine.

obnam and rdiff-backup seem to use less space, but I also like the very
clear representation of backups in rsnapshot. But during a few days of
testing each of them I'll know what to use.

I also stumbled upon this one: http://rbackup.lescigales.org/

Best regards,
Veljko





Re: Storage server

2012-09-13 Thread Tony van der Hoff
On 13/09/12 11:21, Veljko wrote:
 On Wed, Sep 12, 2012 at 06:49:22PM +0200, lee wrote:
 Denis Witt denis.w...@concepts-and-training.de writes:

 Anyway, I have some comparison data. I have a backup server that saves
 data from 5 other server at our hosting company using rsnapshot. The
 backups are kept for 14 days.

 rsnapshot:
 bup:
 obnam:
 rdiff-backup:

 How about amanda? It hasn't been mentioned yet and might be an
 interesting alternative.
 
 I've heard of it, but don't know anyone who uses it. Any experience with
 it?

When I used tape for backup, I used Amanda, and it did what it was
supposed to do very well, with tape contents indexes, and a media
rotation pattern.

However, in a non-tape environment with only a few machines, I found it
excessively complex, and was happy to abandon it in favour of some
scripts around rsync to a NAS, which is much easier to administer.


-- 
Tony van der Hoff| mailto:t...@vanderhoff.org
Buckinghamshire, England |





Re: Storage server

2012-09-13 Thread Veljko
On Thu, Sep 13, 2012 at 12:16:21PM +0100, Tony van der Hoff wrote:
 When I used tape for backup, I used Amanda, and it did what it was
 supposed to do very well, with tape contents indexes, and a media
 rotation pattern.
 
 However, in a non-tape environment with only a few machines, I found it
 excessively complex, and was happy to abandon it in favour of some
 scripts around rsync to a NAS, which is much easier to administer.

Thanks for sharing your experience with Amanda, Tony.

Regards,
Veljko





Re: Storage server

2012-09-13 Thread lee
Veljko velj...@gmail.com writes:

 On Wed, Sep 12, 2012 at 06:49:22PM +0200, lee wrote:
 Denis Witt denis.w...@concepts-and-training.de writes:
 
  Anyway, I have some comparison data. I have a backup server that saves
  data from 5 other server at our hosting company using rsnapshot. The
  backups are kept for 14 days.
 
  rsnapshot:
  bup:
  obnam:
  rdiff-backup:
 
 How about amanda? It hasn't been mentioned yet and might be an
 interesting alternative.

 I've heard of it, but don't know anyone who uses it. Any experience with
 it?

Yes, I've used it at home and at work and it always worked nicely.  It
has a server and a client part so you can back up locally and remotely,
you can create backup schedules involving incremental backups and full
backups (IIRC depending on time and on how much data has changed, all
configurable).  Besides hard disks, it can use tape drives and drive
storage libraries and assists you in restores.  You can limit the
network traffic it uses and handle different file systems on different
clients differently.  Support through their mailing list was great.


-- 
Debian testing amd64





Re: Storage server

2012-09-13 Thread Stan Hoeppner
On 9/13/2012 5:21 AM, Veljko wrote:
 On Tue, Sep 11, 2012 at 05:44:46PM -0500, Stan Hoeppner wrote:
 On 9/11/2012 10:29 AM, Jon Dowland wrote:

 Actually, lots and lots of small files is the worst use-case for rsnapshot, 
 and
 the reason I believe it should be avoided. It creates large hard-link trees 
 and
 with lots and lots of small files, the filesystem metadata for the trees can
 consume more space than the files themselves. Also performing operations 
 that
 need to recurse over large link trees (such as simply removing an old
 increment) can be very slow in that case.

 Which is why I recommend XFS.  It is exceptionally fast at traversing
 large btrees.  You'll need the 3.2 bpo kernel for Squeeze.  The old as
 dirt 2.6.32 kernel doesn't contain any of the recent (last 3 years)
 metadata optimizations.

 Unlike my boss, whom I failed to persuade to buy a RAID card, you convinced
 me to use XFS. I created a 1TB LV for backup (and will resize it when
 necessary). Will the default XFS be OK in my case?

Due to its allocation group design, continually growing an XFS
filesystem in such small increments, with this metadata heavy backup
workload, will yield very poor performance.  Additionally, putting an
XFS filesystem atop an LV is not recommended as it cannot properly align
journal write out to the underlying RAID stripe width.  While this is
more critical with parity arrays, it also affects non-parity striped arrays.

Thus my advice to you is:

Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
defaults.  mkfs.xfs will read the md configuration and automatically
align the filesystem to the stripe width.

When the filesystem reaches 85% capacity, add 4 more drives and create
another RAID10 array.  At that point we'll teach you how to create a
linear device of the two arrays and grow XFS across the 2nd array.
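
In concrete terms the first step is just this (device and mount point are
only examples; the later linear-concatenation step is left out here):

mkfs.xfs /dev/md0          # defaults; geometry is read from md and the fs is stripe aligned
mount /dev/md0 /srv/backup
xfs_info /srv/backup       # sunit/swidth should match the RAID10 chunk and stripe width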

-- 
Stan





Re: Storage server

2012-09-13 Thread Stan Hoeppner
On 9/13/2012 5:20 AM, Veljko wrote:
 On Tue, Sep 11, 2012 at 08:34:51AM -0500, Stan Hoeppner wrote:
 One of the big reasons (other than cost) that I mentioned this card is
 that Adaptec tends to be more forgiving with non RAID specific
 (ERC/TLER) drives, and lists your Seagate 3TB drives as compatible.  LSI
 and other controllers will not work with these drives due to lack of
 RAID specific ERC/TLER.
 
 That is really valuable information. I wasn't aware that not all
 drives work with RAID cards.

Consumer hard drives will not work with most RAID cards.  As a general
rule, RAID cards require enterprise SATA drives or SAS drives.

-- 
Stan






Re: Storage server

2012-09-12 Thread Denis Witt
On Tue, 11 Sep 2012 16:29:22 +0100
Jon Dowland j...@debian.org wrote:

 Denis' answer is very good, I won't re-iterate his points.

Thanks. And also thanks for pointing out the hard links thing; I
overlooked the lots of small files part in Veljko's mail.

Anyway, I have some comparison data. I have a backup server that saves
data from 5 other servers at our hosting company using rsnapshot. The
backups are kept for 14 days.

rsnapshot:

The backup takes 186GB: 51GB for the full backup (daily.0) and
about 11GB for each incremental backup (daily.1 - daily.13). The backup
includes typical small webserver files but also big logfiles and two
ZOPE databases (ZEOs) of about 5GB each.
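
The matching rsnapshot.conf lines for such a setup look roughly like this
(fields are tab-separated in the real file; paths and host are hypothetical):

snapshot_root   /backup/
retain          daily   14
backup          root@web1.example:/var/www/    web1/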

bup:

I imported them with bup import-rsnapshot; the overall size is 15GB
(for all 14 days), which is quite amazing. Anyway, the lack of a
way to delete old backup versions is (for me) a major drawback.
What I liked was the possibility to mount the backup with FUSE. After
mounting the backup one can access every backup generation as normal
files. Each generation has its own folder with a timestamp. The files
inside the backup have no metadata (the timestamp is always 1.1.1970, etc.).
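
The mount itself is a one-liner (mount point hypothetical):

bup fuse /mnt/bup        # each backup set shows up as a directory of timestamped generations
fusermount -u /mnt/bup   # unmount when done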

The only way I can think of at the moment to get rid of old backup
generations using bup is to mount (FUSE) the backup, restore all backup
generations you want to keep to an additional drive, delete the bup git
repository, create a new one and back up the restore again. Of course
this might take a lot of additional space on your disk for the
(temporary) restore, which might not be available. If anyone has a
better approach I would love to hear it.

obnam:

With obnam I made a backup of daily.0 (51GB). There was nearly no
reduction in size for the first backup run (47GB). The next backup
run (one day later, which creates 11GB of new data with rsnapshot) only
added a few MB and therefore was pretty fast.

The repository approach of obnam is very handy. You can pull or push
backups to the repository server and can access the backups from any
other machine (if you have SSH access). Configuration is not necessary,
but a small config containing some default parameters comes in handy:

[config]
repository = sftp://192.168.1.10/backup/obnam/
log = /var/log/obnam.log
log-level = warning
client-name = dx

If you now run obnam backup /var/www the backup of /var/www will be
pushed to the repository. obnam locks the repository for the client so
one cannot accidentally run two backups of the same host (client) at
the same time. Running several backups of different hosts is no problem.

During a backup run obnam makes snapshots every few hundred MB, so if the
backup fails (e.g. a disconnect from the repository server) the backup
can be resumed from the last snapshot.

A nice feature is some kind of built in nagios plugin:

obnam nagios-last-backup-age --warn-age=1 --client=dx
OK: backup is recent.  last backup was 2012-09-12 10:05:47.

obnam nagios-last-backup-age --warn-age=1 --client=backup 
WARNING: backup is old.  last backup was 2012-09-11 18:03:43.

obnam nagios-last-backup-age --client=cat --critical-age=1
CRITICAL: backup is old.  last backup was 2012-09-11 17:01:23.

The restore is a bit more complex, as there is (at the moment) no FUSE
filesystem available for obnam. Instead you need to know the name of
the file/folder and in which backup generation your file/folder exists.

obnam generations shows all available backups:

101 2012-09-11 18:01:07 .. 2012-09-11 18:02:10 (26474 files, 8598496965 bytes)
108 2012-09-12 10:05:47 .. 2012-09-12 10:06:36 (26474 files, 8598500897 bytes)

Then you can use obnam ls --generation=101 to show the files.
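
Restoring then looks roughly like this (paths and generation number are
just an example):

~$ obnam restore --generation=101 --to=/tmp/restore /var/www/some/file

With the config above the repository does not need to be given again on
the command line.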

rdiff-backup:

I have no real comparison data for rdiff-backup, but I expect similar
results as with obnam (about 50GB for the first backup, only several MB
for each following daily backup).

rdiff-backup can (like bup) mount the backup (all generations) using
FUSE.

Best regards
Denis Witt





Re: Storage server

2012-09-12 Thread Jon Dowland
On Tue, Sep 11, 2012 at 05:44:46PM -0500, Stan Hoeppner wrote:
 Which is why I recommend XFS.  It is exceptionally fast at traversing large
 btrees.  You'll need the 3.2 bpo kernel for Squeeze.  The old as dirt 2.6.32
 kernel doesn't contain any of the recent (last 3 years) metadata
 optimizations.

Yes. You are becoming a bit of a broken record on that front :)

I have not performed any such timings but I am willing to believe you that
XFS will be faster.  I'd still not recommend rsnapshot, because faster merely
mitigates the big design flaw, it does not remove it, and rdiff-backup is
virtually a drop-in replacement.





Re: Storage server

2012-09-12 Thread Jon Dowland
On Mon, Sep 10, 2012 at 08:03:43PM +0300, Andrei POPESCU wrote:
 http://www.taobackup.com/

Yes indeed, great read.

Also this: http://www.jwz.org/doc/backups.html

A single external drive, normally stored away from the server, would be enough
to have a backup that would survive the host going up in flames.





Re: Storage server

2012-09-12 Thread lee
Denis Witt denis.w...@concepts-and-training.de writes:

 Anyway, I have some comparison data. I have a backup server that saves
 data from 5 other server at our hosting company using rsnapshot. The
 backups are kept for 14 days.

 rsnapshot:
 bup:
 obnam:
 rdiff-backup:

How about amanda? It hasn't been mentioned yet and might be an
interesting alternative.


-- 
Debian testing amd64





Re: Storage server

2012-09-11 Thread Denis Witt
On Mon, 10 Sep 2012 17:38:22 +0200
Veljko velj...@gmail.com wrote:

 Any particular reason for avoiding rsnapshot? What are advantages of
 using rdiff-backup or obnam?

Hi Veljko,

I don't know a reason why someone should avoid rsnapshot. rdiff-backup
is very similar to rsnapshot but handles the backup generations
differently. rsnapshot always backs up whole files (and uses hardlinks
if a file didn't change). rdiff-backup only keeps the newest backup as
normal files; every older version is stored as a compressed delta. If you
have to back up large files like databases or huge logfiles, rdiff-backup
will save you a lot of disk space (which is for me the biggest
advantage of rdiff-backup). On the other hand it takes much longer to
restore an old rdiff-backup than an rsnapshot one.

rdiff-backup is a bit more flexible when it comes to deciding when
to delete old backups. rsnapshot has a fixed scheme. rdiff-backup has
a command you can trigger manually (or from a script when disk space
is running low). So, for example, you can guarantee your users that
there will be a backup for at least 7 days but in fact keep files as
long as there is disk space available.
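
For example (host name and retention period are placeholders):

~$ rdiff-backup root@web1::/var/www /backup/web1
~$ rdiff-backup --remove-older-than 7D /backup/web1

The second command only removes increments older than 7 days; the current
mirror is always kept.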

rdiff-backup stores metadata (such as ownership) separately. rsnapshot
just keeps whatever settings the file has.

rsnapshot has a larger user base, so you can expect somewhat more
support if you run into problems.

obnam uses a completely different approach. Everything is stored in a
repository. It has some nice features, but the last time I had a look I
decided against using it (I can't remember exactly why), so I can't
tell you much about it.

bup is very interesting but at the moment not mature enough to be used,
IMHO. Also there is (at the moment) no function to delete old backups,
so if you're running out of disk space you have to buy new hardware.

I'm using rsnapshot for most of my backup needs. It's very easy to use
and understand.
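
A minimal sketch of such a setup (hosts and paths are placeholders, and
note that rsnapshot.conf fields must be separated by tabs, not spaces):

# /etc/rsnapshot.conf (excerpt)
snapshot_root   /backup/snapshots/
retain          daily   7
retain          weekly  4
backup          root@web1:/var/www/     web1/

plus a cron entry such as "30 3 * * * root /usr/bin/rsnapshot daily"
in /etc/cron.d.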

Best regards
Denis Witt





Re: Storage server

2012-09-11 Thread Jon Dowland
I would say that neither hardware nor software RAID are a replacement for
a working backup scheme.





Re: Storage server

2012-09-11 Thread Chris Bannister
On Mon, Sep 10, 2012 at 05:38:10PM +0200, Veljko wrote:
 Not that hard to comprehend. My boss sees backup as necessary evil. And
 only after I pushed it. Before I got here, there was no backup. None
 whatsover. I was baffled. And I had situation few days on my arrival,
 that one of databases got corrupted. Managed to find some old backup and
 with data we already had saved, restored database. But that situation is

You could use the database management tools and take a pg_dump and
stick it on a USB stick each night?
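
Something along those lines (database name and mount point are made up):

~$ pg_dump -U postgres -Fc -f /media/usb/mydb-$(date +%F).dump mydb

run from a nightly cron job, would at least give you an independent copy.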

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X





Re: Storage server

2012-09-11 Thread Veljko
On Mon, Sep 10, 2012 at 08:03:43PM +0300, Andrei POPESCU wrote:
 If you ignore the references to the proprietary backup software this is 
 a very interesting reading
 
 http://www.taobackup.com/
 
 Kind regards,
 Andrei

Yes, very interesting. Thanks Andrei!

Regards,
Veljko





Re: Storage server

2012-09-11 Thread Veljko
On Mon, Sep 10, 2012 at 08:16:00PM +0200, Martin Steigerwald wrote:
 GRUB needs a space between MBR and first partition. Maybe that space was to 
 small? Or more likely you GPT partitioned the disk (as its 3 TB and MBR 
 does only work upto 2 TB)? Then you need a BIOS boot partition. Unless you 
 use UEFI, then you´d need about 200 MB FAT 32 EFI system partition.


Never used a BIOS boot partition till now, but on the other hand, I've
never used 3TB disks. Debian reserves 1MB at the start of the disk, but I
guess that part is used for the MBR. It wasn't hard to find the necessary
information, though.

  I'm not sure what is being copied on freshly installed system.
 
 What do you mean by that?
 
 SoftRAID just makes sure that all devices data is in sync. Thats needed 
 for a block level based RAID. An hardware RAID controller would have to do 
 this as well. (Unless it uses some map of sectors it already used, then 
 only these would have to be kept in sync, but I am not aware of any 
 hardware RAID controller or SoftRAID mode that does this.)

Didn't think it would take that much time to sync empty disks, but now
it does make sense.

 BTRFS based RAID (RAID 1 means something different there) does not need 
 an initial sync. But then no, thats no recommendation to use BTRFS yet.
 
 Ciao,
 -- 
 Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
 GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

I've tried btrfs in a testing environment; it worked well, but I wouldn't
use it until btrfsck is ready and stable.

Regards,
Veljko





Re: Storage server

2012-09-11 Thread Veljko
On Mon, Sep 10, 2012 at 07:47:52PM +0200, lee wrote:
 Did you get it to actually install on the RAID and to boot from that?
 Last time I tried with a RAID-1, it didn't work. It's ridiculously
 difficult to get it set up so that everything is on software raid.

Yes, everything is on RAID. The two boot partitions are on RAID1;
everything else, four big partitions, is on RAID10. It didn't work until I
used the mentioned 1MB BIOS boot partition at the beginning of all four disks.
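
For reference, creating such a partition by hand with parted looks roughly
like this (device name and offsets are an example):

~$ parted /dev/sda mklabel gpt              # careful: wipes the existing partition table
~$ parted /dev/sda mkpart biosboot 1MiB 2MiB
~$ parted /dev/sda set 1 bios_grub on

The Debian installer does essentially the same thing when you create a 1MB
partition and mark it as reserved BIOS boot area.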

  Anyhow, this is output of cat /proc/mdstat:
  Personalities : [raid1] [raid10] 
  md1 : active raid10 sda3[0] sdd2[3] sdc2[2] sdb3[1]
5859288064 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[======>.............]  resync = 32.1% (1881658368/5859288064) 
  finish=325.2min speed=203828K/sec

  md0 : active raid1 sda2[0] sdb2[1]
488128 blocks super 1.2 [2/2] [UU]

  unused devices: none
 
  I'm not sure what is being copied on freshly installed system.
 
 On top of that, by default it'll do a check or rebuild of some sort
 every first Sunday night of every month.

Thanks for the heads up.

Regards,
Veljko





Re: Storage server

2012-09-11 Thread Veljko
On Tue, Sep 11, 2012 at 10:41:04AM +0200, Denis Witt wrote:
 On Mon, 10 Sep 2012 17:38:22 +0200
 Veljko velj...@gmail.com wrote:
 
  Any particular reason for avoiding rsnapshot? What are advantages of
  using rdiff-backup or obnam?
 
 Hi Veljko,
 
 I don't know a reason why someone should avoid rsnapshot. rdiff-backup
 is very similar to rsnapshot but handles the backup generations
 differently. rsnapshot always backup whole files (and uses hardlinks
 if a file didn't change). rdiff-backup just save the newest backup as
 normal files, every older version is stored as compressed delta. If you
 have to backup large files like databases or huge logfiles rdiff-backup
 will save you a lot of diskspace doing so (which is for me the biggest
 advantage of rdiff-backup). On the other hand it takes much longer to
 restore an old rdiff-backup than an rsnapshot one.
 
 rdiff-backup is a bit more flexible when it comes to decide when
 to delete old backups. rsnapshot has a fixed scheme. rdiff-backup has 
 a command you can trigger manually (or by a script when the diskspace
 is running low). So, for example, you can guarantee your users that
 there will be a backup for at least 7 days but in fact keep files as
 long as there is diskspace available. 
 
 rdiff-backup stores metadata (such as ownership) separately. rsnapshot
 just keep the settings the file has.
 
 rsnapshot have a larger user basis, so you might can expect some
 more support if you're running into problems.
 
 obnam uses a completely different approach. Everything is stored in a
 repository. It has some nice features but last time I had a look I
 decided against using it (but I can't remember exactly why) so I can't
 tell much about it.
 
 bup is very interesting but at the moment not mature enough to be used,
 IMHO. Also there is (at the moment) no function to delete old backups,
 so if you're running out of diskspace you have to buy new hardware.
 
 I'm using rsnapshot for most of my backup needs. It's very easy to use
 and understand.
 
 Best regards
 Denis Witt

Hi, Denis!

Thanks for your valuable input. So, in case I have to back up a lot of
small files and only some of them change, I should go with
rsnapshot. If there are big text files that change over time, I
should go with rdiff-backup.

Would it be reasonable to use them both where appropriate, or is that just
unnecessary complexity? 

Regards,
Veljko





Re: Storage server

2012-09-11 Thread Veljko
On Tue, Sep 11, 2012 at 09:45:14PM +1200, Chris Bannister wrote:
 On Mon, Sep 10, 2012 at 05:38:10PM +0200, Veljko wrote:
  Not that hard to comprehend. My boss sees backup as necessary evil. And
  only after I pushed it. Before I got here, there was no backup. None
  whatsover. I was baffled. And I had situation few days on my arrival,
  that one of databases got corrupted. Managed to find some old backup and
  with data we already had saved, restored database. But that situation is
 
 You could use the database management tools and take a pg_dump and
 stick it on a USB stick each night?
 

I'm backing it up on my machine for the moment, just to have something
if, God forbid, another situation arises. But not for long. :)

Regards,
Veljko





Re: Storage server

2012-09-11 Thread Veljko
On Mon, Sep 10, 2012 at 08:06:16PM +0200, Martin Steigerwald wrote:
 If you made sure to explain the risks to your boss you can say in case 
 anything bad happens: I recommended doing backup in a different, safer way 
 than you allowed me to do it and thats the result.

That's exactly what I had in mind. I advised verbally and by email, so I
always have something to put in front of his and his bosses' noses.

 And you are right: Some backup is better than no backup.
 
 (PS: And I didn´t want to imply that you were an idiot - my above sentence 
 could be read as that. Sometimes its about a feeling of lack of choice. I 
 wish that you will create more choice for yourself in the future. From 
 what I read from your answers you are quite aware of the situation.)
 
 -- 
 Martin 'Helios' Steigerwald - http://www.Lichtvoll.de

I didn't read it that way. I know what you wanted to say. I'm very
grateful to all of you who took the time to read my emails and share
your thoughts on the subject. And thanks for your wishes, Martin, much
appreciated.

Regards,
Veljko





Re: Storage server

2012-09-11 Thread Denis Witt
On Tue, 11 Sep 2012 13:26:48 +0200
Veljko velj...@gmail.com wrote:

 Would it be reasonable to use them both where appropriate or thats
 just unnecessary complexity? 

Hi Veljko,

I prefer backups to be as simple as they can get (one reason why I use
rsnapshot). So personally I wouldn't mix.

But if you want to provide a restore share for your users so they
could recover their text files on their own (even if they
deleted/changed them some days ago), I would use rsnapshot for those
files. For your virtual machines rdiff-backup should be better
regarding backup size. So mixing might be worth it.

If you don't want your users to recover files on their own, or only the
most recent version, you can use rdiff-backup for all of your files.

If you want a fixed backup policy (e.g. 72h,7d,5w,12m, which means:
keep the last 72 hourly backups, the last 7 daily backups, the last 5
weekly backups and the last 12 monthly backups), delta updates and
don't want a restore share, take a look at obnam. If you want a fixed
policy but there is no need for delta backups, take rsnapshot. If you
want something in between, rdiff-backup might be a good choice.
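
With obnam that kind of policy maps directly onto its forget command,
e.g. (policy string as above):

~$ obnam forget --keep=72h,7d,5w,12m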

As you can see, it always depends on what you're trying to achieve. So I
would suggest you do some tests on your own and choose the tool that
fits you best.

A nice comparison between rsnapshot and rdiff-backup can be found here:
http://www.saltycrane.com/blog/2008/02/backup-on-linux-rsnapshot-vs-rdiff/
(Also the comments are very insightful.)

Best regards
Denis Witt





Re: Storage server

2012-09-11 Thread Stan Hoeppner
On 9/10/2012 10:41 AM, Veljko wrote:
 On Mon, Sep 10, 2012 at 08:05:49AM -0500, Stan Hoeppner wrote:
 I'm not able to find that card here (and I haven't so far), can I have 
 another one?

 That's hard to believe given the worldwide penetration Adaptec has, and
 the fact UPS/FedEx ship worldwide.  What country are you in?
 
 I'm in Serbia. I tried several web sites of more known dealers, but it's
 possible that they don't have everything listed there on their web
 sites. If my boss approve buying RAID card, I'll call them to see if
 they have it or if they can order one.

Try German suppliers.  Surely they'd have it.

 If not, how to find appropriate one? One with 8 supported devices,
 hardware RAID10? What else to look for?

One of the big reasons (other than cost) that I mentioned this card is
that Adaptec tends to be more forgiving with non RAID specific
(ERC/TLER) drives, and lists your Seagate 3TB drives as compatible.  LSI
and other controllers will not work with these drives due to lack of
RAID specific ERC/TLER.

 I didn't till 30 minutes ago. :) I just installed it for exercise if
 nothing else. Had a problem with booting. 
 
 Unable to install GRUB in /dev/sda
 Executing 'grub-intall /dev/sda' failed.
 This is a fatal error.
 
 After creating 1MB partition at the beginning of every drive with
 reserved for boot bios it worked (AHCI in BIOS). 

So now you've run into one of the problems I mentioned that is avoided
by booting from a real RAID controller.

 Anyhow, this is output of cat /proc/mdstat:
 Personalities : [raid1] [raid10] 
 md1 : active raid10 sda3[0] sdd2[3] sdc2[2] sdb3[1]
   5859288064 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
   [======>.............]  resync = 32.1% (1881658368/5859288064) 
 finish=325.2min speed=203828K/sec
   
 md0 : active raid1 sda2[0] sdb2[1]
   488128 blocks super 1.2 [2/2] [UU]
   
 unused devices: none
 
 I'm not sure what is being copied on freshly installed system.

Someone else already answered this.  RAID arrays require initialization
of the drives, i.e. filling all sectors with zeros.  Always have,
probably always will.

-- 
Stan





Re: Storage server

2012-09-11 Thread Stan Hoeppner
On 9/11/2012 4:43 AM, Jon Dowland wrote:
 I would say that neither hardware nor software RAID are a replacement for
 a working backup scheme.

Absolutely correct.  RAID protects against drive failure, period.  It
doesn't protect against accidental file deletion, overwriting a new file
with an older version, filesystem corruption due to various causes, etc.
 It also does not protect against controller or complete host failures,
or catastrophic loss of the facility due to man made or natural
disasters (hence why off site backup is required for crucial information).

-- 
Stan





Re: Storage server

2012-09-11 Thread Stan Hoeppner
On 9/11/2012 6:26 AM, Veljko wrote:
 Debian reserve 1MB on start of the partition, but I
 guess that part is used for MBR.

The MBR is stored entirely in the first sector of the drive and is only
512 bytes in size.  It includes the bootstrap code, partition table, and
boot signature.

The reason for the 1MB reservation is to ensure that any partitions
created will fall on 4096 byte sector boundaries of Advanced Format
drives, i.e. 4096B physical sectors with 512B logical sectors presented
to the OS.  This prevents the drive from being required to read two 4KB
sectors instead of one, which occurs with improper partition alignment,
causing serious performance degradation.
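
parted can check this for you if you are unsure (partition number is an
example):

~$ parted /dev/sda align-check optimal 1    # reports whether partition 1 is optimally aligned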

-- 
Stan






Re: Storage server

2012-09-11 Thread Ralf Mardorf
On Tue, 2012-09-11 at 08:34 -0500, Stan Hoeppner wrote:
 On 9/10/2012 10:41 AM, Veljko wrote:
  On Mon, Sep 10, 2012 at 08:05:49AM -0500, Stan Hoeppner wrote:
  I'm not able to find that card here (and I haven't so far), can I
 have another one?
 
  That's hard to believe given the worldwide penetration Adaptec has,
 and
  the fact UPS/FedEx ship worldwide.  What country are you in?
  
  I'm in Serbia. I tried several web sites of more known dealers, but
 it's
  possible that they don't have everything listed there on their web
  sites. If my boss approve buying RAID card, I'll call them to see if
  they have it or if they can order one.
 
 Try German suppliers.  Surely they'd have it.

The cheapest, but nevertheless reliable, German retailer for all kinds of
electronic gear: 
http://www.reichelt.de/index.html?;ACTION=103;LA=2;MANUFACTURER=adaptec;SID=12UE9B@H8AAAIAAEcGSWU702e805c66e3a1b7cce75cd098027793
Perhaps you'll have good luck and find what you need.

Regards,
Ralf





Re: Storage server

2012-09-11 Thread Ralf Mardorf
On Tue, 2012-09-11 at 16:04 +0200, Ralf Mardorf wrote:
 On Tue, 2012-09-11 at 08:34 -0500, Stan Hoeppner wrote:
  On 9/10/2012 10:41 AM, Veljko wrote:
   On Mon, Sep 10, 2012 at 08:05:49AM -0500, Stan Hoeppner wrote:
   I'm not able to find that card here (and I haven't so far), can I
  have another one?
  
   That's hard to believe given the worldwide penetration Adaptec has,
  and
   the fact UPS/FedEx ship worldwide.  What country are you in?
   
   I'm in Serbia. I tried several web sites of more known dealers, but
  it's
   possible that they don't have everything listed there on their web
   sites. If my boss approve buying RAID card, I'll call them to see if
   they have it or if they can order one.
  
  Try German suppliers.  Surely they'd have it.
 
 The cheapest, but anyway reliable German retailer for all kinds of
 electronic gear: 
 http://www.reichelt.de/index.html?;ACTION=103;LA=2;MANUFACTURER=adaptec;SID=12UE9B@H8AAAIAAEcGSWU702e805c66e3a1b7cce75cd098027793
 Perhaps you'll have good luck and find what you need.

PS: Regarding computer gear there are less expensive German retailers,
and perhaps many of them are reliable too. However, in case of doubt I
would choose Reichelt.





Re: Storage server

2012-09-11 Thread Jon Dowland
Denis' answer is very good, I won't re-iterate his points.

On Tue, Sep 11, 2012 at 01:26:48PM +0200, Veljko wrote:
 Thanks for your valuable input. So, in case I have to backup lot of
 small files and only some of them are changed I should go with
 rsnapshot. If there are big text files that changes through time, I
 should go with rdiff-backup.

Actually, lots and lots of small files is the worst use-case for rsnapshot, and
the reason I believe it should be avoided. It creates large hard-link trees and
with lots and lots of small files, the filesystem metadata for the trees can
consume more space than the files themselves. Also performing operations that
need to recurse over large link trees (such as simply removing an old
increment) can be very slow in that case.

 Would it be reasonable to use them both where appropriate or thats just
 unnecessary complexity? 

Sounds like unnecessary complexity to me.





Re: Storage server

2012-09-11 Thread lee
Veljko velj...@gmail.com writes:

 On Mon, Sep 10, 2012 at 07:47:52PM +0200, lee wrote:
 Did you get it to actually install on the RAID and to boot from that?
 Last time I tried with a RAID-1, it didn't work. It's ridiculously
 difficult to get it set up so that everything is on software raid.

 Yes, everything is on RAID. 2 boot partitions are on RAID1, everything
 else, 4 big partition are on RAID10. Didn't work until I used mentioned
 1MB BIOS boot partition at the beginning of all four disks.

Good to know, thanks :)


-- 
Debian testing amd64





Re: Storage server

2012-09-11 Thread Stan Hoeppner
On 9/11/2012 10:29 AM, Jon Dowland wrote:

 Actually, lots and lots of small files is the worst use-case for rsnapshot, 
 and
 the reason I believe it should be avoided. It creates large hard-link trees 
 and
 with lots and lots of small files, the filesystem metadata for the trees can
 consume more space than the files themselves. Also performing operations that
 need to recurse over large link trees (such as simply removing an old
 increment) can be very slow in that case.

Which is why I recommend XFS.  It is exceptionally fast at traversing
large btrees.  You'll need the 3.2 bpo kernel for Squeeze.  The old as
dirt 2.6.32 kernel doesn't contain any of the recent (last 3 years)
metadata optimizations.
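
Assuming squeeze-backports is enabled in sources.list, e.g.

deb http://backports.debian.org/debian-backports squeeze-backports main

installing it is roughly (the metapackage name is from memory, so
double-check it in the backports archive):

~$ apt-get update
~$ apt-get -t squeeze-backports install linux-image-amd64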

-- 
Stan






Re: Storage server

2012-09-10 Thread Stan Hoeppner
On 9/9/2012 3:25 PM, Paul E Condon wrote:

 I've been following this thread from its beginning. My initial reading
 of OP's post was to marvel at the thought that so many things/tasks
 could be done with a single box in a single geek's cubicle. 

One consumer quad core AMD Linux box of today can do a whole lot more
than what has been mentioned.

 I resolved
 to follow the thread that would surely follow closely. I think you,
 Stan, did OP an enormous service with your list of questions to be
 answered. 

I try to prevent others from shooting themselves in the foot when I see
the loaded gun in their hand.

 This thread drifted onto the topic of XFS. I first learned of the
 existence of XFS from earlier post by you, and I have ever since been
 curious about it. But I am retired, and live at home in an environment
 where there is very little opportunity to make use of its features.

You might be surprised.  The AG design and xfs_fsr make it useful for
home users.

 Perhaps you could take OP's original specification as a user wish list
 and sketch a design that would fulfill the wishlist and list how XFS
 would change or resolve issues that were/are troubling him. 

The OP's issues don't revolve around filesystem choice, but basic system
administration concepts.

 In particular, the typical answers to questions about backup on this list
 involve rsync, or packages that depend on rsync, and on having a file
 system that uses inodes and supports hard links. 

rsync works with any filesystem, but some work better with rsync
workloads.  If one has concurrent rsync jobs running XFS is usually best.

 How would an XFS design
 handle de-duplication? 

Deduplication isn't an appropriate function of a filesystem.

 Or is de-duplication simply a bad idea in very
 large systems?

That's simply a bad, overly broad question.

-- 
Stan





Re: Storage server

2012-09-10 Thread Veljko
On Sat, Sep 08, 2012 at 09:59:35PM +0200, Martin Steigerwald wrote:
 Could it be that you intend to provide hosted monitoring, backup and 
 fileservices for an customer and while at it use the same machine for 
 testing own stuff?
 
 If so: Don´t.
 
 Thats at least my advice. (In addition to what I wrote already.)
 
 -- 
 Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
 GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

No, I'm not doing this for a customer, but for my boss. Someone with an
idea like the one you implied has no place in a job like this, I'm
sure you'll agree.

Regards,
Veljko





Re: Storage server

2012-09-10 Thread Veljko
On Sat, Sep 08, 2012 at 09:28:09PM +0200, Martin Steigerwald wrote:
 Consider the consequenzes:
 
 If the server fails, you possibly wouldn´t know why cause the monitoring 
 information wouldn´t be available anymore. So at least least Nagios / 
 Icingo send out mails, in case these are not stored on the server as well, 
 or let it relay the information to another Nagios / Icinga instance.

Ideally, Icinga/Nagios/any server would be on an HA system, but that,
unfortunately, is not an option. And of course Icinga can't monitor the
system it's on, so I plan to monitor it from my own machine. 

 What data do you backup? From where does it come?

Like I said, it's several dedicated, mostly web servers, with user-uploaded
content on one of them (that part is expected to grow). None of
them is in the same data center.

 I still think backup should be separate from other stuff. By design.
 Well for more fact based advice we´d require a lot more information on 
 your current setup and what you want to achieve.
 
 I recommend to have a serious talk about acceptable downtimes and risks 
 for the backup with the customer if you serve one or your boss if you work 
 for one.

I talked to my boss about it. Since this is a backup server, downtime is
acceptable to him. Regarding the risk of data loss, isn't that the reason
to implement a RAID configuration? The R stands for redundancy. If a hard
disk fails, it will be replaced and the RAID will be rebuilt with no data
loss. If the processor or something else fails, it will be replaced, with
expected downtime of course.

 -- 
 Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
 GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Regards,
Veljko





Re: Storage server

2012-09-10 Thread Veljko
On Sun, Sep 09, 2012 at 03:42:12AM -0500, Stan Hoeppner wrote:
 Stop here.  Never use a production system as a test rig.

Noted.

 You can build a complete brand new AMD dedicated test machine with parts
 from Newegg for $238 USD, sans KB/mouse/monitor, which you already have.
  Boot it up then run it headless, use a KVM switch, etc.
 
 http://www.newegg.com/Product/Product.aspx?Item=N82E16813186189
 http://www.newegg.com/Product/Product.aspx?Item=N82E16820148262
 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888
 http://www.newegg.com/Product/Product.aspx?Item=N82E16822136771
 http://www.newegg.com/Product/Product.aspx?Item=N82E16827106289
 http://www.newegg.com/Product/Product.aspx?Item=N82E16811121118
 
 If ~$250 stretches the wallet of your employer, it's time for a new job.

Not all of us have the luxury of being that picky about our jobs,
but I get your point. 

 Get yourself an Adaptec 8 port PCIe x8 RAID card kit for $250:
 http://www.newegg.com/Product/Product.aspx?Item=N82E16816103231
 
 The Seagate ST3000DM001 is certified.  It can't do RAID5 so you'll use
 RAID10, giving you 6TB of raw capacity, but much better write
 performance than RAID5.  You can add 4 more of these drives, doubling
 capacity to 12TB.  Comes with all cables, manuals, etc.  Anyone who has
 tried to boot a server after the BIOS configured boot drive that is
 mdraid mirrored knows why $250 is far more than worth the money.  A
 drive failure with a RAID card doesn't screw up your boot order.  It
 just works.

I'm going to try to persuade my boss to buy one. In case he agrees and
I'm not able to find that card here (and I haven't so far), can I have
another recommendation?
What about something like this:
http://ark.intel.com/products/35340/Intel-RAID-Controller-SASMF8I

If not, how do I find an appropriate one? One with 8 supported devices and
hardware RAID10? What else should I look for?

  In next few months it is expected that size of files on dedicated
  servers will grow and it case that really happen I'd like to be able to
  expand this system.
 
 See above.
 
  And, of course, thanks for your time and valuable advices, Stan, I've read
  some of your previous posts on this list and know you're storage guru.
 
 You're welcome.  And thank you. ;)  Recommending the above Adaptec card
 is the best advice you'll get.  It'll make your life much easier, with
 better performance to boot.
 
 -- 
 Stan

There is something that is not clear to me. You recommended hardware
RAID as the superior solution. I already knew that was the case, but I
thought that Linux software RAID is also a workable solution. What would
be the drawbacks of using it? In case of one drive failure, is it possible
that it won't boot, or will it definitely not boot? In case I don't get
that card, should I remove /boot from RAID1? 

Regards,
Veljko





Re: Storage server

2012-09-10 Thread The Wanderer

On 09/09/2012 02:37 AM, Stan Hoeppner wrote:


On 9/7/2012 3:16 PM, Bob Proulx wrote:



Whjat?  Are you talking crash recovery boot time fsck?  With any modern
journaled FS log recovery is instantaneous.  If you're talking about an
actual structure check, XFS is pretty quick regardless of inode count as
the check is done in parallel.  I can't speak to EXTx as I don't use
them.


You should try an experiment and set up a terabyte ext3 and ext4 filesystem
and then perform a few crash recovery reboots of the system.  It will
change your mind.  :-)


As I've never used EXT3/4 and thus have no opinion, it'd be a bit difficult
to change my mind.  That said, putting root on a 1TB filesystem is a brain
dead move, regardless of FS flavor.  A Linux server doesn't need more than
5GB of space for root.  With /var, /home/ and /bigdata on other filesystems,
crash recovery fsck should be quick.


In my case, / is a 100GB filesystem, and 36GB of it is in use - even with both
/var and /home on separate filesystems.

All but about 3GB of that is under /root (almost all of it in the form of manual
one-off backups), and could technically be stored elsewhere, but it made sense
to put it there since root is the one who's going to be working with it.

Yes, 100GB for / is way more than is probably necessary - but I've run up
against a too-small / in the past (with a 10GB filesystem), even when not
storing more than trivial amounts of data under /root, and I'd rather err on the
side of too much than too little. Since I've got something like 7TB to play
with in total, 100GB didn't seem like too much space to potentially waste, for
the peace of mind of knowing I'd never run out of space on /. (And from the
current use level, it may not have really been wasted.)

--
  The Wanderer

Warning: Simply because I argue an issue does not mean I agree with any
side of it.

Every time you let somebody set a limit they start moving it.
  - LiveJournal user antonia_tiger





Re: Storage server

2012-09-10 Thread Stan Hoeppner
On 9/10/2012 5:47 AM, Veljko wrote:

 Not all of us have that kind of luxury to be that picky about our job,
 but I get your point. 

Small companies with really tight purse strings may seem fine this week,
then suddenly fold next week, and everyone loses their job in the process.

 Get yourself an Adaptec 8 port PCIe x8 RAID card kit for $250:
 http://www.newegg.com/Product/Product.aspx?Item=N82E16816103231

 I'm gonna try to persuade my boss to buy one and in case he agrees and

It's the least expensive real RAID card w/8 ports on the market, and a
high quality one at that.  LSI is best, Adaptec 2nd, then the rest.

 I'm not able to find that card here (and I haven't so far), can I have 
 another one?

That's hard to believe given the worldwide penetration Adaptec has, and
the fact UPS/FedEx ship worldwide.  What country are you in?

 What about something like this:
 http://ark.intel.com/products/35340/Intel-RAID-Controller-SASMF8I

This Intel HBA with software assisted RAID is not a real RAID card.  And
it uses the LSI1068 chip so it probably doesn't support 3TB drives.  In
fact it does not, only 2TB:
http://www.intel.com/support/motherboards/server/sb/CS-032920.htm

 If not, how to find appropriate one? One with 8 supported devices,
 hardware RAID10? What else to look for?

There are many cards with the features you need.  I simply mentioned the
least expensive one.  Surely there is an international distributor in
your region that carries it.  If you're in Antarctica and you're
limiting yourself to local suppliers, you're out of luck.  Again, if you
tell us where you are it would make assisting you easier.

 There is something that is not clear to me. You recommended hardware
 RAID as superior solution. I already knew that it is the case, but I
 thought that linux software RAID is also some solution. 

You mean the same solution, yes?  They are not equal.  Far from it.

 What would be
 drawbacks of using it? In case of one drive failure, it is possible that
 it won't boot or it just won't boot? 

This depends entirely on the system BIOS, its limitations, and how you
have device boot order configured.  For it to work seamlessly you must
manually configure it that way.  And you must make sure any/every time
you run lilo or grub that it targets both drives in the mirror pair,
assuming you've installed lilo/grub in the MBR.
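
In practice that means re-running it against both disks after any grub
update, e.g. (device names are an example):

~$ grub-install /dev/sda
~$ grub-install /dev/sdb

or letting dpkg-reconfigure grub-pc install to both disks for you.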

Using a hardware RAID controller avoids all the nonsense above.  You
simply tell the system BIOS to boot from SCSI or external device,
whatever the manual calls it.

 In case I don't get that card,
 should I remove /boot from RAID1?

Post the output of

~$ cat /proc/mdstat

I was under the impression you didn't have this system built and running
yet.  Apparently you do.  Are the 4x 3TB drives the only drives in this
system?

-- 
Stan





Re: Storage server

2012-09-10 Thread Jon Dowland
On Sat, Sep 08, 2012 at 06:49:45PM +0200, Veljko wrote:
   a) backup (backup server for several dedicated (mainly) web servers).
   It will contain incremental backups, so only first running will take a
   lot of time, rsnapshot

Best avoid rsnapshot. Use (at least) rdiff-backup, which is nearly
a drop-in replacement (but scales); or consider something like bup or obnam
instead.

   and will run from cron every day. Files that will be added later are
   around 1-10 MB in size. I expect ~20 GB daily, but that number can
   grow. Some files fill be deleted, other will be added.

If you want files to eventually be purged from backups avoid bup.





Re: Storage server

2012-09-10 Thread Jon Dowland
On Sat, Sep 08, 2012 at 09:51:05PM +0200, lee wrote:
 Some people have argued it's even better to use software raid than a
 hardware raid controller because software raid doesn't depend on
 particular controller cards that can fail and can be difficult to
 replace. Besides that, software raid is a lot cheaper.

You also get transferable skills: you can use the same tooling on different
systems.  If you have a heterogeneous environment, you may have to learn a
totally different set of HW RAID tools for various bits and pieces, which
can be a pain.





Re: Storage server

2012-09-10 Thread The Wanderer

On 09/10/2012 09:05 AM, Stan Hoeppner wrote:


On 9/10/2012 5:47 AM, Veljko wrote:



There is something that is not clear to me. You recommended hardware RAID
as superior solution. I already knew that it is the case, but I thought
that linux software RAID is also some solution.


You mean same solution, yet?  They are not equal.  Far from it.


What would be drawbacks of using it? In case of one drive failure, it is
possible that it won't boot or it just won't boot?


This depends entirely on the system BIOS, its limitations, and how you have
device boot order configured.  For it to work seamlessly you must manually
configure it that way.  And you must make sure any/every time you run lilo or
grub that it targets both drives in the mirror pair, assuming you've
installed lilo/grub in the MBR.

Using a hardware RAID controller avoids all the nonsense above.  You simply
tell the system BIOS to boot from SCSI or external device, whatever the
manual calls it.


But from what I'm told, hardware RAID has the downside that it often relies on
the exact model of RAID card; if the card dies, you'll need an exact duplicate
in order to be able to mount the RAID. It also (at least in the integrated cases
I've seen) works only with the ports provided by the card, not with any/all
ports the system may have.

Hardware RAID is simpler to configure, is easier to maintain, and is faster (or,
at least, places less load on the CPU). My own experience seems to indicate
that, all else being equal, software RAID is less hardware-dependent and more
expandable.

There are advantages and disadvantages to both options, including probably some
I haven't listed. I personally prefer software RAID for almost all cases, simply
due to my own personal evaluation of how much aggravation each of those
advantages and disadvantages provides or avoids, but hardware RAID is certainly
a legitimate choice for those who evaluate them differently.

--
  The Wanderer

Warning: Simply because I argue an issue does not mean I agree with any
side of it.

Every time you let somebody set a limit they start moving it.
  - LiveJournal user antonia_tiger





Re: Storage server

2012-09-10 Thread Stan Hoeppner
On 9/10/2012 8:11 AM, Jon Dowland wrote:
 On Sat, Sep 08, 2012 at 09:51:05PM +0200, lee wrote:
 Some people have argued it's even better to use software raid than a
 hardware raid controller because software raid doesn't depend on
 particular controller cards that can fail and can be difficult to
 replace. Besides that, software raid is a lot cheaper.
 
 You also get transferrable skills: you can use the same tooling on different
 systems.  If you have a heterogeneous environment, you may have to learn a
 totally different set of HW RAID tools for various bits and pieces, which
 can be a pain.

mdraid also allows one to use the absolute cheapest, low ball hardware
on the planet, and a vast swath of mdraid users do exactly that,
assuming mdraid makes it more reliable--wrong!

See the horror threads and read of the data loss in the last few years
of the linux-raid mailing list for enlightenment.

Linux RAID is great in the right hands when used for appropriate
workloads.  Too many people are using it who should not be, and giving
it a bad rap due to no fault of the software.

Hardware RAID has a minimum price of entry, both currency and knowledge,
and forces one to use quality hardware and BCPs.  Which is why you don't
often see horror stories about hardware RAID eating TBs of filesystems
and data.  And when it does, it's usually because the vendor or user
skimped on hardware somewhere in the stack.

-- 
Stan





Re: Storage server

2012-09-10 Thread Martin Steigerwald
Am Montag, 10. September 2012 schrieb Veljko:
 On Sat, Sep 08, 2012 at 09:28:09PM +0200, Martin Steigerwald wrote:
  Consider the consequenzes:
  
  If the server fails, you possibly wouldn´t know why cause the
  monitoring information wouldn´t be available anymore. So at least
  least Nagios / Icingo send out mails, in case these are not stored
  on the server as well, or let it relay the information to another
  Nagios / Icinga instance.
 
 Ideally, Icinga/Nagios/any server would be on HA system but that,
 unfortunately is not an option. But of course, Icinga can't monitor
 system it's on, so I plan to monitor it from my own machine.

Hmmm, sounds like a workaround… but since it seems your resources are 
tightly limited…

  What data do you backup? From where does it come?
 
 Like I said, it's several dedicated, mostly web servers with users
 uploaded content on one of them (that part is expected to grow). None
 of them is in the same data center.

Okay, so that's fine.

I would still not be comfortable mixing production stuff with a backup 
server, but I think you could get away with it.

But then you need a different backup server for the production stuff on the 
server and the files from the fileserver service that you plan to run on it, 
cause…

  I still think backup should be separate from other stuff. By design.
  Well for more fact based advice we´d require a lot more information
  on your current setup and what you want to achieve.
  
  I recommend to have a serious talk about acceptable downtimes and
  risks for the backup with the customer if you serve one or your boss
  if you work for one.
 
 I talked to my boss about it. Since this is backup server, downtime is
 acceptable to him. Regarding risks of data loss, isn't that the reason
 to implement RAID configuration? R stands for redundancy. If hard
 disk fails, it will be replaced and RAID will be rebuild with no data
 loss. If processor or something else fails, it will be replaced with
 expected downtime of course.

… no again: RAID is not a backup.

RAID is about maximizing performance and/or minimizing downtime.

It's not a backup. And that's about it.

If you or someone else or an application that goes bonkers deletes data on
the RAID by accident, it's gone. Immediately.

If you delete data on a filesystem that is backed up elsewhere, it's still
there, provided that you notice the data loss before the backup is
rewritten and old versions of it are rotated away.

See the difference?

Ok, so now you can argue: But if I rsnapshot the production data on this 
server onto the same server I can still access old versions of it even 
when the original data is deleted by accident.

Sure. Unless, due to a hardware error like too many disks failing at once,
a controller error, a fire or whatnot, the RAID where the backup is
stored is gone as well. 

This is why I won´t ever consider carrying the backup of this notebook
around with the notebook itself. It just doesn´t make sense. Neither for a
notebook, nor for a production server.

That's why I recommend an *offsite* backup for any data that you think is
important for your company. With offsite meaning at least a different
machine and a different set of hard disks.

If that doesn´t go into the head of your boss I do not know what will.

If you follow this, you need two boxes… But if you need two boxes, why
not just do the following:

1) virtualization host

2) backup host

to have a clear separation and an easier concept. Sure, you could replicate
the production data of the mixed production/data server to somewhere else,
but going down this route it seems to me that you add workaround upon
workaround upon workaround.

I find it way easier if the backup server does backup (and nothing else!)
and the production server does production (and nothing else). And removing
complexity removes possible sources of human error as well.

In case you go the above route, I wouldn´t even feel too uncomfortable if you
ran some test VMs on the virtualization host. But that depends on how 
critical the production services on it are.

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-10 Thread Stan Hoeppner
On 9/10/2012 8:19 AM, The Wanderer wrote:

 But from what I'm told, hardware RAID has the downside that it often
 relies on
 the exact model of RAID card; if the card dies, you'll need an exact
 duplicate
 in order to be able to mount the RAID. 

You've been misinformed.

And, given your admission that you have no personal experience with
hardware RAID, and almost no knowledge of it, it seems odd you'd jump
into a thread and tell everyone about its apparent limitations.

-- 
Stan





Re: Storage server

2012-09-10 Thread Martin Steigerwald
Am Montag, 10. September 2012 schrieb Jon Dowland:
 On Sat, Sep 08, 2012 at 09:51:05PM +0200, lee wrote:
  Some people have argued it's even better to use software raid than a
  hardware raid controller because software raid doesn't depend on
  particular controller cards that can fail and can be difficult to
  replace. Besides that, software raid is a lot cheaper.
 
 You also get transferrable skills: you can use the same tooling on
 different systems.  If you have a heterogeneous environment, you may
 have to learn a totally different set of HW RAID tools for various
 bits and pieces, which can be a pain.

I think you got a point here.

While the hardware of some nice LSI / Adaptec controllers appears
excellent to me, and the battery-backed cache can help performance a
lot if you configure mount options correctly, the software side regarding
administration tools is in my eyes pure and utter crap.

I usually installed 3-4 different packages from

http://hwraid.le-vert.net/wiki/DebianPackages

just in order to find out which tool it is this time. (And that's already
from a developer who provides packages; I won´t go into downloading tools
from the manufacturer's website and installing them manually. Been there,
done that.)

And of course each one of these uses different parameters.

And then do Nagios/Icinga monitoring with this: You basically have to 
write or install a different check for each different type of hardware raid 
controller.

This is such utter nonsense.

I really do think this strongly calls for some standardization.

I´d love to see a standard protocol on how to talk to hardware raid 
controllers and then some open source tool for it. Also for setting up
the RAID (from a live Linux or whatever).

And do not get me started about the hardware RAID controller BIOS setups. 
Usabilitywise they tend to be so beyond anything sane that I do not even 
want to talk about it.

Cause that's IMHO one of the biggest advantages of software RAID. You have
mdadm and are done with it. Sure, it has a flexibility that may lure
beginners into creating dangerous setups. But if you stick to best
practices I think it's pretty reliable.


Benefits of a standard + open source tool would be plenty:

1) One frontend to the controller, no need to develop and maintain a dozen
different tools. Granted, a good (!) BIOS setup may still be nice, to be
able to set something up without booting a Linux live USB stick.

2) Lower learning curve.

3) Uniform monitoring.


Actually it's astonishing! You get pro hardware, but the software-based
admin tool is from the last century.


That's at least what I saw. If there are by now controllers which come with
software support that can be called decent, I´d like to know. I never saw an
Areca controller; maybe they have better software.


Otherwise I agree with Stan: Avoid dmraid. Either hardware RAID *or*
software RAID. Avoid anything in between ;).

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-10 Thread The Wanderer

On 09/10/2012 10:16 AM, Stan Hoeppner wrote:


On 9/10/2012 8:19 AM, The Wanderer wrote:


But from what I'm told, hardware RAID has the downside that it often relies
on the exact model of RAID card; if the card dies, you'll need an exact 
duplicate in order to be able to mount the RAID.


You've been misinformed.


Then, apparently, so has everyone else other than you whom I recall having ever
seen give advice on the subject.

The oldest discussion of RAID I remember reading is specifically about the
problems people had with finding a matching RAID controller when their existing
one died. I've seen the same basic discussion repeated over and over. I've seen
this repeatedly cited as the strongest reason to consider software RAID.

If the distinction is between a RAID controller and a RAID card, then okay,
fair enough; my bad. But I was under the impression that in most cases the
controller is integrated with the card, and so getting a matching controller
would necessitate getting a matching card.


And, given your admission that you have no personal experience with hardware
RAID, and almost no knowledge of it, it seems odd you'd jump into a thread
and tell everyone about its apparent limitations.


It could be considered a bit odd, yes. It's simply that I don't like to see only
one side of an argument presented, and it seemed to me - whether accurately or
not - that you were A: leaving out known downsides of hardware RAID (as I'd seen
them described repeatedly) and B: not presenting the advantages of software
RAID.

Since I do use software RAID, and chose it over hardware RAID after conscious
consideration based on what explanations I could find of both, it seemed worth
speaking up.

--
  The Wanderer

Warning: Simply because I argue an issue does not mean I agree with any
side of it.

Every time you let somebody set a limit they start moving it.
  - LiveJournal user antonia_tiger





Re: Storage server

2012-09-10 Thread Veljko
On Sat, Sep 08, 2012 at 09:53:33PM +0200, Martin Steigerwald wrote:
 For rsnapshot in my experience you need monitoring cause if it fails it 
 just complains to its log file and even just puts the rsync error code 
 without the actual error message there last I checked. 
 
 Let monitoring check whether daily.0 is not older than 24 hours.

Didn't know that. Thanks, I'll monitor it if I opt for rsnapshot.
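
For the record, a minimal sketch of such a check (the snapshot root and 
the 24h threshold are assumptions; exit codes follow the usual Nagios 
convention):

  #!/bin/sh
  # Warn if the newest rsnapshot snapshot is missing or older than 24h
  SNAP=/backup/rsnapshot/daily.0
  if [ -n "$(find "$SNAP" -maxdepth 0 -mmin -1440 2>/dev/null)" ]; then
      echo "OK: $SNAP is fresher than 24 hours"
      exit 0
  fi
  echo "WARNING: $SNAP is missing or older than 24 hours"
  exit 1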

 Did you consider putting those webservers into some bigger virtualization 
 host and then let them use NFS exports for central storage provided by 
 some server(s) that are freed by this? You may even free up a dedicated 
 machine for monitoring and another one for the backup ;).

No, they have to remain where they are, on physical remote locations.

Dedicated servers that will be backed up are ~500GB in size.
 
 How many are they?

Five of them, about 500GB in total.
 
b) monitoring (Icinga or Zabbix) of dedicated servers.
 
 Then who monitors the backup? It ideally should be a different server than 
 this multi-purpose-do-everything-and-feed-the-dog machine your are talking 
 about.

Like I said, they should be on an HA system, but I don't get to work in
ideal conditions. If my boss can live with it, so can I. I told him about
the possible consequences and that's all I can do.

c) file sharing for employees (mainly small text files). I don't
expect this to be resource hog.
 
 Another completely different workload.
 
 Where do you intend the backup for these files? I obviously wouldn´t put it 
 on the same machine as the fileserver.
 
 See how mixing lots of stuff into one machine makes things complicated?

I shouldn't have mentioned this one. It's not a workload at all. It's a
few MB (~10) that will be downloaded periodically by other employees. It
doesn't have to be backed up.

As for complicating things, I root for clean and simple, but if my job
requires me to struggle with complicated things, I'll just have to do it.
Not my choice anyway.

 
 4 GiB RAM of RAM for a virtualization host that also does backup and 
 fileservices? You aren´t kidding me, are you? If using KVM I at least 
 suggest to activate kernel same page merging.
 
 Fast storage also depends on cache memory, which the machine will lack if 
 you fill it with virtual machines.
 
 And yes as explained already yet another different workload.
 
 Even this ThinkPad T520 has more RAM, 8 GiB, and I just occasionaly fire up 
 some virtual machines.

Yes, I use KVM. I never intended to fill it with virtual machines. Like
I already explained, I intend to periodically use a virtual machine with
300MB of RAM. That's not an amount of memory that will suffocate a host
machine with 4GB. And as I said, RAM is cheap and can be added if 4GB is
not enough.
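
(For reference, enabling kernel samepage merging on the host is just a
sysfs toggle, assuming KSM is compiled into the kernel:)

  # Turn on KSM so identical guest pages get merged
  echo 1 > /sys/kernel/mm/ksm/run
  # Later, see how many pages are actually being shared
  cat /sys/kernel/mm/ksm/pages_sharing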
 
 Well extension of RAID needs some thinking ahead.

That's why I'm here. ;)

 While you can just add 
 disks to add capacity – not redundancy – into an existing RAID the risks 
 of a non recoverable failure of the RAID increases. How do you intend to 
 grow the RAID? And to what maximum size?
 
 At least you do not intend to use RAID-5 or something like that. See
 
 http://baarf.com


At my previous place of employment I worked with IBM storage that was
attached with fibre optic cables via a Brocade 4Gb switch to a load
balanced system. The storage provided shared block storage with GFS on it.
Performance sucked. And this was a production mail server. I learned the
hard way: no GFS ever again. Don't know if GFS2 is any better.

Same goes for RAID5. Had a QNAP NAS server with RAID5. It was a backup
server. It got things done, but performance was terrible. No data loss,
but it just sucked. The moral of the story for me was: don't use RAID5.
 
 So the customer is willing to use dedicated servers for different web sites 
 and other services, but more than one machine for the workloads you 
 described above is too much?
 
 Sorry, I do not get this.

Not that hard to comprehend. My boss sees backup as a necessary evil, and
only after I pushed for it. Before I got here, there was no backup. None
whatsoever. I was baffled. And a few days after my arrival one of the
databases got corrupted. I managed to find some old backup and, with the
data we already had saved, restored the database. But that situation is
not acceptable. I had to push things and propose some cheap solution, so
that I have something I can work with.

 Serious and honest consulting here IMHO includes exposing the risks of 
 such a setup in an absolutely clear to comprehend way to those managers.

Already did that.

 Are these managers willing to probably loose the backup and face a several 
 day downtime of fileserver, backup and monitoring services in case of a 
 failure of this desktop class machine?

They didn't have any backup or monitoring, and the file sharing is
currently done using someone's Windows share directory. This would be a
huge step forward for them.

 If so, if I would be in the position to say no, I would just say no 
 thanks, search yourself a different idiot for setting up such an 

Re: Storage server

2012-09-10 Thread Veljko
On Mon, Sep 10, 2012 at 02:07:47PM +0100, Jon Dowland wrote:
 On Sat, Sep 08, 2012 at 06:49:45PM +0200, Veljko wrote:
a) backup (backup server for several dedicated (mainly) web servers).
It will contain incremental backups, so only first running will take a
lot of time, rsnapshot
 
 Best avoid rsnapshot. Use (at least) rdiff-backup instead, which is nearly
 a drop-in replacement (but scales); or consider something like bup or obnam
 instead.

Any particular reason for avoiding rsnapshot? What are advantages of
using rdiff-backup or obnam?
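
(For comparison, basic rdiff-backup usage looks roughly like this; host
and paths are just placeholders:)

  # Pull a backup; increments are kept under rdiff-backup-data/ in the target
  rdiff-backup root@web1::/var/www /backup/web1/www
  # Restore the tree as it was 7 days ago
  rdiff-backup -r 7D /backup/web1/www /tmp/www-7-days-ago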


Regards,
Veljko





Re: Storage server

2012-09-10 Thread Veljko
On Mon, Sep 10, 2012 at 04:02:54PM +0200, Martin Steigerwald wrote:
  Like I said, it's several dedicated, mostly web servers with users
  uploaded content on one of them (that part is expected to grow). None
  of them is in the same data center.
 
 Okay, so thats fine.
 
 I would still not be comfortable mixing production stuff with a backup 
 server, but I think you could get away with it.
 
 But then you need a different backup server for the production stuff on the 
 server and the files from the fileserver service that you plan to run on it, 
 cause…

Those files that will be on the file sharing service are not critical.
They are disposable and therefore don't have to be backed up.

 … no again: RAID is not a backup.
 
 RAID is about maximizing performance and/or minimizing downtime.
 
 Its not a backup. And thats about it.

I've never thought that RAID is backup. It's not. The server I'm trying to
set up is the backup. It's not a perfect solution, but it's better than
nothing. Yes, in a perfect world I would set up another one in case
something happened to this one, but that's a road I can't take. So if two
disks in the same mirror pair die simultaneously, I'll lose all data. I'm
aware of that. RAID, however, provides a certain level of redundancy. If
one disk dies, I haven't lost data; I will rebuild it.

It all comes down to "what if". What if you lose production, the backup
server and the backup of your backup server? Well, that is not very
likely, but it can still happen. I won't have that backup of the backup,
but I will be much happier than now, having no backup at all.

 If you follow this, you need two boxes… But if you need two boxes, why 
 just don´t do the following:
 
 1) virtualization host
 
 2) backup host
 
 to have a clear separation and an easier concept. Sure you could replicate 
 the production data of the mixed production/dataserver to somewhere else, 
 but going down this route it seems to be that you add workaround upon 
 workaround upon workaround.
 
 I find it way easier if the backup server does backup (and nothing else!) 
 and the production server does backup (and nothing else). And removing 
 complexity removes possible sources of human errors as well.
 
 In case you go above route, I wouldn´t even feel to uncomfortable if you 
 ran some test VMs on the virtualization host. But that depends on how 
 critical the production services on it are.
 
 Thanks,
 -- 
 Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
 GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

That does make sense, having two different machines for two types of
work, but I don't have them right now. When my boss recovers from this
recent spending, I'll try to acquire another one.

Regards,
Veljko






Re: Storage server

2012-09-10 Thread Veljko
On Mon, Sep 10, 2012 at 08:05:49AM -0500, Stan Hoeppner wrote:
  I'm not able to find that card here (and I haven't so far), can I have 
  another one?
 
 That's hard to believe given the worldwide penetration Adaptec has, and
 the fact UPS/FedEx ship worldwide.  What country are you in?

I'm in Serbia. I tried the web sites of several better-known dealers, but
it's possible that they don't have everything listed on their web sites.
If my boss approves buying a RAID card, I'll call them to see if they have
it or if they can order one.

  If not, how to find appropriate one? One with 8 supported devices,
  hardware RAID10? What else to look for?
 
 You mean same solution, yet?  They are not equal.  Far from it.

I meant "same" as in "it's something". I'm aware that they are not equal.
 
  In case I don't get that card,
  should I remove /boot from RAID1?
 
 Post the output of
 
 ~$ cat /proc/mdstat
 
 I was under the impression you didn't have this system built and running
 yet.  Apparently you do.  Are the 4x 3TB drives the only drives in this
 system?
 
 -- 
 Stan

I didn't till 30 minutes ago. :) I just installed it for exercise if
nothing else. Had a problem with booting:

Unable to install GRUB in /dev/sda
Executing 'grub-install /dev/sda' failed.
This is a fatal error.

After creating a 1MB partition at the beginning of every drive, flagged
as reserved for BIOS boot, it worked (AHCI mode in the BIOS).


Anyhow, this is output of cat /proc/mdstat:
Personalities : [raid1] [raid10] 
md1 : active raid10 sda3[0] sdd2[3] sdc2[2] sdb3[1]
  5859288064 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
  [======>.............]  resync = 32.1% (1881658368/5859288064) finish=325.2min speed=203828K/sec
  
md0 : active raid1 sda2[0] sdb2[1]
  488128 blocks super 1.2 [2/2] [UU]
  
unused devices: <none>

I'm not sure what is being copied on a freshly installed system.

Regards,
Veljko





Re: Storage server

2012-09-10 Thread lee
Stan Hoeppner s...@hardwarefreak.com writes:

 Linux RAID is great in the right hands when used for appropriate
 workloads.  Too many people are using it who should not be, and giving
 it a bad rap due to no fault of the software.

Hm, interesting, so what would you say we should use it for and for what
not? I'm using it to survive the failure of a disk, and so far, it's
been working fine for that.

 Hardware RAID has a minimum price of entry, both currency and knowledge,

The currency is the problem. Knowledge applies the same to software
raid. Decent hardware raid cards are expensive, and hardware changes
over time, so you might find yourself with something that doesn't really
have the performance you'd wish for before much time passes. And what if
the card fails?


-- 
Debian testing amd64





Re: Storage server

2012-09-10 Thread Andrei POPESCU
On Lu, 10 sep 12, 17:38:39, Veljko wrote:
 
 I've never thought that RAID is backup. It's not. Server I'm trying to
 set up is backup. It's not perfect solution, but is better then nothing.
 Yes, in a perfect world I would set another one in case something
 happened to this one, but that's the road I can't go. So if two disks in
 same mirror pair dies simultaneously I'll lose all data. I'm aware of
 that. RAID, however, provides certain level of redundancy. If one disk
 dies, I didn't lose data. I will rebuild it. 
 
 It all comes to what if. What if you lose production, backup server
 and backup of your backup server? Well, that is not very likely, but
 still can happen. I won't have that backup of backup, but will be muck
 more happier then now, having no backup at all. 

If you ignore the references to the proprietary backup software, this is 
very interesting reading:

http://www.taobackup.com/

Kind regards,
Andrei
-- 
Offtopic discussions among Debian users and developers:
http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic




Re: Storage server

2012-09-10 Thread Martin Steigerwald
Am Montag, 10. September 2012 schrieb Veljko:

[… no backup before and then backup as necessary evil …]

  If so, if I would be in the position to say no, I would just say no 
  thanks, search yourself a different idiot for setting up such an
  insane  setup. I understand, you probably do not feel yourself
  being in that position…
 
 Exactly, I'm not.

You have my sympathy.

I hope some of the answers to your questions help you to make something 
good out of the situation you are in.

If you made sure to explain the risks to your boss, then in case anything 
bad happens you can say: "I recommended doing the backup in a different, 
safer way than you allowed me to, and that's the result."

And you are right: Some backup is better than no backup.

(PS: I didn't want to imply that you were an idiot - my sentence above 
could be read that way. Sometimes it's about a feeling of lack of choice. 
I hope you will create more choice for yourself in the future. From what I 
read in your answers you are quite aware of the situation.)

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-10 Thread Martin Steigerwald
Am Montag, 10. September 2012 schrieb Veljko:
 On Mon, Sep 10, 2012 at 08:05:49AM -0500, Stan Hoeppner wrote:
[…]
   In case I don't get that card,
   should I remove /boot from RAID1?
  
  Post the output of
  
  ~$ cat /proc/mdstat
  
  I was under the impression you didn't have this system built and
  running yet.  Apparently you do.  Are the 4x 3TB drives the only
  drives in this system?
 
 I didn't till 30 minutes ago. :) I just installed it for exercise if
 nothing else. Had a problem with booting.
 
 Unable to install GRUB in /dev/sda
 Executing 'grub-intall /dev/sda' failed.
 This is a fatal error.
 
 After creating 1MB partition at the beginning of every drive with
 reserved for boot bios it worked (AHCI in BIOS).

GRUB needs some space between the MBR and the first partition. Maybe that 
space was too small? Or, more likely, you GPT-partitioned the disk (as it's 
3 TB and MBR only works up to 2 TB)? Then you need a BIOS boot partition. 
Unless you use UEFI; then you'd need a FAT32 EFI system partition of about 
200 MB.
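
For completeness, creating such a BIOS boot partition with parted looks
roughly like this (the device name is an example, and relabelling wipes
the partition table, so only do it on an empty drive):

  # GPT label plus a tiny partition for GRUB's core image
  parted -s /dev/sda mklabel gpt
  parted -s /dev/sda mkpart biosboot 1MiB 3MiB
  parted -s /dev/sda set 1 bios_grub on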

 Anyhow, this is output of cat /proc/mdstat:
 Personalities : [raid1] [raid10]
 md1 : active raid10 sda3[0] sdd2[3] sdc2[2] sdb3[1]
   5859288064 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
   [======>.............]  resync = 32.1% (1881658368/5859288064) finish=325.2min speed=203828K/sec
 
 md0 : active raid1 sda2[0] sdb2[1]
   488128 blocks super 1.2 [2/2] [UU]
 
 unused devices: <none>
 
 I'm not sure what is being copied on freshly installed system.

What do you mean by that?

SoftRAID just makes sure that the data on all devices is in sync. That's 
needed for a block-level RAID. A hardware RAID controller would have to do 
this as well. (Unless it kept a map of sectors already used, in which case 
only those would have to be kept in sync, but I am not aware of any 
hardware RAID controller or SoftRAID mode that does this.)

BTRFS-based RAID (RAID 1 means something different there) does not need 
an initial sync. But no, that's not a recommendation to use BTRFS yet.

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-10 Thread lee
Veljko velj...@gmail.com writes:

 I didn't till 30 minutes ago. :) I just installed it for exercise if
 nothing else. Had a problem with booting. 

 Unable to install GRUB in /dev/sda
 Executing 'grub-intall /dev/sda' failed.
 This is a fatal error.

 After creating 1MB partition at the beginning of every drive with
 reserved for boot bios it worked (AHCI in BIOS). 

Did you get it to actually install on the RAID and to boot from that?
Last time I tried with a RAID-1, it didn't work. It's ridiculously
difficult to get it set up so that everything is on software raid.

 Anyhow, this is output of cat /proc/mdstat:
 Personalities : [raid1] [raid10] 
 md1 : active raid10 sda3[0] sdd2[3] sdc2[2] sdb3[1]
   5859288064 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
   [======>.............]  resync = 32.1% (1881658368/5859288064) finish=325.2min speed=203828K/sec
   
 md0 : active raid1 sda2[0] sdb2[1]
   488128 blocks super 1.2 [2/2] [UU]
   
 unused devices: <none>

 I'm not sure what is being copied on freshly installed system.

On top of that, by default Debian's mdadm will run a consistency check of
the whole array on the first Sunday night of every month.
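
That's the mdadm checkarray cron job; the same scrub can also be triggered
or inspected by hand through sysfs (array name assumed):

  # Kick off a consistency check of md1 and watch its progress
  echo check > /sys/block/md1/md/sync_action
  cat /proc/mdstat
  # Number of mismatched sectors found by the last check
  cat /sys/block/md1/md/mismatch_cnt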


-- 
Debian testing amd64





Re: Storage server

2012-09-09 Thread Stan Hoeppner
On 9/7/2012 3:16 PM, Bob Proulx wrote:

 Agreed.  But for me it isn't about the fsck time.  It is about the
 size of the problem.  If you have full 100G filesystem and there is a
 problem then you have a 100G problem.  It is painful.  But you can
 handle it.  If you have a full 10T filesystem and there is a problem
 then there is a *HUGE* problem.  It is so much more than painful.

This depends entirely on the nature of the problem.  Most filesystem
problems are relatively easy to fix even on 100TB+ filesystems,
sometimes with some data loss, often with only a file or few being lost
or put in lost+found.  If you have a non-redundant hardware device
failure that roasts your FS, then you replace the hardware, make a new
FS, and restore from D2D or tape.  That's not painful, that's procedure.

 Therefore when practical I like to compartmentalize things so that
 there is isolation between problems.  Whether the problem is due to a
 hardware failure, a software failure or a human failure.  All of which
 are possible.  Having compartmentalization makes dealing with the
 problem easier and smaller.

Sounds like you're mostly trying to mitigate human error.  When you
identify that solution, let me know, then patent it. ;)

 Whjat?  Are you talking crash recovery boot time fsck?  With any
 modern journaled FS log recovery is instantaneous.  If you're talking
 about an actual structure check, XFS is pretty quick regardless of inode
 count as the check is done in parallel.  I can't speak to EXTx as I
 don't use them.
 
 You should try an experiment and set up a terabyte ext3 and ext4
 filesystem and then perform a few crash recovery reboots of the
 system.  It will change your mind.  :-)

As I've never used EXT3/4 and thus have no opinion, it'd be a bit
difficult to change my mind.  That said, putting root on a 1TB
filesystem is a brain dead move, regardless of FS flavor.  A Linux
server doesn't need more than 5GB of space for root.  With /var, /home/
and /bigdata on other filesystems, crash recovery fsck should be quick.
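
In other words, a split along these lines (sizes purely illustrative):

  /         5 GB    root stays tiny, fsck/repair in seconds
  /var     20 GB    logs, spools
  /home    50 GB
  /bigdata  rest    the multi-TB XFS data/backup filesystem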

 XFS has one unfortunate missing feature.  You can't resize a
 filesystem to be smaller.  You can resize them larger.  But not
 smaller.  This is a missing feature that I miss as compared to other
 filesystems.

If you ever need to shrink a server filesystem: you're doing IT wrong.

 Unfortunately I have some recent FUD concerning xfs.  I have had some
 recent small idle xfs filesystems trigger kernel watchdog timer
 recoveries recently.  Emphasis on idle.

If this is the bug I'm thinking of, Idle has nothing to do with the
problem, which was fixed in 3.1 and backported to 3.0.  The fix didn't
hit Debian 2.6.32.  I'm not a Debian kernel dev, ask them why--likely
too old.  Upgrading to the BPO 3.2 kernel should fix this and give you
some nice additional performance enhancements.  2.6.32 is ancient BTW,
released almost 3 years ago.  That's 51 in Linux development years. ;)

If you're going to recommend to someone against XFS, please
qualify/clarify that you're referring to 3 year old XFS, not the current
release.

 Definitely XFS can handle large filesystems.  And definitely when
 there is a good version of everything all around it has been a very
 good and reliable performer for me. I wish my recent bad experiences
 were resolved.

The fix is quick and simple, install BPO 3.2.  Why haven't you already?

 But for large filesystems such as that I think you need a very good
 and careful administrator to manage the disk farm.  And that includes
 disk use policies as much as it includes managing kernel versions and
 disk hardware.  Huge problems of any sort need more careful management.

Say I have a 1.7TB filesystem and a 30TB filesystem.  How do you feel
the two should be managed differently, or that the 30TB filesystem needs
kid gloves?

 When using correctly architected reliable hardware there's no reason one
 can't use a single 500TB XFS filesystem.
 
 Although I am sure it would work I would hate to have to deal with a
 problem that large when there is a need for disaster recovery.  I
 guess that is why *I* don't manage storage farms that are that large. :-)

The only real difference at this scale is that your backup medium is
tape, not disk, and you have much phatter pipes to the storage host.  A
500TB filesystem will reside on over 1000 disk drives.  It isn't going
to be transactional or primary storage, but nearline or archival
storage.  It takes a tape silo and intelligent software to back it up,
but a full restore after catastrophe doesn't have (many) angry users
breathing down your neck.

On the other hand, managing a 7TB transactional filesystem residing on
48x 300GB SAS drives in a concatenated RAID10 setup, housing, say,
corporate mailboxes for 10,000 employees, including the CxOs, is a much
trickier affair.  If you wholesale lose this filesystem and must do a
full restore, you are red meat, and everyone is going to take a bite out
of your ass.  And you very well may get a pink slip 

Re: Storage server

2012-09-09 Thread Stan Hoeppner
On 9/8/2012 11:49 AM, Veljko wrote:

 Well, it did sound a little to complex and that is why I posted to this
 list, hoping to hear some other opinions.
 
 1. This machine will be used for 
   a) backup (backup server for several dedicated (mainly) web servers).
   It will contain incremental backups, so only first running will take a
   lot of time, rsnapshot will latter download only changed/added files
   and will run from cron every day. Files that will be added later are
   around 1-10 MB in size. I expect ~20 GB daily, but that number can
   grow. Some files fill be deleted, other will be added.
   Dedicated servers that will be backed up are ~500GB in size.
   b) monitoring (Icinga or Zabbix) of dedicated servers.
   c) file sharing for employees (mainly small text files). I don't
   expect this to be resource hog.

Stop here.  Never use a production system as a test rig.

   d) Since there is enough space (for now), and machine have four cores
   and 4GB RAM (that can be easily increased), I figured I can use it for 
   test virtual machines. I usually work with 300MB virtual machines and
   no intensive load. Just testing some software. 

You can build a complete brand new AMD dedicated test machine with parts
from Newegg for $238 USD, sans KB/mouse/monitor, which you already have.
 Boot it up then run it headless, use a KVM switch, etc.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813186189
http://www.newegg.com/Product/Product.aspx?Item=N82E16820148262
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136771
http://www.newegg.com/Product/Product.aspx?Item=N82E16827106289
http://www.newegg.com/Product/Product.aspx?Item=N82E16811121118

If ~$250 stretches the wallet of your employer, it's time for a new job.

 2. There is no fixed implementation date, but I'm expected to start
 working on it. Sooner the better, but no dead lines.
Equipment I have to work with is desktop class machine: Athlon X4,
4GB RAM and 4 3TB Seagate ST3000DM001 7200rpm. Server will be in my
office and will perform backup over internet. I do have APC UPS to
power off machine in case of power loss (apcupsd will take care of
that). 

Get yourself an Adaptec 8 port PCIe x8 RAID card kit for $250:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103231

The Seagate ST3000DM001 is certified.  It can't do RAID5 so you'll use
RAID10, giving you 6TB of raw capacity, but much better write
performance than RAID5.  You can add 4 more of these drives, doubling
capacity to 12TB.  Comes with all cables, manuals, etc.  Anyone who has
tried to boot a server after losing the BIOS-configured boot drive of an
mdraid mirror knows why $250 is far more than worth the money.  A
drive failure with a RAID card doesn't screw up your boot order.  It
just works.

 In next few months it is expected that size of files on dedicated
 servers will grow and it case that really happen I'd like to be able to
 expand this system.

See above.

 And, of course, thanks for your time and valuable advices, Stan, I've read
 some of your previous posts on this list and know you're storage guru.

You're welcome.  And thank you. ;)  Recommending the above Adaptec card
is the best advice you'll get.  It'll make your life much easier, with
better performance to boot.

-- 
Stan





Re: Storage server

2012-09-09 Thread Stan Hoeppner
On 9/8/2012 1:10 PM, Martin Steigerwald wrote:
 Am Freitag, 7. September 2012 schrieb Stan Hoeppner:
 On 9/7/2012 12:42 PM, Dan Ritter wrote:
 […]
 Now, the next thing: I know it's tempting to make a single
 filesystem over all these disks. Don't. The fsck times will be
 horrendous. Make filesystems which are the size you need, plus a
 little extra. It's rare to actually need a single gigantic fs.

 Whjat?  Are you talking crash recovery boot time fsck?  With any
 modern journaled FS log recovery is instantaneous.  If you're talking
 about an actual structure check, XFS is pretty quick regardless of
 inode count as the check is done in parallel.  I can't speak to EXTx
 as I don't use them.  For a multi terabyte backup server, XFS is the
 only way to go anyway.  Using XFS also allows infinite growth without
 requiring array reshapes nor LVM, while maintaining striped write
 alignment and thus maintaining performance.

 There are hundreds of 30TB+ and dozens of 100TB+ XFS filesystems in
 production today, and I know of one over 300TB and one over 500TB,
 attached to NASA's two archival storage servers.

 When using correctly architected reliable hardware there's no reason
 one can't use a single 500TB XFS filesystem.
 
 I assume that such correctly architected hardware contains a lot of RAM in 
 order to be able to xfs_repair the filesystem in case of any filesystem 
 corruption.
 
 I know RAM usage of xfs_repair has been lowered, but still such a 500 TiB 
 XFS filesystem can contain a lot of inodes.

The system I've been referring to with the ~500TB XFS is an IA64 SGI
Altix with 64P and 128GB RAM.  I'm pretty sure 128GB is plenty for
xfs_repair on filesystems much larger than 500TB.

 But for upto 10 TiB XFS filesystem I wouldn´t care too much about those 
 issues.

Yeah, an 8GB machine typically allows for much larger than 10TB xfs_repair.

-- 
Stan






Re: Storage server

2012-09-09 Thread Stan Hoeppner
On 9/8/2012 2:53 PM, Martin Steigerwald wrote:

 I would love to learn more about those really big XFS installations and 
 how there were made. I never dealt with more than about 4 TiB big XFS 
 setups.

About the only information that's still available is at the link below,
and it lacks configuration details.  I read those long ago when this
system was fresh.  The detailed configuration info has since disappeared.

http://www.nas.nasa.gov/hecc/resources/storage_systems.html

NAS cannibalized the Columbia super quite some time ago and recycled
some nodes into these archive systems (and other systems) after they
installed the big Pleiades cluster and the users flocked to it.  A bit
of a shame as Columbia had 60TF capability, and for shared memory
applications to boot.  No system on earth had that capability until SGI
released the Altix UV, albeit with half the sockets/node of the IA64
Altix machines.

The coolest part about both 512P IA64 and 256P x86-64 Altix?  Debian
will install and run with little to no modifications required, just as
shrink wrapped SLES and RHEL run out of the box, thanks to the Linux
Scalability Effort in the early 2000s.

-- 
Stan





Re: Storage server

2012-09-09 Thread Paul E Condon
On 20120909_040911, Stan Hoeppner wrote:
 On 9/8/2012 2:53 PM, Martin Steigerwald wrote:
 
  I would love to learn more about those really big XFS installations and 
  how there were made. I never dealt with more than about 4 TiB big XFS 
  setups.
 
 About the only information that's still available is at the link below,
 and it lacks configuration details.  I read those long ago when this
 system was fresh.  The detailed configuration info has since disappeared.
 
 http://www.nas.nasa.gov/hecc/resources/storage_systems.html
 
 NAS cannibalized the Columbia super quite some time ago and recycled
 some nodes into these archive systems (and other systems) after they
 installed the big Pleiades cluster and the users flocked to it.  A bit
 of a shame as Columbia had 60TF capability, and for shared memory
 applications to boot.  No system on earth had that capability until SGI
 released the Altix UV, albeit with half the sockets/node of the IA64
 Altix machines.
 
 The coolest part about both 512P IA64 and 256P x86-64 Altix?  Debian
 will install and run with little to no modifications required, just as
 shrink wrapped SLES and RHEL run out of the box, thanks to the Linux
 Scalability Effort in the early 2000s.
 
 -- 
 Stan

Stan,

I've been following this thread from its beginning. My initial reading
of OP's post was to marvel at the thought that so many things/tasks
could be done with a single box in a single geek's cubicle. I resolved
to follow the thread that would surely follow closely. I think you,
Stan, did OP an enormous service with your list of questions to be
answered. 

This thread drifted onto the topic of XFS. I first learned of the
existence of XFS from earlier post by you, and I have ever since been
curious about it. But I am retired, and live at home in an environment
where there is very little opportunity to make use of its features.
Perhaps you could take OP's original specification as a user wish list
and sketch a design that would fulfill the wishlist and list how XFS
would change or resolve issues that were/are troubling him. 

In particular, the typical answers to questions about backup on this list
involve rsync, or packages that depend on rsync, and on having a file
system that uses inodes and supports hard links. How would an XFS design
handle de-duplication? Or is de-duplication simply a bad idea in very
large systems?
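
(For what it's worth, the hard-link style of de-duplication those rsync
based tools use works the same on XFS as on ext3/4; a minimal sketch with
made-up paths:)

  # Unchanged files are hard-linked against yesterday's snapshot, so they
  # consume no extra space; only changed files are stored again
  rsync -a --link-dest=/backup/host/daily.1 root@host:/data/ /backup/host/daily.0/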

Sincerely,
-- 
Paul E Condon   
pecon...@mesanetworks.net





Re: Storage server

2012-09-08 Thread Veljko
On Fri, Sep 07, 2012 at 01:26:13PM -0500, Stan Hoeppner wrote:
 On 9/7/2012 11:29 AM, Veljko wrote:
 
  I'm in the process of making new backup server, so I'm thinking of best
  way of doing it. I have 4 3TB disks and I'm thinking of puting them in
  software RAID10.
 
 [what if stream of consciousness rambling snipped for brevity]
 
  What do you think of this setup? Good sides? Bad sides of this approach?
 
 Applying the brakes...
 
 As with many tech geeks with too much enthusiasm for various tools and
 not enough common sense and seasoning, you've made the mistake of
 approaching this backwards.  Always start here:
 
 1.  What are the requirements of the workload?
 2.  What is my budget and implementation date?
 3.  How can I accomplish #1 given #2 with the
 4.  Least complexity and
 5.  Highest reliability and
 6.  Easiest recovery if the system fails?
 
 You've described a dozen or so overly complex technical means to some
 end that tend to violate #4 through #6.
 
 Slow down, catch your breath, and simply describe #1 and #2.  We'll go
 from there.
 
 -- 
 Stan

Well, it did sound a little too complex and that is why I posted to this
list, hoping to hear some other opinions.

1. This machine will be used for 
  a) backup (backup server for several dedicated (mainly) web servers).
  It will contain incremental backups, so only the first run will take a
  lot of time; rsnapshot will later download only changed/added files
  and will run from cron every day. Files that will be added later are
  around 1-10 MB in size. I expect ~20 GB daily, but that number can
  grow. Some files will be deleted, others will be added.
  Dedicated servers that will be backed up are ~500GB in size.
  b) monitoring (Icinga or Zabbix) of dedicated servers.
  c) file sharing for employees (mainly small text files). I don't
  expect this to be resource hog.
  d) Since there is enough space (for now), and the machine has four cores
  and 4GB RAM (which can easily be increased), I figured I can use it for
  test virtual machines. I usually work with 300MB virtual machines and
  no intensive load. Just testing some software.

2. There is no fixed implementation date, but I'm expected to start
working on it. The sooner the better, but no deadlines.
   Equipment I have to work with is desktop class machine: Athlon X4,
   4GB RAM and 4 3TB Seagate ST3000DM001 7200rpm. Server will be in my
   office and will perform backup over internet. I do have APC UPS to
   power off machine in case of power loss (apcupsd will take care of
   that). 

In the next few months the size of the files on the dedicated servers is
expected to grow, and in case that really happens I'd like to be able to
expand this system. Hardware RAID controllers are expensive and managers
always want to go with the least expense possible, so I'm stuck with
software RAID only.
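
Concretely, the software RAID10 I have in mind would be built along these
lines (device and partition names are placeholders, not the final layout):

  # Four-disk RAID10 out of the large partitions
  mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
  # Watch the initial sync
  cat /proc/mdstat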

But one of the dedicated servers is slowly running out of space, so I
don't think they will go for the cheapest option there. I'll have to take
care of that too, but first things first.


And, of course, thanks for your time and valuable advice, Stan. I've read
some of your previous posts on this list and know you're a storage guru.

Regards,
Veljko





Re: Storage server

2012-09-08 Thread Veljko
On Fri, Sep 07, 2012 at 09:43:57PM -0400, tdowg1 news wrote:
  I'm in the process of making new backup server, so I'm thinking of best
  way of doing it. I have 4 3TB disks and I'm thinking of puting them in
  software RAID10.
 
  I created 2 500MB partitions for /boot (RAID1) and the rest it RAID10.
 
  So far, so good.
 
  LVM will provide me a way to expand storage with extra disks, but if
  there is no more room for that kind of expansion, I was thinking of
  GlusterFS for scaling-out.
 
  Let me suggest a different approach. It sounds like you're
  planning on a lot of future expansion.
 
  Get a high-end SAS RAID card. One with two external SFF8088
  connectors.
 
 I would 2nd the suggestion of investing in a high-end SAS RAID card.  I
 would also avoid any kind of software raid if at all possible, at least for
 partitions that see a lot of I/O.  I guess /boot would be ok, but I def
 would not put my root or /home under software raid if I had a discrete
 controller.  However, you did say this is a storage/backup server and not a
 main machine... so I don't know... just something to think I guess.
  Software raid is free, so if performance becomes an issue you can always
 upgrade :)
 
 There are a lot of discrete hard disk drive controllers on the market.  If
 you go this route, try to be sure that it supports SMART passthrough so
 that you can at least get _some_ kind status your drives.
 
 --tdowg1

If it were my call, I would go with a high-end RAID card as well. But in
this case I have to work without one. However, I've heard that software
RAID is good for one thing: you can rebuild it on any other machine. If
you use a hardware controller and it dies, you have to buy the same or a
very similar one to be able to save your data. Was I misinformed?

Regards,
Veljko





Re: Storage server

2012-09-08 Thread Veljko
On Fri, Sep 07, 2012 at 01:43:47PM -0600, Bob Proulx wrote:
 Veljko wrote:
  Dan Ritter wrote:
OS I would use is Wheezy. Guess he will be stable soon enough and I
don't want to reinstall everything again in one year, when support for
old stable is dropped.
   
   This is Debian. Since 1997 or so, you have had the ability to
   upgrade from major version n to version n+1 without
   reinstalling. You won't need to reinstall unless you change
   architectures (i.e. from x86_32 to x86_64).
  
  But, isn't complete reintall safest way? Dist-upgrade can go wrong
  sometime.
 
 If you follow the release notes there is no reason you shouldn't be
 able to upgrade from one major release to another.  I have systems
 that started out as Woody that are currently running Squeeze.
 Upgrades work great.  Debian, unlike some other distros, is all about
 being able to successfully upgrade.  Upgrades work just fine.  I have
 upgraded many systems and will be upgrading many more.
 
 But it is important to follow the release notes for the upgrade for
 each major release because there is special handling needed and it
 will be fully documented.
 
 Sometimes this special handling annoys me because the required manual
 cleanup mostly seems unnecessary to me if the packaging was done
 better.  I would call them bugs.  But regardless of the small
 packaging issues here and there the overall system upgrades just fine.
 
 Bob

I've actually never done a distro upgrade, but always thought that a
clean reinstall is the safest option. Somehow I thought that something can
always go wrong with big version changes and that kind of thing in a
production environment is a big no-no. I stand corrected. This is, after
all, Debian, and it is expected to be highly stable.
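
(As I now understand it, the upgrade itself boils down to following the
release notes, roughly along these lines for squeeze to wheezy:)

  # After reading the release notes and taking a backup:
  sed -i 's/squeeze/wheezy/g' /etc/apt/sources.list
  apt-get update
  apt-get upgrade       # minimal upgrade first, as the release notes suggest
  apt-get dist-upgrade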

Regards,
Veljko





Re: Storage server

2012-09-08 Thread Martin Steigerwald
Am Freitag, 7. September 2012 schrieb Veljko:
  This is Debian. Since 1997 or so, you have had the ability to
  upgrade from major version n to version n+1 without
  reinstalling. You won't need to reinstall unless you change
  architectures (i.e. from x86_32 to x86_64).

 But, isn't complete reintall safest way? Dist-upgrade can go wrong
 sometime.

The Debian Wheezy on my ThinkPad T42 started out as a Debian Sarge or 
something like that on my ThinkPad T23. Same with my workstation at work; 
heck, I even recovered from a restore with bit errors from a hardware RAID 
controller by reinstalling any package that debsums complained about.

That should give you an idea of the upgradeability of Debian.

I only ever installed a new system for the 32 to 64 bit switch. And in the 
not too distant future even that might not be needed anymore. (Yes, I know 
of the unofficial hacks that may even work without multiarch support, i.e. 
the website with the big fat blinking warning not to do this ;)

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread Martin Steigerwald
Am Freitag, 7. September 2012 schrieb Stan Hoeppner:
 On 9/7/2012 12:42 PM, Dan Ritter wrote:
[…]
  Now, the next thing: I know it's tempting to make a single
  filesystem over all these disks. Don't. The fsck times will be
  horrendous. Make filesystems which are the size you need, plus a
  little extra. It's rare to actually need a single gigantic fs.
 
 Whjat?  Are you talking crash recovery boot time fsck?  With any
 modern journaled FS log recovery is instantaneous.  If you're talking
 about an actual structure check, XFS is pretty quick regardless of
 inode count as the check is done in parallel.  I can't speak to EXTx
 as I don't use them.  For a multi terabyte backup server, XFS is the
 only way to go anyway.  Using XFS also allows infinite growth without
 requiring array reshapes nor LVM, while maintaining striped write
 alignment and thus maintaining performance.
 
 There are hundreds of 30TB+ and dozens of 100TB+ XFS filesystems in
 production today, and I know of one over 300TB and one over 500TB,
 attached to NASA's two archival storage servers.
 
 When using correctly architected reliable hardware there's no reason
 one can't use a single 500TB XFS filesystem.

I assume that such correctly architected hardware contains a lot of RAM in 
order to be able to xfs_repair the filesystem in case of any filesystem 
corruption.

I know RAM usage of xfs_repair has been lowered, but still such a 500 TiB 
XFS filesystem can contain a lot of inodes.

But for upto 10 TiB XFS filesystem I wouldn´t care too much about those 
issues.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread Martin Steigerwald
Am Freitag, 7. September 2012 schrieb Bob Proulx:
 Unfortunately I have some recent FUD concerning xfs.  I have had some
 recent small idle xfs filesystems trigger kernel watchdog timer
 recoveries recently.  Emphasis on idle.  Active filesystems are always
 fine.  I used /tmp as a large xfs filesystem but swapped it to be ext4
 due to these lockups.  Squeeze.  Everything current.  But when idle it
 would periodically lock up and the only messages in the syslog and on
 the system console were concerning xfs threads timed out.  When the
 kernel froze it always had these messages displayed[1].  It was simply
 using /tmp as a hundred gig or so xfs filesystem.  Doing nothing but
 changing /tmp from xfs to ext4 resolved the problem and it hasn't seen
 a kernel lockup since.  I saw that problem on three different machines
 but effectively all mine and very similar software configurations.
 And by kernel lockup I mean unresponsive and it took a power cycle to
 free it.
 
 I hesitated to say anything because of lacking real data but it means
 I can't completely recommend xfs today even though I have given it
 strong recommendations in the past.  I am thinking that recent kernels
 are not completely clean specifically for idle xfs filesystems.
 Meanwhile active ones seem to be just fine.  Would love to have this
 resolved one way or the other so I could go back to recommending xfs
 again without reservations.

Squeeze and everything current?

No way. At least when using 2.6.32 default squeeze kernel. Its really old.

Did you try with the latest 3.2 squeeze-backports kernel?

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread Martin Steigerwald
Am Samstag, 8. September 2012 schrieb Veljko:
 On Fri, Sep 07, 2012 at 01:26:13PM -0500, Stan Hoeppner wrote:
  On 9/7/2012 11:29 AM, Veljko wrote:
  
 
   I'm in the process of making new backup server, so I'm thinking of
   best way of doing it. I have 4 3TB disks and I'm thinking of
   puting them in software RAID10.
 
  
 
  [what if stream of consciousness rambling snipped for brevity]
 
  
 
   What do you think of this setup? Good sides? Bad sides of this
   approach?
 
  
 
  Applying the brakes...
 
  
 
  As with many tech geeks with too much enthusiasm for various tools
  and not enough common sense and seasoning, you've made the mistake
  of
 
  approaching this backwards.  Always start here:
  
 
  1.  What are the requirements of the workload?
  2.  What is my budget and implementation date?
  3.  How can I accomplish #1 given #2 with the
  4.  Least complexity and
  5.  Highest reliability and
  6.  Easiest recovery if the system fails?
 
  
 
  You've described a dozen or so overly complex technical means to some
  end that tend to violate #4 through #6.
 
  
 
  Slow down, catch your breath, and simply describe #1 and #2.  We'll
  go from there.
 
  
 
  -- 
  Stan
 
 Well, it did sound a little to complex and that is why I posted to this
 list, hoping to hear some other opinions.
 
 1. This machine will be used for 
   a) backup (backup server for several dedicated (mainly) web servers).
   It will contain incremental backups, so only first running will take
 a lot of time, rsnapshot will latter download only changed/added files
 and will run from cron every day. Files that will be added later are
 around 1-10 MB in size. I expect ~20 GB daily, but that number can
 grow. Some files fill be deleted, other will be added.
   Dedicated servers that will be backed up are ~500GB in size.
   b) monitoring (Icinga or Zabbix) of dedicated servers.
   c) file sharing for employees (mainly small text files). I don't
   expect this to be resource hog.
   d) Since there is enough space (for now), and machine have four cores
   and 4GB RAM (that can be easily increased), I figured I can use it
 for  test virtual machines. I usually work with 300MB virtual machines
 and no intensive load. Just testing some software.
 
 2. There is no fixed implementation date, but I'm expected to start
 working on it. Sooner the better, but no dead lines.
Equipment I have to work with is desktop class machine: Athlon X4,
4GB RAM and 4 3TB Seagate ST3000DM001 7200rpm. Server will be in my
office and will perform backup over internet. I do have APC UPS to
power off machine in case of power loss (apcupsd will take care of
that). 
 
 In next few months it is expected that size of files on dedicated
 servers will grow and it case that really happen I'd like to be able to
 expand this system. Hardware RAID controllers are expensive and
 managers always want to go with least expenses possible, so I'm stuck
 with software RAID only.

Are you serious about that?

You are planning to mix backup, production workloads and testing on a 
single *desktop class* machine?

If you had a redundant and failsafe virtualization cluster with 2-3 hosts 
and a redundant and failsafe storage cluster, then maybe – except for the 
backup. But for a single desktop class machine I'd advise against putting 
such different workloads on it. Especially in an enterprise scenario.

While you may get away with running test and production VMs on a 
virtualization host, I would at least physically (!) separate the backup 
so that breaking the machine by testing stuff would not make the backup 
inaccessible. And no: RAID is not a backup! So please forget about mixing 
a backup with production/testing workloads. Now.

I personally do not see a strong reason against SoftRAID, although a 
battery-backed hardware RAID controller can be quite nice for performance, 
as you can disable cache flushing / barriers. But then that should be 
possible with a battery-backed non-RAID controller as well, if there 
is any.
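
(On XFS that would be a mount option along the following lines; only safe
when the controller cache really is battery backed, and the device and
mount point here are made up:)

  # /etc/fstab sketch: barriers disabled because the BBU-protected cache is trusted
  /dev/sdb1  /srv/backup  xfs  noatime,inode64,nobarrier  0  2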

Thanks Stan for asking the basic questions. The answers made it obvious to 
me that in its current form this can't be a sane setup.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread Veljko
On Sat, Sep 08, 2012 at 08:23:36PM +0200, Martin Steigerwald wrote:
 Are you serious about that?
 
 You are planning to mix backup, productions workloads and testing on a 
 single *desktop class* machine?
 
 If you had a redundant and failsafe virtualization cluster with 2-3 hosts 
 and redundant and failsafe storage cluster, then maybe – except for the 
 backup. But for a single desktop class machine I´d advice against putting 
 such different workloads on it. Especially in a enterprise scenario.
 
 While you may get away with running test and production VMs on a 
 virtualization host, I would at least physically (!) separate the backup 
 so that breaking the machine by testing stuff would not make the backup 
 inaccessible. And no: RAID is not a backup! So please forget about mixing 
 a backup with production/testing workloads. Now.
 
 I personally do not see a strong reason against SoftRAID although I 
 battery backed up hardware RAID controller can be quite nice for 
 performance as you can disable cache flushing / barriers. But then that 
 should be possible with a battery backed up non RAID controller, if there 
 is any, as well.
 
 Thanks Stan for asking the basic questions. The answers made obvious to me 
 that in the current form this can´t be a sane setup.
 
 -- 
 Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
 GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

Yes, I know how that sounds. But testing in my case means installing a
slim Debian, Apache on top of it, and running some light web application
for a few hours. Nothing intensive. Just to have a fresh machine with
nothing on it. But if running it here sounds too bad, I could just run it
somewhere else. Thanks for your advice, Martin!

On the other hand, monitoring has to be here, no place else to put it.

Regards,
Veljko





Re: Storage server

2012-09-08 Thread Martin Steigerwald
Am Samstag, 8. September 2012 schrieb Veljko:
 On Sat, Sep 08, 2012 at 08:23:36PM +0200, Martin Steigerwald wrote:
  Are you serious about that?
  
  You are planning to mix backup, productions workloads and testing on
  a single *desktop class* machine?
  
  If you had a redundant and failsafe virtualization cluster with 2-3
  hosts and redundant and failsafe storage cluster, then maybe –
  except for the backup. But for a single desktop class machine I´d
  advice against putting such different workloads on it. Especially in
  a enterprise scenario.
  
  While you may get away with running test and production VMs on a
  virtualization host, I would at least physically (!) separate the
  backup so that breaking the machine by testing stuff would not make
  the backup inaccessible. And no: RAID is not a backup! So please
  forget about mixing a backup with production/testing workloads. Now.
  
  I personally do not see a strong reason against SoftRAID although I
  battery backed up hardware RAID controller can be quite nice for
  performance as you can disable cache flushing / barriers. But then
  that should be possible with a battery backed up non RAID
  controller, if there is any, as well.
  
  Thanks Stan for asking the basic questions. The answers made obvious
  to me that in the current form this can´t be a sane setup.
 
 Yes, I know how that sounds. But testing in my case is installing
 slim Debian, apache on top of it and running some light web application
 for a few hours. Nothing intensive. Just to have fresh machine with
 nothing on it. But if running it sounds too bad I could just run it
 somewhere else. Thanks for your advice, Martin!
 
 On the other hand, monitoring has to be here, no place else to put it.

Consider the consequenzes:

If the server fails, you possibly wouldn't know why, because the monitoring 
information wouldn't be available anymore. So at least have Nagios / 
Icinga send out mails, in case these are not stored on the server as well, 
or let it relay the information to another Nagios / Icinga instance.

What data do you back up? Where does it come from?

I still think backup should be separate from other stuff. By design.

Well, for more fact-based advice we'd need a lot more information on 
your current setup and what you want to achieve.

I recommend having a serious talk about acceptable downtimes and risks 
for the backup with the customer, if you serve one, or with your boss, if 
you work for one.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread Martin Steigerwald
On Saturday, 8 September 2012, Veljko wrote:
 On Fri, Sep 07, 2012 at 01:26:13PM -0500, Stan Hoeppner wrote:
  On 9/7/2012 11:29 AM, Veljko wrote:
   I'm in the process of making new backup server, so I'm thinking of
   best way of doing it. I have 4 3TB disks and I'm thinking of
   putting them in software RAID10.
  
  [what if stream of consciousness rambling snipped for brevity]
  
   What do you think of this setup? Good sides? Bad sides of this
   approach?
  
  Applying the brakes...
  
  As with many tech geeks with too much enthusiasm for various tools
  and not enough common sense and seasoning, you've made the mistake
  of approaching this backwards.  Always start here:
  
  1.  What are the requirements of the workload?
  2.  What is my budget and implementation date?
  3.  How can I accomplish #1 given #2 with the
  4.  Least complexity and
  5.  Highest reliability and
  6.  Easiest recovery if the system fails?
  
  You've described a dozen or so overly complex technical means to some
  end that tend to violate #4 through #6.
  
  Slow down, catch your breath, and simply describe #1 and #2.  We'll
  go from there.
 
 Well, it did sound a little too complex and that is why I posted to this
 list, hoping to hear some other opinions.
 
 1. This machine will be used for
   a) backup (backup server for several dedicated (mainly) web servers).
   It will contain incremental backups, so only the first run will take
 a lot of time; rsnapshot will later download only changed/added files
 and will run from cron every day. Files that will be added later are
 around 1-10 MB in size. I expect ~20 GB daily, but that number can
 grow. Some files will be deleted, others will be added.

For rsnapshot, in my experience, you need monitoring, because if it fails it 
just complains to its log file, and last I checked it even just puts the 
rsync error code there without the actual error message.

Let monitoring check whether daily.0 is not older than 24 hours.
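
A minimal Nagios/Icinga style check for that could look roughly like the 
Python sketch below. The snapshot root path is an assumption, as is the idea 
that the mtime of daily.0 tracks the last successful run (some setups touch a 
marker file instead):

#!/usr/bin/env python
# Rough sketch of a freshness check for rsnapshot's daily.0 directory.
import os
import sys
import time

SNAPSHOT_ROOT = "/backup/rsnapshot"   # assumed path; use snapshot_root from rsnapshot.conf
MAX_AGE_HOURS = 24

daily0 = os.path.join(SNAPSHOT_ROOT, "daily.0")
try:
    age_hours = (time.time() - os.stat(daily0).st_mtime) / 3600.0
except OSError as err:
    print("CRITICAL: cannot stat %s: %s" % (daily0, err))
    sys.exit(2)                       # Nagios/Icinga exit code for CRITICAL

if age_hours > MAX_AGE_HOURS:
    print("CRITICAL: daily.0 is %.1f hours old" % age_hours)
    sys.exit(2)

print("OK: daily.0 is %.1f hours old" % age_hours)
sys.exit(0)                           # OK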

Did you consider putting those webservers into some bigger virtualization 
host and then letting them use NFS exports for central storage provided by 
some server(s) that are freed by this? You may even free up a dedicated 
machine for monitoring and another one for the backup ;).

But well, any advice depends highly on the workload, so this is just 
guesswork.

   Dedicated servers that will be backed up are ~500GB in size.

How many are there?

   b) monitoring (Icinga or Zabbix) of dedicated servers.

Then who monitors the backup? It ideally should be a different server than 
this multi-purpose-do-everything-and-feed-the-dog machine you are talking 
about.

   c) file sharing for employees (mainly small text files). I don't
   expect this to be a resource hog.

Another completely different workload.

Where do you intend to back up these files? I obviously wouldn´t put it 
on the same machine as the fileserver.

See how mixing lots of stuff into one machine makes things complicated?

You may save some hardware costs. But IMHO that's easily offset by higher 
maintenance costs, as well as a higher risk of service outage and the costs 
it causes.

   d) Since there is enough space (for now), and the machine has four cores
   and 4GB RAM (that can be easily increased), I figured I can use it
 for test virtual machines. I usually work with 300MB virtual machines
 and no intensive load. Just testing some software.

4 GiB of RAM for a virtualization host that also does backup and 
fileservices? You aren´t kidding me, are you? If using KVM I at least 
suggest activating kernel samepage merging (KSM).
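
For illustration, KSM is switched on through the usual sysfs knobs under 
/sys/kernel/mm/ksm; a rough sketch (run as root, the tuning numbers are just 
placeholders):

# Rough sketch: enable kernel samepage merging via sysfs (needs root).
# The numeric values are illustrative only.
def write_sysfs(path, value):
    with open(path, "w") as f:
        f.write(str(value))

write_sysfs("/sys/kernel/mm/ksm/run", 1)               # 1 = start merging
write_sysfs("/sys/kernel/mm/ksm/pages_to_scan", 200)   # pages scanned per wakeup
write_sysfs("/sys/kernel/mm/ksm/sleep_millisecs", 50)  # pause between scans

# /sys/kernel/mm/ksm/pages_sharing shows how much is actually merged
# once the guests are running.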

Fast storage also depends on cache memory, which the machine will lack if 
you fill it with virtual machines.

And yes as explained already yet another different workload.

Even this ThinkPad T520 has more RAM, 8 GiB, and I just occasionally fire up 
some virtual machines.

 2. There is no fixed implementation date, but I'm expected to start
 working on it. Sooner the better, but no deadlines.
 Equipment I have to work with is a desktop class machine: Athlon X4,
 4GB RAM and 4 3TB Seagate ST3000DM001 7200rpm drives. Server will be in my
 office and will perform backup over the internet. I do have an APC UPS to
 power off the machine in case of power loss (apcupsd will take care of
 that).

Server based loads on a desktop class machine and possibly desktop class 
harddrives - I didn´t look these up, so if they are enterprise drives 
with extended warranty, ignore my statement regarding them.

 In the next few months it is expected that the size of files on dedicated
 servers will grow, and in case that really happens I'd like to be able to
 expand this system. Hardware RAID controllers are expensive and
 managers always want to go with least expenses possible, so I'm stuck
 with software RAID only.

Well, extending a RAID needs some thinking ahead. While you can just add 
disks to add capacity – not redundancy – to an existing RAID, the risk 
of a non-recoverable failure of the RAID increases.
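
Just to put a number on that intuition, here is a toy calculation. It assumes 
independent disk failures with the same yearly failure probability per disk 
and ignores rebuild windows entirely, so treat it as an illustration of the 
trend only:

# Toy model: probability that at least one mirror pair of a RAID10 loses
# both of its disks within a year. Assumes independent failures with
# per-disk probability p and ignores rebuild behaviour entirely.
def raid10_loss_probability(n_pairs, p):
    return 1.0 - (1.0 - p * p) ** n_pairs

for n_pairs in (2, 4, 6):   # 4, 8 and 12 disks
    prob = raid10_loss_probability(n_pairs, 0.05)   # p = 0.05 is made up
    print("%2d disks: %.4f" % (n_pairs * 2, prob))
# More disks means a strictly higher chance that some pair dies completely.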

Re: Storage server

2012-09-08 Thread Martin Steigerwald
On Saturday, 8 September 2012, Martin Steigerwald wrote:
  And, of course, thanks for your time and valuable advice, Stan, I've
  read some of your previous posts on this list and know you're a storage
  guru.
 
 It wasn´t Stan who wrote the mail you replied to here, but yes I think
 I  can learn a lot from him regarding storage setups, too.

Ah, forget this. Of course you answered Stan's mail with the good 
questions in it, and I replied to your reply to Stan a second time with 
some more considerations and questions for you.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread Martin Steigerwald
On Saturday, 8 September 2012, Veljko wrote:
 Well, it did sound a little too complex and that is why I posted to this
 list, hoping to hear some other opinions.
 
 1. This machine will be used for 
   a) backup (backup server for several dedicated (mainly) web servers).
   It will contain incremental backups, so only the first run will take
 a lot of time; rsnapshot will later download only changed/added files
 and will run from cron every day. Files that will be added later are
 around 1-10 MB in size. I expect ~20 GB daily, but that number can
 grow. Some files will be deleted, others will be added.
   Dedicated servers that will be backed up are ~500GB in size.
   b) monitoring (Icinga or Zabbix) of dedicated servers.
   c) file sharing for employees (mainly small text files). I don't
   expect this to be a resource hog.
   d) Since there is enough space (for now), and the machine has four cores
   and 4GB RAM (that can be easily increased), I figured I can use it
 for test virtual machines. I usually work with 300MB virtual machines
 and no intensive load. Just testing some software.

Could it be that you intend to provide hosted monitoring, backup and 
fileservices for a customer and while at it use the same machine for 
testing your own stuff?

If so: Don´t.

That's at least my advice. (In addition to what I wrote already.)

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Storage server

2012-09-08 Thread lee
Veljko velj...@gmail.com writes:

 On Fri, Sep 07, 2012 at 09:43:57PM -0400, tdowg1 news wrote:

 If it was my call, I would go with a high-end RAID card as well. But in
 this case I have to work without them. However, I've heard that
 software RAID is good for one thing. You can rebuild it in any other
 machine. If you use a hardware controller and it dies, you have to buy
 the same or a very similar one to be able to save your data. Was I
 misinformed? 

Some people have argued it's even better to use software raid than a
hardware raid controller because software raid doesn't depend on
particular controller cards that can fail and can be difficult to
replace. Besides that, software raid is a lot cheaper.
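
As a rough illustration of that portability argument: md keeps its metadata 
on the member disks themselves, so a replacement machine only needs mdadm to 
rediscover and reassemble the array (device names there are whatever the 
disks happen to get):

# Sketch: reassemble an existing md software RAID on a replacement host.
# Only standard mdadm invocations are used; run as root.
import subprocess

# List the arrays described by the superblocks found on the attached disks.
subprocess.call(["mdadm", "--examine", "--scan"])

# Assemble whatever was found (a degraded array may also need --run to start).
subprocess.call(["mdadm", "--assemble", "--scan"])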

So what is better, considering reliability? Performance might be a
different issue.


-- 
Debian testing amd64





Re: Storage server

2012-09-07 Thread Dan Ritter
On Fri, Sep 07, 2012 at 06:29:35PM +0200, Veljko wrote:
 
 Hi!
 
 I'm in the process of making new backup server, so I'm thinking of best
 way of doing it. I have 4 3TB disks and I'm thinking of putting them in
 software RAID10.
 
 I created 2 500MB partitions for /boot (RAID1) and the rest is RAID10.

So far, so good.

 LVM will provide me a way to expand storage with extra disks, but if
 there is no more room for that kind of expansion, I was thinking of
 GlusterFS for scaling-out.

Let me suggest a different approach. It sounds like you're
planning on a lot of future expansion.

Get a high-end SAS RAID card. One with two external SFF8088
connectors.

When you start running out of places to put disks, buy external
chassis that take SFF8088 and have daisy-chaining ports. 2U
boxes often hold 12 3.5" disks.

You can put cheap SATA disks in, instead of expensive SAS disks.
The performance may not be as good, but I suspect you are
looking at sheer capacity rather than IOPS.

Now, the next thing: I know it's tempting to make a single
filesystem over all these disks. Don't. The fsck times will be
horrendous. Make filesystems which are the size you need, plus a
little extra. It's rare to actually need a single gigantic fs.

 The OS I would use is Wheezy. Guess it will be stable soon enough and I
 don't want to reinstall everything again in one year, when support for
 old stable is dropped.

This is Debian. Since 1997 or so, you have had the ability to
upgrade from major version n to version n+1 without
reinstalling. You won't need to reinstall unless you change
architectures (i.e. from x86_32 to x86_64).
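
For what it's worth, the core of such an in-place upgrade boils down to 
something like the sketch below. This is greatly simplified (the release 
notes describe the full, safer procedure) and it assumes /etc/apt/sources.list 
has already been pointed at the new release:

# Very rough sketch of an in-place major-version upgrade. Run as root,
# after reading the release notes and editing /etc/apt/sources.list.
import subprocess

for cmd in (
    ["apt-get", "update"],
    ["apt-get", "upgrade"],        # minimal upgrade of existing packages first
    ["apt-get", "dist-upgrade"],   # then the full distribution upgrade
):
    subprocess.check_call(cmd)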

-dsr-





Re: Storage server

2012-09-07 Thread Stan Hoeppner
On 9/7/2012 11:29 AM, Veljko wrote:

 I'm in the process of making new backup server, so I'm thinking of best
 way of doing it. I have 4 3TB disks and I'm thinking of putting them in
 software RAID10.

[what if stream of consciousness rambling snipped for brevity]

 What do you think of this setup? Good sides? Bad sides of this approach?

Applying the brakes...

As with many tech geeks with too much enthusiasm for various tools and
not enough common sense and seasoning, you've made the mistake of
approaching this backwards.  Always start here:

1.  What are the requirements of the workload?
2.  What is my budget and implementation date?
3.  How can I accomplish #1 given #2 with the
4.  Least complexity and
5.  Highest reliability and
6.  Easiest recovery if the system fails?

You've described a dozen or so overly complex technical means to some
end that tend to violate #4 through #6.

Slow down, catch your breath, and simply describe #1 and #2.  We'll go
from there.

-- 
Stan




