Thanks to good advice from many people, here are my findings and
conclusions.
(1) Splitting the RAID works. I have now implemented this technique on
the production system and am making a backup right now.
(2) NBD is cool, works well on Debian, and is very convenient. A
couple of experiments suggest
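The NBD setup used for those experiments might look roughly like this (the hostname, port, and mount point are made up for illustration; classic Debian nbd-server/nbd-client invocation assumed, untested sketch):

```shell
# On the machine holding the disks: export one mirror half, read-only.
# (Port 2000 and the device name are assumptions for illustration.)
nbd-server 2000 /dev/sdd1 -r

# On the backup machine: attach the export and mount it read-only.
modprobe nbd
nbd-client fileserver 2000 /dev/nbd0
mount -o ro /dev/nbd0 /mnt/backup
```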
On Wed, 26 Oct 2005, Jeff Breidenbach wrote:
>
> Norman> What you should be able to do with software raid1 is the
> Norman> following: Stop the raid, mount both underlying devices
> Norman> instead of the raid device, but of course READ ONLY. Both
> Norman> contain the complete data and filesystem, and in addition to
> Norman> that the md superblock at
Jeff Breidenbach wrote:
Hi all,
[...]
So - I'm thinking of the following backup scenario. First, remount
/dev/md0 readonly just to be safe. Then mount the two component
partitions (sdc1, sdd1) readonly. Tell the webserver to work from one
component partition, and tell the backup process to work from the other
I noticed some discussion of both netcat and NBD in this thread, and
these are both things I've investigated at various points during the
last 365.25 days. :)
On the subject of netcat:
You might be interested in my pnetcat program at
http://dcs.nac.uci.edu/~strombrg/pnetcat.htm
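For reference, a plain single-stream netcat copy of a partition image looks like the following (hostname and port are made up; GNU/traditional netcat flags assumed; pnetcat layers parallel streams over the same idea):

```shell
# On the receiving side: listen and write the incoming stream to a file.
nc -l -p 7000 > sdd1.img

# On the sending side: block-level copy of one mirror half over the wire.
dd if=/dev/sdd1 bs=1M | nc -q 0 backuphost 7000
```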
Jeff Breidenbach wrote:
>Individual directories contain up to about 150,000 files. If I run ls
>-U on all directories, it completes in a reasonable amount of time (I
>forget how much, but I think it is well under an hour). Reiserfs is
>supposed to be good at this sort of thing. If I were to stat e
Hi,
I don't know if my comment helps, but how about using xfs instead of
reiserfs? xfs was designed for high performance on heavily loaded big
iron. Since we now have such a heavily loaded big iron here, it would
be interesting to see the difference in performance.
--
Best regards,
Rainer
On Mon, 24 Oct 2005, Jeff Breidenbach wrote:
>
> Ok... thanks everyone!
>
> David, you said you are worried about failure scenarios
> involved with RAID splitting. Could you please elaborate?
> My biggest concern is I'm going to accidentally trigger
> a rebuild no matter what I try but maybe you
John Stoffel wrote:
Norman> What you should be able to do with software raid1 is the
Norman> following: Stop the raid, mount both underlying devices
Norman> instead of the raid device, but of course READ ONLY. Both
Norman> contain the complete data and filesystem, and in addition to
Norman> that the md superblock at
Jeff Breidenbach wrote:
your suggestion about kernel 2.6.13 and intent logging and
having mdadm pull a disk sounds like a winner. I'm going to try it
if the software looks mature enough. Should I be scared?
There have been a couple bug fixes in the bitmap stuff since 2.6.13 was
released, b
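The bitmap-plus-pull-a-disk cycle being discussed might be sketched as follows (untested; device names taken from the /proc/mdstat in this thread, the backup path is made up):

```shell
# One-time: add an internal write-intent bitmap (needs kernel >= 2.6.13
# and a recent mdadm).
mdadm --grow /dev/md0 --bitmap=internal

# Pull one mirror half out of the array.
mdadm /dev/md0 --fail /dev/sdd1
mdadm /dev/md0 --remove /dev/sdd1

# Back it up at the block level while md0 keeps serving from sdc1.
dd if=/dev/sdd1 bs=1M | gzip > /backup/sdd1.img.gz

# Re-add it: with the bitmap, the resync covers only blocks written
# while the disk was out, not the whole 500GB.
mdadm /dev/md0 --re-add /dev/sdd1
```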
Jeff> Ok... thanks everyone!
You're welcome! :]
Jeff> John, I'm using 4KB blocks in reiserfs with tail packing. All
Jeff> sorts of other details are in the dmesg output [1]. I agree
Jeff> seeks are a major bottleneck, and I like your suggestion about
Jeff> putting extra spindles in.
I think t
Jeff Breidenbach wrote:
Ok... thanks everyone!
Something from me:
What you should be able to do with software raid1 is the following:
Stop the raid, mount both underlying devices instead of the raid device,
but of course READ ONLY. Both contain the complete data and filesystem,
and in addition to that the md superblock at
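Spelled out as commands, this suggestion might look like the following (untested sketch; mount points are made up, device names from the thread):

```shell
# Stop the array and mount each mirror half directly, read-only.
umount /data1
mdadm --stop /dev/md0

mkdir -p /mnt/half0 /mnt/half1
mount -o ro /dev/sdc1 /mnt/half0   # webserver serves from this half
mount -o ro /dev/sdd1 /mnt/half1   # backup reads from this half

# Afterwards, unmount and reassemble; since neither half was written,
# the mirrors are still identical and no rebuild should be needed.
umount /mnt/half0 /mnt/half1
mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
mount /dev/md0 /data1
```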
On 10/24/05, Thomas Garner <[EMAIL PROTECTED]> wrote:
> Should there be any consideration for the utilization of the gigabit
> interface that is passing all of this backup traffic, as well as the
> speed of the drive that is doing all of the writing during this
> transaction? Is the 18MB/s how fa
Should there be any consideration for the utilization of the gigabit
interface that is passing all of this backup traffic, as well as the
speed of the drive that is doing all of the writing during this
transaction? Is the 18MB/s how fast the data is being copied over the
network, or is it some
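For scale, the quoted 18MB/s is well under practical gigabit wire speed (roughly 110MB/s), and a straight copy of the full partition at that rate is a matter of hours:

```shell
# Back-of-envelope: time to copy the 500GB partition at 18MB/s.
size_mb=$((500 * 1024))     # 500GB expressed in MB
rate_mb_s=18                # throughput quoted in the thread
secs=$((size_mb / rate_mb_s))
printf '%d seconds (~%d hours)\n' "$secs" "$((secs / 3600))"
# prints: 28444 seconds (~7 hours)
```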
On Mon, 24 Oct 2005, Jeff Breidenbach wrote:
> Dean, the comment about "write-mostly" is confusing to me. Let's say
> I somehow marked one of the component drives write-mostly to quiet it
> down. How do I get at it? Linux will not let me mount the component
> partition if md0 is also mounted. Do
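For what it's worth, marking one half write-mostly happens when (re-)adding it, along these lines (untested sketch using mdadm's --write-mostly flag; it shifts reads to the other disk but does not by itself make the component mountable):

```shell
# Remove one half, then add it back flagged write-mostly so md prefers
# the other disk for reads. (Device names from the thread's mdstat.)
mdadm /dev/md0 --fail /dev/sdd1
mdadm /dev/md0 --remove /dev/sdd1
mdadm /dev/md0 --add --write-mostly /dev/sdd1
```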
Ok... thanks everyone!
David, you said you are worried about failure scenarios
involved with RAID splitting. Could you please elaborate?
My biggest concern is I'm going to accidentally trigger
a rebuild no matter what I try but maybe you have something
more serious in mind.
Brad, your suggestion
> "Jeff" == Jeff Breidenbach writes:
Jeff> # mount | grep md0
Jeff> /dev/md0 on /data1 type reiserfs (rw,noatime,nodiratime)
Ah, you're using reiserfs on here. It may or may not be having
problems with all those files per-directory that you have. Is there
any way you can split them up mor
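Splitting a flat directory by a hash of each filename is one common way to do that. A runnable sketch against a throwaway directory (the real paths and file counts are of course different):

```shell
# Bucket files into subdirectories named by the first two hex digits of
# an md5 of each filename (up to 256 buckets; demo uses a throwaway dir).
set -e
src=/tmp/flatdir
mkdir -p "$src"
touch "$src/a.html" "$src/b.html"

for f in "$src"/*; do
  [ -f "$f" ] || continue
  name=$(basename "$f")
  bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
  mkdir -p "$src/$bucket"
  mv "$f" "$src/$bucket/$name"
done
```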
Thanks for all the suggestions.
> a big hint you're suffering from atime updates is write traffic when your
> fs is mounted rw, and your static webserver is the only thing running (and
> your logs go elsewhere)... atime updates are probably the only writes
> then. try "iostat -x 5".
I think ati
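The check suggested above, with the remount that stops atime updates if they do show up (note the mount output earlier in the thread already shows noatime set, so this is only the general recipe):

```shell
# Watch per-device write activity while the webserver is the only load;
# steady writes to sdc/sdd on a read-only workload point at atime.
iostat -x 5

# Turn atime updates off without unmounting.
mount -o remount,noatime,nodiratime /data1
```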
On Mon, 24 Oct 2005, Jeff Breidenbach wrote:
>
> Hi all,
>
> I have a two drive RAID1 serving data for a busy website. The
> partition is 500GB and contains millions of 10KB files. For reference,
> here's /proc/mdstat
>
> Personalities : [raid1]
> md0 : active raid1 sdc1[0] sdd1[1]
> 488383936 blocks [2/2] [UU]
On Mon, 24 Oct 2005, Jeff Breidenbach wrote:
> >First of all, if the data is mostly static, rsync might work faster.
>
> Any operation that stats the individual files - even to just look at
> timestamps - takes about two weeks. Therefore it is hard for me to see
> rsync as a viable solution, even
Jeff Breidenbach wrote:
However you will endure a rebuild on md0 when you re-add the disk, but
given everything is mounted read-only, you should not practically be
doing anything
If the rebuild operation is a no-op, then that sounds like a great
idea. If the rebuild operation requires scannin
>First of all, if the data is mostly static, rsync might work faster.
Any operation that stats the individual files - even to just look at
timestamps - takes about two weeks. Therefore it is hard for me to see
rsync as a viable solution, even though the data is mostly
static. About 400,000 files
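The two-week figure is plausible from seek counts alone. A rough sketch (the file count is inferred from 500GB of ~10KB files; 10ms average seek is an assumption):

```shell
# Rough cost of stat-ing every file: about one seek per inode.
files=$((500 * 1000 * 1000 / 10))   # ~50 million 10KB files in 500GB
seek_ms=10                          # assumed average seek time
total_s=$((files / 1000 * seek_ms)) # divide first to avoid overflow
printf '~%d files, ~%d days of seeks\n' "$files" "$((total_s / 86400))"
# prints: ~50000000 files, ~5 days of seeks
```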
Jeff Breidenbach wrote:
So - I'm thinking of the following backup scenario. First, remount
/dev/md0 readonly just to be safe. Then mount the two component
partitions (sdc1, sdd1) readonly. Tell the webserver to work from one
component partition, and tell the backup process to work from the
other
Hi all,
I have a two drive RAID1 serving data for a busy website. The
partition is 500GB and contains millions of 10KB files. For reference,
here's /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[0] sdd1[1]
488383936 blocks [2/2] [UU]
For backups, I set the md0 partition to r