Simon Baxter linu...@nzbaxters.com writes:
Anyway, I've bought 3x 1.5 TB SATA disks which I'd like to put into a
software (mdadm) raid 5 array.
[...]
But does anyone have any production VDR experience with mdadm - good or bad?
If you like good performance and simple recovery, then do not
I've now tested and implemented RAID5 on my system. The biggest CPU hit is
still with the OSD or noad processes - below is a bunch of tests I ran and
the top processes during the test:
1 recording to raid,
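A generic way to log the busiest processes during such a test (a sketch, not the exact commands behind the numbers above) is a non-interactive process snapshot:

```shell
# List the five busiest processes by CPU, sorted descending.
# ps in batch form is used instead of interactive top so the
# snapshot can be redirected to a log file from a test script.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 6
```

Running this in a loop (or under watch) while a recording or noad run is in progress shows whether the OSD/noad processes or the md threads dominate.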
On 18.11.2009 18:28, H. Langos wrote:
I/O-load can have some nasty effects. E.g. if your heads have to jump
back and forth between an area from where you are reading and an area
to which you are recording.
I remember reading some tests about file system write strategies that
showed major
On Tue, Nov 17, 2009 at 03:34:59PM +, Steve wrote:
Alex Betis wrote:
I don't record much, so I don't worry about speed.
While there's no denying that RAID5 *at best* has a write speed
equivalent to about 1.3x a single disk and if you're not careful with
stride/block settings can be a
H. Langos wrote:
Depending on the amount of RAM, the cache can screw up your results
quite badly. For something a little more realistic try:
Good point!
sync; dd if=/dev/zero of=foo bs=1M count=1024 conv=fsync
Interestingly, not much difference:
# sync; dd if=/dev/zero
On Thu, Nov 19, 2009 at 01:37:46PM +, Steve wrote:
Pasi Kärkkäinen wrote:
You should use oflag=direct to make it actually write the file to disk.
And now most probably the file will come from the Linux kernel cache.
Use iflag=direct to read it actually from the disk.
However, in the real world data _is_ going to be cached via the kernel
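The effect being discussed can be reproduced with a small self-contained experiment (sizes scaled down and the path illustrative; note that O_DIRECT is not supported on every filesystem, e.g. tmpfs):

```shell
# Write 64 MiB and force it out to disk before dd reports the rate:
sync
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 conv=fsync

# Read it back; without iflag=direct this is largely served from the
# page cache and reports an unrealistically high rate:
dd if=/tmp/ddtest.img of=/dev/null bs=1M

# Read it again bypassing the cache (skipped gracefully where the
# filesystem does not support O_DIRECT):
dd if=/tmp/ddtest.img of=/dev/null bs=1M iflag=direct || true

rm -f /tmp/ddtest.img
```

Comparing the two read rates makes the cache contribution visible directly.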
Alex Betis wrote:
I don't record much, so I don't worry about speed.
While there's no denying that RAID5 *at best* has a write speed
equivalent to about 1.3x a single disk and if you're not careful with
stride/block settings can be a lot slower, that's no worse for our
purposes than, erm,
Simon,
Pay attention that /boot can be installed only on a single disk or on RAID-1,
where every disk can work as a standalone disk.
I personally decided to use RAID-5 on 3 disks, with RAID-1 on 3 small
partitions for /boot and RAID-5 on the rest.
RAID-5 also allows easier expansion in the
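That layout could be created along these lines (a sketch only; the disk and partition names are assumptions, and the small first partition is presumed to already exist on each disk):

```shell
# /boot as a 3-way RAID-1, so the bootloader can read any single member:
mdadm --create /dev/md0 --level=1 --raid-disks=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# the large remainder of each disk as RAID-5:
mdadm --create /dev/md1 --level=5 --raid-disks=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mkfs.ext3 /dev/md0   # small /boot filesystem
mkfs.ext3 /dev/md1   # data filesystem for recordings
```

The point of the RAID-1 /boot is that each member partition holds a complete, directly readable filesystem, which is what legacy bootloaders require.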
experience with RAID5?
- Original Message -
From: Alex Betis
To: VDR Mailing List
Sent: Friday, November 13, 2009 1:03 AM
Subject: Re: [vdr] mdadm software raid5 arrays?
/XBMC
frontends so maybe it would affect OSD performance if I had it on the same
machine.
/Magnus H
From: vdr-boun...@linuxtv.org [mailto:vdr-boun...@linuxtv.org] On behalf of Alex Betis
Sent: 13 November 2009 08:00
To: VDR Mailing List
Subject: Re: [vdr] mdadm software raid5 arrays
Ok.. short comparison, using a single disk as baseline.
Good chart, but perhaps you should also mention capacity, i.e. what happens
to usable space. Assume 1 disk = 1TB for simplicity.
What about a simple raid 1 mirror set?
- Original Message -
From: H. Langos henrik-...@prak.org
To: VDR Mailing List vdr@linuxtv.org
Sent: Tuesday, November 10, 2009 6:49 AM
Subject: Re: [vdr] mdadm software raid5 arrays?
Hi Simon,
On Sat, Nov 07, 2009 at 07:38:03AM +1300, Simon
On Tue, Nov 10, 2009 at 09:46:52PM +1300, Simon Baxter wrote:
What about a simple raid 1 mirror set?
Ok.. short comparison, using a single disk as baseline.
using 2 disks
raid0: (striping)
++ double read throughput,
++ double write throughput,
-- half the reliability (read: only use
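To answer the mirror question concretely, a two-disk RAID-1 set could be created and inspected like this (device names are assumptions):

```shell
mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat          # shows the initial resync progress
mdadm --detail /dev/md0   # confirms both members are active
```

A mirror trades half the raw capacity for the simplest possible recovery: either surviving disk carries a complete copy of the data.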
Thanks - very useful!
So what I'll probably do is as follows...
* My system has 4x SATA ports on the motherboard, to which I'll connect my
4x 1.5TB drives.
* Currently 1 drive is in use, with ~30G for /, /boot and swap, and ~1.4TB for
/media
* I'll create /dev/md2, using mdadm, in RAID1 across 2
Hello Simon,
what you can also do is create the two RAID-1 md devices with missing
disks, e.g.:
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/sdd3
mdadm --create /dev/md1 --level=0 --raid-disks=2 /dev/md2 /dev/md3
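Once the data has been copied over, the freed partitions can be added as the missing members and md will resync them automatically (a sketch; the partition names are assumptions following the example above):

```shell
mdadm --add /dev/md2 /dev/sda3   # becomes the second mirror half of md2
mdadm --add /dev/md3 /dev/sdc3   # becomes the second mirror half of md3
cat /proc/mdstat                 # watch the rebuild progress
```

The arrays stay usable during the rebuild; only redundancy is missing until the resync finishes.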
Hi Simon,
On Sat, Nov 07, 2009 at 07:38:03AM +1300, Simon Baxter wrote:
Hi
I've been running logical volume management (LVM) on my production VDR
box for years, but recently had a drive failure. To be honest, in the
~20 years I've had PCs in the house, this is the first time a drive