All the drives are identical, and they are in identical USB enclosures.
I am starting to suspect USB: it frequently resets the enclosures. I'll
have to look at that first. Anyway, I had it working before for some time.
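One hedged way to look for those suspected USB resets is to grep the kernel log; the exact message text varies by kernel version and driver, so the pattern below is deliberately loose:

```shell
# Look for reset/disconnect messages from usb-storage or the hub driver;
# wording differs across kernels, so match loosely and case-insensitively.
dmesg | grep -iE "usb.*(reset|disconnect)"
```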
Justin Piszcz wrote:
> On Mon, 1 Oct 2007, Daniel Santos wrote:
>> It stopped
Hello list,
some folks reported severe filesystem crashes with ext3 and reiserfs on
md RAID levels 1 and 5.
Is this safe now? Or should I only use non-journalling filesystems on
software RAID devices?
Kind regards, Florian Rustedt
On Tue, 2 Oct 2007, Rustedt, Florian wrote:
> Hello list,
> some folks reported severe filesystem crashes with ext3 and reiserfs on
> md RAID levels 1 and 5.
> Is this safe now? Or should I only use non-journalling filesystems on
> software RAID devices?
> Kind regards, Florian Rustedt
Hi,
we (Q-Leap Networks) are in the process of setting up a high-speed
storage cluster and we are having some problems getting proper
performance.
Our test system consists of a 2x dual-core system with 2 dual-channel
UW SCSI controllers connected to 2 external RAID boxes, and we use
iozone with
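For reference, an iozone invocation along these lines exercises sequential write and read throughput; the file size, record size, thread count, and mount point below are illustrative assumptions, not the poster's actual settings:

```shell
# Throughput mode: 4 threads, 1 MiB records, 4 GiB per file.
# -i 0 = write/rewrite, -i 1 = read/reread; -e includes fsync in the timings;
# -F names one test file per thread (hypothetical paths on the test array).
iozone -e -i 0 -i 1 -r 1024k -s 4g -t 4 \
    -F /mnt/raid/f1 /mnt/raid/f2 /mnt/raid/f3 /mnt/raid/f4
```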
On Tue, 2 Oct 2007, Goswin von Brederlow wrote:
> Hi,
> we (Q-Leap Networks) are in the process of setting up a high-speed
> storage cluster and we are having some problems getting proper
> performance.
> Our test system consists of a 2x dual-core system with 2 dual-channel
> UW SCSI controllers
Justin Piszcz [EMAIL PROTECTED] writes:
Have you tried a 1024k stripe and 16384k stripe_cache_size?
I'd be curious what kind of performance/write speed you get with that
configuration.
Justin.
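A sketch of what trying those settings might look like; the device names and member count are hypothetical. Note that mdadm's --chunk is given in KiB, while stripe_cache_size is a plain entry count written via sysfs:

```shell
# Hypothetical 6-disk RAID-5 created with a 1024 KiB chunk ("stripe") size:
mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=1024 /dev/sd[b-g]
# Raise the stripe cache to 16384 entries (a count, not KiB):
echo 16384 > /sys/block/md0/md/stripe_cache_size
```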
stripe_cache_size is not in KiB of memory but a count of internal
stripe-cache entries, each of which pins one page per member device. So 16384
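Since each cache entry pins one page per member device, the memory cost of a given stripe_cache_size can be estimated as below; the 6-disk array and 4 KiB page size are assumptions for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def stripe_cache_bytes(stripe_cache_size, n_devices, page_size=PAGE_SIZE):
    """Approximate RAM pinned by md's stripe cache: one page per
    member device for each of the stripe_cache_size entries."""
    return stripe_cache_size * n_devices * page_size

# A stripe_cache_size of 16384 on a hypothetical 6-disk array:
print(stripe_cache_bytes(16384, 6) // (1024 * 1024), "MiB")  # -> 384 MiB
```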
maximilian attems wrote:
klibc still misses a lot of functionality needed for mdadm to link
against it; this small step helps to get to the real trouble.. :)
Signed-off-by: maximilian attems [EMAIL PROTECTED]
---
 mdadm.h | 9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)
diff --git