On Wed, Dec 15, 2004 at 11:14:03PM -0800, William Yu wrote:
>
> I'm not sure why people say one is better than the other. Both will
> survive the loss of 2 drives -- they're just different drives.
Partly, I think, people who've used both hate 0+1 because of the
recovery cost. In most 0+1 arrangements, a single dead drive takes its
whole stripe set offline, so recovery means re-mirroring the entire
surviving stripe set rather than resyncing just the one replaced disk.
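A back-of-envelope sketch of that recovery cost, assuming a hypothetical
2x2 array of identical 73 GB drives (the capacity is made up; only the
ratio matters):

SIZE_GB = 73   # hypothetical per-drive capacity

# RAID 10: the replacement drive is resynced from its mirror partner alone.
raid10_rebuild = SIZE_GB

# RAID 0+1: the failure took the whole stripe set offline, so rebuilding
# the mirror means copying the entire surviving stripe set across.
raid01_rebuild = 2 * SIZE_GB

print("RAID 10 resync:  %d GB" % raid10_rebuild)   # 73 GB
print("RAID 0+1 resync: %d GB" % raid01_rebuild)   # 146 GB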
Iain wrote:
> As bytepile has it, failure of 1 disk in 0+1 leaves you with just RAID 0,
> so one more failure on the other pair and your data is gone. On the
> other hand, failure of 1 disk in RAID 10 leaves you with a working RAID
> 1 that can sustain a second failure.
What they're saying is that in the case of 0+1 the first failure costs you
all redundancy at once, while RAID 10 only loses redundancy on the one
affected pair.
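A minimal sketch of that claim, assuming four drives a1, a2 (set A) and
b1, b2 (set B), with a1 already failed (drive names are mine):

second = ["a2", "b1", "b2"]   # candidate second failures

# RAID 0+1 (A and B are stripes, mirrored): stripe A is already dead, so
# the array is running on bare stripe B -- any hit on B is fatal.
print([d for d in second if d not in ("b1", "b2")])   # ['a2'] survivable

# RAID 10 (A and B are mirrors, striped): mirror A still has a2 alive, so
# only losing a2 as well is fatal.
print([d for d in second if d != "a2"])               # ['b1', 'b2'] survivable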
Hi Aaron,
You are correct. That kind of functionality can't be found on standard
x86 gear (sans the $150K NEC redundant behemoth).
I reckon that the day is coming though, and I expect Linux will be ready
when the hardware arrives :-)
cheers
Iain
Iain wrote:
> Thanks again for your valuable input William.
I'm not sure why people say one is better than the other. Both will
survive the loss of 2 drives -- they're just different drives. Here
(1s1) is two drives striped and (1m1) is two drives mirrored:
RAID 0+1: A(1s1) m B(1s1) <-- both drives on A or both drives on B
RAID 10:  A(1m1) s B(1m1) <-- any drive on A and any drive on B
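A quick enumeration of those two diagrams, as a sketch assuming four
drives a1, a2 (set A) and b1, b2 (set B); drive names are mine:

from itertools import combinations

DRIVES = ["a1", "a2", "b1", "b2"]

def raid01_ok(dead):
    # mirror of stripes: need at least one stripe with no dead member
    return not ({"a1", "a2"} & dead) or not ({"b1", "b2"} & dead)

def raid10_ok(dead):
    # stripe of mirrors: need a live drive left in each mirror pair
    return bool({"a1", "a2"} - dead) and bool({"b1", "b2"} - dead)

for name, ok in [("RAID 0+1", raid01_ok), ("RAID 10", raid10_ok)]:
    survivors = [c for c in combinations(DRIVES, 2) if ok(set(c))]
    print(name, "survives", len(survivors), "of 6 double failures:", survivors)
# RAID 0+1 survives 2 of 6: [('a1', 'a2'), ('b1', 'b2')]
# RAID 10 survives 4 of 6: [('a1', 'b1'), ('a1', 'b2'), ('a2', 'b1'), ('a2', 'b2')]

So both layouts can survive a double failure, but RAID 10 survives twice
as many of the possible drive pairs.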
On Thu, 16 Dec 2004 11:18:37 +0900, Iain <[EMAIL PROTECTED]> wrote:
> Also, someone asked me what happens if one of the CPUs fails on this system,
> will the system continue to operate on 1 CPU. I haven't really considered
> this, and have never read anything either way, so my assumption is "no, it
> won't".
Hi William,
> Something to think about. Let's suppose a channel/cable completely dies.
> How would you protect against it? Split a logical mirror device over 2
> channels.
This effectively implements RAID 0+1, right? RAID 1 (mirroring) over RAID 0
striped volumes. I can certainly see your point regarding surviving the
loss of a whole channel.
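A toy check of that idea (the layouts and names are mine, purely
illustrative): if each mirror spans both channels, the array rides out a
whole channel dying; if a mirror sits entirely on one channel, it doesn't.

def survives(mirror_pairs, dead_channel):
    # the striped array lives iff every mirror keeps at least one drive
    # on a channel that is still up
    return all(any(ch != dead_channel for ch in pair) for pair in mirror_pairs)

# each mirror is written as the pair of channels its two drives sit on
split_over_channels  = [(0, 1), (0, 1)]   # one half of each mirror per channel
stuck_on_one_channel = [(0, 0), (1, 1)]   # mirror A on ch0, mirror B on ch1

print(survives(split_over_channels, 0))    # True  -- channel 0 dies, data lives
print(survives(stuck_on_one_channel, 0))   # False -- mirror A lost both halves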
Iain wrote:
> Upon further investigation, I think this is the way to go; specifically,
> I'll be recommending the 320-2, as I plan to put the WAL and DB on
> separate channels and partitions (as opposed to putting them on the
> same logical partition split over the two channels).
Something to think about. Let's suppose a channel/cable completely dies.
How would you protect against it? Split a logical mirror device over 2
channels.
Hi William,
> They don't list any Tyans as "certified" so I dunno. Perhaps you should
> look into a Tyan w/o integrated SCSI and just get the MegaRAID 320-1 or
> 320-2 to avoid any possible issues.
Upon further investigation, I think this is the way to go; specifically,
I'll be recommending the 320-2, as I plan to put the WAL and DB on
separate channels and partitions (as opposed to putting them on the same
logical partition split over the two channels).
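For the WAL-on-its-own-channel part of that plan, the usual approach in
the 7.4/8.0 era is to move pg_xlog onto a volume on the other channel and
symlink it back, with the server stopped. A minimal sketch, with
hypothetical paths:

import os, shutil

PGDATA  = "/var/lib/pgsql/data"    # assumed data directory
WAL_NEW = "/mnt/chanB/pg_xlog"     # assumed volume on the second channel

# stop postgres first; then relocate the WAL directory and link it back
shutil.move(os.path.join(PGDATA, "pg_xlog"), WAL_NEW)
os.symlink(WAL_NEW, os.path.join(PGDATA, "pg_xlog"))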
Thanks to all for your feedback on this.
I'm looking forward to getting my hands on the system.
It seems that the battery backed cache is an important factor, though I
haven't found any information specifically concerning this on the Tyan site.
I can't tell if it's an option or standard equipment.
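The battery matters because it lets the controller safely acknowledge
writes from its cache: every Postgres commit fsyncs the WAL, so commit
rate is capped by fsync latency. A rough probe of that latency (the
filename and counts are arbitrary):

import os, time

N = 500
fd = os.open("fsync_probe.dat", os.O_WRONLY | os.O_CREAT, 0o600)
t0 = time.time()
for _ in range(N):
    os.write(fd, b"x" * 8192)   # one WAL-page-sized write
    os.fsync(fd)                # wait until it is on stable storage
os.close(fd)
os.unlink("fsync_probe.dat")
print("%.0f fsyncs/sec" % (N / (time.time() - t0)))
# on a bare disk this tends toward the platter's rotation rate; with a
# battery-backed write-back cache it can be an order of magnitude higher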
I've used a slew of LSI controllers (22915A, integrated 53C1010,
22320-R, MegaRAID 320-1) and they've all performed w/o issues. Now I
have had some hardware die though. One was probably our fault -- during
an attempted upgrade, we probably weren't careful enough with the
22320-R (cramped ...)
Ericson Smith wrote:
We use that exact configuration right now, except with an Adaptec card
and more RAM.
We used RHEL 3.0, then switched to Fedora Core 2 64-bit as a test, since
this server had been placed into standby duties. Well, we needed to use
the server when the main server went into maintenance.