On Sun, 5 Feb 2006, David Liontooth wrote:
In designing an archival system, we're trying to find data on when it
pays to power or spin the drives down versus keeping them running.
Is there a difference between spinning up the drives from sleep and from
a reboot? Leaving out the cost imposed on
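As a point of reference, the spin-down itself can be driven from userspace with
hdparm; a minimal sketch, with /dev/sda only as an example device, not one from
this thread:
  hdparm -y /dev/sda      # put the drive into standby (spun down) immediately
  hdparm -S 240 /dev/sda  # or have it spin down after 240*5s = 20 minutes idle
  hdparm -C /dev/sda      # report the current power state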
Hi
I've just noticed that setting an array readonly doesn't really make
it readonly.
I have a RAID1 array and LVM on top of it.
When I run
/sbin/mdadm --misc --readonly /dev/md0
/proc/mdstat shows:
Personalities : [raid1]
md0 : active (read-only) raid1 sda[0] sdb[1]
160436096 blocks
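A minimal sketch of how one might check this from userspace and also mark the
LVM volume on top read-only (the VG/LV names below are placeholders, not from
the original mail):
  /sbin/mdadm --misc --readonly /dev/md0   # as above
  blockdev --getro /dev/md0                # prints 1 if the kernel flags md0 read-only
  lvchange --permission r /dev/vg0/lv0     # make the logical volume on top read-only as well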
Mattias Wadenstein wrote:
On Sun, 5 Feb 2006, David Liontooth wrote:
In designing an archival system, we're trying to find data on when it
pays to power or spin the drives down versus keeping them running.
Hitachi claims 5 years (Surface temperature of HDA is 45°C or less)
Life of the
2006/2/6, David Liontooth [EMAIL PROTECTED]:
Mattias Wadenstein wrote:
On Sun, 5 Feb 2006, David Liontooth wrote:
For their Deskstar (SATA/PATA) drives I didn't find lifetime
estimates beyond 5 start-stop cycles.
If components are in fact manufactured to fail simultaneously under
On 2/5/06, Neil Brown [EMAIL PROTECTED] wrote:
I've looked through the patches - not exhaustively, but hopefully
enough to get a general idea of what is happening.
There are some things I'm not clear on and some things that I could
suggest alternatives to...
I have a few questions to check
On Sun, 2006-02-05 at 15:42 -0800, David Liontooth wrote:
In designing an archival system, we're trying to find data on when it
pays to power or spin the drives down versus keeping them running.
Is there a difference between spinning up the drives from sleep and from
a reboot? Leaving out
Drives are probably going to have a lifetime that is proportionate to a
variety of things, and while I'm not a physicist or mechanical engineer,
nor in the hard disk business, the things that come to mind first are:
1) Thermal stress due to temperature changes - with more rapid changes
being more
Hi all.
After every reboot, my brand new RAID1 array comes up degraded. It's always
/dev/sdb1 that is unavailable or removed.
The hardware is as follows:
2x 200GB Seagate SATA drives in RAID 1. These are for data only; the OS is on
a separate IDE disk.
LVM Partitions for my data on
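A minimal sketch of how one might investigate and re-add the dropped member,
assuming the mirror is /dev/md0 (the md device name is a guess; /dev/sdb1 is
from the mail above):
  cat /proc/mdstat                 # confirm which member is missing
  mdadm --detail /dev/md0          # array state, event counters
  mdadm --examine /dev/sdb1        # inspect the superblock on the dropped partition
  mdadm /dev/md0 --add /dev/sdb1   # re-add it and let the mirror resync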
Neil Brown wrote:
What constitutes 'a piece of data'? A bit? A byte?
I would say that
msdos:fd
is one piece of data. The 'fd' is useless without the 'msdos'.
The 'msdos' is, I guess, not completely useless with the fd.
I would lean towards the composite, but I wouldn't fight a
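For concreteness, the two pieces can be read back separately from userspace;
a small sketch, with /dev/sda purely as an example:
  parted -s /dev/sda print        # reports the label type, e.g. "Partition Table: msdos"
  sfdisk --print-id /dev/sda 1    # reports the partition type id, e.g. fd (Linux raid autodetect)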
On Sun, 5 Feb 2006, Lewis Shobbrook wrote:
On Saturday 04 February 2006 11:22 am, you wrote:
On Sat, 4 Feb 2006, Lewis Shobbrook wrote:
Is there any way to avoid this requirement for input, so that the system
skips the missing drive as the raid/initrd system did previously?
what boot
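One way to get a non-interactive assembly of a degraded array is mdadm's --run
flag; a minimal sketch, assuming the arrays are listed in /etc/mdadm.conf:
  mdadm --assemble --scan --run   # start arrays even with a member missing, no prompting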
I recently acquired a 7TB Xserve RAID. It is configured in hardware as
2 RAID 5 arrays of 3TB each.
Now I'm trying to configure a RAID 0 over these 2 drives (so RAID 50 in total).
I only wanted to make 1 large partition on each array, so I used
parted as follows:
parted /dev/sd[bc]
(parted)
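Since each array is around 3TB, an msdos label cannot cover it in a single
partition; a minimal sketch of doing it with a GPT label and then striping the
two partitions (device names as above, /dev/md0 assumed free, syntax from
reasonably recent parted/mdadm):
  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart primary 0% 100%
  parted -s /dev/sdc mklabel gpt
  parted -s /dev/sdc mkpart primary 0% 100%
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1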
On Mon, Feb 06, 2006 at 12:25:22PM -0700, Dan Williams ([EMAIL PROTECTED])
wrote:
On 2/5/06, Neil Brown [EMAIL PROTECTED] wrote:
I've looked through the patches - not exhaustively, but hopefully
enough to get a general idea of what is happening.
There are some things I'm not clear on and