Bill Davidsen wrote:
> John McMonagle wrote:
>
>> Have a raid1 backup server that seems to get corrupted.
>> This is the 3rd time in about a year.
>> Have 2 other backup servers that were cloned from this one that have
>> no problems.
>>
>> Done a couple kernel upgrades recently.
Have a raid1 backup server that seems to get corrupted.
This is the 3rd time in about a year.
Have 2 other backup servers that were cloned from this one that have no
problems.
Done a couple kernel upgrades recently.
Now has 2.6.18-2 kernel.
It's based on Debian sarge.
It's a low end Intel ser
Neil Brown wrote:
> On Sunday September 3, [EMAIL PROTECTED] wrote:
>
>>On Sun, 3 Sep 2006, Clive Messer wrote:
>>
>>
>>>This leads me to a question. I understand from reading the linux-raid
>>>archives
>>>that the current behaviour when rebuilding with a single badblock on another
>>>disk is f
I did lvm over 2 raid 1 arrays recently.
Running dd on the LV gives about 3 times the speed of a single drive.
It should be 4 times.
I hate to give up performance, but I'm willing to live with it as LVM
makes disk management so much easier.
John
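For what it's worth, a rough way to compare sequential read throughput of the LV against a single member drive is a pair of dd reads; the device names /dev/vg0/backup and /dev/sda below are placeholders for whatever your setup actually uses:

```shell
# Raw sequential read of the logical volume (2 GB sample).
dd if=/dev/vg0/backup of=/dev/null bs=1M count=2048

# Same read against one underlying drive, as a single-spindle baseline.
dd if=/dev/sda of=/dev/null bs=1M count=2048
```

Comparing the two MB/sec figures dd reports gives the "3x vs 4x" ratio directly.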
Gregory Seidman wrote:
Is there any advantage to RAID1
Has anyone established optimum blockdev --setra settings for RAID on a 2.6 kernel?
There have been some discussions on the lvm mailing list.
In the case of LVM on RAID it sounds like it's best to use 0 on the md and
disk devices and something between 1024 and 4096 on the LVM devices.
It seems to make some sense.
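A sketch of what that setup would look like, assuming the values from the lvm list discussion; /dev/sda, /dev/md0 and /dev/vg0/backup are placeholder names, and note that --setra counts 512-byte sectors, not bytes:

```shell
# Disable read-ahead on the component disk and md devices.
blockdev --setra 0 /dev/sda
blockdev --setra 0 /dev/md0

# Set a large read-ahead on the LVM device stacked on top
# (2048 sectors = 1 MB).
blockdev --setra 2048 /dev/vg0/backup

# Verify the current setting.
blockdev --getra /dev/vg0/backup
```

The idea is to let only the top-level device issue read-ahead so the layers don't multiply it.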
Luca Berra wrote:
On Sun, Apr 17, 2005 at 05:04:13PM -0500, John McMonagle wrote:
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving
it to the new system. The only problem is the clones will all have
the same uuids. If at some later date the drives got mixed up I could
see a possibility for disaster.
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving it
to the new system. The only problem is the clones will all have the same
uuids. If at some later date the drives got mixed up I could see a
possibility for disaster.
…that there should be provision to send
notification when it happens.
With both that would really help.
John
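One way around the shared-UUID problem, assuming a later mdadm release than the 1.x mentioned in this thread (the --update=uuid assembly option appeared in the 2.x series), is to regenerate the UUID on the cloned machine at assembly time; device names here are placeholders:

```shell
# On the clone: stop the array, then reassemble it with a
# freshly generated UUID so it no longer matches the original.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --update=uuid /dev/sda1 /dev/sdb1

# Confirm the array now carries a new UUID.
mdadm --detail /dev/md0 | grep -i uuid
```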
Brad Campbell wrote:
John McMonagle wrote:
Was planning on adding a hot spare to my 3 disk raid5 array and was
thinking if I go to 4 drives I would be better off with 2 raid1
arrays considering the current state of raid5.
Was planning on adding a hot spare to my 3 disk raid5 array and was
thinking if I go to 4 drives I would be better off with 2 raid1 arrays
considering the current state of raid5.
If you think that is wrong please speak up now :)
Thinking I would make a raid1 array for /.
The rest of the firs
Can a spare be added to an existing raid 5 array?
I do not see any way to do it.
John
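To answer the question above: mdadm can hot-add a device to a running raid5 array, and since all three active slots are already filled, the new device simply becomes a spare (the device name /dev/sdd1 is a placeholder):

```shell
# Partition the new disk to match the others, then add it.
# With all raid5 slots active, it joins as a hot spare.
mdadm /dev/md0 --add /dev/sdd1

# The spare shows up in mdstat with an (S) flag.
cat /proc/mdstat
```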
the 3rd failure. Had 2 bad drives in the first
week.
Even then, is it possible for a drive failure to cause a kernel panic?
Really appreciate feedback, even if it's just what hardware works for you.
John
John McMonagle wrote:
The kernel panicked again at around 30% with the 2.6.10 kernel :(
Any suggestions?
Is it possible to stop the resync to get some data off it? It's
really unresponsive with it running.
My guess is there is something wrong with a drive and with the raid
resync code.
I'll do some tests on the drives. The rest I'll need help with.
John
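There's no clean way to pause a resync on a 2.6.10 kernel, but the md sysctl limits can throttle it to a crawl so the machine stays responsive while data is copied off; the values are KB/sec per device, and the last two lines restore the usual defaults:

```shell
# Throttle the resync to ~100 KB/sec so normal I/O gets through.
echo 100 > /proc/sys/dev/raid/speed_limit_max
echo 100 > /proc/sys/dev/raid/speed_limit_min

# When done copying, restore the default limits.
echo 200000 > /proc/sys/dev/raid/speed_limit_max
echo 1000 > /proc/sys/dev/raid/speed_limit_min
```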
John McMonag
Have a backup system that recently had a kernel panic and am having
problems with rsync.
It's a P2 460MHz motherboard with 512MB RAM.
Promise SATA controller.
3 200GB SATA drives.
Debian sarge.
2.6.10 kernel from kernel.org with SMART SATA patches.
mdadm v1.9.0
/proc/mdstat
Personalities : [raid1] [raid5]
md0