Re: raidframe
On Mon, 2 Jun 2003, Simon L. Nielsen wrote:

> I have an asr based RAID controller (Adaptec 2400A); though it is an
> IDE RAID controller, it uses the SCSI asr driver. My controller has
> worked very well with FreeBSD 5.x, and the server is currently running
> 5.1-BETA. The only thing that doesn't work is the userland utilities
> to control / get status from the card, but that's not so important.

I have an asr based SCSI RAID controller and I have been testing it with a 5.1 system. So far it seems to be working just fine, and the performance is generally good. The system has been running for a few weeks while I have been doing some random testing. The latest make world and kernel rebuild was a few days ago.

uname -a:
FreeBSD f4.berkeley.edu 5.1-BETA FreeBSD 5.1-BETA #2: Thu May 29 18:40:44 PDT 2003 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/F4-SMP i386

from dmesg:
asr0: Adaptec Caching SCSI RAID mem 0xfc00-0xfdff irq 16 at device 3.0 on pci3
asr0: major=154
asr0: ADAPTEC 2110S FW Rev. 380E, 1 channel, 256 CCBs, Protocol I2O

michael

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to [EMAIL PROTECTED]
Re: raidframe
On Mon, Jun 02, 2003 at 03:31:49PM +0300, Petri Helenius [EMAIL PROTECTED] wrote:

>> FreeBSD 5.x series is slowly progressing, but is nowhere near
>> production quality. As things are currently, you simply waste your
>> time. This is only my opinion and I don't want to offend anyone.
>
> IMO, software does not magically get better; it must be actively used,
> with problems reported and fixed in reasonable time. So if 5.x never
> gets users it never gets production quality.

As do I, but I initially thought you needed a stable platform. I vaguely remember your mails about some network related things etc. which seemed to indicate such a need. I've sent you a personal reply. Sorry.
--
Vallo Kallaste
Re: raidframe
Ouch, this is a serious bug indeed. I apologize if you reported this before and I missed it. I'll look at it today.

Other than this bug (which appears to be related only to the management app), are there any other problems with aac? Note that aac is one of the few cards that even supports a management app!

Scott

Petri Helenius wrote:

> So far I've tried asr and aac; both cards end up in kernel panics
> and/or array hangs in a few minutes (multiple hardware platforms, so I
> don't think the motherboard is to blame). After the bad experience
> with asr I thought I'd give aac a try, with somewhat similar results.
> And asr does not work with 4G machines anyway (although I don't put
> that amount of memory in the servers now, I don't want to create a
> barrier because of a disk controller). Judging from the limited set of
> responses, Mylex stuff seems to work. My offer to help you debug the
> aac code is still valid.
>
> You mean this one as a shot in the dark? I got this when configuring
> an array on 5.1-BETA with aaccli:
>
> lock order reversal
>  1st 0xc2671134 AAC sync FIB lock (AAC sync FIB lock) @ /usr/src/sys/dev/aac/aac.c:1703
>  2nd 0xc03f5f20 Giant (Giant) @ vm/vm_fault.c:210
> [...]
Re: raidframe
Sent to you, Mark and obrien on 15th May. Didn't copy the lists, nor did I send-pr. Suggestions for future bug reporting appreciated.

> Ouch, this is a serious bug indeed. I apologize if you reported this
> before and I missed it. I'll look at it today. Other than this bug
> (which appears to be related only to the management app), are there
> any other problems with aac? Note that aac is one of the few cards
> that even supports a management app!

I returned the card; they only had a 14 day return policy. I didn't spend more time with the card since after reporting the bug I didn't get any replies, and without a management utility the card would be useless for me anyway. I'll ask them for another loaner if needed.

While we're at it, what's the utility command to turn off the alarm on the card? It got annoying after a while practising pulling out drives :-)

Pete

> Scott
>
> Petri Helenius wrote:
> [...]
Re: raidframe
On Mon, 2 Jun 2003 08:09:17 +0300, Vallo Kallaste [EMAIL PROTECTED] said:

> FreeBSD 5.x series is slowly progressing, but is nowhere near
> production quality. As things are currently, you simply waste your
> time.

I'm running an old 5.1-current and a more recent 5.1-beta of about a week ago in production as news servers and am reasonably pleased with the results. Other than the cvsup mirror I don't have any more intensive test workload than that. The 5.1-beta installation replaced a W2K Advanced Server running NNTPRelay, and so far it's stayed up three whole days, which is a hell of a lot longer than W2K ever did.

5.x is getting there. It has been stable enough for desktop use for a long time, and now the rest of the system is catching up.

-GAWollman
Re: raidframe
On Sun, Jun 01, 2003 at 02:51:07PM +0300, Petri Helenius wrote:

> Is there anyone actually successfully using raidframe and if yes, what
> kind of hardware?

RAIDframe is non-functional in 5.1 and -current [kern/50541] and it would be unwise to use it in 5.0 for anything other than experimentation. Hopefully it will be fixed before 5.2.

Tim
Re: raidframe
I am successfully using a Mylex DAC1164PVX RAID controller on 5-CURRENT:

mlx0: Mylex version 5 RAID interface port 0x2000-0x207f mem 0xf800-0xfbff,0xec91-0xec91007f irq 5 at device 8.0 on pci2
mlx0: controller initialisation in progress...
mlx0: initialisation complete.
mlx0: DAC1164PVX, 3 channels, firmware 5.08-0-87, 64MB RAM
mlxd0: Mylex System Drive on mlx0
mlxd0: 105009MB (215058432 sectors) RAID 5 (online)

On Sun, Jun 01, 2003 at 02:51:07PM +0300, Petri Helenius wrote:

> Is there anyone actually successfully using raidframe and if yes, what
> kind of hardware? Same question goes for any recent SCSI RAID
> controllers supported by FreeBSD. I admit not having tried all
> combinations, but it seems that using anything other than simple ahc
> scsi stuff results in kernel panics with 5.x.
>
> Pete

--
Bob Willcox            Patience is a minor form of despair,
[EMAIL PROTECTED]      disguised as virtue.
Austin, TX                  -- Ambrose Bierce, on qualifiers
Re: raidframe
> RAIDframe is non-functional in 5.1 and -current [kern/50541] and it
> would be unwise to use it in 5.0 for anything other than
> experimentation. Hopefully it will be fixed before 5.2.

Makes one wonder how broken code ever got into the tree in the first place...

Pete
Re: raidframe
Petri Helenius wrote:

>> RAIDframe is non-functional in 5.1 and -current [kern/50541] and it
>> would be unwise to use it in 5.0 for anything other than
>> experimentation. Hopefully it will be fixed before 5.2.
>
> Makes one wonder how broken code ever got into the tree in the first
> place...
>
> Pete

Just settle down a bit. If you rewind to last October, RAIDframe worked well. Unfortunately, some kernel interfaces changed between then and now, and RAIDframe was left behind. I am remiss in not fixing it, but please understand that I also have quite a few other responsibilities, and I get paid $0 to work on RAIDframe.

As for hardware RAID, what cards have you tried, and what problems have you experienced? Your last message was just a shot in the dark that few of us would be able to help with.

Scott
Re: raidframe
So far I've tried asr and aac; both cards end up in kernel panics and/or array hangs in a few minutes (multiple hardware platforms, so I don't think the motherboard is to blame). After the bad experience with asr I thought I'd give aac a try, with somewhat similar results. And asr does not work with 4G machines anyway (although I don't put that amount of memory in the servers now, I don't want to create a barrier because of a disk controller). Judging from the limited set of responses, Mylex stuff seems to work. My offer to help you debug the aac code is still valid.

You mean this one as a shot in the dark? I got this when configuring an array on 5.1-BETA with aaccli:

lock order reversal
 1st 0xc2671134 AAC sync FIB lock (AAC sync FIB lock) @ /usr/src/sys/dev/aac/aac.c:1703
 2nd 0xc03f5f20 Giant (Giant) @ vm/vm_fault.c:210
Stack backtrace:
backtrace(c0397297,c03f5f20,c0393ecb,c0393ecb,c03a4e34) at backtrace+0x17
witness_lock(c03f5f20,8,c03a4e34,d2,d1d33ad8) at witness_lock+0x697
_mtx_lock_flags(c03f5f20,0,c03a4e2b,d2,2) at _mtx_lock_flags+0xb1
vm_fault(c03f1940,0,1,0,c267) at vm_fault+0x59
trap_pfault(d1d33c70,0,8,d1d33c40,8) at trap_pfault+0xef
trap(18,c2670010,c2670010,d26402a4,c2671000) at trap+0x39d
calltrap() at calltrap+0x5
--- trap 0xc, eip = 0xc24e5959, esp = 0xd1d33cb0, ebp = 0xd1d33ce0 ---
aac_handle_aif(c2671000,d2640284,d1d33cfc,d1d33d00,7d0) at aac_handle_aif+0x139
aac_command_thread(c2671000,d1d33d48,c0392341,310,0) at aac_command_thread+0x179
fork_exit(c24e36c0,c2671000,d1d33d48) at fork_exit+0xc0
fork_trampoline() at fork_trampoline+0x1a
--- trap 0x1, eip = 0, esp = 0xd1d33d7c, ebp = 0 ---

Fatal trap 12: page fault while in kernel mode
fault virtual address = 0x8
fault code = supervisor read, page not present
instruction pointer = 0x8:0xc31f3959
stack pointer = 0x10:0xd1d39cb0
frame pointer = 0x10:0xd1d39ce0
code segment = base 0x0, limit 0xf, type 0x1b
 = DPL 0, pres 1, def32 1, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 550 (aac0aif)
kernel: type 12 trap, code=0
Stopped at aac_handle_aif+0x139: cmpl 0x8(%edx),%ecx
db> trace
aac_handle_aif(c2679000,d2635284,d1d39cfc,d1d39d00,7d0) at aac_handle_aif+0x139
aac_command_thread(c2679000,d1d39d48,c0392341,310,c2670260) at aac_command_thread+0x179
fork_exit(c31f16c0,c2679000,d1d39d48) at fork_exit+0xc0
fork_trampoline() at fork_trampoline+0x1a
--- trap 0x1, eip = 0, esp = 0xd1d39d7c, ebp = 0 ---
db>

Before that I got some message about GEOM not being properly locked, but that scrolled off the buffer before I could catch it.

Pete

----- Original Message -----
From: Scott Long [EMAIL PROTECTED]
To: Petri Helenius [EMAIL PROTECTED]
Cc: Tim Robbins [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, June 01, 2003 11:20 PM
Subject: Re: raidframe

> [...]
Re: raidframe
> If you rewind to last October, RAIDframe worked well. Unfortunately,
> some kernel interfaces changed between then and now, and RAIDframe was
> left behind. I am remiss in not fixing it, but please understand that
> I also have quite a few other responsibilities, and I get paid $0 to
> work on RAIDframe.

Not being a native English speaker, I probably didn't understand that "experimental" equals "broken". If that equation cannot be justified, then the release notes should have said "has critical defects" or "broken", not just "experimental". I appreciate the work you and everybody else put in; it just does not make sense to have people go through the same hoops and hit the wall when that could be avoided by a single line noting that the wall exists.

Pete
Re: raidframe
On 2003.06.02 11:18:34 +0300, Petri Helenius wrote:

> So far I've tried asr and aac; both cards end up in kernel panics
> and/or array hangs in a few minutes (multiple hardware platforms, so I
> don't think the motherboard is to blame).

I have an asr based RAID controller (Adaptec 2400A); though it is an IDE RAID controller, it uses the SCSI asr driver. My controller has worked very well with FreeBSD 5.x, and the server is currently running 5.1-BETA. The only thing that doesn't work is the userland utilities to control / get status from the card, but that's not so important.

--
Simon L. Nielsen
Re: raidframe
On Mon, Jun 02, 2003 at 11:36:18AM +0300, Petri Helenius [EMAIL PROTECTED] wrote:

>> left behind. I am remiss in not fixing it, but please understand that
>> I also have quite a few other responsibilities, and I get paid $0 to
>> work on RAIDframe.
>
> Not being a native English speaker, I probably didn't understand that
> "experimental" equals "broken". If that equation cannot be justified,
> then the release notes should have said "has critical defects" or
> "broken", not just "experimental". I appreciate the work you and
> everybody else put in; it just does not make sense to have people go
> through the same hoops and hit the wall when that could be avoided by
> a single line noting that the wall exists.

FreeBSD 5.x series is slowly progressing, but is nowhere near production quality. As things are currently, you simply waste your time. This is only my opinion and I don't want to offend anyone.
--
Vallo Kallaste
Re: raidframe
> FreeBSD 5.x series is slowly progressing, but is nowhere near
> production quality. As things are currently, you simply waste your
> time. This is only my opinion and I don't want to offend anyone.

IMO, software does not magically get better; it must be actively used, with problems reported and fixed in reasonable time. So if 5.x never gets users it never gets production quality.

Pete
raidframe
Is there anyone actually successfully using raidframe and if yes, what kind of hardware? Same question goes for any recent SCSI RAID controllers supported by FreeBSD. I admit not having tried all combinations, but it seems that using anything other than simple ahc scsi stuff results in kernel panics with 5.x.

Pete
Re: raidframe
I'm using an AMI MegaRAID 1500 in 5.x without any issues.

-m

On Sun, 1 Jun 2003, Petri Helenius wrote:

> Is there anyone actually successfully using raidframe and if yes, what
> kind of hardware? Same question goes for any recent SCSI RAID
> controllers supported by FreeBSD. I admit not having tried all
> combinations, but it seems that using anything other than simple ahc
> scsi stuff results in kernel panics with 5.x.
>
> Pete
raidframe
hi:

I figured since raidframe was in FreeBSD, it would be a good chance to try it. I've already used vinum and ccd. I run it in VMware. I remember seeing that raidframe is still a work in progress. Maybe something is wrong with my setup, or maybe it's broken ATM and I should give it a shot later.

thank you all for your time,
elliot

FreeBSD current-x86.DOMAIN 5.0-CURRENT FreeBSD 5.0-CURRENT #0: Sun Feb 16 19:45:31 EST 2003 [EMAIL PROTECTED]:/usr/obj/usr/current/src/sys/VMWARE i386

# cat raid
START array
1 3 0
START disks
/dev/da0s1e
/dev/da1s1e
/dev/da2s1e
START layout
32 1 1 5
START queue
fifo 100

# raidctl -c /root/raid raid0
RAIDFRAME: protectedSectors is 64
Waiting for DAG engine to start
panic: lockmgr: thread 0xc8f30c94, not exclusive lock holder 0xc8f31620 unlocking
Debugger(panic)
Stopped at Debugger+0x54: xchgl %ebx,in_Debugger.0
db> trace
Debugger(c05432fc,c05ebbc0,c05419e2,d8527864,1) at Debugger+0x54
panic(c05419e2,c8f30c94,c05419cc,c8f31620,c8f30c94) at panic+0xab
lockmgr(c8f59ce8,6,c8f59c34,c8f30c94,d85278a8) at lockmgr+0x49e
vop_stdunlock(d85278d0,d85278b4,c031b048,d85278d0,d8527910) at vop_stdunlock+0x2f
vop_defaultop(d85278d0,d8527910,c029c59c,d85278d0,d85278cc) at vop_defaultop+0x18
spec_vnoperate(d85278d0,d85278cc,0,c8f7b180,3) at spec_vnoperate+0x18
raidlookup(c8f82800,c8f30c94,d852794c,c05ecc40,258) at raidlookup+0x18c
raid_getcomponentsize(c8f69000,0,0,c8f69000,0) at raid_getcomponentsize+0x47
rf_ConfigureDisk(c8f69000,c8b7f64c,c8f82800,0,0) at rf_ConfigureDisk+0x96
rf_ConfigureDisks(c8f6910c,c8f69000,c8b7f000,1e7,1b7) at rf_ConfigureDisks+0xd1
rf_Configure(c8f69000,c8b7f000,0,d8527afc,c0374c53) at rf_Configure+0xe60
raidctlioctl(c8bdc400,80047201,d8527c54,3,c8f31620) at raidctlioctl+0x358
spec_ioctl(d8527b5c,d8527c28,c03b2071,d8527b5c,d8527b70) at spec_ioctl+0x16e
spec_vnoperate(d8527b5c,d8527b70,c034a04d,c05ecc40,c0565a60) at spec_vnoperate+0x18
vn_ioctl(c8c08000,80047201,d8527c54,c8d0a480,c8f31620) at vn_ioctl+0x1a1
ioctl(c8f31620,d8527d10,c055d207,407,3) at ioctl+0x479
syscall(2f,2f,2f,4,3) at syscall+0x28e
Xint0x80_syscall() at Xint0x80_syscall+0x1d
--- syscall (54, FreeBSD ELF32, ioctl), eip = 0x804bdff, esp = 0xbfbf7cfc, ebp = 0xbfbf7d18 ---
db> reset

current-x86# disklabel /dev/da0s1
disklabel: ioctl DIOCGDINFO: Inappropriate ioctl for device
current-x86# disklabel -r /dev/da0s1
# /dev/da0s1:
type: SCSI
disk: da0s1
label:
flags:
bytes/sector: 512
sectors/track: 32
tracks/cylinder: 64
sectors/cylinder: 2048
cylinders: 500
sectors/unit: 1024000
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c:  1024000        0    unused        0     0        # (Cyl. 0 - 499)
  e:  1023936       32      raid                       # (Cyl. 0*- 499*)

To Unsubscribe: send mail to [EMAIL PROTECTED] with unsubscribe freebsd-current in the body of the message
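The config above asks for a RAID 5 set over three components with a "layout 32 1 1 5" line. As a sanity check of the geometry that layout implies — assuming the usual RAIDframe reading of the layout fields (sectors per stripe unit, stripe units per parity unit, stripe units per reconstruction unit, RAID level) — here is an illustrative sketch; the helper name and 512-byte sector size are assumptions, not anything from the thread:

```python
SECTOR_BYTES = 512  # disklabel above reports bytes/sector: 512

def raid5_geometry(n_components, sect_per_su, component_sectors):
    """Illustrative stripe-geometry math for a RAIDframe-style RAID 5 set.

    sect_per_su is the first field of the START layout line; one
    component's worth of each stripe holds parity, the rest hold data.
    """
    su_bytes = sect_per_su * SECTOR_BYTES       # stripe unit size in bytes
    data_cols = n_components - 1                # columns carrying data per stripe
    stripes = component_sectors // sect_per_su  # whole stripe units per component
    usable_sectors = data_cols * stripes * sect_per_su
    return su_bytes, data_cols * su_bytes, usable_sectors

# Three components of 1023936 sectors (the 'e' partitions above),
# 32-sector stripe units:
su, data_per_stripe, usable = raid5_geometry(3, 32, 1023936)
print(su, data_per_stripe, usable)  # 16384 32768 2047872
```

So the three ~500 MB test partitions would yield roughly a 1 GB RAID 5 set, with a third of the raw space going to parity.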
raidframe issues
I'm trying to create a raid set, but after hitting the end of initializing a raid5 stripe the following is displayed on the console:

raid0: node (Rod) returned fail, rolling backward
raid0: IO Error. Marking /dev/da2s1e as failed.
raid0: node (Rod) returned fail, rolling backward
raid0: IO Error. Marking /dev/da1s1e as failed.
raid0: node (Rod) returned fail, rolling backward
Unable to verify parity: can't read the stripe
Could not verify parity
raid0: Error re-writing parity!
Multiple disks failed in a single group! Aborting I/O operation.
Multiple disks failed in a single group! Aborting I/O operation.
Multiple disks failed in a single group! Aborting I/O operation.
[Failed to create a DAG]
panic: raidframe error at line 459 file /usr/src/sys/dev/raidframe/rf_states.c
cpuid = 0; lapic.id =
boot() called on cpu#0
syncing disks, buffers remaining...
panic: bdwrite: buffer is not busy
cpuid = 0; lapic.id =
boot() called on cpu#0
Uptime: 38m47s
Shutting down raid0
Terminate ACPI
Automatic reboot in 15 seconds - press a key on the console to abort
Rebooting...
cpu_reset called on cpu#0
cpu_reset: Stopping other CPUs

dmesg gives the following as disks:

da0 at ahd0 bus 0 target 0 lun 0
da0: IBM IC35L036UCD210-0 S5BS Fixed Direct Access SCSI-3 device
da0: 160.000MB/s transfers (80.000MHz, offset 63, 16bit), Tagged Queueing Enabled
da0: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C)
da1 at ahd0 bus 0 target 1 lun 0
da1: IBM IC35L036UCD210-0 S5BS Fixed Direct Access SCSI-3 device
da1: 160.000MB/s transfers (80.000MHz, offset 63, 16bit), Tagged Queueing Enabled
da1: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C)
da2 at ahd0 bus 0 target 2 lun 0
da2: IBM IC35L036UCD210-0 S5BS Fixed Direct Access SCSI-3 device
da2: 160.000MB/s transfers (80.000MHz, offset 63, 16bit), Tagged Queueing Enabled
da2: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C)
da3 at ahd0 bus 0 target 3 lun 0
da3: IBM IC35L036UCD210-0 S5BS Fixed Direct Access SCSI-3 device
da3: 160.000MB/s transfers (80.000MHz, offset 63, 16bit), Tagged Queueing Enabled
da3: 35003MB (71687340 512 byte sectors: 255H 63S/T 4462C)

cat /etc/raid0.conf
START array
1 3 0
START disks
/dev/da1s1e
/dev/da2s1e
/dev/da3s1e
START layout
16 1 1 5
START queue
fifo 100

Pete
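For readers unfamiliar with the terse RAIDframe config format, here is the same file annotated. The field meanings follow the RAIDframe configuration-file format as documented for raidctl; treat the comments as an assumption about this FreeBSD port rather than verified behavior:

```
START array
# numRow numCol numSpare: one row of 3 components, no hot spares
1 3 0

START disks
/dev/da1s1e
/dev/da2s1e
/dev/da3s1e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
# 16 sectors (8 KB) per stripe unit, RAID level 5
16 1 1 5

START queue
# fifo queuing, up to 100 outstanding requests per disk
fifo 100
```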
How can I test RAIDFrame based on DEVFS GEOM?
Hi, everybody,

I could not create the device name based on DEVFS. I use the "devfs rule apply path raidctl unhide" command, but in /dev/ there is nothing called 'raidctl', so I could not use RAIDframe. Also, I can run "disklabel -e da2s2", but I could not modify it. I need your help. Thank you!

Best Regards,
Ouyang Kai
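One thing worth noting: a devfs rule can only unhide a node the kernel has actually created, so if /dev/raidctl never appears, the RAIDframe pseudo-device may simply be missing from the kernel. Assuming the node does exist, a persistent rule might look like the sketch below; the ruleset name/number are arbitrary and the syntax is the devfs.rules(5)/devfs(8) form from later 5.x, so treat this as an assumption, not a verified recipe:

```
# /etc/devfs.rules (hypothetical entry; ruleset number 10 is arbitrary)
[raidframe_unhide=10]
add path 'raidctl' unhide

# or applied by hand with devfs(8):
#   devfs rule -s 10 add path raidctl unhide
#   devfs ruleset 10
#   devfs rule applyset
```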
RE: Is RAIDframe currently usable?
> Hi, Yesterday I installed a jpsnap (FreeBSD christine.energyhq.tk
> 5.0-CURRENT-20021115-JPSNAP i386) and tried to set up a RAID0 config
> with two disks. raidctl -C goes fine, so does -I and -iv. Then
> fdisk'ed and disklabelled. But when I tried to newfs the newly created
> partition I started getting a bunch of errors like this:

RAIDframe should be considered highly experimental. While it was tested extensively under SMP in FreeBSD 4.x, it received very little testing under SMP in -current. If you have more details on this failure, I'd appreciate it.

Scott

> ---
> Nov 16 00:04:59 christine kernel: raid0: node (R ) returned fail, rolling backward
> Nov 16 00:04:59 christine kernel: raid0: DAG failure: r addr 0x40 (64) nblk 0x20 (32) buf 0xcdd50ea0
> Nov 16 00:04:59 christine kernel: raid0: node (R ) returned fail, rolling backw byte 76893
> ---
> [...]
Is RAIDframe currently usable?
Hi,

Yesterday I installed a jpsnap (FreeBSD christine.energyhq.tk 5.0-CURRENT-20021115-JPSNAP i386) and tried to set up a RAID0 config with two disks. raidctl -C goes fine, so does -I and -iv. Then fdisk'ed and disklabelled. But when I tried to newfs the newly created partition I started getting a bunch of errors like this:

---
Nov 16 00:04:59 christine kernel: raid0: node (R ) returned fail, rolling backward
Nov 16 00:04:59 christine kernel: raid0: DAG failure: r addr 0x40 (64) nblk 0x20 (32) buf 0xcdd50ea0
Nov 16 00:04:59 christine kernel: raid0: node (R ) returned fail, rolling backw byte 76893
---

and so on. Anyway, the newfs process finishes; then a simple piped tar to move the old /usr to the RAID partition creates a myriad of those messages, to the point where the system spends all its time logging those errors and tar no longer continues.

The system is a dual P3 box, running a slightly modified GENERIC (added SMP, raidframe and pcm), with INVARIANTS and WITNESS enabled (tried without them too, no success).

So, any known issues with RAIDframe? I can supply more detailed info if needed.

Cheers,
--
Miguel Mendez - [EMAIL PROTECTED]
GPG Public Key :: http://energyhq.homeip.net/files/pubkey.txt
EnergyHQ :: http://www.energyhq.tk
Of course it runs NetBSD!
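For anyone wanting to reproduce this, the sequence described above spelled out as a transcript. Only the raidctl flags are taken from the message; the config path, component-label serial number, partition letter, and fdisk/disklabel invocations are illustrative assumptions, so treat this as a sketch rather than a verified recipe:

```
raidctl -C /etc/raid0.conf raid0   # force-configure the set from the config file
raidctl -I 20021115 raid0          # initialise component labels (serial is arbitrary)
raidctl -iv raid0                  # initialise/verify parity (effectively a no-op for RAID 0)
fdisk -BI raid0                    # slice the new device
disklabel -w -r raid0 auto         # write a default label, then edit to add an ffs partition
newfs /dev/raid0e                  # the step where the DAG failures appeared
```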