Re: Large msdosfs disk will not mount on RELENG_6

2006-11-26 Thread secmgr
On Sun, 2006-11-26 at 19:23, Richard Coleman wrote:
 I just bought a large external hard drive for home backups (500g Western 
 Digital My Book).  When I 
 plug it in to my machine (RELENG_6 from about a week ago), the system sees 
 the device just fine:
 
 Nov 26 22:03:21 neptune kernel: umass0: Western Digital External HDD, rev 
 2.00/1.06, addr 2
 Nov 26 22:03:21 neptune kernel: uhid1: Western Digital External HDD, rev 
 2.00/1.06, addr 2, iclass 8/6
 Nov 26 22:03:21 neptune kernel: da0 at umass-sim0 bus 0 target 0 lun 0
 Nov 26 22:03:21 neptune kernel: da0: WD 5000YS External 106a Fixed Direct 
 Access SCSI-4 device
 Nov 26 22:03:21 neptune kernel: da0: 40.000MB/s transfers
 Nov 26 22:03:21 neptune kernel: da0: 476940MB (976773168 512 byte sectors: 
 255H 63S/T 60801C)
 
 
 But when I try to mount the drive (mount -t msdos /dev/da0 /mnt), the system 
 gives the following error:
 
 Nov 26 22:06:41 neptune kernel: mountmsdosfs(): disk too big, sorry
 
 I was surprised to see a file system limitation on FreeBSD that Windows does 
 not have.  I will 
 probably reformat the system to ufs2, but thought I would mention this error 
 message.  I'm sure 
 these drives will become increasingly common.

Would you share how you initialized this drive, and what parameters you
used?  FAT32 has a 2 TB limit for the filesystem and 2 GB for a single file.
The error you saw is thrown when the number of sectors exceeds what fits
in an unsigned 32-bit integer.

BTW, the limit is based on the DOS spec, not FreeBSD.
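For reference, something along these lines should give you a FAT32 volume that
FreeBSD will mount (a sketch only; the device names are examples and assume the
drive already carries a single FAT slice as da0s1):

  newfs_msdos -F 32 -c 64 /dev/da0s1    # 64 sectors/cluster = 32 KB clusters
  mount -t msdosfs /dev/da0s1 /mnt      # mount the slice, not the bare da0 device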

jim




Re: Large msdosfs disk will not mount on RELENG_6

2006-11-26 Thread secmgr
On Sun, 2006-11-26 at 22:09, Clayton Milos wrote:


 
 As far as I am aware, Windows 2000 and XP will only allow you to format up to 
 a 32 GB drive with FAT32. Any bigger and it will force you to use NTFS. The 
 other strange thing is that you are trying to mount /dev/da0 and not 
 /dev/da0s1.

The 32 GB restriction was artificial.  You can look it up in the M$
knowledge base.  Watch out for the hand waving.  FreeBSD and Linux
(and probably other clueful OSes) can handle a 500 GB FAT32 drive
(assuming intelligent format values) without problems.

The real issue with large FAT32 volumes is wasted space.  You have to
crank up the cluster size, so small files eat far more on-disk space than
they should (a 1 KB file in a 32 KB cluster wastes 31 KB).  Large
directories don't perform all that well either.

My problem with using them for container-type backups is that my user
partition is way bigger than 2 GB, so a level 0 dump wouldn't fit.

jim




Re: fsck_ufs locked in snaplk

2006-04-26 Thread secmgr

Chris Dillon wrote:


I had problems with snapshots and hangs in 5.x.  For that, a daily 
reboot would keep the problems at bay.  I upgraded to 6.0 and the 
problems completely disappeared.  I kept 6.0-STABLE running for weeks. 
 Somewhere along the line, as 6.1 approached, similar problems 
re-appeared, but not exactly the same as what I had in 5.x.  Now 
instead of a complete system hang, individual processes will hang 
while attempting to access a certain filesystem.  
I'm running 5.5-pre #5 and do dumps using the snapshot option.  No quotas, low 
usage.  Nightly dumps hung last weekend, and any new dumps (till I 
rebooted) hung as well.  Other processes' access to the drives seemed OK, but the 
hung dump processes were unkillable.  Sorry, I didn't have any debug stuff 
turned on, so I realize this info is of limited or maybe even no use.  I 
hadn't seen this before.  If it crops up again, I'll build a new kernel 
for debugging and try to get more data.
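For reference, the snapshot-style dump I mean is roughly the stock dump(8)
invocation (the filesystem and output path here are just examples):

  dump -0uaL -f /backup/usr.dump /usr    # -L takes a filesystem snapshot for a live dump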


thanks
jim


Re: LSI Trouble (resend)

2006-02-17 Thread secmgr

Anton Nikiforov wrote:

Today we were changing power supply so we shut all servers down at 17:00.

After changing of UPS we start everything up and my LSI controller 
starts to claim that disk and NVRAM configuration mismatch.
Okay, not really a FreeBSD issue.  Which adapter are you using?  They 
made more than one ;'). 
After rebooting (simply exiting the configuration console without 
changing anything), the server stopped claiming a configuration 
mismatch, but shows 0 logical drives configured (while before I had 
2 drives):

Raid1 for the OS (2*140GB drives)
Raid5 for the data (3*140GB drives)
And one 140GB drive as a hot spare disk
LSI FW version is G119 from Aug 27 2004
I suspect the raid1 drives are probably the most recoverable.  Have you 
actually tried setting one of them up as a JBOD and booting from it?  The 
raid5 set is a different matter.


jim


Re: Upgrading 5.3 -> 6.0 buildworld failure now in libmagic

2005-12-08 Thread secmgr

Peter Jeremy wrote:


On Wed, 2005-Dec-07 13:34:53 -0800, Vizion wrote:
 

That is a judgment call - frankly my experience has been that developers who 
are bad at ensuring their work is well documented are second-rate rather 
than top-rate developers.
   



Software developers are notoriously poor at writing documentation for
non-technical people.  There are probably very few developers who
enjoy writing end-user documentation (and can write).  

My personal expectation is *not* that the FreeBSD developers tell me 
what a cdrom is.  My expectation is that they tell me what works, what 
doesn't, and warn me about what's in the middle.  Trust me, there are 
damn few non-technical people installing FreeBSD, and I'm pretty sure 
both of them gave up in sysinstall.  I can read (most) code and I can 
search PRs.  However, if it's 2 am and my server has puked on its 
shoes during an upgrade due to an undocumented issue the developer knew 
about, I'm not going to recommend FreeBSD to anyone other than as a 
hobby for single men with beards.



In my
experience, especially on large projects, it's rare for developers to
write the end-user documentation.  They may write a rough outline but
it's the technical writers who actually do the documentation.  The
problem is finding people with technical writing skills who are
interested in helping with FreeBSD.

It's also worth noting that a number of FreeBSD developers are not native
English speakers.  It's probably unreasonable to expect them to write
polished English documentation.
 

Again, I'm not asking them to write chapters in the handbook, and I 
understand (and assumed) they may not be native English speakers.  How 
hard is it to get an "ata.c broke with VIA 666 SATA chipset under heavy 
load"?  If I have a VIA 666 SATA chipset, now I know to go looking in 
the code.  Even if I don't go looking in the code, I know that I might 
want to look at a different adapter.  Don't tell me that what's little more 
than the subject line of a mail message is beyond even a junior 
non-English-speaking coder and a few minutes with a translation program.



Are you volunteering?


Yes, I'd like to help, not that I think my writing skills are all that 
great.  But no, not if the developers won't be forthcoming with details.


P.S. I'm not picking on the ata code or its owners.  It was just a 
module name I knew offhand.


jim


Re: FreeBSD 6.0 as storage server with raid5?

2005-12-08 Thread secmgr

Christian Brueffer wrote:


On Thu, Dec 08, 2005 at 12:18:57AM +0100, Torfinn Ingolfsen wrote:
   


In short, don't write gvinum off just yet. Documentation is around the
corner (as a result of a SoC project).
   



Actually gvinum(8) has been committed to CURRENT and RELENG_6 a couple
of days ago.

- Christian
 

Whatever you do, don't complain about it on this list, or you'll just be 
told that if you really wanted RAID, you should be running SCSI disks 
and a RAID adapter.  They may allow that 3ware does OK, but no ATA drive 
should ever be relied on, and even s/w RAID on SCSI is only for ignorant 
lusers who are too cheap to do the right thing.


Those who think I run to hyperbole need only visit the archives.  One 
can only hope that gvinum actually works in 6 vs. the buggy and 
incomplete alpha code that shipped in 5.x.  Having a man page is nice, 
but I'd rather have a raid 5 set that didn't panic the system and 
corrupt the set when it lost a drive (and this with modern SCSI drives 
and adapter).  I'd strongly suggest that anyone using GEOM RAID do some 
fault-insertion testing of their setup prior to actually relying on it.
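A sketch of the kind of fault-insertion test I mean, assuming a SCSI member da2
and a raid5 plex named raid5.p0 (the names are made up):

  camcontrol stop da2      # "fail" one member while the volume is under load
  gvinum list              # the plex should go degraded, and the box should stay up
  camcontrol start da2
  gvinum start raid5.p0    # rebuild parity onto the revived disk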


jim


Re: Upgrading 5.3 -> 6.0 buildworld failure now in libmagic

2005-12-07 Thread secmgr

Doug Barton wrote:


How does this change to UPDATING in RELENG_6 look to you:

Index: UPDATING
===
RCS file: /home/ncvs/src/UPDATING,v
retrieving revision 1.416.2.7
diff -u -r1.416.2.7 UPDATING
--- UPDATING1 Nov 2005 23:44:40 -   1.416.2.7
+++ UPDATING7 Dec 2005 00:42:04 -
@@ -229,7 +229,13 @@
page for more details.

Due to several updates to the build infrastructure, source
-   upgrades from versions prior to 5.3 no longer supported.
+   upgrades from versions prior to 5.4-STABLE are not likely
+   to succeed.
+
+   When upgrading from one major version to another, it is
+   generally best to upgrade to the latest code in the branch
+   currently installed first, then do another upgrade to the
+   new branch.


Or, as another poster said, just say "latest RELENG_5 prior to upgrade".

This is an open source project. The only way that things improve is if 
people help make it better. It's also worth pointing out that this 
issue of upgrading to the latest version of the branch you're in has 
been common knowledge for, basically, always; so if the folks that 
wrote the release notes neglected to include it, it's understandable. 
(Although, as you point out, potentially frustrating for new(er) users.)


Well, if it's common knowledge, let's see it documented.  We're only 
talking a few lines in the handbook or the release notes, not an entire 
chapter.


If RE wants to change the requirements for upgrading, then how 
bleeping hard would it be to update either the release notes or the errata?  
It's not so much that I now need to do multiple upgrades (ok, that IS 
pretty annoying), it's that I'd never have known unless I followed this 
thread.



Ok, so, after you calm down a bit, why don't you write a message to 
[EMAIL PROTECTED] and mention this issue.


<rant>
My frustration comes from the fact that this seems to be getting worse, 
not better.  In addition, every time I bring this up, I'm told (usually 
by someone with a freebsd.org address) that "oh, we all know/knew about 
that" or "it's common knowledge".  In the case of the 
vinum/gvinum/gmirror trainwreck, I got silence, even though I strongly 
suspect multiple people knew there were problems, but just didn't want 
to talk about them.  I'd gladly help document some of this, but I'm not 
the one who knows where the skeletons are snoozing (at least till I trip 
on a femur).


So what's the big issue with letting the rest of us in on the secrets?  
I'm not looking for a book, just a line or two saying "here be dragons" 
somewhere *other* than the basement of the planning department in the 
bottom of a locked filing cabinet stuck in a disused lavatory with a 
sign on the door saying 'Beware of the Leopard' (apologies to Douglas Adams).

</rant>



Re: Upgrading 5.3 -> 6.0 buildworld failure now in libmagic

2005-12-06 Thread secmgr

Kris Kennaway wrote:


On Tue, Dec 06, 2005 at 01:20:44PM -0800, Vizion wrote:
 

On Tuesday 06 December 2005 11:47,  the author Vizion contributed to the 
dialogue which was on-
Re: Upgrading 5.3  6.0 buildworld failure in libkrb5 but is currently: 
Upgrading 5.3  6.0 buildworld failure now in libmagic
   


On Tuesday 06 December 2005 04:00,  the author Ruslan Ermilov contributed to
the dialogue on-
 

snip 
   


The example of setting up ccache in /etc/make.conf is just plain
wrong.  It shouldn't be hardcoding CC to /usr/bin/cc, similarly
for CXX.  Comment out the ccache stuff completely in /etc/make.conf
(or at least the last else part), make sure your PATH doesn't
include the ccache path, and try again with an empty /usr/obj.
Please report back if it succeeded (it should).  Please send your
complaints to the ccache port MAINTAINER as he did not respond to
my email explaining the problem, and I'm getting really tired of
explaining this for the Nth time.
   


Thanks very much - I am building right now --after deinstalling ccache, make
cleandir x3 and an empty /usr/obj. I will post the results here
 


Well, it certainly made a difference, but now it fails in magic
   



Update to 5.4 before trying to update to 6.0.

Kris
 

So is there any supported direct 5.3-to-6.0 upgrade path, or is a stop in 
5.4-ville mandatory now?


jim


Re: Upgrading 5.3 -> 6.0 buildworld failure now in libmagic

2005-12-06 Thread secmgr

Kris Kennaway wrote:


On Tue, Dec 06, 2005 at 02:36:05PM -0700, secmgr wrote:

 


Update to 5.4 before trying to update to 6.0.

Kris
 

 So is there any supported direct 5.3-to-6.0 upgrade path, or is a stop in 
 5.4-ville mandatory now?
   



I tried to say that you have to update to 5.4 before you can update to
6.0, i.e. updates from older versions are not supported.  It may be
easier to do a binary upgrade (i.e. download release media and use the
upgrade option).

Kris
 

Not to belabour this, but the 6.0 release notes do specifically say 
5.3-RELEASE and newer.


Source upgrades to FreeBSD 6.0-RELEASE are only supported from FreeBSD 
5.3-RELEASE or later. Users of older systems wanting to upgrade to 
6.0-RELEASE will need to update to FreeBSD 5.3 or newer first, then to 
FreeBSD 6.0-RELEASE.


This is one of my pet peeves with FreeBSD.  You can read the Release 
Notes, the UPDATING, the ERRATA, the HARDWARE, thinking you've prepped 
yourself for the upgrade, and still be screwed.  If RE wants to change 
the requirements for upgrading, then how bleeping hard would it be to 
update either the release notes or the errata?  It's not so much that I now 
need to do multiple upgrades (ok, that IS pretty annoying), it's that I'd 
never have known unless I followed this thread.


I've run into this while installing since 4.4, and it's gotten really 
bad since the whole 5.3 mess.  Now it's deja vu all over again.
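For anyone else hitting this, the staged path Kris describes looks roughly
like the following (a sketch only; supfile names and tags are examples, and
UPDATING has the authoritative order of steps):

  # step 1: bring the installed 5.x up to the head of its own branch first
  cvsup -g -L 2 /root/releng5-supfile        # supfile tracking RELENG_5 (or RELENG_5_4)
  cd /usr/src
  make buildworld && make buildkernel KERNCONF=GENERIC
  make installkernel && reboot               # then: make installworld, mergemaster
  # step 2: repeat the same cycle with a supfile tracking RELENG_6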


jim


Re: State of gvinum RELENG_5 or RELENG_5_4 ?!

2005-05-11 Thread secmgr
Peter Orlowski wrote:
On Tue, May 10, 2005 at 11:24:12AM -0600, secmgr wrote:
 

Gabor Esperon wrote:
   

How reliable is the gmirror subsystem?
 

gmirror seems fine as best as I can tell.  I've been running it for a 
few months on sata drives.
   

I had gmirror running on two IDE drives for system and two SATA
drives for user data. That was RELENG_5_3. When one of the SATA 
drives broke (later I got errors like 

ad4: FAILURE - READ_DMA status=51<READY,DSC,ERROR>

on fscking) the whole system just stopped. I could switch
consoles but nothing else. Nothing in the logs either. I believe
it had lost all its file systems.
After power cycling, both mirrors were found but marked as broken.  
Gmirror chose the defective SATA disk as the intact one and started
rebuilding the mirror from that - but fscking the mirror failed,
obviously, so I ended up with one disk that was broken, but marked intact
by gmirror and one disk the other way round.

I had to erase the gmirror metadata on the intact disk to get it
to work again. 

I'd say gmirror will save your data but it won't save you from some
downtime...
Greetings,  Peter
 

What steps did you take to recover the gmirrored disk so you could get your 
data back?

At this point, I'm thinking that as far as S/W RAID goes in FreeBSD, the 
R is pretty meaningless.

jim


Re: State of gvinum RELENG_5 or RELENG_5_4 ?!

2005-05-10 Thread secmgr
Edwin Brown wrote:
All:
I've been asked to setup a RAID-5 server using software raid. I've not
worked with gvinum or vinum for that matter. I know from following the
mailing lists there have been some concerns over gvinum in the
5.3-RELEASE. How are things working today?
The plan would be use a 40GB ATA/100 Drive for the system and 4 SATA
drives for the RAID-5 array. Will gvinum be happy with this kind of
arrangement?
Well, you want to stay away from vinum in 5.x.  Under 5.3, gvinum 
raid 5 was not robust, and I could kill a system and corrupt the array 
by pulling a drive.  I haven't tried lately, but I'd definitely want to 
do some serious testing before putting that into production.  If you 
really need raid 5, I'd look into a hardware-based solution.

And yes, I filed PRs that so far have not been answered.
jim


Re: State of gvinum RELENG_5 or RELENG_5_4 ?!

2005-05-10 Thread secmgr
Gabor Esperon wrote:
How reliable is the gmirror subsystem?
gmirror seems fine as best I can tell.  I've been running it for a 
few months on SATA drives.
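For what it's worth, a minimal two-disk mirror of that sort looks roughly like
this (a sketch; the device and mirror names are examples):

  gmirror load                              # or geom_mirror_load="YES" in /boot/loader.conf
  gmirror label -v -b round-robin gm0 ad4 ad6
  newfs /dev/mirror/gm0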

jim


Common disk format between 2.4.current linux and FreeBSD 5.3 and later

2005-05-09 Thread secmgr
I'm building a USB hard drive and I'll be using it under both Linux and 
FreeBSD.  vfat is not a contender due to its 2 GB file size limit (I'm 
using it as a dump disk and I don't want to deal with multiple 
volumes).  It seems that BSD can talk to ext2 partitions and Linux can 
talk to the older UFS format.  Suggestions on which is the more stable 
implementation for a r/w environment?

I've read that there were issues in the past, but I couldn't see anything 
within the last year or so.
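In case it helps frame the question, the two directions I'm comparing are
roughly these (device names are examples; the Linux side is from memory, and
its UFS write support has historically been limited):

  # FreeBSD mounting an ext2 partition:
  mount -t ext2fs /dev/da0s1 /mnt
  # Linux mounting an older (UFS1) FreeBSD partition:
  mount -t ufs -o ufstype=44bsd /dev/sda1 /mnt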

Thanks
jim


Re: New Nforce2 variant doesn't recognize SATA as SATA

2005-03-11 Thread secmgr
secmgr wrote:
I just picked up a new ABIT NF7-S2G motherboard (nForce2 MCP chipset with 
gigE and SATA/RAID).  I'm running 5.3-RELEASE on it.  The standard 
parallel ATA chipset returns a known ID.  The SATA i/f seems to be 
too new, and FreeBSD treats it like an unknown UDMA33 controller.  
There's no Marvell bridge on the mobo.  Is this a true SATA interface?  
Should I be able to get away with putting entries in the following? 
(I'm not concerned about the RAID thing.)  They currently work as 
UDMA33 disks.

thanks
jim
ata-pci.h:     #define ATA_NFORCE2_MCP2  0x008e10de
ata-chipset.c: { ATA_NFORCE2_MCP2, 0, AMDNVIDIA, NVIDIA, ATA_UDMA6, "nVidia nForce2 MCP2" },
Well, to follow up my own mail, the trick above didn't help.  The ata 
code still thinks it's a brain-dead ATA device and limits it to UDMA33.  
Help!  When I looked through the H/W compat list, it said that the nForce2 
was supported, which is why I felt confident buying the board.  I'd 
really like to get this working at SATA-150, even if the current drives 
might be bridged.

Failing that, is there a definitive list of SATA PCI boards which 
FreeBSD fully supports?

thanks
jim
atapci0: <nVidia nForce2 MCP UDMA133 controller> port 0xf000-0xf00f,0x376,0x170-0x177,0x3f6,0x1f0-0x1f7 at device 9.0 on pci0
ata0: channel #0 on atapci0
ata1: channel #1 on atapci0
atapci1: <nVidia nForce2 MCP2 UDMA133 controller> port 0xcc00-0xcc0f,0xb70-0xb73,0x970-0x977,0xbf0-0xbf3,0x9f0-0x9f7 irq 11 at device 11.0 on pci0
ata2: channel #0 on atapci1
ata3: channel #1 on atapci1
...
ata2-master: DMA limited to UDMA33, non-ATA66 cable or device
ad4: 152627MB <SAMSUNG SP1614C/SW100-30> [310101/16/63] at ata2-master UDMA33
ata3-master: DMA limited to UDMA33, non-ATA66 cable or device
ad6: 152627MB <SAMSUNG SP1614C/SW100-30> [310101/16/63] at ata3-master UDMA33




Re: New Nforce2 variant doesn't recognize SATA as SATA

2005-03-11 Thread secmgr
Rob MacGregor wrote:
On Friday, March 11, 2005 5:26 PM, secmgr  unleashed the infinite monkeys and
produced:
 

Failing that, is there a definitive list of SATA PCI boards which
FreeBSD does full support?
   

See the release notes for the version of FreeBSD you're using.  For 5.3 that's
http://www.freebsd.org/releases/5.3R/hardware-i386.html#DISK
 

Well, thanks, but I looked there before I bought the board.
1. I challenge you to derive from that page a list of supported PCI SATA 
adapters (and I don't mean RAID adapters like those from Highpoint or 3ware).
2. Even following the link to ata(4), what you get is a list of 
supported chips (some PATA, some SATA, not specified as to which is 
which), NOT a list of supported boards.  Within that was a reference 
that the nForce2 chipset WAS supported, which is why I bought what I 
did.  Now you may say, "well, the manufacturer added to the chipset; that's not 
FreeBSD's problem."  Yes and no.  The documentation isn't definitive as 
to what is supported.  If the docs had said "nForce2 PATA", then I 
would have questioned support of SATA before I purchased, but you can't 
tell that from the chip list.

jim


New Nforce2 variant doesn't recognize SATA as SATA

2005-03-10 Thread secmgr
I just picked up a new ABIT NF7-S2G motherboard (nForce2 MCP chipset with 
gigE and SATA/RAID).  I'm running 5.3-RELEASE on it.  The standard 
parallel ATA chipset returns a known ID.  The SATA i/f seems to be 
too new, and FreeBSD treats it like an unknown UDMA33 controller.  
There's no Marvell bridge on the mobo.  Is this a true SATA interface?  
Should I be able to get away with putting entries in the following? (I'm 
not concerned about the RAID thing.)  They currently work as UDMA33 disks.

thanks
jim
ata-pci.h:     #define ATA_NFORCE2_MCP2  0x008e10de
ata-chipset.c: { ATA_NFORCE2_MCP2, 0, AMDNVIDIA, NVIDIA, ATA_UDMA6, "nVidia nForce2 MCP2" },

pciconf -lv says...
[EMAIL PROTECTED]:9:0:   class=0x01018a card=0x1c08147b chip=0x008510de 
rev=0xa3 hdr=0x00
   vendor   = 'NVIDIA Corporation'
   device   = 'MCP2S Parallel ATA Controller'
   class= mass storage
   subclass = ATA
[EMAIL PROTECTED]:11:0:  class=0x010185 card=0x1c08147b chip=0x008e10de 
rev=0xa3 hdr=0x00
   vendor   = 'NVIDIA Corporation'
   device   = 'MCP2S Serial ATA Compatible Controller'
   class= mass storage
   subclass = ATA

dmesg says...
atapci0: <nVidia nForce2 MCP UDMA133 controller> port 0xf000-0xf00f,0x376,0x170-0x177,0x3f6,0x1f0-0x1f7 at device 9.0 on pci0
ata0: channel #0 on atapci0
ata1: channel #1 on atapci0
atapci1: <GENERIC ATA controller> port 0xcc00-0xcc0f,0xb70-0xb73,0x970-0x977,0xbf0-0xbf3,0x9f0-0x9f7 irq 11 at device 11.0 on pci0

Just for completeness, the gigE port uses the VIA chip driven by vge.
pciconf says...
[EMAIL PROTECTED]:11:0: class=0x02 card=0x1c08147b chip=0x31191106 rev=0x11 
hdr=0x00
   vendor   = 'VIA Technologies Inc'
   device   = 'VT3119 Rhine-GE Gigabit Ethernet Controller'
   class= network
   subclass = ethernet
dmesg says...
vge0: <VIA Networking Gigabit Ethernet> port 0xb000-0xb0ff mem 0xe8000000-0xe80000ff irq 10 at device 11.0 on pci2
miibus0: <MII bus> on vge0
ciphy0: <Cicada CS8201 10/100/1000TX PHY> on miibus0
ciphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto



Re: Beastie

2004-11-29 Thread secmgr
Michael Nottebrock wrote:
Hanspeter Roth wrote:
The problem here is, that the Beastie picture is disabled by
default. This is obviously not compatible with the majority of
people.
You're not up-to-date with the latest events, the beasty menu was 
completely nuked from CVS yesterday after the most recent complaints 
and we're now stuck again with the plain old loader prompt.
Why nuke the menu?  I can understand (although I don't agree with) taking 
out beastie, but the menu was handy and added useful (to me at least) 
functionality.  Is there some way to tweak it back in?

thanks
jim


Re: 5.3R p1 only boots in safe mode

2004-11-25 Thread secmgr
On Thu, 2004-11-25 at 00:22, Scott Long wrote:
 secmgr wrote:
  Can someone point me to what safe mode sets so I can debug what's
  broken?
  
  thanks
  jim
  
 
 It disables ACPI, APIC, ATA DMA, ATAPI DMA, ATA Write Cache, and EISA.
 
 Scott
Many thanks!  I tried disabling ATA/ATAPI DMA.  That didn't work.  On a
hunch, I disabled the acpi device and, bada bing, it boots clean.  It's
acting as a server box, so I don't think I'm missing much.

As a reference, it's an MSI KM4M-V, and the system is using the SATA
drives.
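(For the archives, the persistent equivalent of disabling ACPI from the boot
menu is a one-liner in /boot/loader.conf:)

  hint.acpi.0.disabled="1"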



5.3R p1 only boots in safe mode

2004-11-24 Thread secmgr
Can someone point me to what safe mode sets so I can debug what's
broken?

thanks
jim



Re: buildkernel problem

2004-11-22 Thread secmgr
Tejas Kokje wrote:
Hi,
I am trying to do buildkernel with my own configuration file 
(MYKERNEL_CONFIG). I am getting the following error. I am not sure 
what is wrong. Any help will be appreciated .

# uname -sr
FreeBSD 4.9-RELEASE-p11
#make buildkernel KERNCONF=MYKERNEL_CONFIG
--
 Kernel build for MYKERNEL_CONFIG started on Mon Nov 22 12:25:36 
PST 2004
--
=== MYKERNEL_CONFIG
mkdir -p /usr/obj/usr/src/sys
cd /usr/src/sys/i386/conf;  
PATH=/usr/obj/usr/src/i386/usr/sbin:/usr/obj/usr/src/i386/usr/bin:/usr/obj/usr/src/i386/usr/games:/sbin:/bin:/usr/sbin:/usr/bin  
config  -d /usr/obj/usr/src/sys/MYKERNEL_CONFIG  
/usr/src/sys/i386/conf/MYKERNEL_CONFIG
Don't forget to do a ``make depend''
Warning: device cz is unknown
*** Error code 1

Stop in /usr/src.
*** Error code 1
Stop in /usr/src.
There's no cz.c in your build path (/usr/src/sys/dev/).  Is there a Cyclades-Z 
series card in the system?  If you need the driver, you can get it from 
their website.
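Otherwise, a sketch of the fix (paths per your output):

  # edit /usr/src/sys/i386/conf/MYKERNEL_CONFIG and remove or comment out
  # whatever "device cz" line is in there, then rebuild:
  cd /usr/src && make buildkernel KERNCONF=MYKERNEL_CONFIG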
jim



Re: vinum / 5.3

2004-11-16 Thread secmgr
Sam wrote:
Also, i kicked up gvinum to compare, but he doesn't seem to
have a stripe command (despite saying he does in the help).
There is a GEOM stripe class called gstripe that's outside of the gvinum 
user interface.
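A minimal sketch (provider names and stripe size are just examples):

  gstripe label -v -s 131072 st0 /dev/ad0s1d /dev/ad1s1d   # 128 KB stripe size
  newfs /dev/stripe/st0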

jim


Re: freebsd 5.3 have any problem with vinum ?

2004-11-11 Thread secmgr
Matthias Schuendehuette wrote:
I'm not sure if this is a problem of (g)vinum or if FreeBSD has other 
problems in this area.
 

Just logged a kern bug on this.
And we all have to consider that gvinum is in a relatively early 
development phase (IMHO) - it is basically working, that is, it's 
possible to continue an existing 'classic' vinum installation with 
gvinum but it's still not fully functional in all depth. 
 

(minor deletes)
I guess my issue is that there should be something in the release 
notes/UPDATING that says gvinum raid5 is not fully functional at this 
time.  I would argue that if it can't survive a disk failure, it's not 
really RAID5.  You might as well just go stripe and at least get the 
disk space back.  If I hadn't sat down and tested this, I wouldn't have 
known it was broken till I had a drive failure, which is not a good time 
to find out.  I like (in general) where this is heading, but we seem to 
be in between reliable s/w raid5 solutions in FreeBSD.

jim


Re: freebsd 5.3 have any problem with vinum ?

2004-11-10 Thread secmgr
OK, your instructions worked like a charm.  So I'm running my nice 
4-member SCSI gvinum raid5 array (with softupdates turned on), and it's 
zipping along.  Now I need to test just how robust this is.  camcontrol 
is too nice.  I want to test a more real-world failure.  I'm running 
dbench and just pull one of the drives.  My expectation is that I 
should see a minor pause, and then the array continue in some slower, 
degraded mode.  What I get is a kernel trap 12 (boom!).  I reboot, and 
it will not mount the degraded set till I replace the drive.

I turned off softupdates, and had the same thing happen.  Is this a 
bogus test?  Is it reasonable to expect that a SCSI drive failure should 
have been tolerated w/o crashing?

(bunch of scsi msgs to console)
sub-disk down
plex degraded
g_access failed:6
trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x18c
fault code              = supervisor write, page not present
instruction pointer     = 0x8:0xc043d72c
stack pointer           = 0x10:0xcbb17bf0
code segment            = base 0x0, limit 0xfff, type 0x1b
                        = DPL 0, pres 1, def32 1, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 22 (irq11: ahc1)
Matthias Schuendehuette wrote:
gvinum start plexname
This (as far as I investigated :-)
a) initializes a newly created RAID5-plex, or
b) recalculates parity information on a degraded RAID5-plex with
   a newly replaced subdisk.
So, a 'gvinum start raid5.p0' initializes my RAID5-plex if newly 
created. You can monitor the initialization process with subsequent 
'gvinum list' commands.

If you degrade a RAID5-plex with 'camcontrol stop diskname' (in case 
of SCSI-Disks) and 'repair' it afterwards with 'camcontrol start 
diskname', the 'gvinum start raid5.p0' (my volume here is called 
'raid5') command recalculates the parity and revives the subdisk which 
was on disk diskname.
 



Re: freebsd 5.3 have any problem with vinum ?

2004-11-07 Thread secmgr
It did, but can you tell me where in the docs it says to do that?  Or
maybe vinum should sense that and throw some error rather than just
blindly corrupting itself.

jim
On Sun, 2004-11-07 at 09:38, Joe Koberg wrote:
 secmgr wrote:
 
 
 No, I mean self corrupting raid5 sets during initialization.  Discussed
 about 2-3 weeks ago.
   
 
 In the following message you seemed to claim that adding 64 sectors of 
 slack to the
 beginning of the vinum partition fixed this problem, as I suggested. Did 
 that fix it or not?
 
 
  The reason is empirically derived.  When I created a 7 disk raid 5 set 
  using len 0 or all the space available, the raid set would be 
  corrupt after initializing.  Every time.  When I reserved back  that 
  extra space, no corruption.
  (freebsd 4.10-p3)  There was a thread on this a few days ago.
 
  jim 
 
 
 
 Joe Koberg
 joe at osoft dot us
 
 




Re: freebsd 5.3 have any problem with vinum ?

2004-11-06 Thread secmgr
No, I mean self-corrupting raid5 sets during initialization.  Discussed
about 2-3 weeks ago.

On Sat, 2004-11-06 at 05:09, Matthias Schuendehuette wrote:
 
 If you mean the 'dangling vnode'-problem with vinum-classic:
 
 Try to start 'classic' vinum *after* the system has come up. Either 
 manually or perhaps with a script in /usr/local/etc/rc.d. This works 
 for me until now under 5-STABLE (a.k.a. RELENG_5).



Re: freebsd 5.3 have any problem with vinum ?

2004-11-06 Thread secmgr
I did a gvinum start.

On Sat, 2004-11-06 at 04:16, Matthias Schuendehuette wrote:
 Am Mittwoch, 3. November 2004 21:27 schrieb secmgr:
  Just ran into this myself.  I had a perfectly happy raid 5 plex under
  5.3 RC1.  I upgrade to RC2, and the whole plex goes stale.  I deleted
  everything from the volume on down (except for the drives), and tried
  to recreate the vol/plex/sd's.  gvinum creates them, but they come
  back (like the undead) as stale and unusable (just like they were
  before). I'm finding commands documented (in help), but unimplemented
  (checkparity?  init?).
 
 Did you try to simply 'start' the plex? This works for initialisation of 
 a newly created RAID5-Plex as well as for recalculating parity 
 informations on a degraded RAID5-Plex.
 
 It's that simple (at least for me on 5-STABLE) but admittedly 
 undocumented.



Re: freebsd 5.3 have any problem with vinum ?

2004-11-03 Thread secmgr
Adrian Wontroba wrote:
On Mon, Nov 01, 2004 at 10:05:16AM +1100, Carl Makin wrote:
 

Do you want to yank it in 5 or 6-CURRENT?  There are a *lot* of people 
using vinum and yanking it in 5-STABLE would force us all to use the 5.3 
security branch until gvinum caught up.
   


From my experiences today with setting up a very old machine[1] with
5.3-RC2, I think it would be best to keep both until gvinum had caught
up.  Vinum can do things which gvinum appears incapable of - such as
initialising a RAID-5 plex.
 

Just ran into this myself.  I had a perfectly happy raid 5 plex under 
5.3-RC1.  I upgraded to RC2, and the whole plex went stale.  I deleted 
everything from the volume on down (except for the drives), and tried to 
recreate the vol/plex/sd's.  gvinum creates them, but they come back 
(like the undead) as stale and unusable (just like they were before).  
I'm finding commands documented (in help) but unimplemented 
(checkparity?  init?).  I hate to sound like a whiney baby, but WTF is 
going on?  It feels like vinum from 4.x has basically been abandoned 
(short of crashes with no workaround), and gvinum ain't near ready for 
primetime.  We need a stable, working s/w raid solution (or admit that 
as of right now, FreeBSD doesn't have one).  At the very least, we need 
re and the authors to document what works, what doesn't, and what never 
will.  I'd happily help with docs, but right now it seems like product 
function is linked to /dev/random.  If gvinum isn't ready for release 
(my personal opinion after RC2), it needs to be pulled until what's 
documented works, and what works is documented correctly.

</whiney_baby>
jim


Re: freebsd 5.3 have any problem with vinum ?

2004-10-29 Thread secmgr
Greg 'groggy' Lehey wrote:
A bit of background: we know that 'gvinum' will replace Vinum; the
original intention had been to do it seamlessly, but for various
reasons that didn't happen.  Then we decided that we should leave them
both in the tree until gvinum had the full functionality of Vinum.
It's beginning to look like that's a bad idea.  Lukas is
(understandably) working only on gvinum, and since I know it's nearly
there, I'm not going to do any further work on Vinum in FreeBSD 5.
Given the problems, I'd suggest that we yank it.  I'm copying the
release engineering team.  Re, what do you think?
Greg
 

Not that I've got a lot of say in the matter, but I would vote for this 
too (along with migration info in the release notes).  There are also 
some minor changes needed in the docs on vinumvm.org for gvinum/newfs 
(5.3) to correct new paths and switches.

thanks
jim


Re: question on vinum

2004-10-28 Thread secmgr
Marko Raiha wrote:
Hi,
On Wed, Oct 27, 2004 at 09:10:39AM -0600, secmgr wrote:
 

The reason is empirically derived.  When I created a 7 disk raid 5 set 
using len 0 or all the space available, the raid set would be corrupt 
after initializing.  Every time.  When I reserved back  that extra 
space, no corruption.
(freebsd 4.10-p3)  There was a thread on this a few days ago.
   

Thanks about pointing to this another thread. I read the thread and one
thing is still a bit unclear. Do you mean by leaving 64 blocks unused to
leave it unused at the beginning of disk before vinum partition (when
doing partition to vinum by disklabeling freebsd slice) like this:
# /dev/aacd2s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
 c:  143347932        0    unused        0     0
 h:  143347868       64    vinum

or somewhere in the subdisk as you said? I guess the wrong terms were
inadvertently used (subdisk instead of disk), because the terminology
with vinum is a bit complicated.
All the documentation I have found says that only 16 blocks should be left
unused at the beginning of the disk, like this:
# /dev/aacd2s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
 c:  143347932        0    unused        0     0
 h:  143347916       16    vinum
Regards,
 

That probably would have worked too.  My mistake was to clone the c 
partition entry, call it h, label it vinum, and use that.


Re: question on vinum

2004-10-27 Thread secmgr
Chuck Swiger wrote:
No.  All of the drives need to be the same size.  With RAID-5, which 
drive is used to hold parity data rotates on a stripe-by-stripe basis 
to balance out the load.
Just to clarify this point: all the subdisks in the plex need to be the 
same size.  The physical drives, and even the BSD partitions/vinum drives, 
do not actually have to be the same.

An example would be two 40 GB partitions and two 60 GB partitions.  You 
could make a stripe set using four 40 GB subdisks (160 GB total), or a 
120 GB raid 5 set.  You just wouldn't use the remaining 20 GB on the 
other two drives.  You could still use the space, just not for this plex 
(you could create a 40 GB (20+20) stripe or concat set, or a mirrored 
20 GB area).  Concat, on the other hand, doesn't care and would happily 
use 40+40+60+60 (just not with the same performance levels).
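As a rough sketch of the 120 GB raid 5 variant of that example (device names
and sizes are made up, untested):

  drive a device /dev/ad0s1h   # 40 GB
  drive b device /dev/ad1s1h   # 40 GB
  drive c device /dev/ad2s1h   # 60 GB
  drive d device /dev/ad3s1h   # 60 GB
  volume data
    plex org raid5 500k
      sd len 40g drive a
      sd len 40g drive b
      sd len 40g drive c       # the leftover 20 GB on c and d stays free for another plex
      sd len 40g drive d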

jim


Re: question on vinum

2004-10-27 Thread secmgr
Marko Raiha wrote:
Hi,
On Tue, Oct 26, 2004 at 11:33:43PM -0600, secmgr wrote:
 

...
when you define the subdisk, don't use the whole drive.  Leave at least
64 blocks unused.
...
   

Would you mind to tell the reason behind this rule of leaving at least
64 blocks unused?
Regards,
 

The reason is empirically derived.  When I created a 7-disk raid 5 set 
using "len 0" (all the space available), the raid set would be corrupt 
after initializing.  Every time.  When I reserved back that extra 
space, no corruption (FreeBSD 4.10-p3).  There was a thread on this a 
few days ago.

jim


Re: question on vinum

2004-10-26 Thread secmgr
On Tue, 2004-10-26 at 18:27, Mikhail P. wrote:

 I haven't worked with Vinum previously, but hear a lot about it. My question 
 is how to implement the above (unite four drives into a single volume) using 
 Vinum, and what will happen if, let's say, one drive fails in the volume? Am I 
 losing the whole data set, or can I just unplug the drive and tell vinum to use the 
 remaining drives with the data each drive holds? I'm not looking for a fault 
 tolerance solution.
Since you don't care about fault tolerance, you probably want to do
striping, also known as raid0.

 From my understanding, in above scenario, Vinum will first fill up the first 
 drive, then second, etc.
That's called concatenation, which is different from striping.  Striping
balances the load across all the spindles.

 I have read the handbook articles, and I got general understanding of Vinum.
 I'm particularly interested to know if I will still be able to use volume in 
 case of failed drive.
If you want to do that, then you want raid5.  If either a concat or
stripe set loses a drive, the data will need to be restored.

 Some minimal configuration examples would be greatly appreciated!
Read the following.  Really!
http://www.vinumvm.org/vinum/vinum.ps  
http://www.vinumvm.org/cfbsd/vinum.txt
Both of these have examples and will clear up your confusion about
concat vs stripe vs raid5.

Concat is the easiest to add to, stripe has the best performance, and raid5
trades write speed and an extra drive (n+1) for resilience.  raid10 gets back
the performance at the cost of 2*n drives.

Broken down:
volume - the top level; what the filesystem talks to.  Mirroring is defined
at the volume level, as is raid 10 (mirrored stripe plexes).
plex - a virtual storage area made up of 1 or more subdisks for concat,
2 or more for stripe, or 3 or more subdisks for raid 5.
subdisk - an area delegated from a BSD partition.
drive - the actual BSD partition (as in /dev/da1s1h).

Generally, the order is as follows:
- fdisk the drives to be used so they have at least one BSD slice each.
- use disklabel to edit the slice label so you have at least one
  partition of type vinum (that isn't the c partition).
- in an editor, create the configuration:
    drives
    volume
      plex
        sd
  (a rough sketch of such a file is at the end of this message)

When you define the subdisks, don't use the whole drive.  Leave at least
64 blocks unused.

Use the file you created as input to vinum:
vinum create -v -f config

Or you can cheat and just say (all on one line):
vinum stripe -n volname /dev/ad0s1h /dev/ad1s1h /dev/ad2s1h /dev/ad3s1h

Raid5 plexes have to be init'ed.

newfs -v /dev/vinum/volname
mount /dev/vinum/volname /mnt

Hopefully I haven't made your understanding worse.
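Since I mention creating the configuration in an editor above, here's roughly
what that file might look like for the four-drive stripe (names and sizes are
examples only; check vinum(8) before trusting it):

  drive d0 device /dev/ad0s1h
  drive d1 device /dev/ad1s1h
  drive d2 device /dev/ad2s1h
  drive d3 device /dev/ad3s1h
  volume vol0
    plex org striped 512k
      sd len 39000m drive d0   # per the note above, a bit less than the whole drive rather than len 0
      sd len 39000m drive d1
      sd len 39000m drive d2
      sd len 39000m drive d3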





[Fwd: Re: freebsd 5.3 have any problem with vinum ?]

2004-10-25 Thread secmgr

I have FreeBSD 5.3 RC1 installed perfectly; I configured vinum per the handbook and 
it all works perfectly,
but when I try to run vinum from rc.conf it displays this error:
panic: unmount: dangling vnode
cpuid: 0
uptime= 4s
Cannot dump. No dump device defined
Automatic reboot in 15 seconds.
Why does this happen?  I configured FreeBSD 4.x with this config and it works perfectly :(
Thanks to all.
 

Just hit the exact same thing myself: two single-drive plexes, 
mirrored.  If I boot standalone, bring up vinum by hand, and then mount 
the mirror, it's fine.  It only happens when I use start_vinum="YES" in 
rc.conf.
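Booting with vinum disabled and starting it late can be scripted; a rough
sketch, with made-up names (mark the vinum filesystems noauto in /etc/fstab so
the boot doesn't try to mount them):

  #!/bin/sh
  # /usr/local/etc/rc.d/zz-vinum.sh  (hypothetical late-start workaround script)
  case "$1" in
  start)
          /sbin/vinum start
          /sbin/mount /dev/vinum/mirror /data
          ;;
  esac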

jim


Re: freebsd 5.3 have any problem with vinum ?

2004-10-25 Thread secmgr
Andrew Konstantinov wrote:
On Mon, 2004-10-25 at 05:55, Oliver Torres Delgado wrote:
 

I have FreeBSD 5.3 RC1 installed perfectly; I configured vinum per the handbook and 
it all works perfectly,
but when I try to run vinum from rc.conf it displays this error:
panic: unmount: dangling vnode
cpuid: 0
uptime= 4s
Cannot dump. No dump device defined
Automatic reboot in 15 seconds.
   

Just like me, you should have checked the mailing list archive first.
Here is a nice solution:
http://lists.freebsd.org/pipermail/freebsd-current/2004-August/035547.html
I switched from 5.2.1 to 5.3 and had the same problem which you've
described above. Once I switched to gvinum, everything went back to
normal.
Andrew
 

Thanks, I appreciate the pointer.  That being said, if this is a KNOWN 
problem (as far back as August), how come there's no mention in either the 
release notes OR the errata?  I like and use FreeBSD for its stability, but 
at this point in time I've got broken vinum-classic (with workarounds): 
raid5 in 4.10 and raid 1 in 5.3-STABLE, both installed per 
current documentation.  I haven't logged bug reports because it seems 
like everything is moving to gvinum, and they'd probably just end up 
closed out as FINV.

jim


newfs -v on vinum raid5 panics system (4.10-p3 RELEASE)

2004-10-21 Thread secmgr
I've got 7x 9.1 GB SCSI drives off an Adaptec AIC7880-class controller 
(da0-6).  I want to raid5 these using vinum.  I'm a newbie with vinum.  
If I use a BSD partition the same size as the slice (c=h), and then let 
the subdisks use the entire partition (len 0), the raid set is corrupted 
every time after initializing.  This usually leads to a kernel panic 
during newfs.  If I leave some amount free (i.e. the subdisk only uses 
8000 MB of an 8675 MB drive), no problem.  If I'm RTFM'ing correctly, it 
looks like it should reserve 132 KB per drive.  I guess I assumed that 
would be automagically reserved, or if it's supposed to be, it's getting 
walked on.

First I fdisk each drive to create the slice (using the fdisk in 
sysinstall and letting it autocreate the slice):
The data for partition 1 is:
sysid 165,(FreeBSD/NetBSD/386BSD)
   start 63, size 17767827 (8675 Meg), flag 80 (active)
   beg: cyl 0/ head 1/ sector 1;
   end: cyl 1023/ head 254/ sector 63
The data for partition 2 is:
UNUSED
...

Next, I disklabel each drive (they're all identical):
# /dev/da0s1c:
#        size   offset    fstype   [fsize bsize bps/cpg]
 c:  17767827        0    unused        0     0   # (Cyl.    0 - 1105*)
 h:  17767827        0    vinum                   # (Cyl.    0 - 1105*)

and finally, I use the following to create the volume
drive d0 device /dev/da0s1h
drive d1 device /dev/da1s1h
drive d2 device /dev/da2s1h
drive d3 device /dev/da3s1h
drive d4 device /dev/da4s1h
drive d5 device /dev/da5s1h
drive d6 device /dev/da6s1h
volume bob plex bp
plex name bp org raid5 500k
sd len 0 drive d0
sd len 0 drive d1
sd len 0 drive d2
sd len 0 drive d3
sd len 0 drive d4
sd len 0 drive d5
sd len 0 drive d6
I init the plex (about 90 minutes go by) and all subdisks show initialized.
I type 'vinum start',  and then 'vinum list'
greybrd# vinum list
14 drives:
1 volumes:
V bob   State: up   Plexes:   1 Size: 50 GB
1 plexes:
P bp R5 State: up   Subdisks: 7 Size: 50 GB
7 subdisks:
S bp.s0 State: up   PO:0  B Size:   8675 MB
S bp.s1 State: up   PO:  500 kB Size:   8675 MB
S bp.s2 State: up   PO: 1000 kB Size:   8675 MB
S bp.s3 State: up   PO: 1500 kB Size:   8675 MB
S bp.s4 State: up   PO: 2000 kB Size:   8675 MB
S bp.s5 State: up   PO: 2500 kB Size:   8675 MB
S bp.s6 State: up   PO: 3000 kB Size:   8675 MB
All seems well (except that the drives don't show).  Just for grins, I 
do a 'vinum checkparity bp', and now all the subdisks show as crashed.  
I assume this is what happened when I tried the newfs also.

greybrd# vinum list
14 drives:
1 volumes:
V bob   State: up   Plexes:   1 Size: 50 GB
1 plexes:
P bp R5 State: up   Subdisks: 7 Size: 50 GB
7 subdisks:
S bp.s0 State: crashed  PO:0  B Size:   8675 MB
S bp.s1 State: crashed  PO:  500 kB Size:   8675 MB
S bp.s2 State: crashed  PO: 1000 kB Size:   8675 MB
S bp.s3 State: crashed  PO: 1500 kB Size:   8675 MB
S bp.s4 State: crashed  PO: 2000 kB Size:   8675 MB
S bp.s5 State: crashed  PO: 2500 kB Size:   8675 MB
S bp.s6 State: crashed  PO: 3000 kB Size:   8675 MB
If I stop and start vinum, the plex is hosed.
greybrd# vinum start
*** Warning: configuration updates are disabled. ***
Warning: defective objects
V bob   State: down Plexes:   1 Size:  0  B
P bp R5 State: init Subdisks:   -28 Size:  0  B
*** Warning: configuration updates are disabled. ***
(syslog output)
Oct 20 04:54:17 greybrd /kernel: vinum: bp.s0 is up
Oct 20 04:54:17 greybrd /kernel: vinum: bp.s1 is up
(repeated for all subdisks)
Oct 20 04:54:17 greybrd /kernel: vinum: bp is up (the plex)
Oct 20 04:54:17 greybrd /kernel: vinum: bob is up (the volume)
Oct 20 05:15:41 greybrd /kernel: vinum: bp.s0 is crashed by force
Oct 20 05:15:41 greybrd /kernel: vinum: bp.s1 is crashed by force
(msg repeated for all the subdisks)
Oct 20 05:21:29 greybrd /kernel: vinum: unloaded
Oct 20 05:21:34 greybrd /kernel: vinum: loaded
Oct 20 05:21:34 greybrd /kernel: vinum: reading configuration from 
/dev/da6s1h
Oct 20 05:21:34 greybrd /kernel: vinum: bp.s0: invalid keyword: 17767000s
Oct 20 05:21:34 greybrd /kernel: Disabling configuration updates
Oct 20 05:21:34 greybrd /kernel: vinum: No space for  on d0
Oct 20 05:21:35 greybrd /kernel: vinum: bp.s1: invalid keyword: 17767000s
Oct 20 05:21:35 greybrd /kernel: vinum: No space for  on d1
(msg repeated for the remaining subdisks)

This is what I dumped out of the disk header
IN VINOgreybrd.FQDNd0I7vA;^NAvA^Q;^^volume bob state down
plex