Buildworld taking a long time in 5.2.1
Hi, I have two dual-processor systems with PIII 450 MHz processors; both have an internal 9 GB SCSI disk on an Adaptec 2940UW controller. One is installed with FreeBSD 4.9-STABLE and 1024 MB RAM, the other with FreeBSD 5.2-CURRENT and 512 MB RAM. Both boxes have 1 GB of swap, but neither has ever used anything but the absolute minimum. The 4.9 server has cvsup installed to sync the src, ports and dev repositories; this is NFS-mounted on the 5.2 server and extracted to a local copy of the 5.x src tree. The 4.9 box extracts the RELENG_4 src locally.

On a frequent basis I buildworld and build kernels for all my servers. Currently the 4.9 box does a buildworld in approx 1:03 hours and builds 3 kernels in 30 minutes. On the 5.x box a buildworld takes 4.5 hours, and 1 kernel takes 1 hour to build. This is not a problem, as it all happens automatically overnight, but I am just interested in why the timings differ so much.

Thanks for any information. David.

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
Using syslog to separate out log messages
Hi, I have a small problem, well more of an annoyance than anything, but I was hoping that someone would be able to solve it for me. I have an Internet connection from Demon in the UK and I am using a D-Link DSL-300G+ ADSL modem to connect via. This device uses DHCP to request the network configuration from Demon and then offers it via DHCP to my firewall system. This all works fine and I am now in the process of securing the link with ipfw.

Now, it would appear that the D-Link box advertises the connection information to the firewall every 30 seconds or so, with the following entries:

Dec 31 00:07:21 gate dhclient: New Network Number: 62.49.18.0
Dec 31 00:07:21 gate dhclient: New Broadcast Address: 62.49.18.255
Dec 31 00:07:50 gate dhclient: New Network Number: 62.49.18.0
Dec 31 00:07:50 gate dhclient: New Broadcast Address: 62.49.18.255
Dec 31 00:08:19 gate dhclient: New Network Number: 62.49.18.0
Dec 31 00:08:19 gate dhclient: New Broadcast Address: 62.49.18.255

As you can see I get quite a lot of this rubbish in the 'messages' file, and I would like to move all the dhclient traffic into another log file that I can truncate/remove/ignore on a regular basis. When I try to direct the above entries with lines like

!+dhclient
*.*   /var/log/dhclient.log

or

!dhclient
*.*   /var/log/dhclient.log

or

!-dhclient
*.*   /var/log/dhclient.log

the above traffic still appears in 'messages', but now I also get the following additional messages every 30 seconds or so in the new file:

Dec 31 00:06:53 gate dhclient: DHCPREQUEST on fxp0 to 62.49.18.138 port 67
Dec 31 00:06:53 gate dhclient: DHCPACK from 62.49.18.138
Dec 31 00:06:53 gate dhclient: New Network Number: 62.49.18.0
Dec 31 00:06:53 gate dhclient: New Broadcast Address: 62.49.18.255
Dec 31 00:06:53 gate dhclient: bound to 62.49.18.137 -- renewal in 28 seconds.

Can anyone tell me how to stop these messages appearing in the 'messages' file? Thanks for your time, David

--
David Dooley [EMAIL PROTECTED]
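For what it's worth, the arrangement that usually achieves this in FreeBSD's syslog.conf is a program block for dhclient followed by a negated program filter placed ahead of the default rules; a sketch (the selector on the messages line is abbreviated, check your own file for the real one):

```
# send everything dhclient logs to its own file
!dhclient
*.*                                          /var/log/dhclient.log
# exclude dhclient from every rule below, including the 'messages' one
!-dhclient
*.notice;authpriv.none;kern.debug;mail.crit  /var/log/messages
```

The order matters: a program specification applies to every rule after it until the next `!` line, so the `!-dhclient` must come before the default /var/log/messages rule, and syslogd needs a HUP afterwards for the change to take effect.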
Using symbolic links for /usr/src and /usr/obj
Hi, I am using FreeBSD 4.7. Does anybody know if there is a way to have the build system use relative paths that include symbolic links, rather than absolute paths?

The question has come about because I would like to be able to automount my /usr/build file system from my dev box, where everything is compiled, on the target systems for installation. But when using the automounter the paths change, and it would appear the build system uses the absolute paths rather than the symbolic ones.

I am currently doing static mounts of a /usr/build file system through /etc/fstab. The /usr/src and /usr/obj directories are symbolically linked into /usr/build; as I export /usr/build from my 'dev' box, it is mounted as /usr/build on the remote systems, and the same symbolic links exist on the remote systems, all is well because the paths on all the boxes remain the same. As soon as I try to use the automounter to mount the /usr/build file system, the paths on the remote box are different and the install fails, complaining about not being able to find things because the paths have changed.

Any help would be most appreciated. David.
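One hedged possibility, rather than fighting the symlinks: the FreeBSD build system honours the MAKEOBJDIRPREFIX environment variable (see build(7)), so each box can be pointed at the shared tree directly. The paths below are illustrative:

```
# point the build/install at the automounted tree instead of /usr/src and /usr/obj
export MAKEOBJDIRPREFIX=/usr/build/obj
cd /usr/build/src
make installworld
```

The caveat, consistent with the problem described above, is that the src and obj paths seen at installworld time must match those used at buildworld time, so the same MAKEOBJDIRPREFIX and source path have to be used on every box.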
Help with a possible incantation for the automounter
Hi, Does anyone know if it is at all possible to use 'amd' to automount a file system locally and then have that file system automounted remotely? I would need to export the real mount point of a file system on box A and have the file system mounted at the same mount point on the remote box B.

Are there any alternatives to 'amd' at the moment for doing on-demand mounting of file systems? David.
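For reference, a minimal amd setup for on-demand NFS mounts looks something like this (the rc.conf flags mirror FreeBSD's stock ones; the map is the standard host-type map, and the symlink trick at the end is a suggestion, not something from the thread):

```
# /etc/rc.conf
amd_enable="YES"
amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map"

# /etc/amd.map -- mount any host's exports on demand under /host/<hostname>
/defaults   type:=host;fs:=${autodir}/${rhost}/host;rhost:=${key}
*           opts:=rw,grpid,resvport,nosuid,nodev
```

To keep the path identical on every box, a symlink such as /usr/build -> /host/boxA/usr/build could then be created on both box A and box B, so references resolve to the same name everywhere while amd handles the actual mounting.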
Has something broken the de network driver?
Hi all, I am having problems with a kernel built from cvsup'ed source last Sunday at 9pm. When I try to boot the kernel everything appears to work, except that it no longer probes the de network card correctly. I have booted with the 4.4 GENERIC kernel and that can find the card, and everything works fine in that the network comes up. This is the dmesg output from kernel.GENERIC booted tonight:

Nov 26 23:21:56 dev /kernel.GENERIC: de0: <Digital 21140A Fast Ethernet> port 0xc400-0xc47f mem 0xe501-0xe501007f irq 11 at device 5.0 on pci2
Nov 26 23:21:56 dev /kernel.GENERIC: de0: DEC DE500-AA 21140A [10-100Mb/s] pass 2.0
Nov 26 23:21:56 dev /kernel.GENERIC: de0: address 00:00:f8:02:fc:b2

This is the output from my previous working kernel, booted on the 6th of October:

Oct 6 09:58:17 dev /kernel: de0: <Digital 21140A Fast Ethernet> port 0xc400-0xc47f mem 0xe501-0xe501007f irq 11 at device 5.0 on pci2
Oct 6 09:58:17 dev /kernel: de0: DEC DE500-AA 21140A [10-100Mb/s] pass 2.0
Oct 6 09:58:17 dev /kernel: de0: address 00:00:f8:02:fc:b2

and here is the output from my latest kernel, which doesn't work:

Nov 25 22:48:11 dev /kernel: pci2: unknown card (vendor=0x1011, dev=0x0009) at 5.0 irq 11

Anyone any ideas as to what might have broken? I apologise for the wrapping of the dmesg output. Thanks for your help, David.
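A diagnostic sketch, not a fix: on the broken kernel, pciconf can show whether any driver claimed the card at all, which helps distinguish a probe failure from a missing driver in the kernel configuration:

```
# list PCI devices and which driver, if any, attached to each
pciconf -l
# the de card shows up with chip=0x00091011 (vendor 0x1011 = DEC,
# device 0x0009 = 21140A); a line beginning 'none0@pci...' means no
# driver in this kernel claimed it, which usually points at a missing
# 'device de' line (or a dependency of it) in the kernel config
```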
Network Driver not being probed during boot
Hi, Can anyone tell me if they are using the de network interface card and whether it is still working with a 4.7 kernel? I built a kernel approx 50 days ago and all was fine. Any kernel I build now with the latest supped sources, and the network card is not probed during boot. The card works fine if I boot a 4.4 GENERIC kernel. Has the de NIC been deprecated in favour of another driver, and if so, which? I can find nothing in the UPDATING file in /usr/src. A friend informs me that the de driver source has not been touched since some time in 2000, but something has changed. I am using the same kernel configuration file that I have used on this box for almost a year now, and it normally gets an upgrade every couple of months. If any further information is required I'd be pleased to supply it. Thanks for your time and consideration of this problem, David.
Flash plugin error with Mozilla
Hi, I get the following error when I start Mozilla:

LoadPlugin: failed to initialize shared library /usr/X11R6/lib/mozilla/plugins/libflashplayer_linux.so [/usr/X11R6/lib/mozilla/plugins/libflashplayer_linux.so: Undefined symbol overrideShellClassRec]

I have installed www/flashplugin-mozilla-0.4.10_2, de-installed it and re-installed it, and I still get the error. I have also installed www/linux_flashplugin-5.0r51, but still no joy. Can anyone tell me what port I need to install to get the flash plugin working? Thanks for your time. Any hints gratefully received. David.

--
David Dooley [EMAIL PROTECTED]
Re: Mirroring/load-balance two servers
The only problem with using DNS round robin like this is that, in this scenario, when 1 server is down, on average 1 in 3 requests to the web server will fail. As previous posters have commented, DNS should respond with the same 3 addresses but rotate the order each time; in the version I am using (named 8.3.4-REL, Sun Feb 9 01:23:18 GMT 2003, on 4.7-STABLE of the same date) it appears to return the addresses in some sort of random order, at least it does in my tests.

On Thu, 6 Mar 2003 12:09:06 -0800 Aaron Burke [EMAIL PROTECTED] wrote:

> > > To my knowledge, yes. Let's say you had a server called www. You would
> > > just give it two addresses in your domain configuration files.
> > > www IN CNAME 12.34.56.78
> > > www IN CNAME 9.10.11.12
> > > www IN CNAME 65.4.3.21
> >
> > That should be A records, not CNAMEs.
>
> Err, you are correct, my mistake.
>
> > > The DNS standard will give out a different address for every query. To
> > > get the address 12.34.56.78 twice, you would have to make 4 unique
> > > queries for the server records.
> >
> > Where does the standard say that? Most servers will return the records
> > in the same order each time by default, and my reading of the standards
> > is that this is perfectly acceptable behaviour.
>
> I have personally not read the standard. It is just information that's
> been given to me by some knowledgeable friends.

--
David Dooley [EMAIL PROTECTED]
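For reference, the corrected zone-file form of the example in the thread uses A records rather than CNAMEs (addresses are the example ones from the discussion):

```
; round-robin: most name servers rotate or shuffle the order of these
; answers between queries, but the standard does not require it
www     IN  A   12.34.56.78
www     IN  A   9.10.11.12
www     IN  A   65.4.3.21
```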
Sysinstall and drive geometry
Hi, I am wondering if anyone can offer me some advice with a problem I am having configuring drives and installing FreeBSD 5.2 on an Intel box.

The box has a SuperMicro motherboard with an Intel 1.6 GHz P4 686-class CPU and 1024 MB RAM, and there are currently 3 IDE controllers with 6 drive channels. 2 channels are provided by the motherboard (the FreeBSD probe reports the system controller as an Intel ICHR UDMA100 controller); this has what I want to be the bootable drive configured as primary master, no primary slave, and the secondary channel has a Ricoh CDRW drive (master) and a Samsung DVD-ROM (slave). Then there are 2 Promise PDC20286 UDMA100 IDE controllers installed, with drives configured thus:

Controller 1:
  Primary master    200GB drive  WDC WD2000JB-32EVA0/15.05R15
  Primary slave     100GB drive  WDC1000BB-00CCB0/22.04A22
  Secondary master  200GB drive  WDC WD2000JB-32EVA0/15.05R15
  Secondary slave   100GB drive  WDC1000BB-00CCB0/22.04A22

Controller 2:
  Primary master    200GB drive  WDC WD2000JB-32EVA0/15.05R15
  Primary slave     100GB drive  WDC1000BB-00CCB0/22.04A22
  Secondary master  200GB drive  WDC WD2000JB-32EVA0/15.05R15
  Secondary slave   100GB drive  WDC1000BB-00CCB0/22.04A22

All drives are connected with 80-pin flat ribbon cables to their respective controllers, with the master drive at the end of the cable.

The problem I am having is that whenever I go into sysinstall, either from the installer or /stand/sysinstall, it complains that the drive geometry is incorrect and suggests that it should be fixed; but if I try to set it to what the boot probe reports, it comes back, says the geometry is still incorrect, and won't allow me to change it. If I let it run with the geometry it selects, the install currently panics and hangs the box.
The FreeBSD boot probe reports the drive geometry of the WD800 system disk as 155061/16/63; sysinstall wants to use 9729/255/63. All the 200GB drives are reported by the boot probe as 387621/16/63; sysinstall would rather have 24321/255/63. Likewise with the 100GB drives: the probe reports 193821/16/63 and again sysinstall wants 12161/255/63. If I allow sysinstall to select what it thinks is a suitable drive geometry, after a reboot it has returned to the original and sysinstall complains all over again.

Please could anyone offer an explanation of the discrepancy between the probe and sysinstall, and how to marry the two together? I will be changing all the drive connector cables to ones that support UDMA100 so as to get the best speed out of the drives; I am unsure if this is the source of my problem, but at this point I will try anything.

Thanks for any help with this. Regards, David Dooley

--
David Dooley
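The two geometries are just different factorings of (nearly) the same total sector count: the probe reports a 16-head translation, while sysinstall prefers the BIOS-friendly 255-head one. A quick check with the WD800's figures as quoted above:

```shell
probe=$((155061 * 16 * 63))   # geometry from the FreeBSD boot probe
bios=$((9729 * 255 * 63))     # geometry sysinstall proposes
echo $probe   # 156301488 sectors
echo $bios    # 156296385 sectors: the same disk, minus cylinder rounding
```

The difference of a few thousand sectors is the remainder lost when the total is rounded down to whole 255*63-sector cylinders, which is why neither side will accept the other's numbers exactly.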
FreeBSD and IEEE1394 Disks
Hi, I am mucking about with firewire disks at the moment, currently on a Windose XP box, and having nothing but problems with Delayed Write Failures when I try to write data to the disks. I am thinking of moving the card to my FreeBSD 5.2 box, but I am curious as to how stable firewire disks are under FreeBSD, and whether any users experience problems with data loss when reading from and writing to the disks. I eventually want to use these disks as a backup device, but can only do so if they are rock-solidly reliable.

I am using an Adaptec 8300 card (a 64-bit card in a 32-bit PCI slot) connected to a FireWire bridge board with the Initio-2430L chip set (http://www.span.com/catalog/product_info.php?cPath=29_1308products_id=5625), which is in turn connected to 4 x Western Digital 200GB disks.

Thanks for your thoughts and suggestions. Regards, David Dooley
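For completeness, these are the kernel pieces a FreeBSD 5.2 kernel needs for SBP-2 firewire disks (all present in GENERIC; listed here as a sketch for anyone building a custom kernel):

```
device  firewire    # IEEE 1394 (OHCI) bus support
device  sbp         # SBP-2 storage protocol over firewire
device  scbus       # SCSI bus layer
device  da          # direct-access disk driver; sbp targets appear as da devices
```

Once attached, the disks show up as ordinary da devices, so they can be checked with camcontrol devlist and used like any SCSI disk.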
Creating Vinum objects on Hardware raided disks
Hi, My problem is that I cannot create vinum objects on a Promise TX2000 ATA RAID device. The system is a dual 450 MHz PIII on an Asus motherboard with 512MB RAM and 2 x 9GB SCSI drives as my boot devices. I have installed a Promise TX2000 ATA RAID controller and attached 4 x 200GB drives, which I have configured as a single 400GB RAID 0+1 on the controller. The OS ('FreeBSD ball.lan.raffles-it.com 4.9-STABLE FreeBSD 4.9-STABLE #1: Fri Dec 19 19:34:25 GMT 2003') sees the drives, and I can create a standard UFS partition and mount the drive. The output from dmesg showing the probed devices is:

ar0: 381469MB <ATA RAID0+1 array> [48630/255/63] status: READY subdisks:
 0 READY ad0: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata2-master UDMA100
 1 READY ad1: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata2-slave UDMA100
 2 READY ad2: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata3-master UDMA100
 3 READY ad3: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata3-slave UDMA100

I can create between 7 and 32 partitions this way, depending on how I use fdisk and disklabel, but they are fairly inflexible in terms of resizing, and it would be almost impossible to grow a file system. So I thought it might be possible to create lots of small vinum subdisks out of my single disk, then build plexes and volumes as and when I require them; when I want to resize a partition, I would add a new subdisk/plex to the volume and grow the file system into the new space.
The disklabel currently installed on ar0 is:

# disklabel -r ar0
/dev/ar0c:
type: ESDI
disk: ar0s1
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 48629
sectors/unit: 781240887
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 781240887      0     unused        0     0        # (Cyl. 0 - 48629*)
  e: 781240887      0     vinum                        # (Cyl. 0 - 48629*)

The configuration file I used to define the drive was

drive bigdrive device /dev/ar0
or
drive bigdrive device /dev/ar0e
or
drive bigdrive device /dev/ar0s1e

and the response I got each time was:

1: drive bigdrive device /dev/ar0e
** 1 Can't initialize drive bigdrive: Operation not supported by device
0 drives:
0 volumes:
0 plexes:
0 subdisks:

Between each invocation of the create command I did a resetconfig. Thanks for any light you can shed on this. David

--
David Dooley [EMAIL PROTECTED]
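Assuming the drive line eventually takes (it should name the vinum partition, /dev/ar0s1e or /dev/ar0e, never the whole disk), the carving described above would look roughly like this in a vinum configuration file. Names and sizes here are illustrative, not from the post:

```
# one vinum drive on the hardware-RAID device, several small volumes on it
drive big device /dev/ar0s1e

volume home
  plex org concat
    sd length 10g drive big

volume build
  plex org concat
    sd length 5g drive big

# growing a volume later means adding another subdisk to its concatenated
# plex and then running growfs on the filesystem inside it
```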
Vinum and Promise ATA RAID card
Hi, I am having problems creating vinum objects. I have 4 x 200GB disks connected to a Promise ATA FastTrak RAID controller, and the disks are defined as a 400GB RAID 0+1 drive. This all appears to be fine and is visible to the OS:

ar0: 381469MB <ATA RAID0+1 array> [48630/255/63] status: READY subdisks:
 0 READY ad0: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata2-master UDMA100
 1 READY ad1: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata2-slave UDMA100
 2 READY ad2: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata3-master UDMA100
 3 READY ad3: 190782MB <WDC WD2000JB-32EVA0> [387621/16/63] at ata3-slave UDMA100

As it stands I can label the disk and newfs it, and all works fine:

# /dev/ar0c:
type: ESDI
disk: ar0s1
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 48629
sectors/unit: 781240887
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 781240887      0     unused        0     0        # (Cyl. 0 - 48629*)
  e: 781240887      0     4.2BSD     2048 16384 89     # (Cyl. 0 - 48629*)

and after newfs, with the file system mounted:

Filesystem  1K-blocks  Used      Avail  Capacity  Mounted on
/dev/ar0e   384514280     2  353753136        0%  /mnt

Now what I would really like to do is create lots of vinum objects that I can use to create file systems from, extend them as and when needed, and also remove the whole 7-partition limitation.
So, after unmounting the file system and changing the disklabel fstype from 4.2BSD to vinum:

# disklabel -r ar0
/dev/ar0c:
type: ESDI
disk: ar0s1
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 48629
sectors/unit: 781240887
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 781240887      0     unused        0     0        # (Cyl. 0 - 48629*)
  e: 781240887      0     vinum                        # (Cyl. 0 - 48629*)

I tried the first step in creating a vinum volume, so I created a config file with the line

drive a device /dev/ar0e

I then fired up vinum, ran the command 'create -f /etc/vinum.config', and got the response:

1: drive a device /dev/ar0e
** 1 Can't initialize drive a: Operation not supported by device
0 drives:
0 volumes:
0 plexes:
0 subdisks:

I tried the config with 'drive a device /dev/ar0s1e' and got the same response. Can anyone tell me if vinum supports hardware RAID disks? I really want to use vinum to get rid of the limitation of 7 file systems/partitions per drive, so I can create a number of small volumes and add plexes as and when required to grow the file systems; I am not after the redundancy/resiliency aspects, as those are taken care of by the hardware. Thanks for any help and advice. David.

--
David Dooley [EMAIL PROTECTED]
Re: Vinum and Promise ATA RAID card
V usr       State: up    Plexes: 2  Size: 2530 MB
V Home      State: down  Plexes: 1  Size:   10 GB
V Build     State: down  Plexes: 1  Size: 5120 MB
V Shares    State: down  Plexes: 1  Size: 5120 MB
V Books     State: down  Plexes: 1  Size: 5120 MB
V MP3       State: down  Plexes: 1  Size:  120 GB
V Pictures  State: down  Plexes: 1  Size:   20 GB

12 plexes:
P var.p0       C  State: up      Subdisks: 1  Size: 1024 MB
P tmp.p0       C  State: up      Subdisks: 1  Size: 1024 MB
P usr.p0       C  State: up      Subdisks: 1  Size: 2530 MB
P var.p1       C  State: up      Subdisks: 1  Size: 1024 MB
P tmp.p1       C  State: up      Subdisks: 1  Size: 1024 MB
P usr.p1       C  State: up      Subdisks: 1  Size: 2530 MB
P Home.p0      C  State: faulty  Subdisks: 1  Size:   10 GB
P Build.p0     C  State: faulty  Subdisks: 1  Size: 5120 MB
P Shares.p0    C  State: faulty  Subdisks: 1  Size: 5120 MB
P Books.p0     C  State: faulty  Subdisks: 1  Size: 5120 MB
P MP3.p0       C  State: faulty  Subdisks: 1  Size:  120 GB
P Pictures.p0  C  State: faulty  Subdisks: 1  Size:   20 GB

12 subdisks:
S var.p0.s0       State: up       PO: 0 B  Size: 1024 MB
S tmp.p0.s0       State: up       PO: 0 B  Size: 1024 MB
S usr.p0.s0       State: up       PO: 0 B  Size: 2530 MB
S var.p1.s0       State: up       PO: 0 B  Size: 1024 MB
S tmp.p1.s0       State: up       PO: 0 B  Size: 1024 MB
S usr.p1.s0       State: up       PO: 0 B  Size: 2530 MB
S Home.p0.s0      State: crashed  PO: 0 B  Size:   10 GB
S Build.p0.s0     State: crashed  PO: 0 B  Size: 5120 MB
S Shares.p0.s0    State: crashed  PO: 0 B  Size: 5120 MB
S Books.p0.s0     State: crashed  PO: 0 B  Size: 5120 MB
S MP3.p0.s0       State: crashed  PO: 0 B  Size:  120 GB
S Pictures.p0.s0  State: crashed  PO: 0 B  Size:   20 GB

This is not looking so good. I cannot find a command to revive any of the components; I only get responses like

# vinum start Home
Can't start Pictures: Device busy (16)

or

# vinum init Home
Initializing volumes is not implemented yet

If I 'vinum rm -rf' each of the now-faulty volumes and use my create script again, all my drives are magically OK, and I can fsck and mount them just fine.
Not that I anticipate rebooting this box much beyond when I patch it to the latest release, but it will be a tad annoying to have to rebuild my volume configuration on each reboot, and once I start extending volumes it might get a bit stressful trying to rebuild everything in the correct order. Has anyone else any experience of using vinum on the ar0 device that doesn't require rebuilding the configuration after each reboot? Thanks for any help, David.

On Sat, 20 Dec 2003 16:32:28 +0000 David Dooley [EMAIL PROTECTED] wrote:

> Hi, I am having problems creating vinum objects. I have 4 x 200GB disks
> connected to a Promise ATA FastTrak RAID controller, and the disks are
> defined as a 400GB RAID 0+1 drive.
> [...]
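For reference, the supported way on 4.x to have vinum come up at boot with its on-disk configuration, rather than re-running a create script, is the rc.conf knob below. Whether it copes with a late-attaching ar0 device is exactly the open question here, so this is a sketch of the normal arrangement, not a confirmed fix:

```
# /etc/rc.conf
start_vinum="YES"   # runs 'vinum start' at boot, which reads the
                    # configuration vinum keeps on the drives themselves
```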
Geom and 5.4
Hi, I hope somebody can guide me as to where I have gone wrong. I have an Intel system with FreeBSD 5.3 installed: 1 system disk and 8 other drives connected to 2 Promise IDE controllers. The system disk is un-mirrored. The non-system disks are labelled ad4 through ad11, with ad4-ad7 on controller 1 and ad8-ad11 on controller 2. I have mirrored between controllers:

ad4 mirrored with ad8
ad5      ''       ad9
ad6      ''       ad10
ad7      ''       ad11

I was having lots and lots of stability problems with the disk subsystem and was advised, in an answer to a previous question, to go to 5.4. So I downloaded the CD ISO images of FreeBSD 5 and did a completely new install of 5.4 on the system disk. When I ran 'gmirror load', to my shock and delight all the geom drives were recognised and made available; after a little time resyncing the mirrors all was working fine. Yah.

The document I used to set up the GEOM stuff can be found at http://people.freebsd.org/~rse/mirror/ (I used the second set of instructions; obviously I did not do it on the system partitions but pretty much massaged it to what I wanted to do on all my other disks).

So, like a prat who can't leave well alone, I decided to set up a local copy of the source and rebuild the world and the kernel. This was done and all appeared to work fine: the builds completed without errors and the install happened. I did a mergemaster, and at this point I blatted a locally edited copy of /boot/loader.conf, removing the 'geom_mirror_load="YES"' directive; I didn't notice at the time, and went ahead with the reboot. The system started but failed to complete the boot when it came to the fsck of the disks mentioned in /etc/fstab. I looked around, found that I had lost the directive from loader.conf as detailed in the document mentioned above, put it back in, and went for a reboot. This time the system booted, the geom drives were all attached, and the system did its 15-second wait for the SCSI stuff to settle. Then, disaster.
The system complained with the following message:

Mounting root from ufs:/dev/da0s1a
setrootbyname failed
ffs_mountroot: can't find rootvp
Root mount failed: 6

Manual root filesystem specification:
  <fstype>:<device>  Mount <device> using filesystem <fstype>
                     e.g. ufs:/dev/da0s1a
  ?                  List valid disk boot devices
  <empty line>       Abort manual input

mountroot>

but anything I type results in an error and the manual root filesystem specification prompt being reprinted.

Now, I can take the geom_mirror_load directive out of loader.conf and have the boot fail when it cannot fsck the geom disks; then I can do a manual load of geom_mirror, fsck the disks, and press control-D to finish the boot. This works, but it is a little messy and not very automatic in the event of a system reboot. Can anyone suggest a workaround that is more automatic, or a way to fix the problem permanently? I was thinking of creating a single-disk geom'ed root disk, but I cannot find an incantation to mirror the root disk partitions while preserving the data on them.

Any assistance would be most appreciated. Regards, David Dooley
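For reference, the two standard ways to make the mirror class available before root is mounted; compiling it into the kernel has the advantage that a future installkernel or an edited loader.conf cannot silently drop it:

```
# /boot/loader.conf -- load the mirror class before the kernel mounts root
geom_mirror_load="YES"

# or, equivalently, build it into the kernel (kernel configuration file):
# options GEOM_MIRROR
```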