Re: Problem decryption disk
Hello Bruno,

You also need to install the packages cryptsetup-initramfs and cryptsetup-run, which may not be installed automatically with cryptsetup.

Nicolas

On 13/10/2021 at 11:29, bruno pinto wrote:
> Hello,
> When I encrypt my client's disk with cryptsetup, the machine reboots at the end of the installation and everything seems to go well. But when it boots after the install, it drops me into the initramfs; we think it cannot decrypt the encrypted disk. Thomas told me something about packages. I already have the cryptsetup package configured on my server, but do I need to add other packages that are unknown to me? Could anyone help me?
> Kind regards,
> ---
> Bruno FERREIRA PINTO  TEL: 0164468580
> Service Informatique IJCLab IN2P3/CNRS
> Université Paris-Sud 11
> Rue André Ampère, Bt 200, Pce 033
> BP 33, 91898 Orsay Cédex
> ---
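In an FAI setup, the missing packages can simply be added to the package configuration of a suitable class; a minimal sketch (the class/file name CRYPTO is made up, any class of the client works):

```
# $FAI/package_config/CRYPTO  (hypothetical file name)
PACKAGES install
cryptsetup
cryptsetup-initramfs
cryptsetup-run
```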
Re: nfs not responding
Hello,

On 10/06/2014 at 02:41, Peter Keller wrote:
> Hello, I have a question: sometimes, maybe 2% of the time, when FAI 4.2 finishes installing and is shutting down to reboot, I get into a state where messages are logged to the screen about NFS not responding, then ok, then not responding again, and so on. They repeat every 5 minutes or so. The machine stays in this state and never actually reboots, which forces a manual intervention in the automated install. The NFS server, AFAICT, was fine the whole time. The faiserver is a wheezy machine and I'm not using NFSv4. Has anyone ever seen this before?

I've seen this on a slow link, and solved it by reducing the size of the NFS packets:

fai-chboot ... -k nfsopts=-orsize=65536,wsize=65536 ...

This might work even if the problem is not related to network performance.

-- Nicolas
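A complete invocation along those lines might look like this (the hostname "demohost" and the other flags are illustrative, not from the original message):

```shell
# Pass smaller NFS read/write sizes to the install client's kernel
# command line via fai-chboot:
fai-chboot -IFv -k "nfsopts=-orsize=65536,wsize=65536" demohost
```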
2 small issues with 4.0.5
Hello,

FAI 4.0.5 behaves differently from my old 4.0 beta on two points; I wonder whether these are bugs or features:

1. In fai-make-nfsroot, procedure copy_fai_files(), the following line has been added before the script copies the FAI configuration files:

return # do not copy fai files at all

As I'm using a custom directory, fai.conf is not configured in the NFSROOT, and the FAI boot fails because FAI_CONFIG_SRC is not set. I just had to remove the 'return' line to make it work as expected.

2. When booting in 'sysinfo' mode, the client always reboots at the end because there is no error.log file, although I would like it to wait for me to press RETURN. Also very easy to fix, but this behavior looks weird to me.

Any clue?

Regards,
-- Nicolas
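For point 1, the local workaround can be scripted; this is only an illustrative one-liner against the installed script, not an upstream fix, and it assumes the stock Debian path for fai-make-nfsroot:

```shell
# Drop the early 'return' so copy_fai_files() copies the config files again:
sed -i '/^return # do not copy fai files at all/d' /usr/sbin/fai-make-nfsroot
```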
Re: fai 4.0.5: dracut boot ends with kernel panic
On 24/01/2013 at 11:22, Thomas Lange wrote:
>> I'm trying to use dracut in FAI 4.0.5 on a wheezy 64-bit server, and the initial boot on the nfsroot quickly goes into a kernel panic:
>> [ 11.578824] dracut: Mounted root filesystem a.b.c.d:/srv/fai/wheezy/nfsroot-amd64
>> [ 11.581213] aufs: module is from the staging directory, the quality is unknown, you have been warned
>> [ 11.582117] aufs 3.2-20120827 warning: can't open /etc/fstab: No such file or directory
>> [ 12.010629] aufs test_add:261:mount[366]: uid/gid/perm /live/image 65534/65534/0755, 0/0/01777
>> [ 12.015811] type=1702 audit(1358959337.356:2): op=follow_link action=denied pid=371 comm=ls path=/sysroot/initrd.img dev=aufs ino=149
>> [ 12.015995] type=1702 audit(1358959337.356:2): op=follow_link action=denied pid=371 comm=ls path=/sysroot/vmlinuz dev=aufs ino=165
>> /init: 42: /lib/dracut/hooks/pre-pivot/50mount-usr.sh: cannot open /sysroot/etc/fstab: No such file
>> [ 12.260523] dracut: Switching root
>> rpcbind: rpcbind terminating on signal. Restart with rpcbind -u
>> switch_root: failed to execute /sbin/init: Not a directory
>> [ 12.262090] Kernel panic - not syncing: attempting to kill init!
> I just ran into the same problem when I tried an installation from a newly installed machine running wheezy. The problem is that NFSv4 is not configured properly in your setup (and mine), and therefore the install client (and dracut) cannot mount the nfsroot.
> I tried to configure idmapd on the server to have NFSv4 work as expected, but as idmapd needs the nfsroot to be mounted before it can start on the client, this can hardly work.
> A line is missing in /etc/exports that contains an entry including fsid=0. I just added this line to my /etc/exports, created an empty directory /srv/nfs4 and called exportfs -ra:
> /srv/nfs4    1.2.3.4/28(rw,sync,fsid=0,crossmnt,no_subtree_check)
> That did the trick.

Yes it does, thanks, I had not found this one.

> P.S.: I did not manage to disable NFSv4 on the install server, because it's not sufficient to edit RPCMOUNTDOPTS in /etc/default/nfs-kernel-server; a client still mounts via NFSv4.

Same for me, and -onfsvers=3 on the client does not work either.

-- Nicolas
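Put together, the server-side fix described above amounts to the following (the 1.2.3.4/28 network is the example address from the message; adjust it to your subnet):

```shell
# Create the NFSv4 pseudo-root and export it with fsid=0:
mkdir -p /srv/nfs4
cat >> /etc/exports <<'EOF'
/srv/nfs4    1.2.3.4/28(rw,sync,fsid=0,crossmnt,no_subtree_check)
EOF
exportfs -ra
```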
Re: FAI performance
On 24/09/2012 at 14:03, Thomas Lange wrote:
>> On Fri, 21 Sep 2012 22:06:27 +0200, Michał Dwużnik <michal.dwuz...@gmail.com> said:
>> By the way, what are the default options used by FAI to mount the NFS share when installing (rsize in particular, atime)?
> Using a squeeze install server and FAI 3.4.8 I get these NFS parameters from cat /proc/mounts:
> 1.2.3.149:/srv/fai/nfsroot-squeeze64 /live/image nfs ro,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,port=65535,timeo=7,retrans=3,sec=sys,mountport=65535,addr=1.2.3.149 0 0
> IMO there's no need to set the rsize parameter.

I had to do so to install a few hosts through a 10 Mb/s WAN: rsize=65536,wsize=65536. Higher values lead to frequent NFS time-outs.

-- Nicolas
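For reference, the rsize actually negotiated can be read back from /proc/mounts; a small sketch, using as input the exact mount line quoted above:

```shell
# Extract the negotiated rsize from a /proc/mounts entry.
# The sample line is the one quoted in the message above.
line='1.2.3.149:/srv/fai/nfsroot-squeeze64 /live/image nfs ro,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,port=65535,timeo=7,retrans=3,sec=sys,mountport=65535,addr=1.2.3.149 0 0'
# Split the options field on commas and keep the rsize value:
rsize=$(printf '%s\n' "$line" | tr ',' '\n' | sed -n 's/^rsize=//p')
echo "rsize=$rsize"
```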
Re: Still having problem with configuring FAI
On 15/06/2012 at 15:39, Michael Senizaiz wrote:
> If the only difference between your kernel append lines is that you are referencing a different kernel and initrd, with all the other bits the same, I would suggest adding /lib/modules/`64 bit kernel` into the initrd of the i386 and booting with that. This portion of the boot isn't actually FAI-specific, but just part of the initrd boot-up process. If your 64-bit kernel isn't a Debian-style kernel with scripts/live etc., then it won't get all the variables the same (or it may be using a different method for setting up the network; the rootserver variable is actually set by the binary /bin/ipconfig in the initrd). It would be helpful if you posted your PXE boot config for both kernels.
> Have a look here at what I found about the way networking is set up with the initrd:
> http://www.mail-archive.com/linux-fai@uni-koeln.de/msg04573.html
> On Fri, Jun 15, 2012 at 3:13 AM, Nicolas Courtel <cour...@cena.fr> wrote:
>> On 14/06/2012 at 19:00, Michael Senizaiz wrote:
>>> What kernel are you using? Are you using 'boot=live'? The boot= parameter tells it what script in scripts/ to run after init, and only live and nfs will use the rootserver variable.
>> Yes, boot=live is added by fai-chboot; I'm pretty sure my config is correct, as it used to work a couple of weeks ago, and is still working on i386.
>> -- Nicolas

Well, I'm now sure that the problem is not related to FAI or to my config: I just need to build the nfsroot with the 27th May wheezy snapshot to make my server work fine (of course only the initrd is relevant). Between this snapshot and the current version, live-initramfs has not changed, but the klibc-utils package has been upgraded from version 2.0~rc3-1 to 2.0-2, so I believe the change is in ipconfig.

-- Nicolas
Re: Still having problem with configuring FAI
On 15/06/2012 at 17:46, Nicolas Courtel wrote:
> Well, I'm now sure that the problem is not related to FAI or to my config: I just need to build the nfsroot with the 27th May wheezy snapshot to make my server work fine (of course only the initrd is relevant). Between this snapshot and the current version, live-initramfs has not changed, but the klibc-utils package has been upgraded from version 2.0~rc3-1 to 2.0-2, so I believe the change is in ipconfig.

OK, I got it: in version 2.0-2 of klibc-utils, ipconfig writes the ethernet variables to /run/net-eth0.conf, and no longer to /tmp/net-eth0.conf as it used to, but live is still looking for the latter, so no variable gets set. live-initramfs needs to be updated to work again.

-- Nicolas
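Until live-initramfs itself is fixed, one conceivable stopgap (an untested sketch, not from the original thread) is to make the old path point at the new one early in the boot scripts:

```shell
# Hypothetical workaround, e.g. in an initramfs boot script run after
# ipconfig: let tools that still read /tmp/net-eth0.conf find the file
# that klibc-utils 2.0-2 now writes under /run.
[ -e /run/net-eth0.conf ] && ln -sf /run/net-eth0.conf /tmp/net-eth0.conf
```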
Re: Still having problem with configuring FAI
On 12/06/2012 at 19:14, Steve B. wrote:
> I keep getting this error when the target boots:
> Trying netboot from :/srv/fai/nfsroot ..
> Begin: Trying nfsmount -o nolock -ro :/srv/fai/nfsroot /live/image ..
> nfsmount: can't parse IP address ' '
> and then an endless loop of the error can't parse IP address ' '.

An easy workaround for this problem is to prepend the server address in the NFSROOT variable, like this:

NFSROOT=192.168.1.1:/srv/fai/nfsroot

And then run fai-chboot again for the target.

-- Nicolas
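Concretely, the workaround looks like this (server address and hostname are examples; the file holding NFSROOT depends on the FAI version, e.g. /etc/fai/make-fai-nfsroot.conf):

```shell
# In the FAI server configuration, give NFSROOT an explicit server address:
NFSROOT=192.168.1.1:/srv/fai/nfsroot

# Then regenerate the PXE configuration for the target host:
fai-chboot -IFv demohost
```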
Re: Still having problem with configuring FAI
Hello,

I've been seeing the same problem for a few days, only on the amd64 arch: the name of the NFS server is missing in front of :/srv/fai/nfsroot. It's not an FAI bug, as FAI has not started at this point; it looks more like a kernel bug to me. FAI 4 had been working fine for me before this bug showed up, and is still working on the i386 arch with the same configuration (the server runs squeeze and uses multiple nfsroots). I haven't figured out a workaround yet; if anyone has an idea I would be glad to try it.

-- Nicolas

On 12/06/2012 at 19:14, Steve B. wrote:
> I keep getting this error when the target boots:
> Trying netboot from :/srv/fai/nfsroot ..
> Begin: Trying nfsmount -o nolock -ro :/srv/fai/nfsroot /live/image ..
> nfsmount: can't parse IP address ' '
> and then an endless loop of the error can't parse IP address ' '.
> These issues started after the FAI upgrade to v4, and I'm not sure what to configure, since v4 changes a lot of the config files and the online doc is still for v3.
> Thanks
> Steve B.
Re: Still having problem with configuring FAI
On 13/06/2012 at 13:56, Thomas Lange wrote:
>> I've been seeing the same problem for a few days, only on the amd64 arch: the name of the NFS server is missing in front of :/srv/fai/nfsroot. It's not an FAI bug, as FAI has not started at this point; it looks more like a kernel bug to me. FAI 4 had been working fine for me before this bug showed up, and is still working on the i386 arch with the same configuration (the server runs squeeze and uses multiple nfsroots).
> Are you using dracut or live-boot inside the nfsroot?

AFAIK I use vanilla live-initramfs, and haven't added any extra feature.

-- Nicolas
Setup-storage failure for LVM on wheezy
Hello,

Trying to install a wheezy host using FAI 4.0~beta3+experimental7, I have some trouble with setup-storage; the current disk partitioning is the following:

sda1 = primary partition for /boot
sda2 = primary partition for the LVM volumes = vg0/root, vg0/usr, ...

The new partitioning should be the same, without preserving anything, because of alignment issues. This is what happens:

[...]
Executing: wipefs -a /dev/sda1
Executing: vgchange -a n vg0
Executing: wipefs -a vg0/var
Command had non-zero exit code

There seem to be two problems:
- the wipefs argument should be /dev/vg0/var
- vgchange -a n should not be called before wipefs, as it prevents wipefs from seeing the partition

I have tweaked Commands.pm so that it does not fail in this case, but a clean fix might be useful in a future version of setup-storage.

-- Nicolas
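For reference, a command order that would avoid both problems (an illustrative sketch of what setup-storage could run, not the actual patch):

```shell
wipefs -a /dev/sda1
wipefs -a /dev/vg0/var   # full device path, while vg0 is still active
vgchange -a n vg0        # deactivate the volume group only afterwards
```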
Re: lvm unbootable disk
On 24/11/2011 at 12:10, Natxo Asenjo wrote:
> disk_config sda bootable:1
> primary  /boot  500    ext3  rw
> primary  -      4096-  -     -
>
> disk_config lvm
> vg my_pv sda2
> my_pv-_swap  swap  2048  swap  sw
> my_pv-_root  /     2048  ext3  rw
>
> which is exactly the config in the simple LVM example in the setup-storage man page. After installation the system is unbootable:
> gave up waiting for root device
> ALERT!!! /dev/mapper/my_pv-root does not exist, dropping to shell
> What am I doing wrong? Does anybody have a working config for LVM setup-storage with one disk? This is on a VMware VM, but that should not matter.

Your problem may be related to using underscores; could you try using something like 'pv-swap' and 'pv-root' for your partitions? I use a very similar config file that works fine.

-- Nicolas
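The suggestion amounts to a config like this (a sketch with the sizes from the quoted config, and names without underscores; the volume group name 'pv' is illustrative):

```
disk_config sda bootable:1
primary  /boot  500    ext3  rw
primary  -      4096-  -     -

disk_config lvm
vg pv sda2
pv-swap  swap  2048  swap  sw
pv-root  /     2048  ext3  rw
```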
Re: Problem partitioning dual-boot
On 04/10/2011 at 18:01, John G. Heim wrote:
> Before we go any further on this problem, I should ask if anybody else is creating dual-boot systems with Windows 7 and FAI? I think an important thing to know would be whether this is a FAI problem or if it's just me. I am doing a rather weird Win7 install with an autounattend.xml answer file. So maybe it's just me.

I have successfully installed some. AFAIR I have preserved the two partitions that are used by Windows 7 (and sometimes the diagnostic partition), and once Debian is installed, os-prober inserts the appropriate line into Grub2; both Windows and Debian squeeze boot normally.

-- Nicolas
Re: Disk config
On 16/06/2011 at 12:14, Ahmed Altaher wrote:
> Dear all, I succeeded in installing Ubuntu 10.10 from a Debian squeeze based FAI. But my problem is: how could I preserve the 1st partition, which includes Windows 7? My disk config is:
> disk_config sda preserve_always:1,2 bootable:3
> primary  -      0      -     -
> primary  -      0      -     -
> primary  /boot  512    ext3  rw
> logical  -      22000  -     -

From what you asked on IRC, I guess you just want to boot your Windows 7 after Ubuntu has been installed. This is a grub issue: you just need to install the 'os-prober' package and then run 'update-grub' to make Windows show up in the Grub menu.

In the future I suggest you send complete and comprehensive questions to either the mailing list or IRC, rather than sending parts of the puzzle to each; otherwise no one will bother to answer you.

-- Nicolas
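The suggested fix, as commands run in the installed system (or prefixed with $ROOTCMD from within FAI):

```shell
apt-get install os-prober   # lets grub detect other operating systems
update-grub                 # regenerate the menu; Windows should now appear
```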
Re: xen tools problems
On 10/01/2011 at 14:49, mamadou diop wrote:
> Hello, after creating my virtual machine with xen-tools and restarting xend, I get this error:
> grep: /proc/xen/capabilities: No such file or directory

You might want to check whether /proc/xen has been mounted, and post your Xen problems to the appropriate list.

-- Nicolas
Re: setup-storage fails when preserving LVM volume
On 17/12/2010 at 09:23, Michael Tautschnig wrote:
>> [...]
>> Use of uninitialized value $p in concatenation (.) or string at /usr/share/fai/setup-storage//Init.pm line 289.
>> /dev/sda2 will be preserved
>> vg0/local will be preserved
>> [...]
>> (CMD) parted -s /dev/sda mklabel msdos 1> /tmp/A_p3dpsLFC 2> /tmp/ZQLJLH5omp
>> Executing: parted -s /dev/sda mklabel msdos
>> Command had non-zero exit code
>> (STDOUT) Error: Partition(s) 2 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
>> [...]
> I hope to have addressed all the above problems in 4.0~beta2+experimental48. Could you please give that one another try and report back? Could you please even send debug logs in case it works fine? As said in the other thread just moments ago, I'm trying to do a bit of cleanup work as well.

This new version does fix the issue, thanks. I'll send you the format.log in a private mail.

-- Nicolas
Re: setup-storage fails when preserving LVM volume
> I hope to have addressed all the above problems in 4.0~beta2+experimental48. Could you please give that one another try and report back? Could you please even send debug logs in case it works fine? As said in the other thread just moments ago, I'm trying to do a bit of cleanup work as well.
> This new version does fix the issue, thanks. I'll send you the format.log in a private mail.

Well well, I finally happen to have another problem with this new version; setup-storage prints the following sentence and then quits quickly:

Executing: mdadm --examine --scan --verbose -c partitions
Previous partitions overflow begin of preserved partition /dev/sda2

I tried several values for the /local volume, but I always get the same message. Here's the debug format.log, which happens to be pretty short.

-- Nicolas

Starting setup-storage 1.3+exp
disklist: sda
Using config file: /var/lib/fai/config/disk_config/auzon
Input was:
# Configuration LVM standard : les partitions sont extensibles
disk_config disk1
primary  /boot  512     ext3  rw
primary  -      32000-  -     -
disk_config lvm fstabkey:uuid preserve_reinstall:vg0-local
#disk_config lvm fstabkey:uuid
vg vg0 disk1.2
vg0-swap   swap    2GiB    swap  sw
vg0-root   /       1GiB    ext3  rw
vg0-var    /var    4GiB    ext3  rw
vg0-usr    /usr    8GiB    ext3  rw
vg0-opt    /opt    2GiB    ext3  rw
vg0-local  /local  20GiB-  ext3  rw
(CMD) parted -s /dev/sda unit TiB print 1> /tmp/VoENpO9GP9 2> /tmp/iAyirJMaGs
Executing: parted -s /dev/sda unit TiB print
(STDOUT) Model: ATA WDC WD1600AAJS-6 (scsi)
(STDOUT) Disk /dev/sda: 0.15TiB
(STDOUT) Sector size (logical/physical): 512B/512B
(STDOUT) Partition Table: msdos
(STDOUT)
(STDOUT) Number  Start    End      Size     Type     File system  Flags
(STDOUT)  1      0.00TiB  0.00TiB  0.00TiB  primary  ext3         boot
(STDOUT)  2      0.00TiB  0.15TiB  0.15TiB  primary               lvm
(STDOUT)
(CMD) parted -s /dev/sda unit B print free 1> /tmp/eKm1feG9g2 2> /tmp/QoiYnqPQxE
Executing: parted -s /dev/sda unit B print free
(STDOUT) Model: ATA WDC WD1600AAJS-6 (scsi)
(STDOUT) Disk /dev/sda: 160041885696B
(STDOUT) Sector size (logical/physical): 512B/512B
(STDOUT) Partition Table: msdos
(STDOUT)
(STDOUT) Number  Start          End            Size           Type     File system  Flags
(STDOUT)  1      32256B         534643199B     534610944B     primary  ext3         boot
(STDOUT)  2      534643200B     160039272959B  159504629760B  primary               lvm
(STDOUT)         160039272960B  160041885695B  2612736B       Free Space
(STDOUT)
(CMD) parted -s /dev/sda unit chs print free 1> /tmp/ECQg6UZJU_ 2> /tmp/P1410qoz1z
Executing: parted -s /dev/sda unit chs print free
(STDOUT) Model: ATA WDC WD1600AAJS-6 (scsi)
(STDOUT) Disk /dev/sda: 19457,80,62
(STDOUT) Sector size (logical/physical): 512B/512B
(STDOUT) BIOS cylinder,head,sector geometry: 19457,255,63. Each cylinder is 8225kB.
(STDOUT) Partition Table: msdos
(STDOUT)
(STDOUT) Number  Start      End           Type     File system  Flags
(STDOUT)  1      0,1,0      64,254,62     primary  ext3         boot
(STDOUT)  2      65,0,0     19456,254,62  primary               lvm
(STDOUT)         19457,0,0  19457,80,62   Free Space
(STDOUT)
Finding all volume groups
Finding volume group vg0
Creating directory /etc/lvm/archive
Archiving volume group vg0 metadata (seqno 11).
Creating directory /etc/lvm/backup
Creating volume group backup /etc/lvm/backup/vg0 (seqno 11).
Finding all volume groups
Finding volume group vg0
Finding all volume groups
Finding volume group vg0
Finding all volume groups
Finding volume group vg0
(CMD) mdadm --examine --scan --verbose -c partitions 1> /tmp/JPF0vBUYns 2> /tmp/jxNtkFqh_Z
Executing: mdadm --examine --scan --verbose -c partitions
Previous partitions overflow begin of preserved partition /dev/sda2
Current disk layout
$VAR1 = {
  '/dev/sda' => {
    'bios_heads' => '255',
    'disklabel' => 'msdos',
    'partitions' => {
      '1' => {
        'count_byte' => '534610944',
        'filesystem' => 'ext3',
        'begin_byte' => '32256',
        'is_extended' => 0,
        'end_byte' => '534643199'
      },
      '2' => {
        'count_byte' => '159504629760',
        'filesystem' => '',
        'begin_byte' => '534643200',
        'is_extended' =>
Re: setup-storage fails when preserving LVM volume
Hello Michael,

>>> Executing: mdadm --examine --scan --verbose -c partitions
>>> Previous partitions overflow begin of preserved partition /dev/sda2
>> I tried several values for the /local volume, but I always get the same message. Here's the debug format.log, which happens to be pretty short.
>> [...]
> There seems to be some problem with the size or position of your /boot partition. It might be caused by a change in alignment, which may even have introduced a bug!? Could you please change the size of /boot to 511, report back and, if it works, also do yet another run and send the debug logs of both runs?

The /boot partition was built by FAI 3.3.5 (setup-storage 1.2.1), asking for a size of 512 as I still do now. The parted command was the following:

parted -s /dev/sda mkpart primary ext3 32256B 534643199B

The installation does not work any better after setting the size of /boot to 511; the log is exactly the same as with 512. Setting the size to 480 makes it work, though; that log is attached. I have also tried to preserve the existing /boot partition, and this worked fine:

disk_config disk1 preserve_lazy:1 always_format:1
primary  /boot  0  ext3  rw
[...]

-- Nicolas

format.log.bz2
Description: application/bzip
Re: setup-storage fails when preserving LVM volume
Hello Michael,

>> The full log is there: http://paste.debian.net/101879/
> As I'm utterly slow in processing all mails right now, I am too late: the paste has expired. Could you paste it again, or, even better, send the logs via email?

Of course, here it is. I should have attached it at once, as it's quite small.

-- Nicolas

format.log.bz2
Description: application/bzip
Re: probably no grub-install in FAI installation
Hello,

> We are currently testing a FAI squeeze installation and struggled with the grub2 installation. We are using FAI server v3.4.5 and install the current squeeze grub-pc package v1.98+20100804-8. After the installation the system does not boot from the hard drive. It turned out that '/usr/sbin/grub-install' never gets called in the postinst script of the grub-pc package. A quick and dirty workaround is a '$ROOTCMD grub-install' in a script or a hook, which is probably not meant to be the preferred way. We added 'set -x' to the postinst script of the grub-pc package to increase the verbosity: the 'db_get grub-pc/install_devices' in line 494 does not set the RET variable, which skips the next 'for' loop, so grub-install never gets called. We are not sure, but probably an interactive debconf configuration is required at this point. Has anybody seen a similar symptom? Is there a clean way to solve this problem?
> Thank you and cheers, Henning

You might be able to do it with debconf, but the easy way is to call grub-install in an FAI script, as done in fai-doc's examples (/usr/share/doc/fai-doc/examples/simple/scripts/GRUB_PC):

[...]
$ROOTCMD grub-mkdevicemap -n -m /boot/grub/device.map
$ROOTCMD grub-mkconfig -o /boot/grub/grub.cfg
$ROOTCMD grub-install --no-floppy (hd0)
[...]

-- Nicolas
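A complete FAI customization script built from that example might look like this (the script path is illustrative; the three commands are the ones shipped in fai-doc's GRUB_PC example):

```shell
#! /bin/bash
# Illustrative $FAI/scripts/GRUB_PC/10-setup, modeled on
# /usr/share/doc/fai-doc/examples/simple/scripts/GRUB_PC.
set -e
$ROOTCMD grub-mkdevicemap -n -m /boot/grub/device.map
$ROOTCMD grub-mkconfig -o /boot/grub/grub.cfg
$ROOTCMD grub-install --no-floppy "(hd0)"
```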
Re: preserving dos partitions
mamadou diop wrote:
> Hello, I have configured my /srv/fai/config/disk_config/FAIBASE so that the Windows partition is preserved after installation. I am sure that the Windows partition is preserved, because during installation setup_harddisks told me that the /dev/sda1 drive was going to be preserved. Also, I have added the GRUB class to my host. After installation, I didn't see Windows in the OS choice menu; there is only Ubuntu in the menu of Operating Systems. What have I forgotten to do?

man grub? When using grub legacy, you need to add a few lines at the end of /boot/grub/menu.lst, like the following:

title Windows
rootnoverify (hd0,0)
makeactive
chainloader +1
boot

If you use grub-pc, just install os-prober and run update-grub; Windows should show up.

-- Nicolas
Re: FAI 3.4.0 config space mount problem
Hello Michael,

> please send a patch against the debian-old-2.0 branch of live-boot, thanks.
>> Patch against debian-old-1.0 (couldn't find debian-old-2.0 in the public live-boot, though there shouldn't be a difference):
>> http://grml.org/patches/0001-workaround-aufs-issue-in-kernel-versions-around-2.6..patch
>> -mika-

I have a similar problem (/var/lib/fai doesn't exist in the aufs system) with a squeeze nfsroot built from FAI 3.3.5 experimental (lenny), and the patch does not fit the live-initramfs 2.0.0-1 that is installed in the nfsroot, as there is no script 05mountpoints. I have tried to insert your patch into live-bottom/08persistence_excludes, as you can see below, but it doesn't work. Do you have an idea of what I could do to make it work?

-- Nicolas

--- 08persistence_excludes.~1~  2010-08-10 01:51:40.0 +0200
+++ 08persistence_excludes      2010-08-30 11:30:21.0 +0200
@@ -72,6 +72,13 @@
 # Bind mount it to origin
 mount -o bind ${PERSTMP}/${dir} /root/${dir}
+
+# aufs2 in kernel versions around 2.6.33 has a regression:
+# directories can't be accessed when read for the first time,
+# causing a failure for example when accessing /var/lib/fai
+# when booting FAI; this simple workaround solves it
+ls /root/* > /dev/null 2>&1
+
 done
 log_end_msg
setup-storage sets wrong data in /etc/fstab
Hello,

Using FAI 3.4~beta6+experimental2 (also checked with experimental4 this morning) for squeeze and LVM, I have a funny fstab problem. setup-storage resolves the links in /dev/<LVM volume> during installation to build /etc/fstab:

(CMD) readlink -f /dev/vg0/var 1> /tmp/mPvYMCBCMk 2> /tmp/TcD7OA66rq
Executing: readlink -f /dev/vg0/var
(STDOUT) /dev/dm-0
[...]
(CMD) readlink -f /dev/vg0/swap 1> /tmp/ls625DMAbK 2> /tmp/cAQYclvajY
Executing: readlink -f /dev/vg0/swap
(STDOUT) /dev/dm-1
[...]

But when the host boots, the links are all mixed up, so it doesn't work so well:

# readlink /dev/vg0/var
../dm-1
# readlink /dev/vg0/swap
../dm-2
...

Shouldn't setup-storage use the links in /dev/<vg> or /dev/mapper, or even UUIDs, rather than /dev/dm-*, in /etc/fstab?

-- Nicolas
Re: setup-storage sets wrong data in /etc/fstab
Michael Tautschnig wrote:
> Hi Nicolas,
>> Using FAI 3.4~beta6+experimental2 (also checked with experimental4 this morning) for squeeze and LVM, I have a funny fstab problem. [...]
>> Shouldn't setup-storage use the links in /dev/<vg> or /dev/mapper, or even UUIDs, rather than /dev/dm-*, in /etc/fstab?
> Thanks for the report and debugging effort. Hmm, so doing the readlink thing is not a good idea it seems; maybe we should change this, yes. But for the moment I'd suggest you just go for UUIDs, which we might want to make the default. You can achieve this by adding fstabkey:uuid to your disk/LVM config line, like:
> disk_config lvm fstabkey:uuid
> Hope this helps,
> Michael

I didn't know this option; it works much better this way :-) Thanks for the quick reply,

-- Nicolas
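With fstabkey:uuid, setup-storage keys the fstab entries on filesystem UUIDs instead of device paths; the resulting lines look roughly like this (a sketch, the UUIDs and fields are made up for illustration):

```
# /etc/fstab generated with 'disk_config lvm fstabkey:uuid' (illustrative)
UUID=0a1b2c3d-1111-2222-3333-444455556666  /var  ext3  rw  0  2
UUID=4e5f6a7b-7777-8888-9999-aaaabbbbcccc  none  swap  sw  0  0
```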
Re: problem no create initrd.img fai-setup debian squeeze
Dionis Hernandez wrote:
> Hi colleagues, I'm having problems with fai-setup when it creates the debian chroot for squeeze: it does not create the initrd.img for me, which is necessary for the netboot. I am using the kernel-image-trunk 06/02/1932 in the NFSROOT.

You need to use option -U of fai-setup.

-- Nicolas
Re: FAI installation instructions
>>> The problem is: I'm stuck at creating the minimal Ubuntu system/debootstrap base image. In other words: what is the next step after apt-get install debootstrap?
>> Roughly, it should be something like this:
>> # debootstrap hardy /some/where/hardy http://some-ubuntu-mirror/
>> # tar zc -C /some/where/hardy -f /my/fai/config/basefiles/HARDY.tar.gz .
>> And then add your host in /my/fai/config/class/50-host-class, with the HARDY variable, configure disk, packages, etc., and start the installation.
> ...and fail, because /etc/apt/sources.list got overwritten during prepareapt.

Well, he's supposed to write a hook for this; it's paragraph e. of his blog. I was only completing the debootstrap part; no doubt he will also have problems with the other parts of the configuration, like any of us ;-)

-- Nicolas
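Such a hook could look roughly like this (an untested sketch; the hook/task name and the mirror URL are assumptions, and the exact hook naming depends on the FAI version):

```shell
#! /bin/sh
# Hypothetical $FAI/hooks/<task>.HARDY: restore an Ubuntu sources.list
# in the target after FAI's prepareapt step has overwritten it.
cat > $target/etc/apt/sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu hardy main universe
EOF
```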
Re: FAI installation instructions
jurgen.lams...@telenet.be wrote:
> [...] What I exactly want is to make FAI install Ubuntu 8.04/10.04 and CentOS 5.4 on Dell PE1X50/RX10 hardware, and to document my journey by creating a step-by-step guide. I want to use Debian lenny as install server/mirror host (because I want FAI 3.5.5, and as far as I tested that's not possible on Ubuntu 8.04 because of a syslinux-common dependency problem). In other words, I want my colleague sitting next to me to be able to set up the complete automatic provisioning system using my instructions at http://akoestica.be/blog/home/sysadmin/15-provisioning/49-installing-fai. The problem is: I'm stuck at creating the minimal Ubuntu system/debootstrap base image. In other words: what is the next step after apt-get install debootstrap?

Roughly, it should be something like this:

# debootstrap hardy /some/where/hardy http://some-ubuntu-mirror/
# tar zc -C /some/where/hardy -f /my/fai/config/basefiles/HARDY.tar.gz .

And then add your host in /my/fai/config/class/50-host-class, with the HARDY variable, configure disk, packages, etc., and start the installation.

-- Nicolas
setup-storage: preserve_lazy for LVM fails on empty disk
Hello,

Context: FAI 3.4~beta1+experimental11, the disk is empty, and the preserve_lazy option is set for two LVM volumes. The volume group is not created by setup-storage: pvcreate is followed by vgextend instead of vgcreate. I'm pretty sure this has worked before; I think I successfully used it with experimental9. Of course, I just need to remove the preserve_lazy option to make it work again. The format.log file is at http://paste.debian.net/71013/

-- Nicolas
Re: setup-storage: preserve_lazy for LVM fails on empty disk
>> Context: FAI 3.4~beta1+experimental11, the disk is empty, and the preserve_lazy option is set for two LVM volumes. The volume group is not created by setup-storage: pvcreate is followed by vgextend instead of vgcreate. I'm pretty sure this has worked before; I think I successfully used it with experimental9. Of course, I just need to remove the preserve_lazy option to make it work again. The format.log file is at http://paste.debian.net/71013/
> It is claimed that vg0 exists (lines 68-72), but that seems pretty odd. Could I ask you to redo the experiments, maybe even just do a setup-storage run without -X, but rename vg0 to vg1 in the config? It seems likely that this entry in the hash is unintentionally created. Could you try to confirm that?

Same result: vg1 is in the hash, although it has never existed on this disk.

-- Nicolas
Re: Still puzzled by setup-storage
>> Today I've got the same result twice with preserve_lazy: setup-storage claims that it will preserve volumes, nevertheless removes them, and then fails because it can't find them. But no kernel error. http://paste.debian.net/69021/
> Ok, I had a stray 'last;' to break out of the loop that works on the logical volumes to be preserved. This is fixed as of 3.4~beta1+experimental9. Could you give that one another try?

Works quite well, whether the preserved partitions already exist or not. Looks like you got rid of all the bugs I submitted, good job! :-)

-- Nicolas
Re: Still puzzled by setup-storage
> Is it by any means possible for me to get remote access to that system to do some more try&error? Otherwise I can just ask you to try to re-do the same steps setup-storage makes, manually. It's too strange.

I would be happy to give you remote access but... I can't reproduce the problem today, although I did two days ago. Same hardware, same setup-storage configuration, but quite a few packages have been updated, including experimental FAI. I just found out that a similar error message had also shown up during several successful installations until two days ago, for example http://paste.debian.net/69022/

Today I've got the same result twice with preserve_lazy: setup-storage claims that it will preserve volumes, nevertheless removes them, and then fails because it can't find them. But no kernel error. http://paste.debian.net/69021/

-- Nicolas
Re: resizing an lvm volume with setup-storage
> Could you run e2fsck interactively, doing e2fsck -p -f /dev/vg0/usr, to see whether a safe repair can be done non-interactively? I wonder if your filesystem is corrupted anyway and whether the e2fsck run before resize2fs would even be necessary otherwise.

Works fine after an fai-sysinfo boot:

r...@lutil:~# e2fsck -p -f /dev/vg0/usr
/dev/vg0/usr: 41374/393216 files (0.6% non-contiguous), 227117/1572864 blocks
r...@lutil:~#

At this time resize2fs doesn't require a prior e2fsck, though.

> So I hope that the addition of -p fixes this (3.4~beta1+experimental7).

It does :-) Looks like it's the end of this everlasting thread. Thanks for your great job on setup-storage.

-- Nicolas
Re: resizing an lvm volume with setup-storage
>> You will need one more try, as resize2fs is still complaining:
>> (CMD) resize2fs /dev/vg0/usr 16777216s 1> /tmp/Hdv5kRmDAd 2> /tmp/FCrWPAlXCo
>> Executing: resize2fs /dev/vg0/usr 16777216s
>> Command resize2fs /dev/vg0/usr 16777216s had exit code 1
>> (STDERR) resize2fs 1.41.11 (14-Mar-2010)
>> (STDERR) Please run 'e2fsck -f /dev/vg0/usr' first.
>> The full log is at http://paste.debian.net/68019/.
> Could you give 3.4~beta1+experimental4 another try? This one does the suggested e2fsck instead of forcing something on resize2fs.

Still failing; the last option given to e2fsck seems to be wrong:

[...]
Executing: lvresize -L 8192 vg0/usr
(STDOUT) Extending logical volume usr to 8.00 GiB
(STDOUT) Logical volume usr successfully resized
(CMD) e2fsck -f /dev/vg0/usr 16777216s 1> /tmp/7enTWTQVxG 2> /tmp/U_mywiuc75
Executing: e2fsck -f /dev/vg0/usr 16777216s
Command e2fsck -f /dev/vg0/usr 16777216s had exit code 16
(STDERR) Usage: e2fsck [-panyrcdfvtDFV] [-b superblock] [-B blocksize]
[...]

The full log is at http://paste.debian.net/68671/.

-- Nicolas
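For reference, the intended sequence when growing an ext3 logical volume looks like this (an illustrative sketch: e2fsck takes no size argument, only resize2fs does):

```shell
e2fsck -f /dev/vg0/usr            # check the filesystem first (no size argument)
lvresize -L 8G vg0/usr            # grow the logical volume
resize2fs /dev/vg0/usr 16777216s  # then grow the filesystem to the new size
```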
Re: Still puzzled by setup-storage
Michael Tautschnig wrote: Ok, this will need some more experimental work I'm afraid. Could you try to do some manual steps and report back before we actually build this into setup-storage? On that particular host, I'd like to know how

parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary ext3 512B 536871423B
parted -s /dev/sda mkpart primary 536871424B 80026361855B
parted -s /dev/sda set 1 boot on
mkfs.ext3 /dev/sda1
parted -s /dev/sda set 2 lvm on
vgchange -a y vg0

behaves in comparison to

vgchange -a n vg0
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary ext3 512B 536871423B
parted -s /dev/sda mkpart primary 536871424B 80026361855B
parted -s /dev/sda set 1 boot on
mkfs.ext3 /dev/sda1
parted -s /dev/sda set 2 lvm on
vgchange -a y vg0

Apart from the extra output line, I can see no difference: in both cases, the LVM volumes are back and active after the last vgchange. Here's the full output of the second command list. [...] Could you maybe do an explicit vgchange -a y vg0 beforehand? Somehow it should be possible to reproduce the error that parted reports when doing, e.g., the second mkpart. That error was "Error: Error informing the kernel about modifications to partition /dev/sda2 -- Device or resource busy." in http://paste.debian.net/67978/. I had started lvm before running the commands, but it's not enough to reproduce the problem. I have also tried to run them after having stopped the installation, which didn't work either. I finally inserted the vgchange line in Commands.pm; it seems to have been run as expected, and the result is still an error that happens later.
The result is at http://paste.debian.net/68689/, and my (quite sloppy :-) ) patch is below:

--- Commands.pm.~1~ 2010-04-13 13:32:32.0 +0200
+++ Commands.pm 2010-04-13 15:20:11.0 +0200
@@ -806,6 +806,7 @@
     or die "Can't change disklabel, partitions are to be preserved\n";
   # write the disklabel to drop the previous partition table
+  FAI::push_command( "vgchange -a n vg0", "vg_created_vg0", "vg_disabled_vg0" );
   FAI::push_command( "parted -s $disk mklabel $label", "exist_$disk", "cleared1_$disk" );

-- Nicolas
Re: resizing an lvm volume with setup-storage
Still failing, the last option given to e2fsck seems to be wrong: [...] Oh well, copy-paste is evil, isn't it? Could you please give 3.4~beta1+experimental6 another try? e2fsck is still grumpy:

(CMD) e2fsck -f /dev/vg0/usr 1> /tmp/2jcv6xhbVE 2> /tmp/qT7zDl1RU4
Executing: e2fsck -f /dev/vg0/usr
Command e2fsck -f /dev/vg0/usr had exit code 8
(STDERR) e2fsck 1.41.11 (14-Mar-2010)
(STDERR) e2fsck: need terminal for interactive repairs

http://paste.debian.net/68701/ -- Nicolas
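As an aside on the exit codes seen in this thread (16 for the bad arguments, 8 for the "need terminal" failure): per the e2fsck(8) man page, the exit status is a bit mask, with 1 = errors corrected, 2 = reboot needed, 4 = errors left uncorrected, 8 = operational error, 16 = usage error. A small illustrative sketch (the helper name is made up, not part of FAI):

```shell
# Hypothetical decoder for e2fsck's exit status, which is a bit mask
# per the e2fsck(8) man page.
decode_e2fsck_status() {
  code=$1
  out=""
  [ $((code & 4)) -ne 0 ] && out="$out errors-left"
  [ $((code & 8)) -ne 0 ] && out="$out operational-error"
  [ $((code & 16)) -ne 0 ] && out="$out usage-error"
  echo "${out# }"
}
decode_e2fsck_status 8    # -> operational-error (the "need terminal" case)
decode_e2fsck_status 16   # -> usage-error (the earlier bad-arguments case)
```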
Re: resizing an lvm volume with setup-storage
e2fsck is still grumpy:

(CMD) e2fsck -f /dev/vg0/usr 1> /tmp/2jcv6xhbVE 2> /tmp/qT7zDl1RU4
Executing: e2fsck -f /dev/vg0/usr
Command e2fsck -f /dev/vg0/usr had exit code 8
(STDERR) e2fsck 1.41.11 (14-Mar-2010)
(STDERR) e2fsck: need terminal for interactive repairs

http://paste.debian.net/68701/

Could you run e2fsck interactively, doing e2fsck -p -f /dev/vg0/usr, to see whether a safe repair can be done non-interactively? I wonder whether your filesystem is corrupted anyway and whether the e2fsck run before resize2fs would even be necessary otherwise. Works fine after an fai-sysinfo boot:

r...@lutil:~# e2fsck -p -f /dev/vg0/usr
/dev/vg0/usr: 41374/393216 files (0.6% non-contiguous), 227117/1572864 blocks
r...@lutil:~#

At this time resize2fs doesn't require a prior e2fsck, though. -- Nicolas
Re: Still puzzled by setup-storage
Michael Tautschnig wrote: Michael Tautschnig wrote: [...] The much nicer approach is introducing something like preserve_lazy:1 (a tribute to lazyformat...; but hints for better names are welcome) which causes behaviour exactly as you described. Shouldn't be too much of a hassle, but I cannot tell whether it'll be done in a day or a week. Now that one effectively took nearly three months... Sorry for the delay. I've added a patch to the experimental builds; it's in version 3.3.4~beta1+experimental2. Testing of the new preserve_lazy option is much appreciated. Looks like I'm the first one again :-) . It works as expected on a partitionless disk, but fails if the LVM volume already exists (I just tried to install the same host again): although setup-storage claims that '(volume) will be preserved', it runs parted to create the volume group. Debug log is at http://paste.debian.net/65875/. Unfortunately, I was too busy recently and didn't manage to fix it quickly enough: the paste has timed out/is lost. Nicolas, could you do another try and paste the logs again? Sure, I've sent the same log again at http://paste.debian.net/67978/ -- Nicolas
Re: resizing an lvm volume with setup-storage
Michael Tautschnig wrote: I believe resizing of logical volumes with ext2/ext3 should work as of 3.3.5+experimental1; for the moment, resize2fs will *not* be used on normal partitions, as I'd need to take huge pains to make that work reliably, because resizing of the underlying partition cannot be done using parted *without* resizing the filesystem as well. Adding support for this is scheduled for some later release of parted (see the first Q in the resizing section of http://www.gnu.org/software/parted/faq.shtml). I'm afraid it doesn't: FAI fails in task partition. fai.log is at http://paste.debian.net/68000/. I don't understand the distinction you make here between the underlying partition and the filesystem: AFAIK we only need to resize the LVM volume, which has nothing to do with parted, and the filesystem. The volume group does not need to be resized. I surely missed something... -- Nicolas
Re: resizing an lvm volume with setup-storage
In this case, the problem is somewhat unrelated: the partitions don't seem to fit on the disk in this way. That is, there isn't sufficient space for 512 * 1024 * 1024 bytes before sda2. Was that layout created using setup-storage? Probably yes. What I do suspect is some rounding issue, and, well, this is the culprit: the partition has been created so as to end at a cylinder boundary, which is considered for the final disk layout, but not for intermediate checks. My mistake, sorry. I built the filesystem with setup-storage, using FAI version 3.3.4 and for sda1 a size of '512', with no unit. I then ran 3.3.5-experimental2 to resize /usr, with the same size of '512' for sda1, which seems to give a different result. I have done it again using a size of 512MiB, and it works as expected: the volume is resized, but the filesystem is not. -- Nicolas
Re: resizing an lvm volume with setup-storage
Michael Tautschnig wrote: In this case, the problem is somewhat unrelated: the partitions don't seem to fit on the disk in this way. That is, there isn't sufficient space for 512 * 1024 * 1024 bytes before sda2. Was that layout created using setup-storage? Probably yes. What I do suspect is some rounding issue, and, well, this is the culprit: the partition has been created so as to end at a cylinder boundary, which is considered for the final disk layout, but not for intermediate checks. My mistake, sorry. I built the filesystem with setup-storage, using FAI version 3.3.4 and for sda1 a size of '512', with no unit. I then ran 3.3.5-experimental2 to resize /usr, with the same size of '512' for sda1, which seems to give a different result. I have done it again using a size of 512MiB, and it works as expected: the volume is resized, but the filesystem is not. Huh? Why is that expected behaviour? Shouldn't everything be resized? Could you paste the logs? http://paste.debian.net/68014/ I really misunderstood your previous mail, where you said resize2fs will *not* be used on normal partitions. I see you're using resize2fs in this case, but it fails. -- Nicolas
Re: resizing an lvm volume with setup-storage
In this case, the problem is somewhat unrelated: the partitions don't seem to fit on the disk in this way. That is, there isn't sufficient space for 512 * 1024 * 1024 bytes before sda2. Was that layout created using setup-storage? Probably yes. What I do suspect is some rounding issue, and, well, this is the culprit: the partition has been created so as to end at a cylinder boundary, which is considered for the final disk layout, but not for intermediate checks. My mistake, sorry. I built the filesystem with setup-storage, using FAI version 3.3.4 and for sda1 a size of '512', with no unit. I then ran 3.3.5-experimental2 to resize /usr, with the same size of '512' for sda1, which seems to give a different result. I have done it again using a size of 512MiB, and it works as expected: the volume is resized, but the filesystem is not. Huh? Why is that expected behaviour? Shouldn't everything be resized? Could you paste the logs? http://paste.debian.net/68014/ I really misunderstood your previous mail, where you said resize2fs will *not* be used on normal partitions. I see you're using resize2fs in this case, but it fails. Err, I should have looked at the resize2fs man page more closely. I somehow had assumed that 512-byte sectors were the default. Added the necessary s as unit in 3.3.5+experimental3. Could you please retry and report back whether it's fixed? You will need one more try, as resize2fs is still complaining:

(CMD) resize2fs /dev/vg0/usr 16777216s 1> /tmp/Hdv5kRmDAd 2> /tmp/FCrWPAlXCo
Executing: resize2fs /dev/vg0/usr 16777216s
Command resize2fs /dev/vg0/usr 16777216s had exit code 1
(STDERR) resize2fs 1.41.11 (14-Mar-2010)
(STDERR) Please run 'e2fsck -f /dev/vg0/usr' first.

The full log is at http://paste.debian.net/68019/. -- Nicolas
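For the record, the number passed to resize2fs above is easy to check: the trailing "s" makes resize2fs interpret it as 512-byte sectors, so an 8 GiB logical volume comes out as 16777216 sectors. A quick arithmetic check:

```shell
# An 8 GiB volume expressed in 512-byte sectors, as resize2fs's "s"
# unit suffix expects; this matches the "16777216s" seen in the log.
bytes=$((8 * 1024 * 1024 * 1024))
sectors=$((bytes / 512))
echo "${sectors}s"   # -> 16777216s
```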
Re: resizing an lvm volume with setup-storage
In this case, the problem is somewhat unrelated: the partitions don't seem to fit on the disk in this way. That is, there isn't sufficient space for 512 * 1024 * 1024 bytes before sda2. Was that layout created using setup-storage? Probably yes. What I do suspect is some rounding issue, and, well, this is the culprit: the partition has been created so as to end at a cylinder boundary, which is considered for the final disk layout, but not for intermediate checks. My mistake, sorry. I built the filesystem with setup-storage, using FAI version 3.3.4 and for sda1 a size of '512', with no unit. I then ran 3.3.5-experimental2 to resize /usr, with the same size of '512' for sda1, which seems to give a different result. I have done it again using a size of 512MiB, and it works as expected: the volume is resized, but the filesystem is not. Huh? Why is that expected behaviour? Shouldn't everything be resized? Could you paste the logs? http://paste.debian.net/68014/ I really misunderstood your previous mail, where you said resize2fs will *not* be used on normal partitions. I see you're using resize2fs in this case, but it fails. Err, I should have looked at the resize2fs man page more closely. I somehow had assumed that 512-byte sectors were the default. Added the necessary s as unit in 3.3.5+experimental3. Could you please retry and report back whether it's fixed? You will need one more try, as resize2fs is still complaining:

(CMD) resize2fs /dev/vg0/usr 16777216s 1> /tmp/Hdv5kRmDAd 2> /tmp/FCrWPAlXCo
Executing: resize2fs /dev/vg0/usr 16777216s
Command resize2fs /dev/vg0/usr 16777216s had exit code 1
(STDERR) resize2fs 1.41.11 (14-Mar-2010)
(STDERR) Please run 'e2fsck -f /dev/vg0/usr' first.

The full log is at http://paste.debian.net/68019/. Could you briefly hack a -f onto the resize2fs calls in Commands.pm? The man page doesn't quite say whether this will override the e2fsck requirement; if not, we'll really need to do so, which might be pretty time-consuming.
It does override the e2fsck requirement, and resize2fs seems ok, but the result is, well, unexpected :-) :

Filesystem            1K-blocks      Used  Available Use% Mounted on
/dev/mapper/vg0-usr     6159800  -1278720    7123948    - /target/usr

After the end of the installation, the partition is still 6G wide. -- Nicolas
Re: resizing an lvm volume with setup-storage
Hello Michael, Could you give 3.3.4+experimental2 another chance? That one should not do pvcreate on volumes that are part of that volume group already. I found another issue with the LVM resize option: it always preserves the partition. This is not appropriate: when resizing /usr as I do, one expects the partition to be cleaned before the new installation. Also, setup-storage fails if the partition does not already exist:

Can't preserve /dev/vg0/usr because it does not exist

IMHO the resize option should behave as follows:

if (the volume exists) {
  resize the volume
  if (a preserve* flag is also set for this volume) {
    resize the filesystem
  } else {
    create a new filesystem
  }
} else {
  ignore the resize flag and create a new volume + filesystem
}

-- Nicolas
Re: resizing an lvm volume with setup-storage
I found another issue with the LVM resize option: it always preserves the partition. This is not appropriate: when resizing /usr as I do, one expects the partition to be cleaned before the new installation. Also, setup-storage fails if the partition does not already exist:

Can't preserve /dev/vg0/usr because it does not exist

IMHO the resize option should behave as follows:

if (the volume exists) {
  resize the volume
  if (a preserve* flag is also set for this volume) {
    resize the filesystem
  } else {
    create a new filesystem
  }
} else {
  ignore the resize flag and create a new volume + filesystem
}

I believe that it's mostly a matter of naming and/or using the options somewhat differently:

- If you don't want to preserve data, there is no reason to use resize or preserve. Just specify the size and you get what you want.
- If you want to preserve data but need to change sizes, use resize.
- If you really want setup-storage not to touch some volume, use one of the preserve options.

Where preserve is one of preserve_always, preserve_reinstall or preserve_lazy. The latter seems to be still buggy, as you noted; that has to be fixed. I'm not sure whether this was a problem of missing documentation/clarification or rather a real wish to alter behaviour. Right, removing the existing volume and creating a new one is just as good as resizing. I had just missed the easier way to do it. The only remaining question is what to do if the volume to be resized does not exist: setup-storage may either complain as it currently does, or ignore the resize flag. As you say, it's mainly a documentation issue. My feeling is that the volume line (i.e. "I want an 8GiB volume") has a higher weight than the resize option, but although I represent 100% of the users for now, I may be wrong. -- Nicolas
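To restate the convention just described as a config sketch (a hypothetical fragment using the option names from this thread; the sizes and vg0-var line are made up for illustration, not a tested configuration):

```
disk_config lvm resize:vg0-usr
vg vg0 sda2
# listed in resize: -> volume and filesystem are resized, data kept
vg0-usr  /usr  16GiB  ext3  rw
# no resize/preserve flag -> volume is recreated and newly formatted
vg0-var  /var  4GiB   ext3  rw
```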
Re: Still puzzled by setup-storage
Michael Tautschnig wrote: [...] The much nicer approach is introducing something like preserve_lazy:1 (a tribute to lazyformat...; but hints for better names are welcome) which causes behaviour exactly as you described. Shouldn't be too much of a hassle, but I cannot tell whether it'll be done in a day or a week. Now that one effectively took nearly three months... Sorry for the delay. I've added a patch to the experimental builds; it's in version 3.3.4~beta1+experimental2. Testing of the new preserve_lazy option is much appreciated. Looks like I'm the first one again :-) . It works as expected on a partitionless disk, but fails if the LVM volume already exists (I just tried to install the same host again): although setup-storage claims that '(volume) will be preserved', it runs parted to create the volume group. Debug log is at http://paste.debian.net/65875/. -- Nicolas
Re: resizing an lvm volume with setup-storage
Hello Michael, I think you haven't landed yet :-) There are still some obstacles out there in space. But let me give you some new coordinates: 3.3.4+experimental1. That version ought to work better. Well, /usr is not removed any more, but the kernel seems unhappy: http://paste.debian.net/64861/ Could you give 3.3.4+experimental2 another chance? That one should not do pvcreate on volumes that are part of that volume group already. It's working much better now: the volume is resized, but the filesystem is not. A call to resize2fs seems to be missing from the log: http://paste.debian.net/65380/ -- Nicolas
Re: resizing an lvm volume with setup-storage
Michael Tautschnig wrote: Hello Michael, I think you haven't landed yet :-) There are still some obstacles out there in space. But let me give you some new coordinates: 3.3.4+experimental1. That version ought to work better. Well, /usr is not removed any more, but the kernel seems unhappy: http://paste.debian.net/64861/ Could you give 3.3.4+experimental2 another chance? That one should not do pvcreate on volumes that are part of that volume group already. It's working much better now: the volume is resized, but the filesystem is not. A call to resize2fs seems to be missing from the log: http://paste.debian.net/65380/ Well, ideally parted would do. But that doesn't seem to work:

Command parted -s /dev/vg0/usr resize 1 0 8192B had exit code 1
(STDOUT) Error: File system has an incompatible feature enabled. Compatible features are has_journal, dir_index, filetype, sparse_super and large_file. Use tune2fs or debugfs to remove features.

Do you have any idea which options you enabled on that filesystem that aren't supported by parted? I'm not sure whether resize2fs works in those cases. I haven't set any options on the filesystem; it has been created by setup-storage using mkfs.ext3 /dev/vg0/usr. The current options are the following: has_journal ext_attr resize_inode dir_index filetype sparse_super large_file. I can clear 'resize_inode' with tune2fs, but not ext_attr, so parted still fails. However, resize2fs works fine, whatever the options are, and doesn't need the final size:

r...@lutil:~# resize2fs /dev/vg0/usr
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/vg0/usr to 2097152 (4k) blocks.
The filesystem on /dev/vg0/usr is now 2097152 blocks long.

-- Nicolas
Re: Side effect in setup-storage using 3.3.4~beta1+experimental3
Michael Tautschnig wrote: Err, sorry, one should really test their code before releasing it. 3.3.4~beta1+experimental7 should be online in a few minutes and finally fix those problems; at least it worked on my system. Best, Michael Still failing a little later; I've put the log at http://paste.debian.net/64576/. -- Nicolas
Re: Side effect in setup-storage using 3.3.4~beta1+experimental3
Michael Tautschnig wrote: Michael Tautschnig wrote: Err, sorry, one should really test their code before releasing it. 3.3.4~beta1+experimental7 should be online in a few minutes and finally fix those problems; at least it worked on my system. Best, Michael Still failing a little later; I've put the log at http://paste.debian.net/64576/. Hmm, that's weird: it seemingly cannot match sda to any of the configured disks. Could you please replace /usr/share/fai/setup-storage/Volumes.pm (in your NFSROOT) with the attached one and once again paste the output? Here you go: http://paste.debian.net/64596/ -- Nicolas
resizing an lvm volume with setup-storage
Hello, I'm trying to resize the /usr volume while installing a host in squeeze with 3.3.4~beta2+experimental1, and setup-storage strangely removes the volume before trying to resize it:

Starting setup-storage 1.2.1+exp
[...]
vg0/usr will be resized
[...]
Executing: lvremove -f vg0/usr
[...]
Executing: lvresize -L 8192 vg0/usr
Command lvresize -L 8192 vg0/usr had exit code 5
[...]

Following is the config file; as it's the first time I use this option, it might be wrong somewhere.

disk_config sda disklabel:msdos bootable:1
primary  /boot  512     ext3  rw
primary  -      30GiB-  -     -

disk_config lvm resize:vg0-usr
vg vg0 sda2
vg0-swap    swap            2GiB  swap  sw
vg0-root    /               1GiB  ext3  rw
vg0-var     /var            4GiB  ext3  rw
vg0-usr     /usr            8GiB  ext3  rw
vg0-opt     /opt            2GiB  ext3  rw
vg0-home    /export/home    6GiB  ext3  rw
vg0-projet  /export/projet  6GiB  ext3  rw

I have also tried to preserve and resize the volume at the same time; not that I find it useful, but just to see if it would work better: it's preserved, but not resized. Looks to me that in this case setup-storage should either say "don't do that, you idiot", or gracefully resize both the volume and the filesystem. Unless the filesystem doesn't support resizing, of course. -- Nicolas
Side effect in setup-storage using 3.3.4~beta1+experimental3
Hello Michael, After rebuilding my squeeze nfsroot with fai 3.3.4~beta1+experimental3, setup-storage fails on a disk config that was working with experimental1:

Can't use string ("") as an ARRAY ref while "strict refs" in use at /usr/share/fai/setup-storage//Volumes.pm line 83, <$config_file> line 1.

The full debug log is at http://paste.debian.net/64515/. -- Nicolas
squeeze, setup-storage and LVM
Hello all, In Debian Squeeze, the get_volume_group_list function from Linux::LVM returns sizes in GiB instead of GB, and setup-storage fails on these. As the module documentation says units are in GB, it looks to me like a bug in Linux::LVM, caused by a new default behavior of vgdisplay, which now uses GiB as its default unit. However, I haven't seen such a bug report, so my observation may be wrong. Should I open a bug report? For now I have patched setup-storage so that it ignores the extra 'i'. -- Nicolas
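A minimal sketch of the kind of workaround described above (illustrative only; the real patch lives in setup-storage's Perl code, and the helper name here is made up): drop the extra "i" so that a size vgdisplay reports in GiB parses like the GB unit the Linux::LVM documentation promises.

```shell
# Rewrite a trailing "GiB" unit as "GB" so the rest of the parser can
# stay unchanged. Purely a sketch of the idea, not the actual patch.
normalize_unit() {
  echo "$1" | sed 's/GiB$/GB/'
}
normalize_unit "232.88 GiB"   # -> 232.88 GB
```

Note this only renames the unit; if a caller really needs GB (powers of ten) rather than GiB (powers of two), the number itself would also have to be converted.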
Re: setup-storage: error
Alexander Bugl wrote: Hi Nicolas! Could you post the result of the following commands? They are the ones used by setup-storage to find the disk configuration:

# parted -s /dev/sda unit TiB print
# parted -s /dev/sda unit chs print free

Both produce the same result:

# parted -s /dev/sda unit TiB print
Error: /dev/sda: unrecognised disk label
# parted -s /dev/sda unit chs print free
Error: /dev/sda: unrecognised disk label

But shouldn't the disk label be set by setup-storage? disk_config sda disklabel:msdos Yes, it's supposed to do so with parted -s /dev/sda mklabel msdos, but for some reason it has not, and neither has it printed any error message about it. Strange. I'm afraid you will have to wait for Michael for an explanation. In the meantime you could try to label the disk yourself; it might be enough to make setup-storage happy. -- Nicolas
Re: setup-storage: error
Alexander Bugl wrote: Tried that; after manually labeling the drive both parted commands produced no errors, so I restarted the installation. But it still stops with an error:

++ debug=1
Calling task_install
Calling task_partition
Partitioning local harddisks using setup-storage
Starting setup-storage 1.0.5
disklist was:
Using config file: /var/lib/fai/config/disk_config/SDA_VAR_SRV

Oops, I've missed this line in your former mail: disklist is not supposed to be empty! Output should be at least "disklist was: sda". It looks like the nfsroot doesn't see /dev/sda. -- Nicolas
Re: setup-storage: error
Alexander Bugl wrote: Oops, I've missed this line in your former mail: disklist is not supposed to be empty! Output should be at least "disklist was: sda". It looks like the nfsroot doesn't see /dev/sda. After the installation stops with the error, I have a shell on the still running machine:

# fdisk -l
Disk /dev/sda: 146.6 GB, 146685296640 bytes
255 heads, 63 sectors/track, 17833 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00090545

Device Boot Start End Blocks Id System

So I think inside the NFSroot sda is visible without problems. I must agree, but will try again :-) . Can you run '/usr/lib/fai/disk-info' in your shell? -- Nicolas
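As a quick aside, the geometry line in that fdisk header is internally consistent and can be checked arithmetically:

```shell
# With 255 heads and 63 sectors/track, one cylinder is 255 * 63 = 16065
# sectors; at 512 bytes per sector that is the 8225280 bytes fdisk reports.
sectors_per_cyl=$((255 * 63))
bytes_per_cyl=$((sectors_per_cyl * 512))
echo "$sectors_per_cyl sectors, $bytes_per_cyl bytes per cylinder"
```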
Re: setup-storage: error
Alexander Bugl wrote: Hi Nicolas, thanks for your reply.

disk_config sda disklabel:msdos
primary  /     12G   ext3  rw,errors=remount-ro
primary  swap  4G    swap  rw
logical  /tmp  2G    ext3  rw  createopts="-m 1"
logical  /var  50G-  ext3  rw  createopts="-m 5"
logical  /srv  0-    ext3  rw,nosuid  createopts="-m 0" tuneopts="-c 0 -i 0"

Finding all volume groups
(CMD) mdadm --detail --scan --verbose -c partitions 1> /tmp/aaLkZ6M0w9 2> /tmp/QAfOE1bFA4
Executing: mdadm --detail --scan --verbose -c partitions
Use of uninitialized value in multiplication (*) at /usr/share/fai/setup-storage//Sizes.pm line 628.

The machine is a Sun x4240 with currently 15 HDDs; the first two are configured as a RAID 1 volume using the HW RAID controller built into the x4240. Looks like setup-storage can't compute the sector size or the number of sectors per track. Isn't your raid disk larger than 2 TB? In that case you probably need to put a gpt label on it instead of msdos. Sorry, I forgot to include the fdisk output, which would have shown that I don't have a problem with 2 TB:

# fdisk -l
Disk /dev/sda: 146.6 GB, 146685296640 bytes
255 heads, 63 sectors/track, 17833 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x
Disk /dev/sda doesn't contain a valid partition table

The first disk in this RAID 1 previously contained a Solaris installation, which I wiped out with dd if=/dev/zero of=/dev/sda bs=1M count=1. Any further ideas? With regards, Alex Could you post the result of the following commands? They are the ones used by setup-storage to find the disk configuration:

# parted -s /dev/sda unit TiB print
# parted -s /dev/sda unit chs print free

-- Nicolas
Re: setup-storage for raid5 + lvm
# mdadm --zero-superblock /dev/sda2
# /lib/udev/vol_id -u /dev/sda2
6428a2d1-c30d-4916-ab6b-625117989651
#

I wonder how this mdadm data was still there, though... Thanks for your help. Ok, good to know, thanks for testing this. I wonder whether we should do something about this in setup-storage, but I believe that doing mdadm --zero-superblock on each and every non-RAID device is pure overkill. I agree. You may want to add a few words to the error message, something like "Failed to obtain UUID [...], check that $device_name is not, or has not been, a RAID partition." -- Nicolas
Re: setup-storage for raid5 + lvm
Michael Tautschnig wrote:

Executing: /lib/udev/vol_id -u /dev/sda2
Command /lib/udev/vol_id -u /dev/sda2 had exit code 4
Failed to obtain UUID for /dev/sda2

I originally wanted to have one swap slice on each disk, but in this case the error is on the sdb swap partition:

Executing: /lib/udev/vol_id -u /dev/sdb2
Command /lib/udev/vol_id -u /dev/sdb2 had exit code 4
Failed to obtain UUID for /dev/sdb2

Any clue? That should only occur with some old mkswap versions that did not set up the UUID, but that doesn't seem to be the case here:

Executing: udevsettle --timeout=10 mkswap /dev/sda2
(STDOUT) Setting up swapspace version 1, size = 1069281 kB
(STDOUT) no label, UUID=3aa0ad8c-b7c9-428d-a2be-3511298c86af

So, mysteriously, that information is lost afterwards. Hmm, looking at the code of vol_id it seems that parted might have overridden the volume id for /dev/sda2 (instead of /dev/sda1 or /dev/sda3). Could you re-run that failing installation and, once it aborts, do parted -s /dev/sda print This looks ok:

# parted -s /dev/sda print
[...]
Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  535MB   535MB   primary  ext2         raid
 2      535MB   1604MB  1069MB  primary  linux-swap
 3      1604MB  320GB   318GB   primary               raid

No raid on sda2, but vol_id disagrees:

# /lib/udev/vol_id --export /dev/sda2
ID_FS_USAGE=raid
ID_FS_TYPE=linux_raid_member
[...]

So vol_id needs the --skip-raid option to return the uuid; a small patch to setup-storage makes the installation work much better:

--- Fstab.pm.~1~  2009-04-22 11:41:56.0 +
+++ Fstab.pm  2009-04-27 12:41:55.0 +
@@ -94,7 +94,7 @@
   # or labels, use these if available
   my @uuid = ();
   FAI::execute_ro_command(
-    "/lib/udev/vol_id -u $device_name", \@uuid, 0);
+    "/lib/udev/vol_id --skip-raid -u $device_name", \@uuid, 0);

   # every device must have a uuid, otherwise this is an error (unless we
   # are testing only)

I don't know if this patch is a good idea, though, or if this behavior of vol_id should be considered a feature or a bug. -- Nicolas
Re: setup-storage for raid5 + lvm
Michael Tautschnig wrote: [...] So, mysteriously, that information is lost afterwards. Hmm, looking at the code of vol_id it seems that parted might have overridden the volume id for /dev/sda2 (instead of /dev/sda1 or /dev/sda3). Could you re-run that failing installation and, once it aborts, do parted -s /dev/sda print This looks ok:

# parted -s /dev/sda print
[...]
Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  535MB   535MB   primary  ext2         raid
 2      535MB   1604MB  1069MB  primary  linux-swap
 3      1604MB  320GB   318GB   primary               raid

No raid on sda2, but vol_id disagrees:

# /lib/udev/vol_id --export /dev/sda2
ID_FS_USAGE=raid
ID_FS_TYPE=linux_raid_member
[...]

Did /dev/sda2 ever belong to a RAID array? Could you please try:

- parted -s /dev/sda set 2 raid off
- run vol_id
- mkswap /dev/sda2
- run vol_id

Nothing interesting happens... The disk was in a raid 1 array a while ago, but the partitioning was different and has been cleaned up by the new installation. -- Nicolas
setup-storage for raid5 + lvm
Hello, With FAI 3.2.19, I'm trying to install a system on RAID5 + LVM on a host with 3 identical hard disks. But the installation keeps failing in task_partition, and I can't figure out where my setup-storage config is wrong:

disk_config sda
primary  -     512   -     -
primary  swap  1024  swap  sw
primary  -     0-    -     -

disk_config sdb
primary  -  512   -  -
primary  -  1024  -  -
primary  -  0-    -  -

disk_config sdc
primary  -  512   -  -
primary  -  1024  -  -
primary  -  0-    -  -

disk_config raid
raid5  /boot  sda1,sdb1,sdc1  ext2  rw
raid5  -      sda3,sdb3,sdc3  ext2  default

disk_config lvm
vg vg0 md1
vg0-root  /      1024  ext3  rw
vg0-var   /var   2048  ext3  rw
vg0-usr   /usr   6144  ext3  rw
vg0-data  /data  6144  ext3  rw

The error message is the following (the full log is at http://paste.debian.net/34215/):

Executing: /lib/udev/vol_id -u /dev/sda2
Command /lib/udev/vol_id -u /dev/sda2 had exit code 4
Failed to obtain UUID for /dev/sda2

I originally wanted to have one swap slice on each disk, but in this case the error is on the sdb swap partition:

Executing: /lib/udev/vol_id -u /dev/sdb2
Command /lib/udev/vol_id -u /dev/sdb2 had exit code 4
Failed to obtain UUID for /dev/sdb2

Any clue? -- Nicolas
Re: FAI 3.2.12: new problem with setup-storage
Michael Tautschnig wrote: Thanks for the detailed report, this is now known as bug #502462 and fixed in the experimental packages, which are now available using

deb http://www.informatik.uni-koeln.de/fai/download experimental koeln

in your sources.list (you need to tweak /etc/fai/apt/sources.list as well). This is in fact the first time that the experimental builds are available in a proper apt repository, so consider this as a very first announcement :-) The experimental repository and FAI version both work fine; I have checked that both problems I've had are now fixed. The only thing I've seen that could work better is the handling of errors in the disk config file: with the following files, setup-storage seems to crash, and even suggests reporting the bug, whereas it could just say that the file is lousy.

File 1: forget the line for the preserved primary partition

disk_config sda preserve_always:1 bootable:2
primary  /boot  512     ext3  rw
logical  -      37000-  -  -

disk_config lvm
vg vg0 sda5
vg0-swap  swap  2048  swap  sw
[...]

File 2: define a primary partition and use a logical one

disk_config sda preserve_always:1 bootable:2
primary  -      0-      -
primary  /boot  512     ext3  rw
primary  -      37000-  -  -

disk_config lvm
vg vg0 sda5
vg0-swap  swap  2048  swap  sw
[...]

Thanks for setup-storage and all these quick fixes, it's a very nice tool! -- Nicolas
Re: FAI 3.2.12: new problem with setup-storage
Michael Tautschnig wrote: debug disabled: http://paste.debian.net/19413/, http://paste.debian.net/19414/ debug enabled: http://paste.debian.net/19415/, http://paste.debian.net/19417/ The second one should be fixed in experimental7; the first one requires a bit more work it seems, but I'll try to get it done as well. Thanks a lot for the logs, and if you find the time to give it another try, I'd be even more grateful :-) It does work, the error message is now easy to understand. My pleasure :-) . -- Nicolas
FAI 3.2.12: new problem with setup-storage
Hello, After installing fai 3.2.12, I have a new problem with setup-storage, with the following disk config; the full log is at http://paste.debian.net/19353/. The disk config file is the following:

disk_config sda preserve_always:1 bootable:2
primary  -      0-      -
primary  /boot  512     ext3  rw
logical  -      22000-  -  -

disk_config lvm
vg vg0 sda5
vg0-swap   swap    2000  swap  sw
vg0-root   /       6000  ext3  rw
vg0-var    /var    2000  ext3  rw
vg0-usr    /usr    6000  ext3  rw
vg0-local  /local  6000  ext3  rw

I get a similar error message when removing the preserved partition, or when changing the lvm partition to primary. -- Nicolas
Re: setup-storage does not preserve my partitions
Michael Tautschnig wrote:
>> [...] Now sda1 and sda2 are preserved, but setup-storage fails when building the ext3 filesystem on /boot (debug log is on http://paste.debian.net/19133/). This is weird, as running 'mkfs.ext3 /dev/sda3' works fine from the shell just after the installation has stopped.
> Oh no, it seems we need to move the calls to udevsettle around :-( I'll try to work out a patch later this week; meanwhile, retrying might just work in this case... Could you indeed retry for me, to see how high up this must be put in this week's priority list?

Well, it works much better today: my 4 successive tries on the same workstation have all succeeded. Yesterday, it failed twice before I gave up. I guess I will only use setup-storage on even days from now :-). Meanwhile I have only installed Etch on it, with my older FAI 3.1.8 server, for some other tests.

-- Nicolas
Re: setup-storage does not preserve my partitions
Michael Tautschnig wrote:
>> disk_config sda preserve_always:1,2 bootable:3
>> primary /boot 512 ext3 rw
>> logical - 22000 - -
>> disk_config lvm
>> vg vg0 sda5
>> vg0-swap swap 2000 swap sw
>> vg0-root / 6000 ext3 rw
>> vg0-var /var 2000 ext3 rw
>> vg0-usr /usr 6000 ext3 rw
>> vg0-local /local 6000 ext3 rw
>>
>> The installation works fine, but does not preserve my partitions: /boot goes to sda1. The log is on http://paste.debian.net/18763/.
>
> There are in fact two problems here:
>
> 1. Your configuration does not specify the partitions to be preserved; you should have
>
>     disk_config sda preserve_always:1,2 bootable:3
>     primary - 0 - -
>     primary - 0 - -
>     primary /boot 512 ext3 rw
>     logical - 22000 - -

Ok, my mistake; I was misled by a sentence on the FAI wiki that seems to mean that the preserve option is now given _only_ on the disk_config line: "The preserveX and boot options are one of the options now given on the disk_config line..."

Now sda1 and sda2 are preserved, but setup-storage fails when building the ext3 filesystem on /boot (debug log is on http://paste.debian.net/19133/). This is weird, as running 'mkfs.ext3 /dev/sda3' works fine from the shell just after the installation has stopped.

> 2. There was a bug in setup-storage (just reported as #501772), which your report made visible (thanks!!): it should not have destroyed your partitions, but instead have failed, because your preserve_always specification and the disk_config would not have been feasible. Given the corrected disk_config, things should be fine even with the current setup-storage version, but you might also want to try the experimental packages from http://fai.alioth.debian.org, which include a fix for #501772.

How am I supposed to use this URL? Adding it to sources.list does not work, as there is no Packages.gz file; shall I just use dpkg?

-- Nicolas
setup-storage does not preserve my partitions
Hello,

To install a multi-boot Vista/Lenny laptop, I need to preserve the first 2 primary partitions of the hard disk. So I tried the following setup on a test machine, with empty partitions:

    disk_config sda preserve_always:1,2 bootable:3
    primary /boot 512 ext3 rw
    logical - 22000 - -
    disk_config lvm
    vg vg0 sda5
    vg0-swap swap 2000 swap sw
    vg0-root / 6000 ext3 rw
    vg0-var /var 2000 ext3 rw
    vg0-usr /usr 6000 ext3 rw
    vg0-local /local 6000 ext3 rw

The installation works fine, but does not preserve my partitions: /boot goes to sda1. The log is on http://paste.debian.net/18763/.

Any clue?

-- Nicolas
Re: FAI on Debian Lenny failing
Michael Tautschnig wrote:
>> It was failing when it was trying to insert the unionfs modules. I no longer have the web sites where I found that others were having the same issue; I just did a Google search on it.
> Is there anybody else on the list who has tried to use current FAI on lenny? We should really get FAI in shape for lenny :-), even though all of us prefer stable systems...
>
> Best,
> Michael

I had the same unionfs problem, which is Debian bug #469338, supposedly fixed in 2.6.25. It might be better for FAI to use aufs than unionfs, though, as suggested in the replies to the bug report.

-- Nicolas