Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Nilesh Govindrajan
On Tuesday 11 December 2012 01:14:39 PM IST, J. Roeleveld wrote:
 Hi,

 I have a raid0 (kernel autodetect) array, over which I have put LVM
 and then there are volumes on the LVM for /var, /tmp, swap and /home.

 The problem is, raid0 array gets recognized, but localmount fails to
 mount because lvm doesn't seem to start before localmount (due to my
 root being on SSD, I can't watch the output of openrc easily).

 For now I have added this to my rc.conf -
 rc_localmount_before=lvm

 In other words: localmount should run before lvm

 rc_localmount_need=lvm

 localmount requires lvm

 rc_lvm_after=localmount

 lvm should run after localmount

 Lines 1 and 3 say the same thing. Line 2 contradicts them.
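The non-contradictory form of those overrides would keep only the "need" direction (a sketch; the rc_<service>_need/_before variables are the override syntax used elsewhere in this thread):

```shell
# /etc/rc.conf -- sketch: localmount depends on (and therefore runs
# after) lvm; no conflicting "localmount before lvm" line.
rc_localmount_need="lvm"
rc_lvm_before="localmount"
```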

 This fixes the problem, but localmount still executes before lvm and
 terminates with operational error. Then lvm starts up and localmount
 runs again successfully.

 Any idea why this happens?

 Yes (See above)

 The localmount script in init.d has proper depends:

 depend()
 {
 need fsck
 use lvm modules mtab
 after lvm modules
 keyword -jail -openvz -prefix -vserver -lxc
 }

 This should work.

 I actually have a similar setup and did not need to add the lines to rc.conf.
 All I did was do what I was told:
 Add lvm to the boot runlevel.
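For reference, the standard way to do that (device-independent; service names as shipped by Gentoo):

```shell
# Put lvm in the boot runlevel so it starts with the other
# early filesystem services:
rc-update add lvm boot

# Confirm where lvm and localmount ended up:
rc-update show | grep -E 'lvm|localmount'
```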

 Can you remove the lines from rc.conf, ensure lvm is in the boot
 runlevel (And not in any other, like default) and then let us know if
 you still get the error during reboot?

 If it all goes by too fast, can you press I during boot to get
 interactive and then let us know:
 1) Which starts first, lvm or localmount
 2) What error messages do you see for any of the services.

 Kind regards,

 Joost Roeleveld



Removing those lines didn't help, but I did remove my stupidity there -- 
the contradictory dependency line.
lvm still doesn't start before localmount.

What I get when rc.conf is default without any manually inserted 
depends/etc: 
https://dl.dropbox.com/u/25780056/2012-12-11%2012.46.59.jpg  
https://dl.dropbox.com/u/25780056/2012-12-11%2012.48.13.jpg

My current rc.conf has this:

rc_localmount_need=lvm
rc_localmount_after=lvm
rc_fsck_after=lvm
rc_fsck_need=lvm
rc_lvm_before=localmount

At least I have a usable system now, and it no longer falls back to my SSD 
for /var when the LVM mount fails.
But this still gives the sequence: localmount failure, then lvm, then a
successful localmount.

I'm on openrc 0.11.8.

--
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread J. Roeleveld
 On Tuesday 11 December 2012 01:14:39 PM IST, J. Roeleveld wrote:
 Hi,

 I have a raid0 (kernel autodetect) array, over which I have put LVM
 and then there are volumes on the LVM for /var, /tmp, swap and /home.

 The problem is, raid0 array gets recognized, but localmount fails to
 mount because lvm doesn't seem to start before localmount (due to my
 root being on SSD, I can't watch the output of openrc easily).

 For now I have added this to my rc.conf -
 rc_localmount_before=lvm

 In other words: localmount should run before lvm

 rc_localmount_need=lvm

 localmount requires lvm

 rc_lvm_after=localmount

 lvm should run after localmount

 Lines 1 and 3 say the same thing. Line 2 contradicts them.

 This fixes the problem, but localmount still executes before lvm and
 terminates with operational error. Then lvm starts up and localmount
 runs again successfully.

 Any idea why this happens?

 Yes (See above)

 The localmount script in init.d has proper depends:

 depend()
 {
 need fsck
 use lvm modules mtab
 after lvm modules
 keyword -jail -openvz -prefix -vserver -lxc
 }

 This should work.

 I actually have a similar setup and did not need to add the lines to
 rc.conf.
 All I did was do what I was told:
 Add lvm to the boot runlevel.

 Can you remove the lines from rc.conf, ensure lvm is in the boot
 runlevel (And not in any other, like default) and then let us know if
 you still get the error during reboot?

 If it all goes by too fast, can you press I during boot to get
 interactive and then let us know:
 1) Which starts first, lvm or localmount
 2) What error messages do you see for any of the services.

 Kind regards,

 Joost Roeleveld



 Removing those lines didn't help,

What is the end-result without the lines?


 but I did remove my stupidity there --
 the contradictory dependency line.
 lvm still doesn't start before localmount.

 What I get when rc.conf is default without any manually inserted
 depends/etc:
 https://dl.dropbox.com/u/25780056/2012-12-11%2012.46.59.jpg 
 https://dl.dropbox.com/u/25780056/2012-12-11%2012.48.13.jpg

 My current rc.conf has this:

 rc_localmount_need=lvm
 rc_localmount_after=lvm
 rc_fsck_after=lvm
 rc_fsck_need=lvm
 rc_lvm_before=localmount

 At least I have a usable system now, and it no longer falls back to my SSD
 for /var when the LVM mount fails.
 But this still gives the sequence: localmount failure, then lvm, then a
 successful localmount.

 I'm on openrc 0.11.8.

I use an older version still.
In rc.conf, I only set the need lines for init-scripts I created myself.
I never used the other lines.

Do you have /usr on / ? Or on a separate partition?

Which metadata version did you use for the software raid setup?

Can you add mdadm to the boot-runlevel?

--
Joost





Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Nilesh Govindrajan
On Tuesday 11 December 2012 04:52 PM, J. Roeleveld wrote:
 
 What is the end-result without the lines?
 
 

localmount fails at mounting /var /home and /tmp (while swap gets
mounted which is *also* on LVM because lvm starts up before the swap
gets activated).

 
 I use an older version still.
 In rc.conf, I only set the need lines for init-scripts I created myself.
 I never used the other lines.
 

I still have no idea why the sequence is messed up. I tried reverting to
0.10.5, but that didn't help either.

 Do you have /usr on / ? Or on a seperate partition?
 

/usr is not a separate partition, it's on the same partition as root.

 Which metadata version did you use for the software raid setup?
 
 Can you add mdadm to the boot-runlevel?
 

I'm using metadata version 1.2 for the raid0 array and the type is
kernel-based autodetect.
Earlier I went by the raid guide on gentoo.org, but I configured it to
use kernel-based autodetect.
mdadm was reporting nothing detected anyway (when added to the boot
runlevel), so it's not in the boot runlevel.

Moreover, since lvm starts up successfully, it doesn't seem to be an
issue because of mdadm. It's just the sequence that's messed up. :S

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread J. Roeleveld
 On Tuesday 11 December 2012 04:52 PM, J. Roeleveld wrote:

SNIP

 Which metadata version did you use for the software raid setup?

 Can you add mdadm to the boot-runlevel?


 I'm using metadata version 1.2 for the raid0 array and the type is
 kernel based autodetect.

Ouch, auto-detect does not work with metadata 1.2.
Please read the man-page section:
===
--auto-detect
Request that the kernel starts any auto-detected arrays. This can only
work if md is compiled into the kernel - not if it is a module. Arrays
can be auto-detected by the kernel if all the components are in
primary MS-DOS partitions with partition type FD, and all use v0.90
metadata. In-kernel autodetect is not recommended for new
installations. Using mdadm to detect and assemble arrays - possibly in
an initrd - is substantially more flexible and should be preferred.
===

 Earlier I went by the raid guide on gentoo.org, but I configured it to
 use kernel based autodetect.
 mdadm anyway was reporting nothing detected (when added to boot
 runlevel) so it's not there in the boot runlevel.

 Moreover, since lvm starts up successfully, it doesn't seem to be an
 issue because of mdadm. It's just the sequence that's messed up. :S

Please rebuild the raid-device using v0.90 metadata and try again.
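A sketch of that rebuild (DESTRUCTIVE: recreating a raid0 array destroys its contents, so back up first; device names here are examples taken from the dmesg output above):

```shell
# Stop the existing array, then recreate it with v0.90 metadata,
# which is the only format the in-kernel autodetector assembles:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
      --metadata=0.90 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Per the man page quoted above, autodetection also requires the
# component partitions to be primary MS-DOS partitions of type 0xFD
# ("Linux raid autodetect"), e.g. via fdisk's 't' command.
```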

Kind regards,

Joost Roeleveld




Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Neil Bothwick
On Tue, 11 Dec 2012 12:48:13 +0100, J. Roeleveld wrote:

  I'm using metadata version 1.2 for the raid0 array and the type is
  kernel based autodetect.  
 
 Ouch, auto-detect does not work with metadata 1.2.
 Please read the man-page section:
 
 Please rebuild the raid-device using v0.90 metadata and try again.

I don't understand why you're using RAID at all. LVM on top of RAID0 makes
no sense to me when you can simply make each device a PV and add it to
the VG. That's more flexible and easier to repair.
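A sketch of that layout, with LVM doing the striping itself instead of md (device, VG, and LV names plus sizes are examples only):

```shell
# Each disk partition becomes its own physical volume:
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1

# Pool them into one volume group:
vgcreate vg0 /dev/sdb1 /devv/sdc1 /dev/sdd1

# -i 3 stripes each logical volume across all three PVs,
# giving raid0-like throughput without an md layer:
lvcreate -n var  -L 20G  -i 3 vg0
lvcreate -n home -L 100G -i 3 vg0
```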


-- 
Neil Bothwick

SITCOM: Single Income, Two Children, Oppressive Mortgage




Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Alan McKinnon
On Tue, 11 Dec 2012 12:08:12 +
Neil Bothwick n...@digimed.co.uk wrote:

 On Tue, 11 Dec 2012 12:48:13 +0100, J. Roeleveld wrote:
 
   I'm using metadata version 1.2 for the raid0 array and the type is
   kernel based autodetect.  
  
  Ouch, auto-detect does not work with metadata 1.2.
  Please read the man-page section:
  
  Please rebuild the raid-device using v0.90 metadata and try again.
 
 I don't understand why you're using RAID at all. LVM on top of RAID0
 makes no sense to me when you can simply make each device a PV and
 add it to the VG. That's more flexible and easier to repair.
 
 

Some folks like to do the striping in RAID, it's more controllable. 1st
block on this disk, 2nd block on that disk, 3rd block on first disk
again...

Pooling LVM PVs into a VG is a huge gigantic basket of stuff where you
don't really get to control very much - LVM sticks data wherever it
wants to and you do little more than give some gentle hints (which
I strongly suspect are mostly ignored)

But yes, in the usual case RAID-0 on LVM doesn't make much sense for
most folks.

Personally, I prefer ZFS. This whole huge list of shit just goes away:

disk partitions
partition types
disk labels
worrying about if my block size is right
worrying if my boundaries are correct
PVs as different from VGs and LVs
VGs as different from PVs and LVs
LVs as different from PVs and VGs
lvextend && growfs to make stuff bigger
umount && shrinkfs && lvreduce && growfs && mount to make stuff smaller

I can now take a much simpler view of things:

I have these disks, use 'em. When I've figured out the actual quotas
and sizes I need, I'll let you know. Meanwhile just get on with it and
store my stuff in some reasonable fashion, 'mkay? kthankxbye! I have
real work to do.

:-)
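The ZFS equivalent of the whole stack above is roughly this (a sketch; pool and dataset names are examples, and exact device naming differs between Solaris, FreeBSD and Linux):

```shell
# A pool built from several plain vdevs stripes across them
# automatically -- no partitioning, PVs, VGs or LVs:
zpool create tank /dev/sdb /dev/sdc /dev/sdd

# Datasets replace LVs; quotas can be set (and changed) later:
zfs create -o quota=20G tank/var
zfs create tank/home
```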


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Nilesh Govindrajan
On Tuesday 11 December 2012 05:18 PM, J. Roeleveld wrote:
 
 Ouch, auto-detect does not work with metadata 1.2.
 Please read the man-page section:
 ===
 --auto-detect
 Request that the kernel starts any auto-detected arrays. This can only
 work if md is compiled into the kernel - not if it is a module. Arrays
 can be auto-detected by the kernel if all the components are in
 primary MS-DOS partitions with partition type FD, and all use v0.90
 metadata. In-kernel autodetect is not recommended for new
 installations. Using mdadm to detect and assemble arrays - possibly in
 an initrd - is substantially more flexible and should be preferred.
 ===
 
 Please rebuild the raid-device using v0.90 metadata and try again.
 

I never had mdadm running in boot runlevel and I don't have a modular
kernel. I have compiled everything into the kernel and hence no initrd
either as I said earlier.

Raid autodetection seems to work even _without_ mdadm running.

--

[1.202481] md: Waiting for all devices to be available before autodetect
[1.204268] md: If you don't use raid, use raid=noautodetect
[1.206201] md: Autodetecting RAID arrays.
[1.232482] md: invalid raid superblock magic on sdb1
[1.234306] md: sdb1 does not have a valid v0.90 superblock, not
importing!
[1.263187] md: invalid raid superblock magic on sdd1
[1.265034] md: sdd1 does not have a valid v0.90 superblock, not
importing!
[1.285106] md: invalid raid superblock magic on sdc1
[1.286960] md: sdc1 does not have a valid v0.90 superblock, not
importing!
[1.288787] md: Scanned 3 and added 0 devices.
[1.290590] md: autorun ...
[1.292380] md: ... autorun DONE.
[1.340838] UDF-fs: warning (device sda1): udf_fill_super: No
partition found (1)
[1.350473] XFS (sda1): Mounting Filesystem
[1.454096] usb usb5: suspend_rh (auto-stop)
[1.454130] usb usb4: suspend_rh (auto-stop)
[1.455673] usb usb2: suspend_rh (auto-stop)
[1.455698] usb usb3: suspend_rh (auto-stop)
[1.573933] XFS (sda1): Ending clean mount
[1.575762] VFS: Mounted root (xfs filesystem) readonly on device 8:1.
[1.578193] Freeing unused kernel memory: 456k freed
[1.580154] BFS CPU scheduler v0.425 by Con Kolivas.
[2.503599] systemd-udevd[974]: starting version 196
[2.704048] hub 2-0:1.0: hub_suspend
[2.704063] usb usb2: bus auto-suspend, wakeup 1
[2.704068] usb usb2: suspend_rh
[2.704091] hub 3-0:1.0: hub_suspend
[2.704098] usb usb3: bus auto-suspend, wakeup 1
[2.704102] usb usb3: suspend_rh
[2.708031] hub 4-0:1.0: hub_suspend
[2.708041] usb usb4: bus auto-suspend, wakeup 1
[2.708046] usb usb4: suspend_rh
[2.712023] hub 5-0:1.0: hub_suspend
[2.712030] usb usb5: bus auto-suspend, wakeup 1
[2.712034] usb usb5: suspend_rh
[2.794061] hub 1-6:1.0: hub_suspend
[2.794072] usb 1-6: unlink qh256-0001/8800bb832980 start 1 [1/0 us]
[2.797202] usb 1-6: usb auto-suspend, wakeup 1
[2.973953] md: bind<sdb1>
[3.020879] md: bind<sdc1>
[3.086724] md: bind<sdd1>
[3.087690] bio: create slab bio-1 at 1
[3.087705] md/raid0:md0: md_size is 2266111488 sectors.
[3.087708] md: RAID0 configuration for md0 - 3 zones
[0.524821] ACPI: Invalid Power Resource to register!
[3.087711] md: zone0=[
[3.087714] sdb1/sdc1/sdd1]
[3.087721]   zone-offset= 0KB, device-offset=
0KB, size= 468863328KB
[3.087723] md: zone1=[sdb1/sdc1]
[3.087730]   zone-offset= 468863328KB, device-offset=
156287776KB, size= 664191360KB
[3.087732] md: zone2=[sdb1]
[3.087737]   zone-offset=1133054688KB, device-offset=
488383456KB, size=  1056KB

[3.087752] md0: detected capacity change from 0 to 1160249081856
[3.098375]  md0: unknown partition table

-

Where did md0 come from if it was not setup by mdadm?

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Nilesh Govindrajan
On Tuesday 11 December 2012 05:57 PM, Alan McKinnon wrote:
 On Tue, 11 Dec 2012 12:08:12 +
 Neil Bothwick n...@digimed.co.uk wrote:
 
 On Tue, 11 Dec 2012 12:48:13 +0100, J. Roeleveld wrote:

 I'm using metadata version 1.2 for the raid0 array and the type is
 kernel based autodetect.  

 Ouch, auto-detect does not work with metadata 1.2.
 Please read the man-page section:

 Please rebuild the raid-device using v0.90 metadata and try again.

 I don't understand why your using RAID at all. LVM on top of RAID0
 makes no sense to me when you can simply make each device a PV and
 add it to the VG. That's more flexible and easier to repair.


 
 Some folks like to do the striping in RAID, it's more controllable. 1st
 block on this disk, 2nd block on that disk, 3rd block on first disk
 again...
 
 Pooling LVM PVs into a VG is a huge gigantic basket of stuff where you
 don't really get to control very much - LVM sticks data wherever it
 wants to and you do little more than give some gentle hints (which
 I strongly suspect are mostly ignored)
 
 But yes, in the usual case RAID-0 on LVM doesn't make much sense for
 most folks.
 
 Personally, I prefer ZFS. This whole huge list of shit just goes away:
 
 disk partitions
 partition types
 disk labels
 worrying about if my block size is right
 worrying if my boundaries are correct
 PVs as different from VGs and LVs
 VGs as different from PVs and LVs
 LVs as different from PVs and VGs
 lvextend && growfs to make stuff bigger
 umount && shrinkfs && lvreduce && growfs && mount to make stuff smaller
 
 I can now take a much simpler view of things:
 
 I have these disks, use 'em. When I've figured out the actual quotas
 and sizes I need, I'll let you know. Meanwhile just get on with it and
 store my stuff in some reasonable fashion, 'mkay? kthankxbye! I have
 real work to do.
 
 :-)
 
 

Exactly the reason why I wanted RAID0 and LVM in combination: more IOPS.
ZFS looks very interesting, how stable is it?

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread J. Roeleveld
 On Tuesday 11 December 2012 05:18 PM, J. Roeleveld wrote:

 Ouch, auto-detect does not work with metadata 1.2.
 Please read the man-page section:
 ===
 --auto-detect
 Request that the kernel starts any auto-detected arrays. This can
 only
 work if md is compiled into the kernel - not if it is a module. Arrays
 can be auto-detected by the kernel if all the components are in
 primary MS-DOS partitions with partition type FD, and all use v0.90
 metadata. In-kernel autodetect is not recommended for new
 installations. Using mdadm to detect and assemble arrays - possibly in
 an initrd - is substantially more flexible and should be preferred.
 ===

 Please rebuild the raid-device using v0.90 metadata and try again.


 I never had mdadm running in boot runlevel and I don't have a modular
 kernel. I have compiled everything into the kernel and hence no initrd
 either as I said earlier.

 Raid autodetection seems to work even _without_ mdadm running.

 --

 [1.202481] md: Waiting for all devices to be available before
 autodetect
 [1.204268] md: If you don't use raid, use raid=noautodetect
 [1.206201] md: Autodetecting RAID arrays.
 [1.232482] md: invalid raid superblock magic on sdb1
 [1.234306] md: sdb1 does not have a valid v0.90 superblock, not
 importing!
 [1.263187] md: invalid raid superblock magic on sdd1
 [1.265034] md: sdd1 does not have a valid v0.90 superblock, not
 importing!
 [1.285106] md: invalid raid superblock magic on sdc1
 [1.286960] md: sdc1 does not have a valid v0.90 superblock, not
 importing!
 [1.288787] md: Scanned 3 and added 0 devices.

This clearly indicates that the autostart is not working.


 [3.087705] md/raid0:md0: md_size is 2266111488 sectors.
 [3.087708] md: RAID0 configuration for md0 - 3 zones
 [0.524821] ACPI: Invalid Power Resource to register!
 [3.087711] md: zone0=[
 [3.087714] sdb1/sdc1/sdd1]
 [3.087721]   zone-offset= 0KB, device-offset=
 0KB, size= 468863328KB
 [3.087723] md: zone1=[sdb1/sdc1]
 [3.087730]   zone-offset= 468863328KB, device-offset=
 156287776KB, size= 664191360KB
 [3.087732] md: zone2=[sdb1]
 [3.087737]   zone-offset=1133054688KB, device-offset=
 488383456KB, size=  1056KB

 [3.087752] md0: detected capacity change from 0 to 1160249081856
 [3.098375]  md0: unknown partition table

Something found and started md0 after the autoraid-detect clearly failed.

 Where did md0 come from if it was not setup by mdadm?

What does rc-status show right after boot?

--
Joost




Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Michael Mol
On Dec 11, 2012 7:57 AM, Nilesh Govindrajan m...@nileshgr.com wrote:

 On Tuesday 11 December 2012 05:18 PM, J. Roeleveld wrote:
 
  Ouch, auto-detect does not work with metadata 1.2.
  Please read the man-page section:
  ===
  --auto-detect
  Request that the kernel starts any auto-detected arrays. This can
only
  work if md is compiled into the kernel - not if it is a module. Arrays
  can be auto-detected by the kernel if all the components are in
  primary MS-DOS partitions with partition type FD, and all use v0.90
  metadata. In-kernel autodetect is not recommended for new
  installations. Using mdadm to detect and assemble arrays - possibly in
  an initrd - is substantially more flexible and should be preferred.
  ===
 
  Please rebuild the raid-device using v0.90 metadata and try again.
 

 I never had mdadm running in boot runlevel and I don't have a modular
 kernel. I have compiled everything into the kernel and hence no initrd
 either as I said earlier.

 Raid autodetection seems to work even _without_ mdadm running.

 --

 [1.202481] md: Waiting for all devices to be available before
autodetect
 [1.204268] md: If you don't use raid, use raid=noautodetect
 [1.206201] md: Autodetecting RAID arrays.
 [1.232482] md: invalid raid superblock magic on sdb1
 [1.234306] md: sdb1 does not have a valid v0.90 superblock, not
 importing!
 [1.263187] md: invalid raid superblock magic on sdd1
 [1.265034] md: sdd1 does not have a valid v0.90 superblock, not
 importing!
 [1.285106] md: invalid raid superblock magic on sdc1
 [1.286960] md: sdc1 does not have a valid v0.90 superblock, not
 importing!
 [1.288787] md: Scanned 3 and added 0 devices.
 [1.290590] md: autorun ...
 [1.292380] md: ... autorun DONE.
 [1.340838] UDF-fs: warning (device sda1): udf_fill_super: No
 partition found (1)
 [1.350473] XFS (sda1): Mounting Filesystem
 [1.454096] usb usb5: suspend_rh (auto-stop)
 [1.454130] usb usb4: suspend_rh (auto-stop)
 [1.455673] usb usb2: suspend_rh (auto-stop)
 [1.455698] usb usb3: suspend_rh (auto-stop)
 [1.573933] XFS (sda1): Ending clean mount
 [1.575762] VFS: Mounted root (xfs filesystem) readonly on device 8:1.
 [1.578193] Freeing unused kernel memory: 456k freed
 [1.580154] BFS CPU scheduler v0.425 by Con Kolivas.
 [2.503599] systemd-udevd[974]: starting version 196
 [2.704048] hub 2-0:1.0: hub_suspend
 [2.704063] usb usb2: bus auto-suspend, wakeup 1
 [2.704068] usb usb2: suspend_rh
 [2.704091] hub 3-0:1.0: hub_suspend
 [2.704098] usb usb3: bus auto-suspend, wakeup 1
 [2.704102] usb usb3: suspend_rh
 [2.708031] hub 4-0:1.0: hub_suspend
 [2.708041] usb usb4: bus auto-suspend, wakeup 1
 [2.708046] usb usb4: suspend_rh
 [2.712023] hub 5-0:1.0: hub_suspend
 [2.712030] usb usb5: bus auto-suspend, wakeup 1
 [2.712034] usb usb5: suspend_rh
 [2.794061] hub 1-6:1.0: hub_suspend
 [2.794072] usb 1-6: unlink qh256-0001/8800bb832980 start 1 [1/0
us]
 [2.797202] usb 1-6: usb auto-suspend, wakeup 1
 [2.973953] md: bind<sdb1>
 [3.020879] md: bind<sdc1>
 [3.086724] md: bind<sdd1>
 [3.087690] bio: create slab bio-1 at 1
 [3.087705] md/raid0:md0: md_size is 2266111488 sectors.
 [3.087708] md: RAID0 configuration for md0 - 3 zones
 [0.524821] ACPI: Invalid Power Resource to register!
 [3.087711] md: zone0=[
 [3.087714] sdb1/sdc1/sdd1]
 [3.087721]   zone-offset= 0KB, device-offset=
 0KB, size= 468863328KB
 [3.087723] md: zone1=[sdb1/sdc1]
 [3.087730]   zone-offset= 468863328KB, device-offset=
 156287776KB, size= 664191360KB
 [3.087732] md: zone2=[sdb1]
 [3.087737]   zone-offset=1133054688KB, device-offset=
 488383456KB, size=  1056KB

 [3.087752] md0: detected capacity change from 0 to 1160249081856
 [3.098375]  md0: unknown partition table

 -

 Where did md0 come from if it was not setup by mdadm?

Metadata format 0.9 supports auto-detection by the kernel.


 --
 Nilesh Govindarajan
 http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Alan McKinnon
On Tue, 11 Dec 2012 18:28:37 +0530
Nilesh Govindrajan m...@nileshgr.com wrote:

  I have these disks, use 'em. When I've figured out the actual
  quotas and sizes I need, I'll let you know. Meanwhile just get on
  with it and store my stuff in some reasonable fashion, 'mkay?
  kthankxbye! I have real work to do.
  
  :-)
  

 
 Exactly the reason why I wanted RAID0 and LVM in combination: more
 IOPS. ZFS looks very interesting, how stable is it?


On Linux, not at all (it doesn't exist there except using fuse)

On FreeBSD, rock solid.
On Solaris, rock solid.

It almost seems to be everything btrfs is not...

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Randy Barlow

Alan McKinnon wrote:

On Tue, 11 Dec 2012 18:28:37 +0530
Nilesh Govindrajan m...@nileshgr.com wrote:

Exactly the reason why I wanted RAID0 and LVM in combination: more
IOPS. ZFS looks very interesting, how stable is it?


On Linux, not at all (it doesn't exist there except using fuse)

On FreeBSD, rock solid.
On Solaris, rock solid.

It almost seems to be everything btrfs is not...


The details of why this is the case are something I can never remember 
straight, but I recall that it's due to licensing that ZFS 
cannot be included in the Linux kernel directly. I think it might be 
because the ZFS license doesn't have the copyleft clause that the GPL 
requires?


It's sad, because ZFS is really pretty great. I think btrfs will be 
pretty great too once it is stabilized, so I look forward to that.


Also, I had seen some kernel patches that you can apply yourself to get 
ZFS in Linux without FUSE a year or two back. I never tried them, and 
can't attest to how stable or unstable they might be, but you could look 
into that as well.


--
R



Re: [gentoo-user] Localmount starts before LVM

2012-12-11 Thread Alan McKinnon
On Tue, 11 Dec 2012 10:46:19 -0500
Randy Barlow ra...@electronsweatshop.com wrote:

 Alan McKinnon wrote:
  On Tue, 11 Dec 2012 18:28:37 +0530
  Nilesh Govindrajan m...@nileshgr.com wrote:
  Exactly the reason why I wanted RAID0 and LVM in combination: more
  IOPS. ZFS looks very interesting, how stable is it?
 
  On Linux, not at all (it doesn't exist there except using fuse)
 
  On FreeBSD, rock solid.
  On Solaris, rock solid.
 
  It almost seems to be everything btrfs is not...
 
 The details why this is the case are something I can never remember 
 straight in my head, but I recall that it's due to licensing that ZFS 
 cannot be included in the Linux kernel directly. I think it might be 
 because the ZFS license doesn't have the Copyleft clause that the GPL 
 requires?

That's the one - The ZFS license from Sun is incompatible with GPL-2

That only stops Linus and the distros from redistributing the code; the
rest of us are free to download it, patch the kernel and run it to
our heart's content.



 
 It's sad, because ZFS is really pretty great. I think btrfs will be 
 pretty great too once it is stabilized, so I look forward to that.
 
 Also, I had seen some kernel patches that you can apply yourself to
 get ZFS in Linux without FUSE a year or two back. I never tried them,
 and can't attest to how stable or unstable they might be, but you
 could look into that as well.
 



-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Localmount starts before LVM

2012-12-10 Thread Nilesh Govindrajan
Hi,

I have a raid0 (kernel autodetect) array, over which I have put LVM
and then there are volumes on the LVM for /var, /tmp, swap and /home.

The problem is, raid0 array gets recognized, but localmount fails to
mount because lvm doesn't seem to start before localmount (due to my
root being on SSD, I can't watch the output of openrc easily).

For now I have added this to my rc.conf -
rc_localmount_before=lvm
rc_localmount_need=lvm
rc_lvm_after=localmount

This fixes the problem, but localmount still executes before lvm and
terminates with operational error. Then lvm starts up and localmount
runs again successfully.

Any idea why this happens?

The localmount script in init.d has proper depends:

depend()
{
need fsck
use lvm modules mtab
after lvm modules
keyword -jail -openvz -prefix -vserver -lxc
}

--
Nilesh Govindrajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread Florian Philipp
Am 10.12.2012 15:08, schrieb Nilesh Govindrajan:
 Hi,
 
 I have a raid0 (kernel autodetect) array, over which I have put LVM
 and then there are volumes on the LVM for /var, /tmp, swap and /home.
 
 The problem is, raid0 array gets recognized, but localmount fails to
 mount because lvm doesn't seem to start before localmount (due to my
 root being on SSD, I can't watch the output of openrc easily).
 
 For now I have added this to my rc.conf -
 rc_localmount_before=lvm
 rc_localmount_need=lvm
 rc_lvm_after=localmount
 
 This fixes the problem, but localmount still executes before lvm and
 terminates with operational error. Then lvm starts up and localmount
 runs again successfully.
 
 Any idea why this happens?
 
 The localmount script in init.d has proper depends:
 
 depend()
 {
 need fsck
 use lvm modules mtab
 after lvm modules
 keyword -jail -openvz -prefix -vserver -lxc
 }
 
 --
 Nilesh Govindrajan
 http://nileshgr.com
 

Please provide `/sbin/rc-update show`.

Have you tried toggling rc_depend_strict?

Regards,
Florian Philipp





Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread Salvatore Borgia
Hi, have you put the dolvm option on your kernel line in grub.conf?


2012/12/10 Nilesh Govindrajan m...@nileshgr.com

 Hi,

 I have a raid0 (kernel autodetect) array, over which I have put LVM
 and then there are volumes on the LVM for /var, /tmp, swap and /home.

 The problem is, raid0 array gets recognized, but localmount fails to
 mount because lvm doesn't seem to start before localmount (due to my
 root being on SSD, I can't watch the output of openrc easily).

 For now I have added this to my rc.conf -
 rc_localmount_before=lvm
 rc_localmount_need=lvm
 rc_lvm_after=localmount

 This fixes the problem, but localmount still executes before lvm and
 terminates with operational error. Then lvm starts up and localmount
 runs again successfully.

 Any idea why this happens?

 The localmount script in init.d has proper depends:

 depend()
 {
 need fsck
 use lvm modules mtab
 after lvm modules
 keyword -jail -openvz -prefix -vserver -lxc
 }

 --
 Nilesh Govindrajan
 http://nileshgr.com




Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread Kerin Millar

Nilesh Govindrajan wrote:

Hi,

I have a raid0 (kernel autodetect) array, over which I have put LVM
and then there are volumes on the LVM for /var, /tmp, swap and /home.

The problem is, raid0 array gets recognized, but localmount fails to
mount because lvm doesn't seem to start before localmount (due to my
root being on SSD, I can't watch the output of openrc easily).

For now I have added this to my rc.conf -
rc_localmount_before=lvm
rc_localmount_need=lvm
rc_lvm_after=localmount

This fixes the problem, but localmount still executes before lvm and
terminates with operational error. Then lvm starts up and localmount
runs again successfully.

Any idea why this happens?


I assisted somebody experiencing the same problem recently. The cause 
was simple: the individual concerned had added a runscript to the boot 
runlevel, whereas it should have been added to the default runlevel 
(net.eth0 in that particular case). You could run `find /etc/runlevels` 
and check whether you have done something similar.
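For example (the -type l filter shows only the service symlinks, which is what the runlevels contain):

```shell
# List every service enabled in each runlevel:
find /etc/runlevels -type l | sort
```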


--Kerin



Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread Nilesh Govindrajan
On Monday 10 December 2012 09:03 PM, Florian Philipp wrote:
 Am 10.12.2012 15:08, schrieb Nilesh Govindrajan:
 Hi,

 I have a raid0 (kernel autodetect) array, over which I have put LVM
 and then there are volumes on the LVM for /var, /tmp, swap and /home.

 
 Please provide `/sbin/rc-update show`.
 
 Have you tried toggling rc_depend_strict?
 
 Regards,
 Florian Philipp
 

Output of rc-update show:

Linux ~ # rc-update show
     alsasound |  default  x11
      bootmisc | boot
       chronyd |   sysinit
         cupsd |  default  x11
          dbus |   x11
         devfs |   sysinit
        dhcpcd |  default  x11
         dmesg |   sysinit
       dropbox |  default
         fcron |  default  x11
          fsck | boot
      hostname | boot
       hwclock | boot
       keymaps | boot
     killprocs |  shutdown
    lm_sensors | boot
         local |  default  x11
    localmount | boot
           lvm | boot
       metalog |   sysinit
       modules | boot
      mount-ro |  shutdown
           mpd |  default  x11
   mpdscribble |  default  x11
          mtab | boot
        net.lo | boot
        procfs | boot
          root | boot
     savecache |  shutdown
          swap | boot
     swapfiles | boot
        sysctl | boot
         sysfs |   sysinit
  termencoding | boot
tmpfiles.setup | boot
          udev |   sysinit
    udev-mount |   sysinit
       urandom | boot
           xdm |   x11


Tried rc_depend_strict, didn't fix the problem.

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread Nilesh Govindrajan
On Monday 10 December 2012 09:14 PM, Salvatore Borgia wrote:
 Hi, have you put the dolvm option on your kernel line in grub.conf?
 
 

I'm using lilo and a static/monolithic kernel, so the dolvm & grub
suggestion doesn't apply here.

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread Nilesh Govindrajan
On Monday 10 December 2012 11:59 PM, Kerin Millar wrote:
 
 I assisted somebody experiencing the same problem recently. The cause
 was simple: the individual concerned had added a runscript to the boot
 runlevel, whereas it should have been added to the default runlevel
 (net.eth0 in this particular case). You could run find /etc/runlevels
 and check to see if you have done something similar.
 

There was a somewhat similar issue here: net.lo and lm_sensors were in
the boot runlevel, but removing them didn't solve the problem.

Here's the find on /etc/runlevels:

/etc/runlevels
/etc/runlevels/default
/etc/runlevels/default/local
/etc/runlevels/default/dhcpcd
/etc/runlevels/default/fcron
/etc/runlevels/default/alsasound
/etc/runlevels/default/dropbox
/etc/runlevels/default/cupsd
/etc/runlevels/default/mpd
/etc/runlevels/default/mpdscribble
/etc/runlevels/default/net.lo
/etc/runlevels/shutdown
/etc/runlevels/shutdown/mount-ro
/etc/runlevels/shutdown/savecache
/etc/runlevels/shutdown/killprocs
/etc/runlevels/sysinit
/etc/runlevels/sysinit/udev-mount
/etc/runlevels/sysinit/dmesg
/etc/runlevels/sysinit/devfs
/etc/runlevels/sysinit/udev
/etc/runlevels/sysinit/sysfs
/etc/runlevels/sysinit/metalog
/etc/runlevels/sysinit/chronyd
/etc/runlevels/boot
/etc/runlevels/boot/mtab
/etc/runlevels/boot/swap
/etc/runlevels/boot/modules
/etc/runlevels/boot/termencoding
/etc/runlevels/boot/hostname
/etc/runlevels/boot/urandom
/etc/runlevels/boot/hwclock
/etc/runlevels/boot/keymaps
/etc/runlevels/boot/root
/etc/runlevels/boot/fsck
/etc/runlevels/boot/procfs
/etc/runlevels/boot/bootmisc
/etc/runlevels/boot/sysctl
/etc/runlevels/boot/lvm
/etc/runlevels/boot/swapfiles
/etc/runlevels/boot/tmpfiles.setup
/etc/runlevels/boot/localmount
/etc/runlevels/x11
/etc/runlevels/x11/dhcpcd
/etc/runlevels/x11/fcron
/etc/runlevels/x11/local
/etc/runlevels/x11/xdm
/etc/runlevels/x11/dbus
/etc/runlevels/x11/alsasound
/etc/runlevels/x11/cupsd
/etc/runlevels/x11/mpd
/etc/runlevels/x11/mpdscribble
/etc/runlevels/x11/net.lo
/etc/runlevels/minimal


Anything suspicious?

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] Localmount starts before LVM

2012-12-10 Thread J. Roeleveld
 Hi,

 I have a raid0 (kernel autodetect) array, over which I have put LVM
 and then there are volumes on the LVM for /var, /tmp, swap and /home.

 The problem is, raid0 array gets recognized, but localmount fails to
 mount because lvm doesn't seem to start before localmount (due to my
 root being on SSD, I can't watch the output of openrc easily).

 For now I have added this to my rc.conf -
 rc_localmount_before=lvm

In other words: localmount should run before lvm

 rc_localmount_need=lvm

localmount requires lvm

 rc_lvm_after=localmount

lvm should run after localmount

Lines 1 and 3 say the same thing. Line 2 contradicts them.
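For what it's worth, if override lines were needed at all, a self-consistent set would point every line in the same direction. This sketch uses OpenRC's per-service `rc_<service>_<type>` override convention in /etc/rc.conf:

```shell
# /etc/rc.conf -- a self-consistent version of the overrides quoted
# above: every line agrees that localmount depends on, and therefore
# starts after, lvm.
rc_localmount_need="lvm"    # hard dependency: localmount requires lvm
rc_localmount_after="lvm"   # ordering: localmount starts after lvm
```

In practice none of these should be necessary, since localmount's own depend() already declares "use lvm" and "after lvm".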

 This fixes the problem, but localmount still executes before lvm and
 terminates with an operational error. Then lvm starts up and localmount
 runs again successfully.

 Any idea why this happens?

Yes (see above).

 The localmount script in init.d has proper depends:

 depend()
 {
 need fsck
 use lvm modules mtab
 after lvm modules
 keyword -jail -openvz -prefix -vserver -lxc
 }

This should work.
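As a side note, the three keywords in that depend() block do different jobs. A small sketch with stub implementations (the stubs are mine, purely for illustration; the real need/use/after only exist inside a script run by openrc-run):

```shell
# Stub implementations of OpenRC's dependency keywords, for illustration
# only -- the real ones are provided by OpenRC at runtime.
need()  { echo "hard dep on: $*"; }  # service fails if these cannot start
use()   { echo "soft dep on: $*"; }  # start first *if* in some runlevel
after() { echo "order after: $*"; }  # pure ordering, no dependency

# localmount's actual depend() body, driven through the stubs:
depend() {
    need fsck
    use lvm modules mtab
    after lvm modules
}
depend
```

So "use lvm" plus "after lvm" means: if lvm is scheduled to start at all, start it before localmount, but never fail localmount just because lvm is absent.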

I actually have a similar setup and did not need to add the lines to rc.conf.
All I did was do what I was told:
Add lvm to the boot runlevel.
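Under the hood, a runlevel is just a directory of symlinks into /etc/init.d, which is also why `find /etc/runlevels` is a reliable way to audit the assignments. A throwaway demonstration of what `rc-update add lvm boot` does, using a temporary tree instead of the real system paths (on a real box, use rc-update itself):

```shell
# Runlevels are directories of symlinks to init scripts. Build a mock
# tree under a temp dir so this is safe to run anywhere.
tmp=$(mktemp -d)
mkdir -p "$tmp/runlevels/boot" "$tmp/init.d"
touch "$tmp/init.d/lvm"

# Equivalent of `rc-update add lvm boot` on the mock tree:
ln -s "$tmp/init.d/lvm" "$tmp/runlevels/boot/lvm"

boot_services=$(ls "$tmp/runlevels/boot")
echo "$boot_services"   # prints: lvm
rm -r "$tmp"
```

Removing a service from a runlevel (`rc-update del`) is just deleting that symlink again.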

Can you remove the lines from rc.conf, ensure lvm is in the boot
runlevel (and not in any other, such as default), and then let us know
if you still get the error during reboot?

If it all goes by too fast, can you press I during boot to enter
interactive mode, and then let us know:
1) Which starts first, lvm or localmount?
2) What error messages you see for any of the services?

Kind regards,

Joost Roeleveld