** Description changed:

  Binary package hint: mountall
  
- my fstab is at the bottom with blkid output
+ my fstab is at the bottom with blkid output, screenshots follow. Ignore
+ comment #2 as I have now cleaned up the details here in the main
+ description.
  
- One or more of four of my partitions (/dev/mapper/raid-home, /dev/mapper
- /raid-virt, /export/media and /export/torrent) regularly (almost every
- boot) and in no particular order (seems to be random as to which one
- appears first) fail to mount.
+ One or more of four of my partitions, residing on two separate physical
+ disks, regularly (almost every boot) fail to mount following their
+ clean fsck because the device is reported "busy".  Screenshots and
+ dmesg output are attached to the comments below.  If /home is involved
+ (almost always) I have to reboot until it successfully fscks and
+ mounts, usually within four CTRL-ALT-DELs.
  
- It seems that in almost every case (after observing some recent booting
- without quiet and splash) that they only get stuck when one or more is
- not clean (i.e. recovering journal), which seems to be frequently.
- (probably as a result of another minor issue whereby something is not
- unmounted just prior to reboot possibly to do ordering of mdadm and lvm2
- shutdown sequence) .
+ The partitions involved include both ext3 and ext4 filesystems, both
+ LVM+RAID and raw disks, and both UUID- and device-node-based fstab
+ entries, as shown in the blkid and fstab output below.
  
- Two of these four partitions, one of which is home, are ext3 and sit on
- LVM2 on top of MD-RAID1 (/dev/md1). Two other partitions are local
- direct ext4 mounts (/dev/sdb1 on /export/media and /dev/sdb7 on
- /export/torrent)
+ It seems that whenever fsck runs a full filesystem check (the splash
+ screen shows the check's % completion) there is no problem and I land
+ in gdm without issue.
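+ 
+ (A rough sketch of how that full check can be forced on the next boot,
+ assuming mountall still honours the /forcefsck flag file:)
+ 
+   # request a full fsck of all filesystems at the next boot
+   sudo touch /forcefsck
+   sudo reboot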
  
- It seems whenever a fsck engages a disk check (splash shows %completion
- on check) there is not a problem and we land in gdm.  Likewise whenever
- there are no unclean partitions there are no problems reported and we
- land in gdm.
+ If there is a problem, I am prompted with the S (skip) or M (manual
+ recovery) options.  If /home does not fail (which seems rare) I can
+ work around it, but usually I am rebooting once or twice just to access
+ my system, which is a pain and wastes a lot of time.
  
- If there is problem, I am prompted for the S or M options.  Here are my 
typical workaround scenarios:
- 1) It is not /home, so I just pick [S] (one or more times) and proceed to gdm 
login and fix other mounts manually after login
- 2) If  /home fails I can press M, and then CTRL-D, mountall re-executes, and 
if no other partitions remain unclean, I can proceed to gdm login.   
- 3) Since the journal has "recovered" at the point the mount fails, generally 
I can press CTRL-ATL-DEL and the subsequent reboot works .. sometimes I have to 
do this once, twice and rarely three times to address multiple unclean 
filesystems though it seems most times all journals have recovered by the time 
the mount fails
+ 1) If /home does not fail to mount, I can skip the failed mount(s) by
+ pressing [S] (one or more times), proceed to the gdm login, and then
+ fix the skipped mounts manually from the command line after logging in
+ (see the commands sketched after this list).
+ 2) If /home fails (most of the time) I am stuck and have to reboot. In
+ this case I have tried pressing M for a console and then immediately
+ CTRL-D to exit, and noted that mountall re-executes - unfortunately
+ this somehow leaves /dev/shm with bad permissions, so it is not a
+ solution.
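+ 
+ (For reference, a rough sketch of the manual clean-up in scenario 1,
+ assuming only the non-/home filesystems from the fstab below were
+ skipped:)
+ 
+   # mount the skipped filesystems using their /etc/fstab entries
+   sudo mount /export/media
+   sudo mount /export/torrent
+   sudo mount /export/virt
+   # or simply mount everything listed in fstab that is not yet mounted
+   sudo mount -a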
  
- With quiet and splash disabled, I am able to see that the order of
- events seems to be that between init-bottom and mdadm starting,  I get a
- failure from fsck.ext3 or fsck.ext4 that "Device or resource busy trying
- to open <mount>" .. "mounted or opened exclusively by another program"
- and above typically the "recovering journal" entries for the same mount.
+ The specific error shown in the screenshots attached below is a failure
+ from mountall that says "Device or resource busy trying to open
+ <mount>" ... "mounted or opened exclusively by another program",
+ printed just below clean fsck lines for the very same mount.
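+ 
+ (If it helps triage, a rough sketch of the checks I can run from the
+ recovery console to see what is holding a device open, assuming the
+ affected device is /dev/mapper/raid-home:)
+ 
+   # list any processes that currently have the device open
+   sudo fuser -v /dev/mapper/raid-home
+   sudo lsof /dev/mapper/raid-home
+   # check whether device-mapper still reports an open reference
+   sudo dmsetup info raid-home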
+ 
+ This is a regression: Intrepid and Jaunty worked fine, and the failure
+ has gone from tolerably rare in Lucid to happening at every boot with
+ Maverick.
  
  -- lsb_release output --
  Description:  Ubuntu 10.10
  Release:      10.10
  
  -- apt policy output --
  mountall:
-   Installed: 2.19
-   Candidate: 2.19
-   Version table:
-  *** 2.19 0
-         500 http://ubuntu.mirror.rafal.ca/ubuntu/ maverick/main amd64 Packages
-         100 /var/lib/dpkg/status
+   Installed: 2.19
+   Candidate: 2.19
+   Version table:
+  *** 2.19 0
+         500 http://ubuntu.mirror.rafal.ca/ubuntu/ maverick/main amd64 Packages
+         100 /var/lib/dpkg/status
  
  -- fstab_contents --
  # /etc/fstab: static file system information.
  #
  # <file system> <mount point>   <type>  <options>       <dump>  <pass>
  proc            /proc           proc    defaults        0       0
  # /dev/sda1
  UUID=52543d1c-6080-4cdf-80e0-30b9af3748f7 /               ext3    
relatime,errors=remount-ro 0       1
  # /dev/sda2
  UUID=dba2bb63-7104-4bd7-b8d9-9e2250f131f7 none            swap    sw          
    0       0
  /dev/scd0       /media/cdrom0   udf,iso9660 user,noauto,exec,utf8 0       0
  /dev/mapper/raid-home /home   ext3    
auto,defaults,nosuid,acl,errors=remount-ro      0       2
- UUID=194d9e4a-faa5-4111-9a83-b6d31a4c28e8 /export/media ext4  
auto,defaults,nosuid,acl,errors=remount-ro      0       2       
- UUID=2bfff6f9-7991-4252-8f84-1370786f1650 /export/torrent ext4        
auto,defaults,nosuid,acl,errors=remount-ro      0       2       
+ UUID=194d9e4a-faa5-4111-9a83-b6d31a4c28e8 /export/media ext4  
auto,defaults,nosuid,acl,errors=remount-ro      0       2
+ UUID=2bfff6f9-7991-4252-8f84-1370786f1650 /export/torrent ext4        
auto,defaults,nosuid,acl,errors=remount-ro      0       2
  /dev/mapper/raid-virt         /export/virt            ext3    
auto,defaults,user      0       2
  192.168.79.10:/home                   /ahome  nfs4    
noauto,defaults,rsize=16384,wsize=16384,hard,intr,async
  
  -- blkid_output --
- /dev/sda1: UUID="52543d1c-6080-4cdf-80e0-30b9af3748f7" TYPE="ext3" 
- /dev/sda5: UUID="eeeb6708-d108-0847-57e9-714c01b7dbc8" 
TYPE="linux_raid_member" 
- /dev/sda6: UUID="dba2bb63-7104-4bd7-b8d9-9e2250f131f7" TYPE="swap" 
- /dev/sdb1: LABEL="Pictures" UUID="194d9e4a-faa5-4111-9a83-b6d31a4c28e8" 
TYPE="ext4" 
- /dev/sdb5: UUID="b19c6bdc-419a-4c69-98b3-3e8315900bd2" TYPE="ext4" 
- /dev/sdb6: UUID="c91ae53f-2b62-4ece-829a-eea0fbc7f2c2" TYPE="swap" 
- /dev/sdb7: UUID="2bfff6f9-7991-4252-8f84-1370786f1650" TYPE="ext4" 
- /dev/md1: UUID="lXDbNX-fv0w-DYmV-tAtO-wWjr-DpXj-Yltuk3" TYPE="LVM2_member" 
- /dev/mapper/raid-home: UUID="f0413a2b-ff74-4d44-a8d1-b142762b3c71" 
TYPE="ext3" 
- /dev/mapper/raid-virt: UUID="7142bf7d-bc0e-4bea-b0d6-8d776032a737" 
TYPE="ext3" 
+ /dev/sda1: UUID="52543d1c-6080-4cdf-80e0-30b9af3748f7" TYPE="ext3"
+ /dev/sda5: UUID="eeeb6708-d108-0847-57e9-714c01b7dbc8" 
TYPE="linux_raid_member"
+ /dev/sda6: UUID="dba2bb63-7104-4bd7-b8d9-9e2250f131f7" TYPE="swap"
+ /dev/sdb1: LABEL="Pictures" UUID="194d9e4a-faa5-4111-9a83-b6d31a4c28e8" 
TYPE="ext4"
+ /dev/sdb5: UUID="b19c6bdc-419a-4c69-98b3-3e8315900bd2" TYPE="ext4"
+ /dev/sdb6: UUID="c91ae53f-2b62-4ece-829a-eea0fbc7f2c2" TYPE="swap"
+ /dev/sdb7: UUID="2bfff6f9-7991-4252-8f84-1370786f1650" TYPE="ext4"
+ /dev/md1: UUID="lXDbNX-fv0w-DYmV-tAtO-wWjr-DpXj-Yltuk3" TYPE="LVM2_member"
+ /dev/mapper/raid-home: UUID="f0413a2b-ff74-4d44-a8d1-b142762b3c71" TYPE="ext3"
+ /dev/mapper/raid-virt: UUID="7142bf7d-bc0e-4bea-b0d6-8d776032a737" TYPE="ext3"
  
- my problem first appeared in lucid, and got much worse in maverick; it
- went from once in a while to almost every boot. Karmic and early lucid
- (I believe) seemed to work fine. This may have more to do with the
- frequency of cleanly unmounted filesystems then mountall not working at
- some point, so I am hesitant to put any value on these observations.
- 
- Finally, as result of using my #2 workaround I have possibly tripped on two 
other bugs I have not been able to properly diagnose, review other related bugs 
for and/or report on. 
-  a) Multiple CTRL-D attempts seems to result in a hang, which can happen if I 
get caught on a second failed mount after executing #2 once
-  b) when I sucessfully use workaround #2 on /home mount failures,  I end up 
with /dev/shm with bad permissions resulting in google-chrome telling me to fix 
them with chmod 777 which works for that session
  
  ProblemType: Bug
  DistroRelease: Ubuntu 10.10
  Package: mountall 2.19
  ProcVersionSignature: Ubuntu 2.6.35-22.34-generic 2.6.35.4
  Uname: Linux 2.6.35-22-generic x86_64
  NonfreeKernelModules: fglrx
  Architecture: amd64
  Date: Tue Oct 12 15:24:53 2010
  ProcEnviron:
-  LANG=en_US.utf8
-  SHELL=/bin/bash
+  LANG=en_US.utf8
+  SHELL=/bin/bash
  SourcePackage: mountall

** Summary changed:

- mountall fails in race with fsck on boot
+ mountall fails in apparent race with fsck on boot

-- 
mountall fails in apparent race with fsck on boot
https://bugs.launchpad.net/bugs/659492
