In my case, Ubuntu 18.04 running on AWS, the problem was as follows:

1. /etc/fstab had 0 in the 6th (passno) field.  I presume this is the default 
that AWS places there, or maybe it is determined by the Ubuntu template used on 
AWS.
2. According to a comment that I found on Stack Overflow, this field is used by 
mkinitramfs to decide not to put fsck in the initrd.  This is *in no way* 
documented.
3. Without fsck in the initrd, the boot process can't run the check, so it just 
prints a message instead.  This message tells you that an fsck is recommended, 
but not why one hasn't been done.
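A way to confirm this, assuming the standard Ubuntu initrd path and that initramfs-tools' lsinitramfs tool is installed, is to list the initrd's contents and look for fsck:

```shell
# No matching output here means mkinitramfs left fsck (and the fsck.ext4
# helper) out of the initrd built for the running kernel.
lsinitramfs "/boot/initrd.img-$(uname -r)" | grep fsck
```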

To fix this:
1. I changed /etc/fstab to have a "1" in the passno field for the root volume.
2. I ran update-initramfs -u.
3. I rebooted, and the check ran as expected.
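Concretely, the change and commands were along these lines (the LABEL shown is the one Ubuntu cloud images typically use; adjust to match your own fstab):

```shell
# /etc/fstab root entry, before (passno -- the 6th field -- is 0):
#   LABEL=cloudimg-rootfs   /   ext4   defaults   0 0
# and after (passno set to 1, so the root filesystem gets checked):
#   LABEL=cloudimg-rootfs   /   ext4   defaults   0 1

sudo update-initramfs -u   # rebuild the initrd; mkinitramfs now includes fsck
sudo reboot                # the check runs during the next early boot
```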

I think the following changes should be made:

1. There should be a console warning during startup if the root volume cannot 
be checked because fsck is not available.
2. If /etc/fstab is coming from an Ubuntu-controlled template, then passno for 
the root volume should be set to 1.  I know that AWS will fail to boot the 
instance if the fsck goes interactive, necessitating manual repairs, but surely 
failing to boot the instance is better than silently running on a corrupt root 
filesystem.
3. This should be documented somewhere.  The fact that mkinitramfs will 
exclude fsck from the initrd if it thinks you don't need it is particularly 
obscure.  If there is some kind of "getting started with Ubuntu on AWS" guide, 
this should be in it.
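In the meantime, a quick way for admins to spot affected machines is to look for a root entry whose passno field is 0.  A minimal sketch (field layout per fstab(5); check_fstab is a hypothetical helper name):

```shell
# Flag fstab root entries whose passno (6th field) is 0 or missing -- such
# entries cause mkinitramfs to omit fsck from the initrd.
check_fstab() {
  awk '$1 !~ /^#/ && $2 == "/" && ($6 == 0 || NF < 6) {
         print "root passno is 0:", $0; exit 1
       }' "$1"
}

# Example against a sample file with the problematic default:
printf 'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n' > /tmp/fstab.sample
check_fstab /tmp/fstab.sample || echo "needs fixing"
```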

You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.

  fsck not running at all on reboot

Status in systemd package in Ubuntu:

Bug description:
  After upgrading some servers from 16.04 to 18.04 I'm met with a MOTD
  that says:

  *** /dev/xvda1 should be checked for errors ***

  I added "fsck.mode=force" to GRUB_CMDLINE_LINUX_DEFAULT in
  /etc/default/grub before running 'sudo update-grub'. I also verified
  that fsck was present in /boot/grub/grub.cfg afterwards, before
  rebooting the server.
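  For reference, the edited line in /etc/default/grub would look something
  like this (any other options already on the line are elided here):

```shell
GRUB_CMDLINE_LINUX_DEFAULT="fsck.mode=force"
```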

  Surprisingly, I was met with the same error message in the MOTD when
  logging in. I then ran tune2fs to see when the last check was
  performed, and this is the output:

  $ sudo tune2fs -l /dev/xvda1  | grep checked
  Last checked:             Wed Sep 12 16:17:00 2018

  I then tried to set the maximum mount count to 1 with tune2fs -c 1
  /dev/xvda1, but after another reboot I was still met with the same
  error message and the timestamp for the last check was unchanged.

  I have the same problem on all the servers that were upgraded from
  16.04 to 18.04, but also on a new server installed directly with
  18.04.
  It should be mentioned that the servers are AWS EC2 instances, so I
  have no way of trying to run fsck from a liveusb.

  Another user has reported the same issue here:

  I'm not quite sure what information is needed, but attached is some
  basic information about the system.

  Please let me know if you need any other outputs or logs.

  Edit1: I tested fsck.mode=force on my laptop running Ubuntu 18.04.1
  LTS (Xubuntu) and it works fine. It seems to be related to the server
  installs.