> J. R. Okajima:
   > Unfortunately I don't understand what you want to say.
   > Do you want to say that 325192 - 325188 = 4 blocks are gone? If so, I'd
   > suggest you to try more tests and
   > - confirm those 4 blocks are never re-used
   > - every time you create/delete a file some blocks are gone
   I can apparently confirm both statements. The problem was discovered only
   because those 4 blocks are never reused. As I mentioned, I have a
   completely diskless system. One day I found a bunch of messages in my log
   file:
   Apr 14 03:40:01 plc-comm kernel: [3212661.041853] aufs au_xino_do_write:428:ln[15772]: I/O Error, write failed (-28)
   Apr 14 03:40:01 plc-comm kernel: [3212661.041861] aufs au_xino_write:464:ln[15772]: I/O Error, write failed (-5)
   Apr 14 03:40:01 plc-comm kernel: [3212661.045447] aufs au_xino_do_write:428:find[15773]: I/O Error, write failed (-28)
   Apr 14 03:40:01 plc-comm kernel: [3212661.045455] aufs au_xino_write:464:find[15773]: I/O Error, write failed (-5)
   Apr 14 03:40:01 plc-comm kernel: [3212661.048293] aufs au_xino_do_write:428:find[15774]: I/O Error, write failed (-28)
   Apr 14 03:40:01 plc-comm kernel: [3212661.048301] aufs au_xino_write:464:find[15774]: I/O Error, write failed (-5)
   Apr 14 03:40:01 plc-comm kernel: [3212661.051121] aufs au_xino_do_write:428:find[15775]: I/O Error, write failed (-28)
   Apr 14 03:40:01 plc-comm kernel: [3212661.051128] aufs au_xino_write:464:find[15775]: I/O Error, write failed (-5)
   Apr 14 03:40:01 plc-comm kernel: [3212661.054155] aufs au_xino_do_write:428:find[15776]: I/O Error, write failed (-28)
   Apr 14 03:40:01 plc-comm kernel: [3212661.054162] aufs au_xino_write:464:find[15776]: I/O Error, write failed (-5)
   I investigated and found that the system could not write to the /var/spool
   folder. The df command reported the filesystem as 100% used, but du showed:
   plc-comm ~ # du /var/spool/
   0       /var/spool/cron/lastrun
   0       /var/spool/cron/crontabs
   0       /var/spool/cron
   0       /var/spool/postfix/trace
   0       /var/spool/postfix/saved
   4       /var/spool/postfix/pid
   0       /var/spool/postfix/public
   0       /var/spool/postfix/maildrop
   0       /var/spool/postfix/private
   0       /var/spool/postfix/incoming
   0       /var/spool/postfix/hold
   0       /var/spool/postfix/flush
   0       /var/spool/postfix/deferred
   0       /var/spool/postfix/defer
   0       /var/spool/postfix/corrupt
   0       /var/spool/postfix/bounce
   0       /var/spool/postfix/active
   4       /var/spool/postfix
   0       /var/spool/mail
   4       /var/spool/
   So the filesystem is 100% used, but I could not find what was consuming
   the space.
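   The df/du mismatch can be captured side by side with something like this
   (a minimal sketch assuming a POSIX shell; the name df_vs_du is mine, and
   du's -x flag keeps it on one filesystem):

```shell
# Compare what df says is used against what du can account for.
# A large gap between the two numbers is the signature of this leak.
df_vs_du() {
    mnt=$1
    used_kb=$(df -kP "$mnt" | awk 'NR==2 {print $3}')   # df: blocks in use
    du_kb=$(du -skx "$mnt" 2>/dev/null | awk '{print $1}')  # du: visible files
    echo "df used: ${used_kb} KB, du total: ${du_kb} KB"
}

# e.g.: df_vs_du /var/spool
```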
   I began looking at what happens on /var/spool and found
   /usr/sbin/run-crons, which is run every ten minutes as per /etc/crontab:
   */10 * * * *    root    test -x /usr/sbin/run-crons && /usr/sbin/run-crons
   This job creates and deletes some temporary files under /var/spool, so
   there is file creation/deletion activity there every ten minutes.
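   For anyone who wants to probe the leak directly, one create/delete cycle
   can be measured with a tiny helper (a sketch assuming a POSIX shell and
   df -P; the function name leak_probe is my own, not part of any tool):

```shell
# Measure free space on a mount before and after one create/delete cycle.
# On the affected aufs mount (/var/spool here), "after" keeps shrinking
# across repeated runs; on a healthy filesystem it stays constant.
leak_probe() {
    mnt=$1
    before=$(df -kP "$mnt" | awk 'NR==2 {print $4}')
    f="$mnt/leak-probe.$$"
    touch "$f" && rm -f "$f"       # one create/delete cycle
    sync
    after=$(df -kP "$mnt" | awk 'NR==2 {print $4}')
    echo "free before: ${before} KB, after: ${after} KB"
}

# e.g.: leak_probe /var/spool    # run repeatedly and compare
```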
   I started to monitor the free space on /var/spool, and this is how it
   looks:
   [1]https://drive.google.com/file/d/0B2U4oGBm8rEeQVNUVG04aDYxUTg/edit?usp=sharing
   It can be seen that the amount of free space steadily decreases every 10
   minutes.
   I will mention it again: disk usage gradually increases to 100%, but with
   du I cannot see what consumes the space.
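   The sampling behind that graph can be done with a simple loop; here is a
   sketch (the function name and its parameters are my own choices):

```shell
# Print N samples of the free space (in KB) on a mount, INTERVAL seconds
# apart. Redirect the output to a file to build a time series like the
# one in the linked graph.
monitor_free() {
    mnt=$1; n=$2; interval=$3
    i=0
    while [ "$i" -lt "$n" ]; do
        printf '%s %s KB free\n' "$(date '+%F %T')" \
            "$(df -kP "$mnt" | awk 'NR==2 {print $4}')"
        i=$((i + 1))
        if [ "$i" -lt "$n" ]; then
            sleep "$interval"
        fi
    done
}

# e.g.: monitor_free /var/spool 144 600 >> /root/spool-free.log
```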
   > > And there was even solution found:
   > >
   > > To mount aufs partition with "trunc_xino" option
   >
   > For tmpfs, it is enabled automatically.
   I am sorry if I am wrong, but I do not see this option being enabled
   automatically:
   computer ~ # mount
   /dev/loop0 on / type squashfs (ro,relatime)
   etc-aufs on /etc type aufs (rw,relatime,si=23716b7a49752dbb)
   var-aufs on /var/log type aufs (rw,relatime,si=23716b7a49754dbb)
   varlib-aufs on /var/lib type aufs (rw,relatime,si=23716b7a46bbedbb)
   varspool-aufs on /var/spool type aufs (rw,relatime,si=23716b7a46bb8dbb)
   home-aufs on /home/user type aufs (rw,relatime,si=23716b7a46bbadbb)
   Mounting is done in the initrd like this:
   # Create tmpfs filesystems in RAM.
   mount -n -t tmpfs etc-tmpfs /aufs/etc -o size=32M
   mount -n -t tmpfs var-tmpfs /aufs/var/log -o size=32M
   mount -n -t tmpfs varlib-tmpfs /aufs/var/lib -o size=16M
   mount -n -t tmpfs varspool-tmpfs /aufs/var/spool -o size=16M
   mount -n -t tmpfs home-tmpfs /aufs/home/user -o size=2500M
   # Combine filesystems with aufs.
   echo "Mount aufs"
   mount -n -t aufs etc-aufs /mnt/root/etc -o dirs=/aufs/etc=rw:/mnt/root/etc=ro
   mount -n -t aufs var-aufs /mnt/root/var/log -o dirs=/aufs/var/log=rw:/mnt/root/var/log=ro
   mount -n -t aufs varlib-aufs /mnt/root/var/lib -o dirs=/aufs/var/lib=rw:/mnt/root/var/lib=ro
   mount -n -t aufs varspool-aufs /mnt/root/var/spool -o dirs=/aufs/var/spool=rw:/mnt/root/var/spool=ro
   mount -n -t aufs home-aufs /mnt/root/home/user -o dirs=/aufs/home/user=rw:/mnt/root/home/user=ro
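   For reference, if enabling the option explicitly turns out to be the right
   fix, I assume each aufs mount line would simply carry it in the option
   string, e.g. for /var/spool (an untested sketch based only on the option
   name; please correct me if the syntax differs):

```shell
# Same mount as above, with trunc_xino added to the option string so the
# xino file is truncated instead of growing without bound (assumption).
mount -n -t aufs varspool-aufs /mnt/root/var/spool \
      -o trunc_xino,dirs=/aufs/var/spool=rw:/mnt/root/var/spool=ro
```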
   Please correct me if I am wrong somewhere.
   I just want to find the correct solution for this situation. Should the
   "trunc_xino" mount option handle this? One question from my first message
   that I did not get an answer to was:
   I would like to ask for clarification: is "trunc_xino" the official
   solution to use, or should there be a patch or something else that fixes
   the root of the problem?
   As you can see from the output of my mount command, this option is not
   enabled. Should I go ahead and enable it? Is it safe?
   --
   Use GNU/Linux

References

   1. https://drive.google.com/file/d/0B2U4oGBm8rEeQVNUVG04aDYxUTg/edit?usp=sharing