Hi,

thanks for your quick response again :)

On Wed, Aug 28, 2013 at 3:28 PM,  <sf...@users.sourceforge.net> wrote:
> Steffen Dettmer:
>> - but the last-resort-emergency deletion may fail because "rm"
>>   fails with an I/O error on aufs
>> - we are looking for a way to ensure that deleting from full file
>>   systems works
>
> As the last-resort emergency deletion, you can remove the file bypassing
> aufs, e.g. "rm /rw/fileA" instead of "rm /fileA".
> But you should pay attention to several things.
> - if a process still has fileA open, then the disk space for fileA is
>   not freed until the process closes fileA (especially tmpfiles)

Yes, this is also a point we have to consider...
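
(For reference, something like this should list the processes that
still hold deleted-but-open files on the writable branch, assuming
lsof is installed:

    lsof +L1 /rw    # files with link count 0: deleted but still open

Until those processes close the files or get restarted, the space is
not freed.)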

> - bypassing aufs will confuse aufs, since aufs has some info
>   cached about fileA. But you can discard the obsolete cache with
>   "mount -o remount /".

Removing the file in /rw followed by this remount sounds very
promising; I'll try that, thanks a lot!
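
Roughly what I plan to script for the emergency case (paths as in
your example, untested so far):

    rm /rw/fileA          # delete directly on the writable branch
    mount -o remount /    # discard aufs' now-stale cache for fileA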

>> Just in case this idea is not ridiculous:
>> some file systems, like ext4, reserve some memory to be available
>> to root only. Could aufs reserve some memory to be available to
>> XINO files only? This could create a safety margin. Of course the
>         :::
>
> I don't think it's ridiculous.
> But aufs doesn't have a backend block device. It just refers to
> another mounted fs (or a dir). It is the "branch" fs which holds the
> block device. So aufs cannot reserve any space on it.
> In other words, if tmpfs has some reserved space, then aufs will
> simply follow it.

Couldn't aufs create XINO files with, let's say, 1% unused space at
the end? Then, if the file system is full, that 1% of XINO space would
still be available (assuming 1% is sufficient to support file
deletion).
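
As a rough userspace analogy of what I mean (hypothetical; aufs would
have to do something like this internally, and the path, the variable
and the 1% figure are only placeholders):

    # create the XINO file with 1% slack preallocated, so updates can
    # still be written after the branch fs reports "full"
    fallocate -l $((expected_xino_size * 101 / 100)) /rw/.aufs.xino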

>> aufs                             505M  456K  505M   1% /
>> /dev/sda1                        935M  532M  356M  60% /ro
>> aufs-tmpfs                       505M  456K  505M   1% /rw
>
> When the problem happened, what did these free spaces look like? Did
> the several large files eat up all 505MB? And were there still enough
> free inodes (df -i)?

I saw:
- one huge file consuming 99%, with normal logs making up the rest
- 10-and-a-half 50 MB files
- many 1-3 MB gzipped log files
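
(Next time it happens I'll also capture the inode side, e.g.

    df -i /rw    # free inodes on the writable branch

so we can rule out inode exhaustion.)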

> If the cause is really several large files, then such separation will
> not solve the problem. I'd suggest you try the "direct deletion",
> e.g. bypassing aufs, as described above.

Thank you, I'll try that!

Best regards,
Steffen
