>> Hello,
>>
>> I decided to test the 'defragment' feature on my system (after a huge
>> number of system updates and prelinking):
>>
>>   find /bin /sbin /lib /usr/lib /usr/bin /usr/sbin -type d \
>>       -exec btrfs filesystem defragment '{}' '+'
>>
>> I had already defragmented a couple of (very large) directories with no
>> errors at all, so I expected this to work. Surprisingly, this time there
>> were thousands of messages like this:
>>
>>   ioctl failed on <directory name> ret -1 errno 28
>
> errno 28 is ENOSPC. You've run out of disk space. (Or at least, btrfs
> thinks so.)
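A flood of error lines like the one above can be tallied per errno value with a little awk. The sketch below feeds itself a single stand-in sample line so it runs anywhere; in practice you would pipe the stderr of the find/defragment run into the awk stage instead.

```shell
# Tally "ioctl failed" lines per errno value. The printf line is a
# stand-in sample; replace it with the real stderr of the defrag run.
printf 'ioctl failed on /some/directory ret -1 errno 28\n' |
awk '/^ioctl failed/ { count[$NF]++ }
     END { for (e in count) printf "errno %s: %d path(s)\n", e, count[e] }'
```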
Pleased to hear that this is not a fatal error. :-)
The filesystem still has quite a lot of free space. New files can be created. I
have just tried to add about 10 GB of data, which worked fine. The output from
'df' indicates that only 71% of the partition (177 GB out of 250 GB) is used.
The "built-in" df shows similar numbers -- if I understand it correctly,
there is plenty of free space left:
# btrfs filesystem df /
Data: total=175.01GB, used=169.57GB
Metadata: total=6.51GB, used=3.64GB
System: total=12.00MB, used=32.00KB
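A quick back-of-the-envelope check of chunk utilization in that output (figures hard-coded from the 'btrfs filesystem df' lines above; a sketch, not a parser):

```shell
awk 'BEGIN {
    # Figures copied from the "btrfs filesystem df /" output above, in GB.
    data_total = 175.01; data_used = 169.57
    meta_total = 6.51;   meta_used = 3.64
    printf "data chunks:     %.1f%% full\n", 100 * data_used / data_total
    printf "metadata chunks: %.1f%% full\n", 100 * meta_used / meta_total
}'
```

So the already-allocated data chunks are close to 97% full, even though the partition as a whole is only 71% used.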
# btrfs filesystem show octopus
failed to read /dev/sdb
failed to read /dev/sr0
Label: 'octopus' uuid: 8576b57b-b934-424e-9a8a-04abc780c963
Total devices 1 FS bytes used 173.21GB
devid 1 size 249.50GB used 188.04GB path /dev/dm-2
Btrfs Btrfs v0.19
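The same arithmetic against the 'btrfs filesystem show' line: the gap between the device size and the chunk-allocated ("used") figure is the space btrfs can still carve new chunks from. Again hard-coded figures, just a sketch:

```shell
awk 'BEGIN {
    dev_size  = 249.50   # "devid 1 size", GB
    allocated = 188.04   # "devid 1 used", i.e. chunk-allocated, GB
    printf "unallocated: %.2f GB\n", dev_size - allocated
}'
```

With roughly 61 GB still unallocated, the ENOSPC presumably refers to something other than raw device space.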
Does defragmentation have any unexpected (and not yet documented) free space
requirements? (Most of the files I was attempting to defragment were smaller
than 10 MB, as the directory names suggest.)
Is there a workaround for this issue? Or should I just leave the
defragmentation feature alone for the time being?
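One defensive way to keep experimenting (a sketch, untested here) is to defragment file by file, so a single failing ioctl is logged and skipped instead of aborting the batch. TARGET defaults to an empty temporary directory so the sketch is safe to dry-run; point it at a real tree to use it.

```shell
# Per-file defragment loop (sketch). Failures go to a log for later
# inspection rather than flooding the terminal.
TARGET=${TARGET:-$(mktemp -d)}     # default: empty dir, safe dry run
LOG=${LOG:-/tmp/defrag-errors.log}
: > "$LOG"
find "$TARGET" -xdev -type f | while IFS= read -r f; do
    # '|| true' keeps the loop going past an ENOSPC on one file.
    btrfs filesystem defragment "$f" 2>>"$LOG" || true
done
echo "defragment pass over $TARGET done; failures (if any) in $LOG"
```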
Andrej
