My apologies, you're right.  I tested here with -m 25 to make it clearer, 
using a 100MB virtual block device filled first to 50MB and then to 90MB.

# truncate -s 100000000 test.img
# mke2fs -i 262144 -m 25 test.img
# mount -o loop test.img /mnt
# dd if=/dev/urandom bs=1000000 count=50 of=/mnt/test.dat
# df -k /mnt
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/loop8         97572 48908     24252  67% /mnt
# dd if=/dev/urandom bs=1000000 count=40 of=/mnt/test2.dat
# df -k /mnt
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/loop8         97572 88016         0 100% /mnt
# umount /mnt
# rm test.img

The "Available" and "Use%" columns do take the reserved space into account.
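As a rough sanity check using the numbers from the df output above (97572K 
total, 48908K used, with the 25% reservation from "-m 25"):

```shell
# 25% of the 97572K filesystem is reserved for root:
echo $(( 97572 * 25 / 100 ))        # 24393K reserved
# size - used - reserved roughly matches the reported "Available":
echo $(( 97572 - 48908 - 24393 ))   # 24271K, vs 24252K shown by df
```

The small remaining difference is filesystem overhead and rounding.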

It would have been a bit clearer if you'd used "df -k" instead of "df -h" in 
your original post, since the "humanized" values have very low resolution - 
but it still shows roughly 1.0 GiB of total free space, compared to 518 MiB 
of available space.
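If your df is GNU coreutils, you can also print the exact 1K-block counts 
directly; "/mnt" here is just an example mount point - the gap between 
(size - used) and avail is the root-reserved space plus filesystem overhead:

```shell
# Print only the columns of interest, in exact 1K blocks
df -k --output=size,used,avail /mnt
```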

On Saturday, 10 September 2022 at 01:40:44 UTC+1 [email protected] wrote:

> I thought I'd include some pictures as well, showing the snapshots failing 
> around 405 MiB free disk space:
>
> Prometheus metrics query showing ~405 MiB remaining
> [image: avail_bytes.PNG]
>
> df also showing ~405 MiB remaining
> [image: avail_bytes_df.PNG]
>
> Calling snapshot endpoint from another Docker container, showing no space 
> left on device:
> [image: no_space.jpg]
> On Friday, September 9, 2022 at 5:02:11 PM UTC-7 Flav T wrote:
>
>> Hi Brian,
>>
>> Thank you for the reply, I did some investigating as you suggested:
>>
>> My partition is using ext4 filesystem, and with `tune2fs` I confirmed 
>> that 5% free space is reserved. However, my understanding of the `df` 
>> output is that it is not counting this reserved disk space under 
>> "available". I confirmed this by running the suggested prometheus metrics 
>> as well: "node_filesystem_avail_bytes" gives me 518 MiB as well for the 
>> above, which correlates to the "available" space according to `df`. 
>> Accordingly, "node_filesystem_free_bytes" shows a larger remaining disk 
>> space, as would be expected since this includes reserved space as well.
>>
>> So according to this, it seems to me prometheus is indeed seeing 518 MiB 
>> remaining, and yet for values just a bit lower than this (i.e. below 500 
>> MiB), I start getting the 'no space left on device' error.
>>
>> On Friday, September 9, 2022 at 2:24:17 AM UTC-7 Brian Candler wrote:
>>
>>> Oh and I should add, the other place to look is prometheus metrics :-)
>>>
>>> node_exporter reports both "node_filesystem_avail_bytes" and 
>>> "node_filesystem_free_bytes".  The former excludes space reserved for root, 
>>> so you'll see that hit zero sooner, and that'll be when prometheus thinks 
>>> the disk is "full".
>>>
>>> You can of course graph:
>>> node_filesystem_avail_bytes / node_filesystem_size_bytes
>>> node_filesystem_free_bytes / node_filesystem_size_bytes
>>>
>>> On Friday, 9 September 2022 at 10:05:48 UTC+1 Brian Candler wrote:
>>>
>>>> Which filesystem are you using on the docker host?
>>>>
>>>> If it's ext4: many systems by default configure it to reserve a minimum 
>>>> of 5% free space (i.e. space that only 'root' can use).
>>>>
>>>> Check with:
>>>> tune2fs -l /dev/sda1   # or whatever device your root partition is on
>>>> and look at the ratio of "Reserved block count" to "Block count".  e.g. 
>>>> on one system I have here, I see
>>>>
>>>> Block count:              5242880
>>>> Reserved block count:     262144
>>>>
>>>> 262144/5242880 = 0.05
>>>>
>>>> It can be changed with the -m option to tune2fs.
>>>>
>>>> If it's btrfs: there's a whole can of worms around what constitutes 
>>>> "free space" in btrfs :-)
>>>>
>>>> But looking at your figures, where you're at 94% full, I think it's 
>>>> most likely you're hitting the ext4 reserved blocks limit.
>>>>
>>>> On Friday, 9 September 2022 at 08:36:07 UTC+1 [email protected] 
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm running a Prometheus docker container, and I've run into an issue 
>>>>> regarding disk space. Despite having hundreds of MB of free disk space, 
>>>>> if 
>>>>> I attempt to call the API snapshot endpoint `curl -XPOST 
>>>>> http://localhost:9090/api/v1/admin/tsdb/snapshot` I receive an error 
>>>>> message: 
>>>>>
>>>>> `create snapshot: snapshot head block: populate block: write chunks: 
>>>>> preallocate: no space left on device`
>>>>>
>>>>> Output of `df -h` inside my prometheus container:
>>>>> [image: space_left.PNG]
>>>>> With around 518 MB free, I am able to call the snapshot endpoint. 
>>>>> However, with less free space than this, the snapshot endpoint returns 
>>>>> the 
>>>>> error regarding no space left on device.
>>>>>
>>>>> size of my /data folder:
>>>>> [image: data_folder.PNG]
>>>>>
>>>>> I see from this thread 
>>>>> <https://github.com/prometheus/prometheus/issues/8406> that 
>>>>> potentially hundreds of MB of free disk space are required to take a snapshot. I 
>>>>> am a little surprised that (from what it looks like) at least 500 MB are 
>>>>> required to take a single snapshot. Is this expected/intended behaviour 
>>>>> of 
>>>>> prometheus, or could there be something on my end (perhaps docker 
>>>>> related) 
>>>>> that is contributing to this issue?
>>>>>
>>>>> Thank you!
>>>>>
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/46ce8ed6-65e0-4138-9208-f678d70c978an%40googlegroups.com.