Re: 14GB of space lost after distro upgrade on BTRFS root partition (long thread with logs)

2018-08-28 Thread Menion
OK, I have removed the snapshots and the expected free space is back, thank you!
As a side note: apt-btrfs-snapshot was not installed, but it is
present in the Ubuntu repository and I have used it (and I like the
idea of automatic snapshots during upgrades).
This means that do-release-upgrade does its own thing on BTRFS,
silently, which I believe is not good from a usability perspective;
just google it, there is no mention of this behaviour.
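For reference, a minimal sketch of the cleanup commands (assuming the Ubuntu @-style subvolume layout and the /dev/mmcblk0p3 device shown in the quoted output below; adjust the snapshot names to match the list output):

sudo btrfs subvolume list -a /                # find the @apt-snapshot-* subvolumes
sudo mount -o subvolid=5 /dev/mmcblk0p3 /mnt  # mount the top-level subvolume so the @ paths resolve
sudo btrfs subvolume delete /mnt/@apt-snapshot-release-upgrade-bionic-2018-08-27_15:29:55
sudo umount /mnt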
On Tue, Aug 28, 2018 at 19:07 Austin S. Hemmelgarn wrote:
>
> On 2018-08-28 12:05, Noah Massey wrote:
> > On Tue, Aug 28, 2018 at 11:47 AM Austin S. Hemmelgarn
> >  wrote:
> >>
> >> On 2018-08-28 11:27, Noah Massey wrote:
> >>> On Tue, Aug 28, 2018 at 10:59 AM Menion  wrote:
> >>>>
> >>>> [sudo] password for menion:
> >>>> ID  gen top level   path
> >>>> --  --- -   
> >>>> 257 600627  5   /@
> >>>> 258 600626  5   /@home
> >>>> 296 599489  5
> >>>> /@apt-snapshot-release-upgrade-bionic-2018-08-27_15:29:55
> >>>> 297 599489  5
> >>>> /@apt-snapshot-release-upgrade-bionic-2018-08-27_15:30:08
> >>>> 298 599489  5
> >>>> /@apt-snapshot-release-upgrade-bionic-2018-08-27_15:33:30
> >>>>
> >>>> So, there are snapshots, right? The time stamp is when I have launched
> >>>> do-release-upgrade, but it didn't ask anything about snapshot, neither
> >>>> I asked for it.
> >>>
> >>> This is an Ubuntu thing
> >>> `apt show apt-btrfs-snapshot`
> >>> which "will create a btrfs snapshot of the root filesystem each time
> >>> that apt installs/removes/upgrades a software package."
> >> Not Ubuntu, Debian.  It's just that Ubuntu installs and configures the
> >> package by default, while Debian does not.
> >
> > Ubuntu also maintains the package, and I did not find it in Debian 
> > repositories.
> > I think it's also worth mentioning that these snapshots were created
> > by the do-release-upgrade script using the package directly, not as a
> > result of the apt configuration. Meaning if you do not want a snapshot
> > taken prior to upgrade, you have to remove the apt-btrfs-snapshot
> > package prior to running the upgrade script. You cannot just update
> > /etc/apt/apt.conf.d/80-btrfs-snapshot
> Hmm... I could have sworn that it was in the Debian repositories.
>
> That said, it's kind of stupid that the snapshot is not trivially
> optional for a release upgrade.  Yes, that's where it's arguably the
> most important, but it's still kind of stupid to have to remove a
> package to get rid of that behavior and then reinstall it again afterwards.
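In practice, the only workaround this thread identifies is temporarily removing the package around the upgrade; a sketch:

sudo apt remove apt-btrfs-snapshot   # disable the pre-upgrade snapshot
sudo do-release-upgrade
sudo apt install apt-btrfs-snapshot  # put it back afterwards if the per-apt-run snapshots are wanted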


Re: 14GB of space lost after distro upgrade on BTRFS root partition (long thread with logs)

2018-08-28 Thread Menion
[sudo] password for menion:
ID  gen top level   path
--  --- -   
257 600627  5   /@
258 600626  5   /@home
296 599489  5
/@apt-snapshot-release-upgrade-bionic-2018-08-27_15:29:55
297 599489  5
/@apt-snapshot-release-upgrade-bionic-2018-08-27_15:30:08
298 599489  5
/@apt-snapshot-release-upgrade-bionic-2018-08-27_15:33:30

So, there are snapshots, right? The timestamps match when I launched
do-release-upgrade, but it didn't ask anything about snapshots, nor did
I ask for them.
During the do-release-upgrade I ran into some issues due to the (very)
bad behaviour of the script in a remote terminal; I then fixed
everything manually, and the filesystem is now operational on Bionic.
If this is confirmed, how can I remove the unwanted snapshots while
keeping the current "visible" filesystem contents?
Sorry, I am still learning BTRFS and I would like to avoid mistakes.
Bye
On Tue, Aug 28, 2018 at 15:47 Chris Murphy wrote:
>
> On Tue, Aug 28, 2018 at 3:34 AM, Menion  wrote:
> > Hi all
> > I have run a distro upgrade on my Ubuntu 16.04 that runs ppa kernel
> > 4.17.2 with btrfsprogs 4.17.0
> > The root filesystem is BTRFS single created by the Ubuntu Xenial
> > installer (so on kernel 4.4.0) on an internal mmc, located in
> > /dev/mmcblk0p3
> > After the upgrade I have cleaned apt cache and checked the free space,
> > the results were odd, following some checks (shrinked), followed by
> > more comments:
>
> Do you know if you're using Timeshift? I'm not sure if it's enabled by
> default on Ubuntu when using Btrfs, but you may have snapshots.
>
> 'sudo btrfs sub list -at /'
>
> That should show all subvolumes (includes snapshots).
>
>
>
> > [48479.254106] BTRFS info (device mmcblk0p3): 17 enospc errors during 
> > balance
>
> Probably soft enospc errors it was able to work around.
>
>
> --
> Chris Murphy


Re: 14GB of space lost after distro upgrade on BTRFS root partition (long thread with logs)

2018-08-28 Thread Menion
OK, thanks for your reply.
This is a root FS; how can I defragment it?
If I try to run it I get this output:

menion@Menionubuntu:~$ sudo btrfs filesystem defragment -r /
ERROR: defrag failed on /bin/bash: Text file busy
ERROR: defrag failed on /bin/dash: Text file busy
ERROR: defrag failed on /bin/btrfs: Text file busy
ERROR: defrag failed on /lib/systemd/systemd: Text file busy
ERROR: defrag failed on /lib/systemd/systemd-journald: Text file busy
ERROR: defrag failed on /lib/systemd/systemd-logind: Text file busy
ERROR: defrag failed on /lib/systemd/systemd-resolved: Text file busy
ERROR: defrag failed on /lib/systemd/systemd-timesyncd: Text file busy
ERROR: defrag failed on /lib/systemd/systemd-udevd: Text file busy
ERROR: defrag failed on /lib/x86_64-linux-gnu/ld-2.27.so: Text file busy
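The "Text file busy" (ETXTBSY) failures appear to be limited to binaries that are currently executing; one possible workaround sketch is to defragment the data-heavy subtrees instead of the whole root (the paths below are just examples):

# Defragment only the data-heavy directories; the running executables that
# produced the ETXTBSY errors above are simply left alone.
sudo btrfs filesystem defragment -r -v /home /var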

Bye
On Tue, Aug 28, 2018 at 13:54 Qu Wenruo wrote:
>
>
>
> On 2018/8/28 5:34 PM, Menion wrote:
> > Hi all
> > I have run a distro upgrade on my Ubuntu 16.04 that runs ppa kernel
> > 4.17.2 with btrfsprogs 4.17.0
> > The root filesystem is BTRFS single created by the Ubuntu Xenial
> > installer (so on kernel 4.4.0) on an internal mmc, located in
> > /dev/mmcblk0p3
> > After the upgrade I have cleaned apt cache and checked the free space,
> > the results were odd, following some checks (shrinked), followed by
> > more comments:
> >
> > root@Menionubuntu:/home/menion# df -h
> > Filesystem  Size  Used Avail Use% Mounted on
> > .......
> > /dev/mmcblk0p3   28G   24G  2.7G  90% /
> >
> > root@Menionubuntu:/home/menion# btrfs fi usage /usr
> > Overall:
> > Device size:  27.07GiB
> > Device allocated: 25.28GiB
> > Device unallocated:1.79GiB
> > Device missing:  0.00B
> > Used: 23.88GiB
> > Free (estimated):  2.69GiB  (min: 2.69GiB)
> > Data ratio:   1.00
> > Metadata ratio:   1.00
> > Global reserve:   72.94MiB  (used: 0.00B)
> >
> > Data,single: Size:24.00GiB, Used:23.10GiB
> >/dev/mmcblk0p3 24.00GiB
> >
> > Metadata,single: Size:1.25GiB, Used:801.97MiB
> >/dev/mmcblk0p3      1.25GiB
> >
> > System,single: Size:32.00MiB, Used:16.00KiB
> >/dev/mmcblk0p3 32.00MiB
> >
> > Unallocated:
> >/dev/mmcblk0p3  1.79GiB
> >
> > root@Menionubuntu:/home/menion# btrfs fi df /mnt
> > Data, single: total=24.00GiB, used=23.10GiB
> > System, single: total=32.00MiB, used=16.00KiB
> > Metadata, single: total=1.25GiB, used=801.92MiB
> > GlobalReserve, single: total=72.89MiB, used=0.00B
> >
> > The different ways to check the free space are coherent, but if I
> > check the directories usage on root, surprise:
> >
> > root@Menionubuntu:/home/menion# du -x -s -h /*
> > 17M /bin
> > 189M/boot
> > 36K /dead.letter
> > 0   /dev
> > 18M /etc
> > 6.1G/home
> > 4.0K/initrd.img
> > 4.0K/initrd.img.old
> > 791M/lib
> > 8.3M/lib64
> > 0   /media
> > 4.0K/mnt
> > 0   /opt
> > du: cannot access '/proc/24660/task/24660/fd/3': No such file or directory
> > du: cannot access '/proc/24660/task/24660/fdinfo/3': No such file or 
> > directory
> > du: cannot access '/proc/24660/fd/3': No such file or directory
> > du: cannot access '/proc/24660/fdinfo/3': No such file or directory
> > 0   /proc
> > 2.9M/root
> > 2.9M/run
> > 17M /sbin
> > 4.0K/snap
> > 0   /srv
> > 0   /sys
> > 0   /tmp
> > 6.1G/usr
> > 2.0G/var
> > 4.0K/vmlinuz
> > 4.0K/vmlinuz.old
> > 4.0K/webmin-setup.out
> >
> > The computed usage is 15Gb which is what I expected, so there are 9Gb
> > lost somewhere.
> > I have run scrub and then full balance with:
>
> I think this is related to btrfs CoW and extent booking.
>
> One simple example would be:
>
> xfs_io -f -c "pwrite 0 128k" -c "sync" -c "pwrite 0 64K" \
> /mnt/btrfs/file1
>
> The result "/mnt/btrfs/file1" will only be sized 128K in du, but its
> on-disk usage is 128K + 64K.
>
> The first 128K is the data written by the first "pwrite" command; it
> resulted in a full 128K extent on disk.
> Then the 2nd pwrite command created a new 64K extent, which is the
> default data CoW behavior.
> The first half of the original 128K extent is no longer referenced, but
> since the second half still is, the whole 128K extent is kept on disk
> until all references to it are gone.
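A way to observe the effect Qu describes (a sketch; assumes a scratch btrfs filesystem mounted at /mnt/btrfs and write access to it):

xfs_io -f -c "pwrite 0 128k" -c "sync" /mnt/btrfs/file1
sudo btrfs filesystem df /mnt/btrfs    # note the Data "used" value
xfs_io -c "pwrite 0 64k" -c "sync" /mnt/btrfs/file1
sync
sudo btrfs filesystem df /mnt/btrfs    # Data "used" grows by ~64K even though du still reports 128K
du -h /mnt/btrfs/file1
# Rewriting the file in one piece (e.g. a defragment) drops the references to
# the old extents, so the space should become reclaimable after the next commit:
sudo btrfs filesystem defragment /mnt/btrfs/file1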

14GB of space lost after distro upgrade on BTRFS root partition (long thread with logs)

2018-08-28 Thread Menion
Hi all
I have run a distro upgrade on my Ubuntu 16.04, which runs a PPA kernel
4.17.2 with btrfs-progs 4.17.0.
The root filesystem is BTRFS single, created by the Ubuntu Xenial
installer (so on kernel 4.4.0) on an internal MMC, located at
/dev/mmcblk0p3.
After the upgrade I cleaned the apt cache and checked the free space;
the results were odd. Below are some checks (trimmed), followed by
more comments:

root@Menionubuntu:/home/menion# df -h
Filesystem  Size  Used Avail Use% Mounted on
...
/dev/mmcblk0p3   28G   24G  2.7G  90% /

root@Menionubuntu:/home/menion# btrfs fi usage /usr
Overall:
Device size:  27.07GiB
Device allocated: 25.28GiB
Device unallocated:1.79GiB
Device missing:  0.00B
Used: 23.88GiB
Free (estimated):  2.69GiB  (min: 2.69GiB)
Data ratio:   1.00
Metadata ratio:   1.00
Global reserve:   72.94MiB  (used: 0.00B)

Data,single: Size:24.00GiB, Used:23.10GiB
   /dev/mmcblk0p3 24.00GiB

Metadata,single: Size:1.25GiB, Used:801.97MiB
   /dev/mmcblk0p3  1.25GiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/mmcblk0p3 32.00MiB

Unallocated:
   /dev/mmcblk0p3  1.79GiB

root@Menionubuntu:/home/menion# btrfs fi df /mnt
Data, single: total=24.00GiB, used=23.10GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.25GiB, used=801.92MiB
GlobalReserve, single: total=72.89MiB, used=0.00B

The different ways to check the free space are coherent, but if I
check the directories usage on root, surprise:

root@Menionubuntu:/home/menion# du -x -s -h /*
17M /bin
189M/boot
36K /dead.letter
0   /dev
18M /etc
6.1G/home
4.0K/initrd.img
4.0K/initrd.img.old
791M/lib
8.3M/lib64
0   /media
4.0K/mnt
0   /opt
du: cannot access '/proc/24660/task/24660/fd/3': No such file or directory
du: cannot access '/proc/24660/task/24660/fdinfo/3': No such file or directory
du: cannot access '/proc/24660/fd/3': No such file or directory
du: cannot access '/proc/24660/fdinfo/3': No such file or directory
0   /proc
2.9M/root
2.9M/run
17M /sbin
4.0K/snap
0   /srv
0   /sys
0   /tmp
6.1G/usr
2.0G/var
4.0K/vmlinuz
4.0K/vmlinuz.old
4.0K/webmin-setup.out

The computed usage is 15GB, which is what I expected, so there are 9GB
lost somewhere.
I have run a scrub and then a full balance with:

btrfs scrub start /
btrfs balance start /
The balance freed 100MB of space; it was running in the background, so I
checked dmesg once "btrfs balance status" reported it as completed.
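(As an aside, a usage-filtered balance is usually enough to compact mostly-empty chunks and is much cheaper than a full balance; a sketch, with arbitrary thresholds:)

sudo btrfs balance start -dusage=50 -musage=30 /   # only relocate chunks below the given usage
sudo btrfs balance status /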

dmesg of balance:

[47264.250141] BTRFS info (device mmcblk0p3): relocating block group
37154193408 flags system
[47264.592082] BTRFS info (device mmcblk0p3): relocating block group
36046897152 flags data
[47271.499809] BTRFS info (device mmcblk0p3): found 73 extents
[47272.329921] BTRFS info (device mmcblk0p3): found 60 extents
[47272.471059] BTRFS info (device mmcblk0p3): relocating block group
35778461696 flags metadata
[47280.530041] BTRFS info (device mmcblk0p3): found 3199 extents
[47280.735667] BTRFS info (device mmcblk0p3): relocating block group
34704719872 flags data
[47301.460523] BTRFS info (device mmcblk0p3): relocating block group
37221302272 flags data
[47306.038404] BTRFS info (device mmcblk0p3): found 5 extents
[47306.481371] BTRFS info (device mmcblk0p3): found 5 extents
[47306.673135] BTRFS info (device mmcblk0p3): relocating block group
37187747840 flags system
[47306.874874] BTRFS info (device mmcblk0p3): found 1 extents
[47307.073288] BTRFS info (device mmcblk0p3): relocating block group
34704719872 flags data
[47371.059074] BTRFS info (device mmcblk0p3): found 16258 extents
[47388.191208] BTRFS info (device mmcblk0p3): found 16094 extents
[47388.985462] BTRFS info (device mmcblk0p3): relocating block group
31215058944 flags metadata
[47439.164167] BTRFS info (device mmcblk0p3): found 7378 extents
[47440.163793] BTRFS info (device mmcblk0p3): relocating block group
30141317120 flags data
[47593.239048] BTRFS info (device mmcblk0p3): found 15636 extents
[47618.389357] BTRFS info (device mmcblk0p3): found 15634 extents
[47620.020122] BTRFS info (device mmcblk0p3): relocating block group
29012000768 flags data
[47637.708444] BTRFS info (device mmcblk0p3): found 1154 extents
[47639.757342] BTRFS info (device mmcblk0p3): found 1154 extents
[47640.375483] BTRFS info (device mmcblk0p3): relocating block group
27938258944 flags data
[47743.312441] BTRFS info (device mmcblk0p3): found 17009 extents
[47756.928461] BTRFS info (device mmcblk0p3): found 17005 extents
[47757.607346] BTRFS info (device mmcblk0p3): relocating block group
9416212480 flags metadata
[47825.819449] BTRFS info (device mmcblk0p3): found 11503 extents
[47826.465926] BTRFS info (device mmcblk0p3): 

rm or mv of directories (with subdirectories) hang with no message

2018-08-24 Thread Menion
Hi all
I have been experiencing an issue that I believe can be quite easily
reproduced by attempting to recursively rm (mv is also affected) a
directory containing some subdirectories and files.
The problem affects my system, which runs a root BTRFS filesystem created
by the Ubuntu server installer (so kernel 4.4.0 at the time of creation)
on an eMMC, and a storage RAID5 array (5x8TB HDD) created with kernel
4.15.x.
I have been observing this issue since kernel 4.15; I now run kernel
4.17.3 and have hit it again.
The issue pops up when you try to "rm -Rf" a directory that contains
some subdirectories with some files inside.
I have intentionally written "some" because I don't know how many
subdirectories are necessary to reproduce the problem, but don't
expect to hit it with just a couple of subdirectories and a few dozen
files inside.
You launch "rm -Rf" and after an insane amount of time (half an hour
or more) the process is still running. You can CTRL+Z and kill it. If
you start to "rm -Rf" one by one the subdirectories, starting from
inner one, you can remove all the way to the upper directory, meaning
that the filesystem is ok.
There is absolutely ZERO log in dmesg, meaning no log from neither
BTRFS, SCSI or USB (in my case the RAID5 array it is an external USB
enclosure).
Note that it affect a single BTRFS on eMMC and a BTRFS RAID5 on USB
enclosure, so it really seem being BTRFS releated.
Steps to reproduce (as it happened just now):
Clone CoreELEC (a multimedia JeOS) and compile it:

git clone https://github.com/CoreELEC/CoreELEC.git
cd CoreELEC
git checkout tags/8.95.0
PROJECT=Amlogic DEVICE=S912 ARCH=arm make image

Leave it running for 2-3 hours so it can download and decompress
packages, building up levels of subdirectories and files. Then kill
the compilation and try to remove the top directory: "rm -Rf
CoreELEC" (or mv it to another medium).
Result: it should take a few minutes, but after half an hour the rm
process is still running.
Bye
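A sketch for capturing what the hung rm is actually doing, in case it helps diagnosis (assumes root and SysRq enabled; none of this is from the original report):

pgrep -a rm                               # find the stuck rm
sudo cat /proc/$(pgrep -n rm)/stack       # dump its kernel stack
echo w | sudo tee /proc/sysrq-trigger     # log all blocked (D-state) tasks to dmesg
sudo dmesg | tail -n 100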


Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-17 Thread Menion
OK, but I cannot guarantee that I won't need to cancel the scrub during the process.
As said, this is domestic storage, and when a scrub is running the
performance hit is big enough to prevent smooth streaming of HD and 4K
movies.
On Thu, Aug 16, 2018 at 21:38  wrote:
>
> Could you show scrub status -d, then start a new scrub (all drives) and show 
> scrub status -d again? This may help us diagnose the problem.
>
> On 15-Aug-2018 09:27:40 +0200, men...@gmail.com wrote:
> > I needed to resume scrub two times after an unclear shutdown (I was
> > cooking and using too much electricity) and two times after a manual
> > cancel, because I wanted to watch a 4k movie and the array
> > performances were not enough with scrub active.
> > Each time I resumed it, I checked also the status, and the total
> > number of data scrubbed was keep counting (never started from zero)
> > On Wed, Aug 15, 2018 at 05:33 Zygo Blaxell wrote:
> > >
> > > On Tue, Aug 14, 2018 at 09:32:51AM +0200, Menion wrote:
> > > > Hi
> > > > Well, I think it is worth to give more details on the array.
> > > > the array is built with 5x8TB HDD in an esternal USB3.0 to SATAIII 
> > > > enclosure
> > > > The enclosure is a cheap JMicron based chinese stuff (from Orico).
> > > > There is one USB3.0 link for all the 5 HDD with a SATAIII 3.0Gb
> > > > multiplexer behind it. So you cannot expect peak performance, which is
> > > > not the goal of this array (domestic data storage).
> > > > Also the USB to SATA firmware is buggy, so UAS operations are not
> > > > stable, it run in BOT mode.
> > > > Having said so, the scrub has been started (and resumed) on the array
> > > > mount point:
> > > >
> > > > sudo btrfs scrub start(resume) /media/storage/das1
> > >
> > > So is 2.59TB the amount scrubbed _since resume_? If you run a complete
> > > scrub end to end without cancelling or rebooting in between, what is
> > > the size on all disks (btrfs scrub status -d)?
> > >
> > > > even if reading the documentation I understand that it is the same
> > > > invoking it on mountpoint or one of the HDD in the array.
> > > > In the end, especially for a RAID5 array, does it really make sense to
> > > > scrub only one disk in the array???
> > >
> > > You would set up a shell for-loop and scrub each disk of the array
> > > in turn. Each scrub would correct errors on a single device.
> > >
> > > There was a bug in btrfs scrub where scrubbing the filesystem would
> > > create one thread for each disk, and the threads would issue commands
> > > to all disks and compete with each other for IO, resulting in terrible
> > > performance on most non-SSD hardware. By scrubbing disks one at a time,
> > > there are no competing threads, so the scrub runs many times faster.
> > > With this bug the total time to scrub all disks individually is usually
> > > less than the time to scrub the entire filesystem at once, especially
> > > on HDD (and even if it's not faster, one-at-a-time disk scrubs are
> > > much kinder to any other process trying to use the filesystem at the
> > > same time).
> > >
> > > It appears this bug is not fixed, based on some timing results I am
> > > getting from a test array. iostat shows 10x more reads than writes on
> > > all disks even when all blocks on one disk are corrupted and the scrub
> > > is given only a single disk to process (that should result in roughly
> > > equal reads on all disks slightly above the number of writes on the
> > > corrupted disk).
> > >
> > > This is where my earlier caveat about performance comes from. Many parts
> > > of btrfs raid5 are somewhere between slower and *much* slower than
> > > comparable software raid5 implementations. Some of that is by design:
> > > btrfs must be at least 1% slower than mdadm because btrfs needs to read
> > > metadata to verify data block csums in scrub, and the difference would
> > > be much larger in practice due to HDD seek times, but 500%-900% overhead
> > > still seems high especially when compared to btrfs raid1 that has the
> > > same metadata csum reading issue without the huge performance gap.
> > >
> > > It seems like btrfs raid5 could still use a thorough profiling to figure
> > > out where it's spending all its IO.
> > >
> > > > Regarding the data usage, here you have the current figures:
>

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-15 Thread Menion
I needed to resume the scrub two times after an unclean shutdown (I was
cooking and using too much electricity) and two times after a manual
cancel, because I wanted to watch a 4K movie and the array
performance was not enough with the scrub active.
Each time I resumed it, I also checked the status, and the total
amount of data scrubbed kept counting up (it never started from zero).
On Wed, Aug 15, 2018 at 05:33 Zygo Blaxell wrote:
>
> On Tue, Aug 14, 2018 at 09:32:51AM +0200, Menion wrote:
> > Hi
> > Well, I think it is worth to give more details on the array.
> > the array is built with 5x8TB HDD in an esternal USB3.0 to SATAIII enclosure
> > The enclosure is a cheap JMicron based chinese stuff (from Orico).
> > There is one USB3.0 link for all the 5 HDD with a SATAIII 3.0Gb
> > multiplexer behind it. So you cannot expect peak performance, which is
> > not the goal of this array (domestic data storage).
> > Also the USB to SATA firmware is buggy, so UAS operations are not
> > stable, it run in BOT mode.
> > Having said so, the scrub has been started (and resumed) on the array
> > mount point:
> >
> > sudo btrfs scrub start(resume) /media/storage/das1
>
> So is 2.59TB the amount scrubbed _since resume_?  If you run a complete
> scrub end to end without cancelling or rebooting in between, what is
> the size on all disks (btrfs scrub status -d)?
>
> > even if reading the documentation I understand that it is the same
> > invoking it on mountpoint or one of the HDD in the array.
> > In the end, especially for a RAID5 array, does it really make sense to
> > scrub only one disk in the array???
>
> You would set up a shell for-loop and scrub each disk of the array
> in turn.  Each scrub would correct errors on a single device.
>
> There was a bug in btrfs scrub where scrubbing the filesystem would
> create one thread for each disk, and the threads would issue commands
> to all disks and compete with each other for IO, resulting in terrible
> performance on most non-SSD hardware.  By scrubbing disks one at a time,
> there are no competing threads, so the scrub runs many times faster.
> With this bug the total time to scrub all disks individually is usually
> less than the time to scrub the entire filesystem at once, especially
> on HDD (and even if it's not faster, one-at-a-time disk scrubs are
> much kinder to any other process trying to use the filesystem at the
> same time).
>
> It appears this bug is not fixed, based on some timing results I am
> getting from a test array.  iostat shows 10x more reads than writes on
> all disks even when all blocks on one disk are corrupted and the scrub
> is given only a single disk to process (that should result in roughly
> equal reads on all disks slightly above the number of writes on the
> corrupted disk).
>
> This is where my earlier caveat about performance comes from.  Many parts
> of btrfs raid5 are somewhere between slower and *much* slower than
> comparable software raid5 implementations.  Some of that is by design:
> btrfs must be at least 1% slower than mdadm because btrfs needs to read
> metadata to verify data block csums in scrub, and the difference would
> be much larger in practice due to HDD seek times, but 500%-900% overhead
> still seems high especially when compared to btrfs raid1 that has the
> same metadata csum reading issue without the huge performance gap.
>
> It seems like btrfs raid5 could still use a thorough profiling to figure
> out where it's spending all its IO.
>
> > Regarding the data usage, here you have the current figures:
> >
> > menion@Menionubuntu:~$ sudo btrfs fi show
> > [sudo] password for menion:
> > Label: none  uuid: 6db4baf7-fda8-41ac-a6ad-1ca7b083430f
> > Total devices 1 FS bytes used 11.44GiB
> > devid1 size 27.07GiB used 18.07GiB path /dev/mmcblk0p3
> >
> > Label: none  uuid: 931d40c6-7cd7-46f3-a4bf-61f3a53844bc
> > Total devices 5 FS bytes used 6.57TiB
> > devid1 size 7.28TiB used 1.64TiB path /dev/sda
> > devid2 size 7.28TiB used 1.64TiB path /dev/sdb
> > devid3 size 7.28TiB used 1.64TiB path /dev/sdc
> > devid4 size 7.28TiB used 1.64TiB path /dev/sdd
> > devid    5 size 7.28TiB used 1.64TiB path /dev/sde
> >
> > menion@Menionubuntu:~$ sudo btrfs fi df /media/storage/das1
> > Data, RAID5: total=6.57TiB, used=6.56TiB
> > System, RAID5: total=12.75MiB, used=416.00KiB
> > Metadata, RAID5: total=9.00GiB, used=8.16GiB
> > GlobalReserve, single: total=512.00MiB, used=0.00B
> > menion@Menionubuntu:~$ sudo btrfs fi usage /media/storage/das1
> > WARNING: RAID56 detected, not implemented
> > WARNING: RAID56 detected, not implemented
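For reference, the per-device scrub loop Zygo describes above might look like this on this array (a sketch; the device paths are the ones from the fi show output above):

for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    sudo btrfs scrub start -B "$dev"    # -B: stay in the foreground, one device at a time
    sudo btrfs scrub status "$dev"
done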

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-14 Thread Menion
Hi
Well, I think it is worth giving more details on the array.
The array is built with 5x8TB HDDs in an external USB3.0 to SATAIII enclosure.
The enclosure is cheap JMicron-based Chinese stuff (from Orico).
There is one USB3.0 link for all 5 HDDs with a SATAIII 3.0Gb
multiplexer behind it. So you cannot expect peak performance, which is
not the goal of this array (domestic data storage).
Also the USB-to-SATA firmware is buggy, so UAS operations are not
stable; it runs in BOT mode.
Having said that, the scrub has been started (and resumed) on the array
mount point:

sudo btrfs scrub start(resume) /media/storage/das1

even if, from reading the documentation, I understand that it is the same
whether you invoke it on the mount point or on one of the HDDs in the array.
In the end, especially for a RAID5 array, does it really make sense to
scrub only one disk of the array?
Regarding the data usage, here are the current figures:

menion@Menionubuntu:~$ sudo btrfs fi show
[sudo] password for menion:
Label: none  uuid: 6db4baf7-fda8-41ac-a6ad-1ca7b083430f
Total devices 1 FS bytes used 11.44GiB
devid1 size 27.07GiB used 18.07GiB path /dev/mmcblk0p3

Label: none  uuid: 931d40c6-7cd7-46f3-a4bf-61f3a53844bc
Total devices 5 FS bytes used 6.57TiB
devid1 size 7.28TiB used 1.64TiB path /dev/sda
devid2 size 7.28TiB used 1.64TiB path /dev/sdb
devid3 size 7.28TiB used 1.64TiB path /dev/sdc
devid4 size 7.28TiB used 1.64TiB path /dev/sdd
devid5 size 7.28TiB used 1.64TiB path /dev/sde

menion@Menionubuntu:~$ sudo btrfs fi df /media/storage/das1
Data, RAID5: total=6.57TiB, used=6.56TiB
System, RAID5: total=12.75MiB, used=416.00KiB
Metadata, RAID5: total=9.00GiB, used=8.16GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
menion@Menionubuntu:~$ sudo btrfs fi usage /media/storage/das1
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Overall:
Device size:   36.39TiB
Device allocated:  0.00B
Device unallocated:   36.39TiB
Device missing:  0.00B
Used:  0.00B
Free (estimated):  0.00B (min: 8.00EiB)
Data ratio:   0.00
Metadata ratio:   0.00
Global reserve: 512.00MiB (used: 32.00KiB)

Data,RAID5: Size:6.57TiB, Used:6.56TiB
   /dev/sda1.64TiB
   /dev/sdb1.64TiB
   /dev/sdc1.64TiB
   /dev/sdd1.64TiB
   /dev/sde1.64TiB

Metadata,RAID5: Size:9.00GiB, Used:8.16GiB
   /dev/sda2.25GiB
   /dev/sdb2.25GiB
   /dev/sdc2.25GiB
   /dev/sdd2.25GiB
   /dev/sde2.25GiB

System,RAID5: Size:12.75MiB, Used:416.00KiB
   /dev/sda3.19MiB
   /dev/sdb3.19MiB
   /dev/sdc3.19MiB
   /dev/sdd3.19MiB
   /dev/sde3.19MiB

Unallocated:
   /dev/sda5.63TiB
   /dev/sdb5.63TiB
   /dev/sdc5.63TiB
   /dev/sdd5.63TiB
   /dev/sde5.63TiB
menion@Menionubuntu:~$
menion@Menionubuntu:~$ sf -h
The program 'sf' is currently not installed. You can install it by typing:
sudo apt install ruby-sprite-factory
menion@Menionubuntu:~$ df -h
Filesystem  Size  Used Avail Use% Mounted on
udev934M 0  934M   0% /dev
tmpfs   193M   22M  171M  12% /run
/dev/mmcblk0p3   28G   12G   15G  44% /
tmpfs   962M 0  962M   0% /dev/shm
tmpfs   5,0M 0  5,0M   0% /run/lock
tmpfs   962M 0  962M   0% /sys/fs/cgroup
/dev/mmcblk0p1  188M  3,4M  184M   2% /boot/efi
/dev/mmcblk0p3   28G   12G   15G  44% /home
/dev/sda 37T  6,6T   29T  19% /media/storage/das1
tmpfs   193M 0  193M   0% /run/user/1000
menion@Menionubuntu:~$ btrfs --version
btrfs-progs v4.17

So I don't fully understand where the scrub data size comes from
On Mon, Aug 13, 2018 at 23:56  wrote:
>
> A running time of 55:06:35 indicates that the counter is right; it is not
> enough time to scrub the entire array on HDDs.
>
> 2TiB might be right if you only scrubbed one disc, "sudo btrfs scrub start 
> /dev/sdx1" only scrubs the selected partition,
> whereas "sudo btrfs scrub start /media/storage/das1" scrubs the actual array.
>
> Use "sudo btrfs scrub status -d " to view per disc scrubbing statistics and 
> post the output.
> For live statistics, use "sudo watch -n 1".
>
> By the way:
> 0 errors despite multiple unclean shutdowns? I assumed that the write hole 
> would corrupt parity the first time around; was I wrong?
>
> On 13-Aug-2018 09:20:36 +0200, men...@gmail.com wrote:
> > Hi
> > I have a BTRFS RAID5 array built on 5x8TB HDD filled with, well :),
> > there are contradicting opinions by the, well, "several" ways to check
> > the used space on a BTRFS RAID5 array, but I should be aroud 8TB of
> > data.
> > This array is running on kernel 4.17.3 and it definitely experienced
> > power loss while data was being written.
> > I can say that it went through at least a dozen unclean shutdowns.

Re: List of known BTRFS Raid 5/6 Bugs?

2018-08-13 Thread Menion
Hi
I have a BTRFS RAID5 array built on 5x8TB HDDs filled with, well :),
there are contradicting opinions among the, well, "several" ways to check
the used space on a BTRFS RAID5 array, but it should be around 8TB of
data.
This array is running on kernel 4.17.3 and it has definitely experienced
power loss while data was being written.
I can say that it went through at least a dozen unclean shutdowns.
So, following this thread, I started my first scrub on the array, and
this is the outcome (after having resumed it 4 times, two after a
power loss...):

menion@Menionubuntu:~$ sudo btrfs scrub status /media/storage/das1/
scrub status for 931d40c6-7cd7-46f3-a4bf-61f3a53844bc
scrub resumed at Sun Aug 12 18:43:31 2018 and finished after 55:06:35
total bytes scrubbed: 2.59TiB with 0 errors

So, there are 0 errors, but I don't understand why it says 2.59TiB of
scrubbed data. Is it possible that this value is also crap, like the
non-zero counters for RAID5 arrays?
On Sat, Aug 11, 2018 at 17:29 Zygo Blaxell wrote:
>
> On Sat, Aug 11, 2018 at 08:27:04AM +0200, erentheti...@mail.de wrote:
> > I guess that covers most topics, two last questions:
> >
> > Will the write hole behave differently on Raid 6 compared to Raid 5 ?
>
> Not really.  It changes the probability distribution (you get an extra
> chance to recover using a parity block in some cases), but there are
> still cases where data gets lost that didn't need to be.
>
> > Is there any benefit of running Raid 5 Metadata compared to Raid 1 ?
>
> There may be benefits of raid5 metadata, but they are small compared to
> the risks.
>
> In some configurations it may not be possible to allocate the last
> gigabyte of space.  raid1 will allocate 1GB chunks from 2 disks at a
> time while raid5 will allocate 1GB chunks from N disks at a time, and if
> N is an odd number there could be one chunk left over in the array that
> is unusable.  Most users will find this irrelevant because a large disk
> array that is filled to the last GB will become quite slow due to long
> free space search and seek times--you really want to keep usage below 95%,
> maybe 98% at most, and that means the last GB will never be needed.
>
> Reading raid5 metadata could theoretically be faster than raid1, but that
> depends on a lot of variables, so you can't assume it as a rule of thumb.
>
> Raid6 metadata is more interesting because it's the only currently
> supported way to get 2-disk failure tolerance in btrfs.  Unfortunately
> that benefit is rather limited due to the write hole bug.
>
> There are patches floating around that implement multi-disk raid1 (i.e. 3
> or 4 mirror copies instead of just 2).  This would be much better for
> metadata than raid6--more flexible, more robust, and my guess is that
> it will be faster as well (no need for RMW updates or journal seeks).
>


Optimal maintenance for RAID5 array

2018-04-27 Thread Menion
Hi all
I am running a RAID5 array built on 5x8TB HDDs. The filesystem usage is
approximately 6TB now.
I run kernel 4.16.5 and btrfs-progs 4.16 (planning to upgrade to
4.16.1) under Ubuntu Xenial.
I am not sure what is the best/safest way to maintain the array, in
particular which scrub (or other) strategy is best to apply.
The filesystem is 95% used as storage, meaning that few files are
moved around or deleted.
Bye
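For what it's worth, a possible baseline sketch along the lines discussed in the scrub threads elsewhere in this collection (assumptions: a periodic, e.g. monthly, cadence and the /media/storage/das1 mount point used in those messages; not an authoritative recommendation):

sudo btrfs scrub start -B /media/storage/das1             # periodic checksum verification
sudo btrfs balance start -dusage=25 /media/storage/das1   # compact mostly-empty data chunks
sudo btrfs device stats /media/storage/das1               # check the per-device error counters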


Re: Unable to compile btrfs progs 4.16 on ubuntu Xenial

2018-04-08 Thread Menion
OK, that was missing; python3-setuptools is also required.
I think it would be worth adding the package dependencies to the wiki.
Bye
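For the record, the packages that resolved it in this case, plus the configure line quoted further down in this thread (a sketch; assumes stock Xenial repositories):

sudo apt install libpython3-dev python3-setuptools
./configure --prefix=/usr --disable-documentation --enable-zstd
make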

2018-04-08 10:51 GMT+02:00 Nikolay Borisov <nbori...@suse.com>:
>
>
> On  7.04.2018 20:16, Menion wrote:
>> Hi all
>> Apparently it is not possible to compile with python bindings the
>> btrfs progs on ubuntu xenial
>>
>> checking for a Python interpreter with version >= 3.4... python3
>> checking for python3... /usr/bin/python3
>> checking for python3 version... 3.5
>> checking for python3 platform... linux
>> checking for python3 script directory... 
>> ${prefix}/lib/python3.5/site-packages
>> checking for python3 extension module directory...
>> ${exec_prefix}/lib/python3.5/site-packages
>> checking for python-3.5... no
>> configure: error: Package requirements (python-3.5) were not met:
>>
>> No package 'python-3.5' found
>>
>> Consider adjusting the PKG_CONFIG_PATH environment variable if you
>> installed software in a non-standard prefix.
>>
>> Alternatively, you may set the environment variables PYTHON_CFLAGS
>> and PYTHON_LIBS to avoid the need to call pkg-config.
>> See the pkg-config man page for more details.
>>
>> /usr/lib/python3.5/site-packages exists, but on Ubuntu the package
>> name is python3.5 and not python-3.5
>
>
> Do you have libpython3-dev installed? When I installed it on my Ubuntu I
> could compile the progs irrespective of whether --prefix was passed or not.
>>
>> Bye


Re: Unable to compile btrfs progs 4.16 on ubuntu Xenial

2018-04-08 Thread Menion
./configure --prefix=/usr --disable-documentation --enable-zstd

2018-04-08 9:17 GMT+02:00 Nikolay Borisov <nbori...@suse.com>:
>
>
> On  7.04.2018 23:40, Menion wrote:
>> I am adding --prefix=/usr, which it seems you are not using
>>
>
> Clearly you haven't shared all the necessary information, post your
> entire configure line
>
>> 2018-04-07 21:55 GMT+02:00 Nikolay Borisov <nbori...@suse.com>:
>>>
>>>
>>> On  7.04.2018 20:16, Menion wrote:
>>>> Hi all
>>>> Apparently it is not possible to compile with python bindings the
>>>> btrfs progs on ubuntu xenial
>>>>
>>>> checking for a Python interpreter with version >= 3.4... python3
>>>> checking for python3... /usr/bin/python3
>>>> checking for python3 version... 3.5
>>>> checking for python3 platform... linux
>>>> checking for python3 script directory... 
>>>> ${prefix}/lib/python3.5/site-packages
>>>> checking for python3 extension module directory...
>>>> ${exec_prefix}/lib/python3.5/site-packages
>>>> checking for python-3.5... no
>>>> configure: error: Package requirements (python-3.5) were not met:
>>>>
>>>> No package 'python-3.5' found
>>>>
>>>> Consider adjusting the PKG_CONFIG_PATH environment variable if you
>>>> installed software in a non-standard prefix.
>>>>
>>>> Alternatively, you may set the environment variables PYTHON_CFLAGS
>>>> and PYTHON_LIBS to avoid the need to call pkg-config.
>>>> See the pkg-config man page for more details.
>>>>
>>>> /usr/lib/python3.5/site-packages exists, but on Ubuntu the package
>>>> name is python3.5 and not python-3.5
>>>
>>>
>>> Works for me, I'm also on xenial:
>>>
>>> checking for python3... /usr/bin/python3
>>> checking for python3 version... 3.5
>>> checking for python3 platform... linux
>>> checking for python3 script directory... 
>>> ${prefix}/lib/python3.5/site-packages
>>> checking for python3 extension module directory... 
>>> ${exec_prefix}/lib/python3.5/site-packages
>>> checking for PYTHON... yes
>>> checking for lzo_version in -llzo2... yes
>>> configure: creating ./config.status
>>> config.status: creating Makefile.inc
>>> config.status: creating Documentation/Makefile
>>> config.status: creating version.h
>>> config.status: creating config.h
>>>
>>> btrfs-progs v4.16
>>>
>>> prefix: /usr/local
>>> exec prefix:${prefix}
>>>
>>> bindir: ${exec_prefix}/bin
>>> libdir: ${exec_prefix}/lib
>>> includedir: ${prefix}/include
>>>
>>> compiler:   gcc
>>> cflags: -g -O1 -Wall -D_FORTIFY_SOURCE=2
>>> ldflags:
>>>
>>> documentation:  no
>>> doc generator:  none
>>> backtrace support:  yes
>>> btrfs-convert:  no
>>> btrfs-restore zstd: no
>>> Python bindings:yes
>>> Python interpreter: /usr/bin/python3
>>>
>>>
>>>
>>>
>>>>
>>>> Bye
>>


Re: Unable to compile btrfs progs 4.16 on ubuntu Xenial

2018-04-07 Thread Menion
I am adding --prefix=/usr, which it seems you are not using

2018-04-07 21:55 GMT+02:00 Nikolay Borisov <nbori...@suse.com>:
>
>
> On  7.04.2018 20:16, Menion wrote:
>> Hi all
>> Apparently it is not possible to compile with python bindings the
>> btrfs progs on ubuntu xenial
>>
>> checking for a Python interpreter with version >= 3.4... python3
>> checking for python3... /usr/bin/python3
>> checking for python3 version... 3.5
>> checking for python3 platform... linux
>> checking for python3 script directory... 
>> ${prefix}/lib/python3.5/site-packages
>> checking for python3 extension module directory...
>> ${exec_prefix}/lib/python3.5/site-packages
>> checking for python-3.5... no
>> configure: error: Package requirements (python-3.5) were not met:
>>
>> No package 'python-3.5' found
>>
>> Consider adjusting the PKG_CONFIG_PATH environment variable if you
>> installed software in a non-standard prefix.
>>
>> Alternatively, you may set the environment variables PYTHON_CFLAGS
>> and PYTHON_LIBS to avoid the need to call pkg-config.
>> See the pkg-config man page for more details.
>>
>> /usr/lib/python3.5/site-packages exists, but on Ubuntu the package
>> name is python3.5 and not python-3.5
>
>
> Works for me, I'm also on xenial:
>
> checking for python3... /usr/bin/python3
> checking for python3 version... 3.5
> checking for python3 platform... linux
> checking for python3 script directory... ${prefix}/lib/python3.5/site-packages
> checking for python3 extension module directory... 
> ${exec_prefix}/lib/python3.5/site-packages
> checking for PYTHON... yes
> checking for lzo_version in -llzo2... yes
> configure: creating ./config.status
> config.status: creating Makefile.inc
> config.status: creating Documentation/Makefile
> config.status: creating version.h
> config.status: creating config.h
>
> btrfs-progs v4.16
>
> prefix: /usr/local
> exec prefix:${prefix}
>
> bindir: ${exec_prefix}/bin
> libdir: ${exec_prefix}/lib
> includedir: ${prefix}/include
>
> compiler:   gcc
> cflags: -g -O1 -Wall -D_FORTIFY_SOURCE=2
> ldflags:
>
> documentation:  no
> doc generator:  none
> backtrace support:  yes
> btrfs-convert:  no
> btrfs-restore zstd: no
> Python bindings:yes
> Python interpreter: /usr/bin/python3
>
>
>
>
>>
>> Bye


Unable to compile btrfs progs 4.16 on ubuntu Xenial

2018-04-07 Thread Menion
Hi all
Apparently it is not possible to compile btrfs-progs with the Python
bindings on Ubuntu Xenial.

checking for a Python interpreter with version >= 3.4... python3
checking for python3... /usr/bin/python3
checking for python3 version... 3.5
checking for python3 platform... linux
checking for python3 script directory... ${prefix}/lib/python3.5/site-packages
checking for python3 extension module directory...
${exec_prefix}/lib/python3.5/site-packages
checking for python-3.5... no
configure: error: Package requirements (python-3.5) were not met:

No package 'python-3.5' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables PYTHON_CFLAGS
and PYTHON_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

/usr/lib/python3.5/site-packages exists, but on Ubuntu the package
name is python3.5 and not python-3.5

Bye


Re: Status of RAID5/6

2018-03-30 Thread Menion
Thanks for the detailed explanation. I think a summary of this
should go on the btrfs raid56 wiki status page, because right now it is
completely inconsistent, and if a user comes there he may get the
impression that raid56 is just broken.
Still, I have the billion-dollar question: from your words I understand
that even in RAID56 the metadata is spread across the devices in a complex
way, but can I assume that the array can survive the sudden death
of one (two for raid6) HDD in the array?
Bye


Re: Status of RAID5/6

2018-03-21 Thread Menion
I am on 4.15.5 :)
Yes, I agree that journaling is better on the same array, but it should
still be tolerant of a unit failure, so maybe it should go in a RAID1 scheme.
Will a raid56 array built with an older kernel be compatible with the new
forthcoming code?
Bye

2018-03-21 18:24 GMT+01:00 Liu Bo <obuil.li...@gmail.com>:
> On Wed, Mar 21, 2018 at 9:50 AM, Menion <men...@gmail.com> wrote:
>> Hi all
>> I am trying to understand the status of RAID5/6 in BTRFS
>> I know that there are some discussion ongoing on the RFC patch
>> proposed by Liu bo
>> But it seems that everything stopped last summary. Also it mentioned
>> about a "separate disk for journal", does it mean that the final
>> implementation of RAID5/6 will require a dedicated HDD for the
>> journaling?
>
> Thanks for the interest on btrfs and raid56.
>
> The patch set is to plug the write hole, which is very rare in practice, tbh.
> The feedback is to use existing space instead of another dedicated
> "fast device" as the journal, in order to get some degree of raid
> protection.  I'd need some time to pick it up.
>
> With that being said, we have several data reconstruction fixes for
> raid56 (esp. raid6) in 4.15, I'd say please deploy btrfs with the
> upstream kernel or some distros which do kernel updates frequently,
> the most important one is
>
> 8810f7517a3b Btrfs: make raid6 rebuild retry more
> https://patchwork.kernel.org/patch/10091755/
>
> AFAIK, no other data corruptions showed up.
>
> thanks,
> liubo


Status of RAID5/6

2018-03-21 Thread Menion
Hi all
I am trying to understand the status of RAID5/6 in BTRFS
I know that there is some discussion ongoing about the RFC patch
proposed by Liu Bo.
But it seems that everything stopped last summer. Also, it mentioned
a "separate disk for journal"; does that mean that the final
implementation of RAID5/6 will require a dedicated HDD for the
journaling?
Bye


Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs

2018-03-08 Thread Menion
Actually this path can only be taken in a few circumstances:

1) device probe, only when the device is plugged in or detected for the first time
2) the revalidate_disk fop of the block device

Is it possible that BTRFS calls revalidate_disk every 5 minutes?
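If it helps, one way to confirm what is triggering the capacity read would be a kprobe on sd_read_capacity (a sketch; assumes root, a mounted tracefs, and that the symbol is visible in kallsyms rather than inlined):

cd /sys/kernel/debug/tracing
echo 'p:probe_sd_read_capacity sd_read_capacity' > kprobe_events
echo 1 > events/kprobes/probe_sd_read_capacity/enable
cat trace_pipe     # each hit shows the PID and comm of the task reading the capacity
echo 0 > events/kprobes/probe_sd_read_capacity/enable
echo > kprobe_events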

2018-03-08 11:16 GMT+01:00 Menion <men...@gmail.com>:
> Hi again
> I had a discussion in linux-scsi about this topic
> My understanding is that it is true that the read_capacity is opaque
> to the filesystem but it is also true that the scsi layer export two
> specific read_capacity ops, the read10 and read16 and the upper layers
> shall select the proper one, based on the response of the other.
> In the log, I see that this read_capacity_10 is called every 5
> minutes, and it fallback to read_capacity_16, since who is doing it
> endup in calling sd_read_capacity in scsi layer, rather then pickup
> read10 or read16 directly
> I am not telling that BTRFS is doing it for sure, but I have ruled out
> smartd, so based on the periodicity of 5 minutes, can you think about
> anything in the BTRFS internals that can be responsible of this?
>
> 2018-03-02 17:19 GMT+01:00 Menion <men...@gmail.com>:
>> Thanks
>> My point was to understand if this action was taken by BTRFS or
>> automously by scsi.
>> From your word it seems clear to me that this should go in
>> KERNEL_DEBUG level, instead of KERNEL_NOTICE
>> Bye
>>
>> 2018-03-02 16:18 GMT+01:00 David Sterba <dste...@suse.cz>:
>>> On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
>>>> Is it really a no problem? I mean, for some reason BTRFS is
>>>> continuously read the HDD capacity in an array, that does not seem to
>>>> be really correct
>>>
>>> The message comes from SCSI:
>>> https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508
>>>
>>> Reading drive capacity could be totally opaque for the filesystem, eg.
>>> when the scsi layer compares the requested block address with the device
>>> size.
>>>
>>> The sizes of blockdevices is obtained from the i_size member of the
>>> inode representing the block device, so there's no direct read by btrfs.
>>> You'd have better luck reporting that to scsi or block layer
>>> mailinglists.


Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs

2018-03-08 Thread Menion
Hi again
I had a discussion on linux-scsi about this topic.
My understanding is that while the read_capacity is indeed opaque
to the filesystem, it is also true that the SCSI layer exports two
specific read_capacity ops, read10 and read16, and the upper layers
should select the proper one based on the response of the other.
In the log, I see that read_capacity_10 is called every 5
minutes and falls back to read_capacity_16, since whoever is doing it
ends up calling sd_read_capacity in the SCSI layer rather than picking
read10 or read16 directly.
I am not saying that BTRFS is doing it for sure, but I have ruled out
smartd, so based on the 5-minute periodicity, can you think of
anything in the BTRFS internals that could be responsible for this?

2018-03-02 17:19 GMT+01:00 Menion <men...@gmail.com>:
> Thanks
> My point was to understand if this action was taken by BTRFS or
> automously by scsi.
> From your word it seems clear to me that this should go in
> KERNEL_DEBUG level, instead of KERNEL_NOTICE
> Bye
>
> 2018-03-02 16:18 GMT+01:00 David Sterba <dste...@suse.cz>:
>> On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
>>> Is it really a no problem? I mean, for some reason BTRFS is
>>> continuously read the HDD capacity in an array, that does not seem to
>>> be really correct
>>
>> The message comes from SCSI:
>> https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508
>>
>> Reading drive capacity could be totally opaque for the filesystem, eg.
>> when the scsi layer compares the requested block address with the device
>> size.
>>
>> The sizes of blockdevices is obtained from the i_size member of the
>> inode representing the block device, so there's no direct read by btrfs.
>> You'd have better luck reporting that to scsi or block layer
>> mailinglists.


Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs

2018-03-02 Thread Menion
Thanks
My point was to understand whether this action was taken by BTRFS or
autonomously by SCSI.
From your words it seems clear to me that this should go at
KERNEL_DEBUG level instead of KERNEL_NOTICE.
Bye

2018-03-02 16:18 GMT+01:00 David Sterba <dste...@suse.cz>:
> On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
>> Is it really a no problem? I mean, for some reason BTRFS is
>> continuously read the HDD capacity in an array, that does not seem to
>> be really correct
>
> The message comes from SCSI:
> https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508
>
> Reading drive capacity could be totally opaque for the filesystem, eg.
> when the scsi layer compares the requested block address with the device
> size.
>
> The sizes of blockdevices is obtained from the i_size member of the
> inode representing the block device, so there's no direct read by btrfs.
> You'd have better luck reporting that to scsi or block layer
> mailinglists.


Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs

2018-03-02 Thread Menion
Is it really not a problem? I mean, for some reason BTRFS is
continuously reading the HDD capacity in an array, and that does not
seem to be really correct.
Bye

2018-02-26 11:07 GMT+01:00 Menion <men...@gmail.com>:
> Hi all
> I have recently started operating an array of 5x8TB HDDs (WD RED) in RAID5
> mode.
> The array seems to work OK, but over time dmesg gets flooded with this
> log:
>
> [ 338.674673] sd 0:0:0:0: [sda] Very big device. Trying to use READ
> CAPACITY(16).
> [ 338.767184] sd 0:0:0:1: [sdb] Very big device. Trying to use READ
> CAPACITY(16).
> [  338.989477] sd 0:0:0:3: [sdd] Very big device. Trying to use READ
> CAPACITY(16).
> [  339.301194] sd 0:0:0:4: [sde] Very big device. Trying to use READ
> CAPACITY(16).
> [  339.506579] sd 0:0:0:2: [sdc] Very big device. Trying to use READ
> CAPACITY(16).
> [  649.393340] sd 0:0:0:0: [sda] Very big device. Trying to use READ
> CAPACITY(16).
> [  650.129849] sd 0:0:0:1: [sdb] Very big device. Trying to use READ
> CAPACITY(16).
> [  650.379622] sd 0:0:0:3: [sdd] Very big device. Trying to use READ
> CAPACITY(16).
> [  650.524828] sd 0:0:0:4: [sde] Very big device. Trying to use READ
> CAPACITY(16).
> [  650.721615] sd 0:0:0:2: [sdc] Very big device. Trying to use READ
> CAPACITY(16).
> [  959.544384] sd 0:0:0:0: [sda] Very big device. Trying to use READ
> CAPACITY(16).
> [  959.627015] sd 0:0:0:1: [sdb] Very big device. Trying to use READ
> CAPACITY(16).
> [  959.790280] sd 0:0:0:3: [sdd] Very big device. Trying to use READ
> CAPACITY(16).
> [  959.901179] sd 0:0:0:4: [sde] Very big device. Trying to use READ
> CAPACITY(16).
> [  960.048734] sd 0:0:0:2: [sdc] Very big device. Trying to use READ
> CAPACITY(16).
>
> sda,sdb,sdc,sdd,sde as you can imagine are the HDDs in the array
>
> Other info (note: there is also another single BTRFS array of 3 small
> devices that never prints this log, and my root filesystem is BTRFS as
> well)
>
> menion@Menionubuntu:/etc$ uname -a
> Linux Menionubuntu 4.15.5-041505-generic #201802221031 SMP Thu Feb 22
> 15:32:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
> menion@Menionubuntu:/etc$   btrfs --version
> btrfs-progs v4.15.1
> menion@Menionubuntu:/etc$ sudo btrfs fi show
> [sudo] password for menion:
> Label: none  uuid: 6db4baf7-fda8-41ac-a6ad-1ca7b083430f
> Total devices 1 FS bytes used 9.02GiB
> devid1 size 27.07GiB used 11.02GiB path /dev/mmcblk0p3
>
> Label: none  uuid: 931d40c6-7cd7-46f3-a4bf-61f3a53844bc
> Total devices 5 FS bytes used 5.47TiB
> devid1 size 7.28TiB used 1.37TiB path /dev/sda
> devid2 size 7.28TiB used 1.37TiB path /dev/sdb
> devid3 size 7.28TiB used 1.37TiB path /dev/sdc
> devid4 size 7.28TiB used 1.37TiB path /dev/sdd
> devid5 size 7.28TiB used 1.37TiB path /dev/sde
>
> Label: none  uuid: ba1e0d88-2e26-499d-8fe3-458b9c53349a
> Total devices 3 FS bytes used 534.50GiB
>     devid1 size 232.89GiB used 102.03GiB path /dev/sdh
> devid2 size 232.89GiB used 102.00GiB path /dev/sdi
> devid3 size 465.76GiB used 335.03GiB path /dev/sdj
>
> menion@Menionubuntu:/etc$ sudo btrfs fi df /media/storage/das1
> Data, RAID5: total=5.49TiB, used=5.46TiB
> System, RAID5: total=12.75MiB, used=352.00KiB
> Metadata, RAID5: total=7.00GiB, used=6.11GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
> menion@Menionubuntu:/etc$