Re: Bug#922815: insserv FATAL while updating as mountkernfs has to be enabled to use service udev

2019-02-26 Thread Dmitry Bogatov


[2019-02-25 05:45] shirish शिरीष 
> On 24/02/2019, Dmitry Bogatov  wrote:
>
> 
>
> > Interesting, it seems you somehow got the mountkernfs.sh script removed
> > from runlevel S.
> >
> > Can you please invoke as root
> >
> > # update-rc.d mountkernfs.sh enable S
> >
> > and then retry your upgrade.
>
> When I try the command you shared, it says -
>
> root@debian:~# update-rc.d mountkernfs.sh enable S
> update-rc.d: error: cannot find a LSB script for mountkernfs.sh

It seems there is no /etc/init.d/mountkernfs.sh on your system.

Since your init system is systemd, I wonder why you need insserv in the
first place. Do you have bin:initscripts installed?

My guess is that update-initramfs invokes insserv /if/ it is available,
and since you have insserv installed but no initscripts, we get this
bug.
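
A quick way to check this on the affected system (a sketch, assuming
standard Debian paths):

  # which of the two packages is actually installed?
  dpkg -l insserv initscripts
  # does the script the error message refers to exist?
  ls -l /etc/init.d/mountkernfs.sh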

Adding the initramfs-tools maintainer to the loop. If my guess is correct,
this issue should be resolved on the initramfs side, since making insserv
depend on initscripts is not nice to users.
-- 
Note that I send and fetch email in batches, once every 24 hours.
 If the matter is urgent, try https://t.me/kaction
 --



Bug#913119: Bug#913138: Bug#913119: linux-image-4.18.0-2-amd64: Hangs on lvm raid1

2019-02-26 Thread Dragan Milenkovic

On 2/26/19 10:11 PM, Cesare Leonardi wrote:
> Hello Dragan, do you know if the patch was eventually included upstream
> and possibly in which version?


Hello, Cesare. It was included in 4.20.11 and 4.19.24.

Dragan



Bug#913119: Bug#913138: Bug#913119: linux-image-4.18.0-2-amd64: Hangs on lvm raid1

2019-02-26 Thread martin

On 2019-02-26 21:11, Cesare Leonardi wrote:

> On 13/02/19 18:21, Dragan Milenkovic wrote:
> > This patch is already on its way to stable branches. I have tested it
> > and confirmed that it resolves the problem.
>
> Hello Dragan, do you know if the patch was eventually included
> upstream and possibly in which version?


Looking at the changelog, it's in here:
https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.19.24

towards the bottom search for "blk-mq: fix a hung issue when fsync"
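
For example, non-interactively (a sketch, not part of the original mail;
curl and grep assumed):

  curl -s https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.19.24 \
    | grep -n 'blk-mq: fix a hung issue when fsync'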

Regards
M



Bug#913119: Bug#913138: Bug#913119: linux-image-4.18.0-2-amd64: Hangs on lvm raid1

2019-02-26 Thread Cesare Leonardi

On 13/02/19 18:21, Dragan Milenkovic wrote:
> This patch is already on its way to stable branches. I have tested it
> and confirmed that it resolves the problem.

Hello Dragan, do you know if the patch was eventually included upstream
and possibly in which version?


Cesare.



Re: last preparations for switching to production Secure Boot key

2019-02-26 Thread Ansgar
Hi,

Colin Watson writes:
> On Mon, Feb 25, 2019 at 08:13:22PM +0100, Ansgar wrote:
>> I added support for listing `trusted_certs`[1] as proposed by Ben
>> Hutchings.  This means the `files.json` structure *must* list the
>> sha256sum of certificates the signed binaries will trust (this can be an
>> empty list in case no hard-coded certificates are trusted).
>
> Do I understand correctly that this ought to be empty in the case of
> grub2, since it does all its signature checking via shim?  If so, done:
>
>   
> https://salsa.debian.org/grub-team/grub/commit/89c1529cd82f106dbb9a4b17bae03e828ec349b6

Yes, that looks okay.

>> I would like to implement one additional change.  Currently files.json
>> looks like this:
> [...]
>> This is not extendable; therefore I would like to move everything below a
>> top-level `packages` key, i.e. the file would look like this instead:
> [...]
>> This would allow adding additional top-level keys later should the need
>> arise.  (I'll prepare the archive-side changes for this later today.)
>
> I'm happy to do this, though presumably it's a flag day?

It is a flag day change, but we already have a flag day for adding
trusted_certs (as uploads without the key will no longer get signed).
It also means we won't have to support the old files.json format as we
never had a (stable) release using it.
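
Roughly, the change looks like this (a sketch only: the real examples were
elided in the quotes above, so every key except "packages" and
"trusted_certs" is hypothetical):

  Current files.json:

    { "some-package": { "files": [ ... ], "trusted_certs": [] } }

  Proposed files.json:

    { "packages": { "some-package": { "files": [ ... ], "trusted_certs": [] } } }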

>> Could all maintainers (for fwupd, fwupdate, grub2, linux) please ack one
>> last time that their packages are ready for switching to the production
>> key?  And prepare an upload with the changes described above and ready
>> to use the production key?
>
> I don't know of any blockers from the grub2 side.  Once the archive has
> the "packages" key changes, I can prepare an upload - I was planning to
> make one this week anyway.

The changes to code-signing are done and pushed to my fork on salsa[1]; I'm
just waiting to deploy them (well, and change the config to use the
production key at the same time).

Ansgar

  [1] 
https://salsa.debian.org/ansgar/code-signing/commits/d22b8ec28d7b50a6cda738a52e5496492edb8ba9



Bug#922306: linux: btrfs corruption (compressed data + hole data)

2019-02-26 Thread Salvatore Bonaccorso
Control: tags -1 + pending

On Sun, Feb 24, 2019 at 09:40:59AM +0100, Salvatore Bonaccorso wrote:
> Control: found -1 4.3~rc5-1~exp1
> Control: tags -1 + upstream patch
> 
> Upstream patch submission at
> https://lore.kernel.org/linux-btrfs/20190214151720.23563-1-fdman...@kernel.org/

Pending in git, 
https://salsa.debian.org/kernel-team/linux/commit/76a21e66e34fff09735813e6c13398bc29ff18ec

Salvatore



Processed: Re: Bug#922306: linux: btrfs corruption (compressed data + hole data)

2019-02-26 Thread Debian Bug Tracking System
Processing control commands:

> tags -1 + pending
Bug #922306 [src:linux] linux: btrfs corruption (compressed data + hole data)
Added tag(s) pending.

-- 
922306: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=922306
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems



Processed: tagging 922306

2019-02-26 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

> tags 922306 + confirmed
Bug #922306 [src:linux] linux: btrfs corruption (compressed data + hole data)
Added tag(s) confirmed.
> thanks
Stopping processing here.

Please contact me if you need assistance.
-- 
922306: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=922306
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems



Bug#913119: linux-image-4.18.0-2-amd64: Hangs on lvm raid1

2019-02-26 Thread Thorsten Glaser
severity 913119 serious
found 913119 4.19.16-1
thanks

On Wed, 13 Feb 2019, Dragan Milenkovic wrote:

> Jens Axboe has determined that the proper fix is quite different:
> 
> http://git.kernel.dk/cgit/linux-block/commit/?h=for-linus&id=85bd6e61f34dffa8ec2dc75ff3c02ee7b2f1cbce
> 
> This patch is already on its way to stable branches. I have tested it and
> confirmed that it resolves the problem.

I haven’t, but I’ve just run into this in a very uncomfortable
situation: I’ve just upgraded several virtualisation hosts from
jessie or stretch to buster, and one of them just started to
hang with exactly this problem. (They are all set up as LVM on
RAID, too, no cryptsetup.)

Ben et al., please do consider the fix for buster; otherwise
it’ll be unusable for this kind of setup.

Thanks,
//mirabilos
-- 
tarent solutions GmbH
Rochusstraße 2-4, D-53123 Bonn • http://www.tarent.de/
Tel: +49 228 54881-393 • Fax: +49 228 54881-235
HRB 5168 (AG Bonn) • USt-ID (VAT): DE122264941
Geschäftsführer: Dr. Stefan Barth, Kai Ebenrett, Boris Esser, Alexander Steeg




Processed: Re: Bug#913119: linux-image-4.18.0-2-amd64: Hangs on lvm raid1

2019-02-26 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

> severity 913119 serious
Bug #913119 [src:linux] linux-image-4.18.0-2-amd64: Hangs on lvm raid1
Severity set to 'serious' from 'normal'
> found 913119 4.19.16-1
Bug #913119 [src:linux] linux-image-4.18.0-2-amd64: Hangs on lvm raid1
Marked as found in versions linux/4.19.16-1.
> thanks
Stopping processing here.

Please contact me if you need assistance.
-- 
913119: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=913119
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems



Re: last preparations for switching to production Secure Boot key

2019-02-26 Thread Colin Watson
On Mon, Feb 25, 2019 at 08:13:22PM +0100, Ansgar wrote:
> I added support for listing `trusted_certs`[1] as proposed by Ben
> Hutchings.  This means the `files.json` structure *must* list the
> sha256sum of certificates the signed binaries will trust (this can be an
> empty list in case no hard-coded certificates are trusted).

Do I understand correctly that this ought to be empty in the case of
grub2, since it does all its signature checking via shim?  If so, done:

  
https://salsa.debian.org/grub-team/grub/commit/89c1529cd82f106dbb9a4b17bae03e828ec349b6

> I would like to implement one additional change.  Currently files.json
> looks like this:
[...]
> This is not extendable; therefore I would like to move everything below a
> top-level `packages` key, i.e. the file would look like this instead:
[...]
> This would allow adding additional top-level keys later should the need
> arise.  (I'll prepare the archive-side changes for this later today.)

I'm happy to do this, though presumably it's a flag day?

> Could all maintainers (for fwupd, fwupdate, grub2, linux) please ack one
> last time that their packages are ready for switching to the production
> key?  And prepare an upload with the changes described above and ready
> to use the production key?

I don't know of any blockers from the grub2 side.  Once the archive has
the "packages" key changes, I can prepare an upload - I was planning to
make one this week anyway.

Thanks,

-- 
Colin Watson   [cjwat...@debian.org]



Bug#908216: btrfs blocked for more than 120 seconds

2019-02-26 Thread Russell Mosemann

On Monday, February 25, 2019 10:17pm, "Nicholas D Steeves" said:



> Control: tags -1 -unreproducible
> 
> Hi Russell,
> 
> Thank you for providing more info. Now I see where you're running
> into known limitations with btrfs (all versions). Reply follows inline.
> 
> BTW, you're not using SMR and/or USB disks, right?
 
No SMR or USB disks are being used. It is either a single-partition, dedicated 
hard drive or a partition on hardware RAID.

> On Mon, Feb 25, 2019 at 12:33:51PM -0600, Russell Mosemann wrote:
> > Steps to reproduce
> >
> > Simply copying a file into the file system can cause things to lock up. In
> > this case, the files will usually be thin-provisioned qcow2 disks for kvm
> > vm's. There is no detailed formula to force the lockup to occur, but it
> > happens regularly, sometimes multiple times in one day.
> >
> 
> Have you read https://wiki.debian.org/Btrfs ? Specifically "COW on
> COW: Don't do it!" ? If you did read it, maybe the document needs to
> be more firm about this... eg: "take care to use raw images" should
> be "under no circumstances use non-raw images". P.S. Yes, I know that
> page would benefit from a reorganisation... Sorry about its current
> state.
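
[A sketch of the advice above, for illustration; qemu-img assumed, paths
hypothetical:

  # convert an existing qcow2 guest image to raw, avoiding COW-on-COW
  qemu-img convert -O raw guest.qcow2 guest.raw
  # or mark the images directory NOCOW before creating new images in it
  chattr +C /var/lib/libvirt/images
]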
 
In every case, the btrfs partition is used exclusively as an archive for 
backups. In no circumstance is a vm or something like a database run on the 
partition. Consequently, it is not possible for CoW on CoW to happen. The 
partition is simply storing files.

> > Files are often copied from a master by reference (cp --reflink), one per
> > day to perform a daily backup for up to 45 days. Removing older files is a
> > painfully slow process, even though there are only 45 files in the
> > directory. Doing a scrub is almost a sure way to lock up the system,
> > especially if a copy or delete operation is in progress. On two systems,
> > crashes occur with 4.18 and 4.19 but not 4.17. On the other systems that
> > crash, it does not seem to matter if it is 4.17, 4.18 or 4.19.
> >
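
[The daily copy described above would look something like this (a sketch,
file names hypothetical):

  cp --reflink=always master.qcow2 backup-$(date +%F).qcow2

Each reflinked copy shares extents with the master instead of duplicating
data, which is why deleting or scrubbing the copies has to walk the backref
tree discussed below.]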
> 
> It might be that >4.17 fixed some corner-case corruption issue, for
> example by adding an additional check during each step of a backref
> walk, and that this makes the timeout more frequent and severe. eg:
> 4.17 works because it is less strict.
> 
> By the way, is it your VM host that locks up, or your VM guests? Do[es]
> they[it] recover if you leave it alone for many hours? I didn't see
> any oopses or panics in your kernel logs.
 
It is the host that locks up. This does not involve vm's in any way. If vm's 
are present, they are running on different drives. Some of the vm's even use 
btrfs partitions themselves. None of the vm's experience issues with their 
btrfs volumes. None of the vm's are affected by the hung btrfs tasks on the 
host. That is because the issue exclusively involves the separate, dedicated 
archive partition used by the host. For all practical purposes, vm's aren't 
part of this picture.
 
As far as I am aware, a hung task does not recover, even after many hours. A 
number of times, it has hung at night. When I check in the morning hours later, 
it is still hung. In many cases, the server must be forcibly rebooted, because 
the hung task hangs the reboot process.

> Reflinked files are like snapshots; any I/O on a file must walk every branch
> of the backref tree that is relevant to that file. For more info see:
> https://btrfs.wiki.kernel.org/index.php/Resolving_Extent_Backrefs
> 
> As the tree grows and becomes more complex, a COW fs will get slower.
> You've hit the >120sec threshold, due to one or more of the issues
> discussed in this email. eg: a scrub, even during a file copy/delete
> should never cause this timeout. I haven't experienced one since
> linux-4.4.x or 4.9.x...
> 
> To get a figure that gives a sense of the scale of how many
> operations it takes to do anything other than a reflink or snapshot, you
> can consult the output of:
> 
> filefrag each_live_copy_vm_image
> 
> I expect the number of extents will exceed tens of thousands. BTW,
> you can use btrfsmaintenance to periodically defrag the source (and
> only the source) images. Note that this will break reflinks between
> SOURCE and each of the 45 REFLINKED-COPIES, but not between
> REFLINK-COPY1 and REFLINK-COPY2. Defragging weekly strikes a nice
> balance between lost space efficiency (due to fewer shared references
> between today's backup and yesterday's) and avoiding the performance
> issue you've encountered. Mounting with autodefrag is the least space
> efficient. (P.S. Also, I don't trust autodefrag)
> 
> IIRC btrfs-debug-tree can accurately count references.
 
This is useful information, but it doesn't seem directly related to the hung 
tasks. The btrfs tasks hang when a file is being copied into the btrfs 
partition. No references or vm's are involved in that process. It is a simple 
file copy.
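
[To put a number on the fragmentation discussed above (a sketch; the path is
hypothetical):

  filefrag /srv/backup/vm-backup.qcow2

filefrag reports "N extents found"; a count in the tens of thousands would
match the slow delete and scrub behaviour described in this thread.]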

> > Unless otherwise indicated
> >
> > using qgroups: No
> >
> 
> Whew, thank you for not! :-) Qgroups make this kind of issue worse.
> 
> >