Bug#973252: Bug 973252

2020-11-22 Thread Kertesz Eniko
Hi,
For that you have to install linux-image-5.9.0-3-amd64 from unstable.
It will migrate into testing automatically after some time, typically around two weeks.
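
For reference, a rough sketch of one way to pull it in on a testing system
(the temporary sources file name below is just illustrative; adjust to taste):

  # temporarily add unstable as a package source
  echo 'deb http://deb.debian.org/debian unstable main' | sudo tee /etc/apt/sources.list.d/unstable.list
  sudo apt update
  # install only the kernel image from unstable
  sudo apt install -t unstable linux-image-5.9.0-3-amd64
  # drop the temporary source again
  sudo rm /etc/apt/sources.list.d/unstable.list
  sudo apt update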

On Thu, 19 Nov 2020 09:33:03 -0500 Cubed  wrote:
> Good morning.
>
> I was notified today that this bug had been resolved. As someone somewhat
> new to TV over the last year, I'm curious where I can obtain this updated
> kernel. I am in Bullseye testing and have updated to 5.9.6-1. The
> resolution email states that there is a stable 5.9.7, and I was curious
> where to obtain that to resolve my issue by taking advantage of the bug
> closure.
>
> Thank you
>
> Jonathan


Bug#972709: Wishlist/RFC: Change to CONFIG_PREEMPT_NONE in linux-image-cloud-*

2020-11-22 Thread Noah Meyerhans
On Sun, Nov 22, 2020 at 03:53:32PM -0800, Flavio Veloso Soares wrote:
>  Unfortunately, I couldn't find many comprehensive benchmarks of kernel
>  CONFIG_PREEMPT* options. The one at
>  https://www.codeblueprint.co.uk/2019/12/23/linux-preemption-latency-throughput.html
>  seems to be very thorough,
> 
>  [...]
> 
>  Not particularly.  I'm used to latency benchmarks showing e.g. average,
>  90th percentile, 99th percentile, as well as worst.

I don't think Ben was talking about specific benchmarks.  The web page
you cite lacks basic measurements one would expect to see from *any*
meaningful performance benchmark.  Comparing maximum latency is fine,
but it's not really relevant by itself.  If a configuration change
improves the worst case (100th percentile) but negatively impacts the
50th percentile, is that a change worth making?  Maybe.  But without
that data at all, the benchmark really isn't worth much.
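
To illustrate with made-up numbers: if configuration A has a median
latency of 10 µs and a worst case of 500 µs, while configuration B has a
median of 40 µs and a worst case of 300 µs, then B "wins" on maximum
latency while being four times slower for the typical call.  You cannot
judge that trade-off from the maximum alone.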

It's totally reasonable for us to consider making this change, but we
should have comprehensive data about the impact of doing so.  What
impact does the change have on different classes of workloads, e.g.
high-TPS, CPU-bound, IO-bound, etc.?  It's entirely possible that the
proposed change improves performance under certain workloads but
negatively impacts others.  Without knowing the impact in more
detail, which would allow us to evaluate the tradeoffs, I don't think
there's a compelling reason to make a change.

noah



Bug#972709: Wishlist/RFC: Change to CONFIG_PREEMPT_NONE in linux-image-cloud-*

2020-11-22 Thread Flavio Veloso Soares


On 2020-11-22 2:28 p.m., Ben Hutchings wrote:

On Sun, 2020-11-22 at 13:45 -0800, Flavio Veloso Soares wrote:

[Resending: just noticed that the reply I sent on Oct 23 didn't include
b.d.o]

I don't think the article is about the same thing we're talking about here.
CONFIG_PREEMPT* options control the compromise between latency and
throughput of *system calls* and *scheduling of CPU cycles spent in
kernel mode*, not network traffic.

The latency of requests to services on a server is affected by both
scheduler and network latency.

[...]


"Services" is a too broad term. Which kind of service are you talking about?

For the record, I'm talking about latency of kernel system calls 
specifically, which happens to be what CONFIG_PREEMPT* controls.




Unfortunately, I couldn't find many comprehensive benchmarks of kernel
CONFIG_PREEMPT* options. The one at
https://www.codeblueprint.co.uk/2019/12/23/linux-preemption-latency-throughput.html
seems to be very thorough,

[...]

Not particularly.  I'm used to latency benchmarks showing e.g. average,
90th percentile, 99th percentile, as well as worst.

Ben.


Are those benchmarks public? Can you provide links to them?


--
FVS



Bug#972709: Wishlist/RFC: Change to CONFIG_PREEMPT_NONE in linux-image-cloud-*

2020-11-22 Thread Ben Hutchings
On Sun, 2020-11-22 at 13:45 -0800, Flavio Veloso Soares wrote:
> [Resending: just noticed that the reply I sent on Oct 23 didn't include 
> b.d.o]
> 
> I don't think the article is about the same thing we're talking about here. 
> CONFIG_PREEMPT* options control the compromise between latency and 
> throughput of *system calls* and *scheduling of CPU cycles spent in 
> kernel mode*, not network traffic.

The latency of requests to services on a server is affected by both
scheduler and network latency.

[...]
> Unfortunately, I couldn't find many comprehensive benchmarks of kernel 
> CONFIG_PREEMPT* options. The one at 
> https://www.codeblueprint.co.uk/2019/12/23/linux-preemption-latency-throughput.html
>  
> seems to be very thorough,
[...]

Not particularly.  I'm used to latency benchmarks showing e.g. average,
90th percentile, 99th percentile, as well as worst.

Ben.

-- 
Ben Hutchings
If at first you don't succeed, you're doing about average.





Bug#972709: Wishlist/RFC: Change to CONFIG_PREEMPT_NONE in linux-image-cloud-*

2020-11-22 Thread Flavio Veloso Soares
[Resending: just noticed that the reply I sent on Oct 23 didn't include 
b.d.o]


I don't think the article is about the same thing we're talking about here. 
CONFIG_PREEMPT* options control the compromise between latency and 
throughput of *system calls* and *scheduling of CPU cycles spent in 
kernel mode*, not network traffic. Granted, networking is affected by 
the setting too, but intuition tells me that a non-preemptible system 
call -- meaning one that finishes all of its processing until it ends, or 
blocks on I/O -- could even *decrease* network latency, not increase it.


Unfortunately, I couldn't find many comprehensive benchmarks of kernel 
CONFIG_PREEMPT* options. The one at 
https://www.codeblueprint.co.uk/2019/12/23/linux-preemption-latency-throughput.html 
seems to be very thorough, and it shows that the difference in latency 
between CONFIG_PREEMPT_VOLUNTARY and CONFIG_PREEMPT_NONE is essentially 
nonexistent, while no-preemption provides noticeably more throughput.


This unsurprising conclusion alone suggests that CONFIG_PREEMPT_NONE is the 
better choice for servers.


However, there's more. None of the benchmarks touch on the overhead of 
context switches under the burstable CPU-cycle "credit" system used in many 
(most?) cloud environments, which happen to be the target of *-cloud 
kernels. With voluntary preemption, all the cycles spent on those extra 
context switches are not only wasted, they still count against the 
instance's CPU "credits", and that reduces the overall computing power 
available to the instance. It's like paying twice for something you 
don't need.
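
For reference, and assuming a stock Debian kernel (which ships its build 
configuration in /boot), one can check which preemption model a given 
kernel was built with:

  grep '^CONFIG_PREEMPT' /boot/config-$(uname -r)

On the current cloud kernels this includes CONFIG_PREEMPT_VOLUNTARY=y, as 
the original report below notes.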



On 2020-10-23 6:04 p.m., Ben Hutchings wrote:

On Thu, 2020-10-22 at 13:43 -0700, Flavio Veloso wrote:

Package: linux-image-cloud-amd64
Version: 4.19+105+deb10u7
Severity: wishlist

Since cloud images are mostly run for server workloads in headless
environments accessed via network only, it would be better if
"linux-image-cloud-*" kernels were compiled with CONFIG_PREEMPT_NONE=y
("No Forced Preemption (Server)").

Currently those packages use CONFIG_PREEMPT_VOLUNTARY=y ("Voluntary
Kernel Preemption (Desktop)").

CONFIG_PREEMPT_NONE description from kernel help:

[...]

I know what it says, but I think the notion that latency is less
important on servers is outdated.

It's well known that people give up quickly on web pages that are slow
to load:
.
And a web page can depend (indirectly) on very many servers, which
means that, e.g., high latency that only occurs 1% of the time on any
single server actually affects a large fraction of requests.
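
To put rough numbers on that: if rendering one page (indirectly) involves,
say, 100 backend requests, and each independently has a 1% chance of
hitting the slow case, then the chance that at least one of them is slow
is 1 - 0.99^100, or roughly 63% of page loads.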

Ben.


--
FVS



Bug#974939: machine does not boot

2020-11-22 Thread Toni Mueller



Hi,

On Tue, Nov 17, 2020 at 12:50:19PM +0100, Bastian Blank wrote:
> On Mon, Nov 16, 2020 at 07:41:05PM +, Toni wrote:
> > Severity: critical
> 
> Sorry, no.  This problem does not break the package for everyone.
> 
> > On the console, after dmesg, these three lines repeat ad nauseum:
> > mdadm: No arrays found in config file or automatically
> >   Volume group "ev0" not found
> >   Cannot process volume group ev0
> > mdadm: No arrays found in config file or automatically
> >   Volume group "ev0" not found
> >   Cannot process volume group ev0
> 
> So it actually boots, but the boot process is not able to find your root
> filesystem?

It loads the kernel, I can see the dmesg output, and then I see an endless
loop of these messages. So yes, there's something broken with encrypted
partitions that wasn't broken before the -10 kernel.

> Yes, that look pretty normal and like something the Debian installer
> would create.

It did.

> What is the content of /etc/crypttab? /etc/fstab? /boot/grub/grub.conf?
> What do you have mdadm for?

I don't have any mdadm devices on this machine (it's a laptop with only one
SSD anyway). I don't know why mdadm is installed here, but it has "always"
been, with no detrimental effects until possibly recently.



Thanks,
Toni



Bug#974166: I'd like to help

2020-11-22 Thread Alejandro Colomar (man-pages)
Hi,

I suspect it may affect the 5th generation (Broadwell Desktop) CPUs only,
as they have a very special (and powerful) iGPU.
And maybe that's why the bug went unnoticed for so long:
those CPUs didn't sell very well.

Yes, I have that package installed:

||/ Name             Version       Architecture  Description
+++-================-=============-=============-============================================
ii  intel-microcode  3.20201110.1  amd64         Processor microcode firmware for Intel CPUs

When I first upgraded to 5.7, I couldn't boot, and my first thought was:
this seems like such a big bug that they have probably already noticed it,
so I didn't report it.
It's been more than half a year, we're at kernel 5.9, and the bug is still
there, so I strongly suspect that the only affected CPUs are those with
the Intel Iris Pro 6200 (there are only 4 models with that iGPU).

Thanks,

Alex


On 11/22/20 5:53 PM, Georgi Naplatanov wrote:
> Hi,
> 
> I have Intel i5 4th generation and I have no such problem with Linux
> kernel 5.9.6.
> 
> Do you have "intel-microcode" package installed on those computers?
> 
> Kind regards
> Georgi
> 



Bug#974166: I'd like to help

2020-11-22 Thread Georgi Naplatanov
Hi,

I have Intel i5 4th generation and I have no such problem with Linux
kernel 5.9.6.

Do you have "intel-microcode" package installed on those computers?

Kind regards
Georgi



Bug#974166: I'd like to help

2020-11-22 Thread Alejandro Colomar (man-pages)
Hi,

I received no answer at all, so I guess you are very busy.

I know the bug was introduced somewhere between 5.6 and 5.7.

If you have an idea of what the source of the bug could be,
I'd like to help by testing on my computers,
or providing code.

I have two computers with similar CPUs (i5-5675C & i7-5775C),
and both have the same problem.

Basically, I'm limited to kernel <= 5.6 right now;
I can't boot a more recent kernel.
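
If it would help, I could also try bisecting this between the two releases
myself. A rough sketch, using the mainline git tree:

  git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  cd linux
  git bisect start
  git bisect bad v5.7
  git bisect good v5.6
  # build, install, and boot-test the kernel git checks out at each step,
  # then mark it with "git bisect good" or "git bisect bad"

but a hint about which subsystem to look at would narrow that down a lot.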

Thanks,

Alex