Re: [Xen-devel] [PATCH v3 1/2] x86: Meltdown band-aid against malicious 64-bit PV guests

2018-01-16 Thread Andy Smith
Hi Jan, On Tue, Jan 16, 2018 at 08:21:52AM -0700, Jan Beulich wrote: > This is a very simplistic change limiting the amount of memory a running > 64-bit PV guest has mapped (and hence available for attacking): Only the > mappings of stack, IDT, and TSS are being cloned from the direct map > into

Re: [Xen-devel] XPTI patches for 4.10 don't apply

2018-01-19 Thread Andy Smith
Hi Jan, On Fri, Jan 19, 2018 at 02:52:50AM -0700, Jan Beulich wrote: > >>> On 19.01.18 at 08:21, wrote: > > Maybe this is a silly question but which tag are the 4.10 XPTI > > commits from https://xenbits.xen.org/xsa/xsa254/README.pti against? > > They don't apply to

[Xen-devel] XPTI patches for 4.10 don't apply

2018-01-18 Thread Andy Smith
Hi, Maybe this is a silly question but which tag are the 4.10 XPTI commits from https://xenbits.xen.org/xsa/xsa254/README.pti against? They don't apply to RELEASE-4.10.0. Cheers, Andy

[Xen-devel] Vixen - does no migration imply no save/restore?

2018-01-12 Thread Andy Smith
Hi, I understand that Vixen does not support migration at this stage. Does that also mean that save/restore is also not expected to work for PV guests running with Vixen? I tried it and it doesn't work, whereas it does when the guest is started normal PV. I thought I better check expectations

[Xen-devel] Clarification regarding Meltdown and 64-bit PV guests

2018-01-12 Thread Andy Smith
Hi, In : "On Intel processors, only 64-bit PV mode guests can attack Xen using Variant 3. Guests running in 32-bit PV mode, HVM mode, and PVH mode (both v1 and v2) cannot attack the hypervisor using Variant

[Xen-devel] Trying out vixen: vif-route issue

2018-01-11 Thread Andy Smith
Hi, On Thu, Jan 11, 2018 at 10:26:36PM +0000, Andy Smith wrote: > Parsing config from /etc/xen/debtest1-with-shim.conf > libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: > /etc/xen/scripts/vif-route add [31567] exited with error status 1 > libxl: error: libxl_d

Re: [Xen-devel] [Xen-users] Trying out vixen: failure to start device model

2018-01-11 Thread Andy Smith
[Cc'ing xen-devel as this bit seems like a bug in pvshim] On Thu, Jan 11, 2018 at 09:59:24PM +0000, Andy Smith wrote: > libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm: Spawning device-model > /var/lib/xen/pvshim-sidecars/debtest1.dm with arguments: > libxl: debug: libxl_

[Xen-devel] Trying out vixen: qemu processes left behind

2018-01-11 Thread Andy Smith
Hi, I'm giving Vixen a try by following the instructions in https://xenbits.xen.org/xsa/xsa254/README.vixen Debian jessie, xen 4.8.1 packages from jessie-backports with XSAs applied. I finally got a guest booted although its networking doesn't work. Every time I've started a guest and had it

Re: [Xen-devel] Trying out vixen: qemu processes left behind

2018-01-11 Thread Andy Smith
Hi Anthony, On Thu, Jan 11, 2018 at 03:47:25PM -0800, Anthony Liguori wrote: > On Thu, Jan 11, 2018 at 3:00 PM, Andy Smith <a...@strugglers.net> wrote: > > $ sudo xl list > > Name ID Mem VCPUs State >

Re: [Xen-devel] Clarification regarding Meltdown and 64-bit PV guests

2018-01-13 Thread Andy Smith
Hi Hans, On Sat, Jan 13, 2018 at 10:43:03AM +0100, Hans van Kranenburg wrote: > By injecting a copy of a hypervisor between the outer level hypervisor > (that's called L0 right?) (in HVM or PVH mode) and the guest, having it > just run 1 guest, that (64-bit PV) guest cannot attack its own kernel,

Re: [Xen-devel] [Xen-users] Future of 32-bit PV support

2018-08-16 Thread Andy Smith
Hi Juergen, As this was also addressed to -user I'm going to assume that you do want user response as well. On Thu, Aug 16, 2018 at 08:17:13AM +0200, Juergen Gross wrote: > We'd like to evaluate whether anyone would see problems with: > > - deprecating 32-bit PV guest support in Xen, meaning

[Xen-devel] Problems booting 32-bit PV; just me or more widespread?

2018-08-29 Thread Andy Smith
Hi, I'm sorry this is a long email, but I wanted to explain everything that I have tried, because it seems like quite a few different versions of 32-bit upstream Linux kernel no longer boot as PV guest and I'm surprised I am the first to encounter this. Probably I have done something wrong. I

Re: [Xen-devel] Problems booting 32-bit PV; just me or more widespread?

2018-09-02 Thread Andy Smith
Hi Boris, On Thu, Aug 30, 2018 at 05:59:38PM -0400, Boris Ostrovsky wrote: > On 08/29/2018 08:51 PM, Andy Smith wrote: > > I cannot get any of the Ubuntu packaged 32-bit mainline kernels > > after v4.13.16 that are found at > > http://kernel.ubuntu.com/~kernel-ppa/mainline/

Re: [Xen-devel] 4.10.1 Xen crash and reboot

2019-01-01 Thread Andy Smith
Hello, On Fri, Dec 21, 2018 at 06:55:38PM +0000, Andy Smith wrote: > Is it worth me moving this guest to a test host without pcid=0 to > see if it crashes it, meanwhile keeping production hosts with > pcid=0? And then putting pcid=0 on the test host to see if it > survives longer?

Re: [Xen-devel] 4.10.1 Xen crash and reboot

2018-12-10 Thread Andy Smith
Hi Jan, On Mon, Dec 10, 2018 at 09:29:34AM -0700, Jan Beulich wrote: > >>> On 10.12.18 at 16:58, wrote: > > Are there any other hypervisor command line options that would be > > beneficial to set for next time? > > Well, just like for your report from a couple of weeks ago - if this is > on

Re: [Xen-devel] 4.10.1 Xen crash and reboot

2018-12-21 Thread Andy Smith
Dec 10, 2018 at 03:58:41PM +0000, Andy Smith wrote: > Hi, > > Up front information: > > Today one of my Xen hosts crashed with this logging on the serial: > > (XEN) [ Xen-4.10.1 x86_64 debug=n Not tainted ] > (XEN) CPU:15 > (XEN) RIP:e008:[] guest_4.o

[Xen-devel] 4.10.1 Xen crash and reboot

2018-12-10 Thread Andy Smith
Hi, Up front information: Today one of my Xen hosts crashed with this logging on the serial: (XEN) [ Xen-4.10.1 x86_64 debug=n Not tainted ] (XEN) CPU:15 (XEN) RIP:e008:[] guest_4.o#shadow_set_l1e+0x75/0x6a0 (XEN) RFLAGS: 00010246 CONTEXT: hypervisor (d31v1) (XEN)

[Xen-devel] Sporadic PV guest malloc.c assertion failures and segfaults unless pv-l1tf=false is set

2018-11-24 Thread Andy Smith
Hi, Last weekend I deployed a hypervisor built from 4.10.1 release plus the most recent XSAs (which were under embargo at that time). Previously to this I had only gone as far as XSA-267, having taken a decision to wait before applying later XSAs. So, this most recent deployment included the

Re: [Xen-devel] Sporadic PV guest malloc.c assertion failures and segfaults unless pv-l1tf=false is set

2018-11-25 Thread Andy Smith
Hello, On Sun, Nov 25, 2018 at 06:18:49AM +0000, Andy Smith wrote: > In the text for XSA-273 it says: > > "Shadowing comes with a workload-dependent performance hit to > the guest. Once the guest kernel software updates have been > applied, a well behaved

Re: [Xen-devel] Sporadic PV guest malloc.c assertion failures and segfaults unless pv-l1tf=false is set

2018-11-25 Thread Andy Smith
Hi Andrew, On Sun, Nov 25, 2018 at 02:48:48PM +0000, Andrew Cooper wrote: > Which are your two types of Intel server? 7 of them have Xeon D-1540, 2 of them have Xeon E5-1680v4. I've seen this issue on guests running on both kinds, and my reproducer guest was moved from a production D-1540 server

Re: [Xen-devel] 4.10.1 Xen crash and reboot

2019-01-04 Thread Andy Smith
Hello, On Fri, Jan 04, 2019 at 03:16:32AM -0700, Jan Beulich wrote: > >>> On 01.01.19 at 20:46, wrote: > > I did move the suspect guest to a test host that does not have > > pcid=0 and 10 days later it crashed too: > > Thanks for trying this. It is now pretty clear that we need a means > to

Re: [Xen-devel] 4.10.1 Xen crash and reboot

2019-01-30 Thread Andy Smith
Hi, On Tue, Jan 01, 2019 at 07:46:57PM +0000, Andy Smith wrote: > The test host is slightly different hardware to the others: Xeon > E5-1680v4 on there as opposed to Xeon D-1540 previously. > > Test host is now running with pcid=0 to see if that helps. The > longest this gues

Re: [Xen-devel] "CPU N still not dead..." messages during microcode update stage of boot when smt=0

2019-08-01 Thread Andy Smith
Hi, On Mon, Jul 22, 2019 at 01:06:03PM +0100, Andrew Cooper wrote: > On 22/07/2019 10:16, Jan Beulich wrote: > > On 21.07.2019 22:06, Andy Smith wrote: > >> (XEN) Adding cpu 1 to runqueue 0 > >> (XEN) CPU 1 still not dead... > >> (XEN) CPU 1 still not dea

[Xen-devel] "CPU N still not dead..." messages during microcode update stage of boot when smt=0

2019-07-21 Thread Andy Smith
Hi, My first time using smt=0 on hypervisor command line so not sure how many versions and different pieces of hardware this happens with, but I noticed this during the microcode update stage of boot: (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB (XEN) Adding cpu 1 to runqueue 0 (XEN) CPU 1 still not
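For reference, setting that option on a Debian-style system usually means editing the Xen command line in GRUB; this fragment is an illustrative assumption, not quoted from the thread:

```shell
# /etc/default/grub — hypothetical example: boot Xen with SMT disabled.
# The serial console options are illustrative too, matching the kind of
# serial logging shown in these reports.
GRUB_CMDLINE_XEN_DEFAULT="smt=0 com1=115200,8n1 console=com1,vga"
# then regenerate the boot config:
#   update-grub
```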

[Xen-devel] livepatch-build: What does getting no output from "readelf -wi xen-syms" usually mean?

2019-12-02 Thread Andy Smith
Hi, I've been looking into live patching for the first time. Starting with a 4.12.1 build: $ cd ~/dev $ ls -l total 8 drwxr-xr-x 3 andy andy 4096 Oct 25 16:11 xen drwxr-xr-x 6 andy andy 4096 Dec 2 01:16 livepatch-build-tools (there is already a 4.12.1 hypervisor built in /xen and is what's
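As background to the question in the subject, a minimal sketch of the check involved (an assumption about the usual cause, not a definitive diagnosis): `readelf -wi` dumps the DWARF `.debug_info` section, which livepatch-build-tools relies on, so no output from it typically means xen-syms was built without debug info.

```shell
# check_dwarf FILE — print whether FILE contains DWARF debug info,
# judged by whether `readelf -wi` produces any output at all.
check_dwarf() {
    if [ -z "$(readelf -wi "$1" 2>/dev/null)" ]; then
        echo "no DWARF info in $1"
    else
        echo "DWARF info present in $1"
    fi
}
# e.g. check_dwarf xen-syms
```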

Re: [Xen-devel] bug: unable to LZ4 decompress ub1910 installer kernel when launching domU

2019-12-02 Thread Andy Smith
Hello, On Sun, Dec 01, 2019 at 06:47:14PM +0100, Jeremi Piotrowski wrote: > On Thu, Oct 24, 2019 at 10:12:19AM +0200, Jan Beulich wrote: > > Would you please increase verbosity (xl -vvv create ...) such that we > > can see what exactly the decompression code doesn't like about this […] > I

Re: zstd compressed kernels

2020-11-17 Thread Andy Smith
On Tue, Nov 17, 2020 at 08:48:25PM +0000, Andrew Cooper wrote: > For domU's, tools/libs/guest/xg_dom_bzimageloader.c and > xc_dom_probe_bzimage_kernel() > > (Wow this plumbing is ugly and in need of some rationalisation...) Though not part of Xen, the PV part of grub could also do with some love

Re: dom0 suddenly blocking on all access to md device

2021-06-12 Thread Andy Smith
iguring Xen. Should I take a kernel from buster-backports which would currently be: https://packages.debian.org/buster-backports/linux-image-5.10.0-0.bpo.5-amd64 or should I build a kernel package from a mainline release? Thanks, Andy On Fri, Feb 26, 2021 at 10:39:27PM +0000, Andy Smith wr

Re: dom0 suddenly blocking on all access to md device

2021-06-12 Thread Andy Smith
Hi Rob, On Sat, Jun 12, 2021 at 05:47:49PM -0500, Rob Townley wrote: > mdadm.conf has email reporting capabilities to alert to failing drives. > Test that you receive emails. I do receive those emails, when such things occur, but the drives are not failing. Devices are not kicked out of MD

dom0 suddenly blocking on all access to md device

2021-02-26 Thread Andy Smith
Hi, I suspect this might be an issue in the dom0 kernel (Debian buster, kernel 4.19.0-13-amd64), but just lately I've been sporadically having issues where dom0 blocks or severely slows down on all access to the particular md device that hosts all domU block devices. Setup in dom0: an md RAID10

Re: dom0 suddenly blocking on all access to md device

2021-02-26 Thread Andy Smith
Oops, I didn't finish this sentence before sending: On Fri, Feb 26, 2021 at 10:39:27PM +0000, Andy Smith wrote: > Also, it's always the md device that the guest block devices are > on that is stalled - IO to other devices in dom0 …seems fine. Thanks, Andy

Re: dom0 suddenly blocking on all access to md device

2021-03-01 Thread Andy Smith
Hello, On Fri, Feb 26, 2021 at 10:39:27PM +0000, Andy Smith wrote: > just lately I've been sporadically having issues where dom0 blocks > or severely slows down on all access to the particular md device > that hosts all domU block devices. This just happened again on the sa

Filesystem corruption on restore without "xen-blkfront: introduce blkfront_gather_backend_features()"

2021-08-27 Thread Andy Smith
Hi, [This conversation started on the xen-security-issues-discuss list as I mistakenly thought it was to do with then-embargoed XSA patches] I did "xl save" on 17 domUs that were running under dom0 kernel 4.19.0-16-amd64 (4.19.181-1), hypervisor 4.14.2. I then rebooted dom0 into kernel

5.10.40 dom0 kernel - nvme: Invalid SGL for payload:131072 nents:13

2021-07-20 Thread Andy Smith
Hi, I have a Debian 10 (buster/stable) dom0 running hypervisor 4.14.2. For almost 2 years it's been using the packaged Debian stable kernel which is 4.19.x. Last night I upgraded the kernel to the buster-backports package which is based on 5.10.40 and about 4 hours later got this: Jul 20

Re: 5.10.40 dom0 kernel - nvme: Invalid SGL for payload:131072 nents:13

2021-07-21 Thread Andy Smith
Hi Jan, On Wed, Jul 21, 2021 at 10:10:13AM +0200, Jan Beulich wrote: > Since xen-blkback only talks in terms of bio-s, I don't think it is > the party responsible for honoring such driver restrictions. Instead > I'd expect the block layer's bio merging to be where this needs to be > observed.

Re: 5.10.40 dom0 kernel - nvme: Invalid SGL for payload:131072 nents:13

2021-07-23 Thread Andy Smith
On Fri, Jul 23, 2021 at 08:10:28PM +0000, Andy Smith wrote: > Hmm, I have the sector offset in the MD device so maybe I can > convert that into a logical volume to know if a particular guest is > provoking it… So for anyone who ever wants to do that sort of thing: # Find out offset that
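The procedure hinted at there — mapping a sector offset on the MD device back to the logical volume that contains it — can be sketched roughly as follows; the parsing of `dmsetup table` output and all names are assumptions, not taken from the original thread:

```shell
# find_lv OFFSET — reads `dmsetup table` lines on stdin and prints any
# LV whose linear segment covers the given sector offset on the
# underlying device. Assumed line format (linear target):
#   vg-lv: start length linear major:minor pv_start_sector
find_lv() {
    awk -v off="$1" '
        $4 == "linear" {
            start = $NF + 0
            if (off >= start && off < start + $3) {
                sub(/:$/, "", $1)
                print $1
            }
        }'
}
# Usage (hypothetical): dmsetup table | find_lv 123456789
```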

Re: 5.10.40 dom0 kernel - nvme: Invalid SGL for payload:131072 nents:13

2021-07-23 Thread Andy Smith
Hi Jan, On Wed, Jul 21, 2021 at 04:49:26PM +0200, Jan Beulich wrote: > On 21.07.2021 16:19, Andy Smith wrote: > > I understand that below 4GiB memory use of swiotlb is disabled so > > all the time previously this was not used, and now is. Perhaps the > > bug is in there

Re: 5.10.40 dom0 kernel - nvme: Invalid SGL for payload:131072 nents:13

2021-07-25 Thread Andy Smith
Hello, On Tue, Jul 20, 2021 at 10:32:39PM +0000, Andy Smith wrote: > I have a Debian 10 (buster/stable) dom0 running hypervisor 4.14.2. > For almost 2 years it's been using the packaged Debian stable kernel > which is 4.19.x. > > Last night I upgraded the kernel to the buster-ba

Re: qemu-xen is unavailable

2022-01-05 Thread Andy Smith
Hello, On Wed, Jan 05, 2022 at 04:27:56PM +0000, Anthony PERARD wrote: > The bug here is that libxl shouldn't print this message for PVH guest > because it's confusing. It also does it for PV guests, again if no qemu is installed (or needed). I squash it by adding:

Some feature requests for guest consoles

2022-03-14 Thread Andy Smith
Hi, Mike H made a feature request in: https://lists.xenproject.org/archives/html/xen-users/2022-03/msg9.html for the Xen guest console as connected to with "xl console" to correctly support the terminal size rather than always being 80x20. Additionally I wondered about some other