On Fri, 2025-08-08 at 10:17 +0200, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
> > On 06.08.2025 23:46, Miles Glenn wrote:
> > > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> > ...
> > > > These seem to be interesting to have :
> > > >
> > > >    ppc/xive2: Fix treatment of PIPR in CPPR update
> > > >    ppc/xive2: Fix irq preempted by lower priority group irq
>
> I added :
>
>    ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>
> > > >    ppc/xive: Fix PHYS NSR ring matching
> > > >    ppc/xive2: fix context push calculation of IPB priority
> > > >    ppc/xive2: Remote VSDs need to match on forwarding address
> > > >    ppc/xive2: Fix calculation of END queue sizes
> > > >    ppc/xive: Report access size in XIVE TM operation error logs
> > > >    ppc/xive: Fix xive trace event output
> > >
> > > I'm still not sure that the benefit is worth the effort, but I
> > > certainly don't have a problem with them being backported if someone
> > > has the desire and the time to do it.
> >
> > I mentioned already that the 10.0 series will (hopefully) be an LTS
> > series. At the very least, it is what we'll have in the upcoming debian
> > stable release (trixie), which will be stable for the next 2 years.
> > Whether it is important to have working Power* support in debian -
> > I don't know.
> >
> > All the mentioned patches applied to the 10.0 branch cleanly (in
> > reverse order, from bottom to top), so there's no effort needed
> > to back-port them. And the result passes at least the standard
> > qemu testsuite. So it looks like everything works as intended.
> > 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu" > kernel on a PowerNV10 system defined as : > > Architecture: ppc64le > Byte Order: Little Endian > CPU(s): 16 > On-line CPU(s) list: 0-15 > Model name: POWER10, altivec supported > Model: 2.0 (pvr 0080 1200) > Thread(s) per core: 4 > Core(s) per socket: 2 > Socket(s): 2 > Frequency boost: enabled > CPU(s) scaling MHz: 76% > CPU max MHz: 3800.0000 > CPU min MHz: 2000.0000 > Caches (sum of all): > L1d: 128 KiB (4 instances) > L1i: 128 KiB (4 instances) > NUMA: > NUMA node(s): 2 > NUMA node0 CPU(s): 0-7 > NUMA node1 CPU(s): 8-15 > > with devices : > > 0000:00:00.0 PCI bridge: IBM Device 0652 > 0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM > Express Controller (rev 02) > 0001:00:00.0 PCI bridge: IBM Device 0652 > 0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e > 0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host > Controller (rev 03) > 0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network > Connection > 0002:00:00.0 PCI bridge: IBM Device 0652 > ... > > A rhel9 nested guest boots too. > > Poweroff and reboot are fine. > > > > Michael, > > I would say ship it. > > > Glenn, Gautam, > > It would nice to get rid of these messages. > > [ 0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16 > [ 2.270794918,5] XIVE: [ IC 00 ] Resetting one xive... 
>   [ 2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>    CPU 0100 Backtrace:
>     S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>     S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>     S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>     S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>     S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>     --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>   [ 2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
>
> Is it a modeling issue ?
>
> Thanks,
>
> C.
Thank you, Cédric!

I'm not sure what's causing that error message. I'm assuming it wasn't
there before, which would probably mean that something (the model?) is
now enabling the PHYS CAMs at initialization or realization, which it
didn't do before.

Mike Kowal, is that the expected behavior? Can you take a look when you
have a chance?

Thanks,

Glenn
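To illustrate the suspicion above: this is a hypothetical toy sketch (not QEMU or skiboot code; `ThreadContext`, `firmware_enable_phys_cam`, and the flag names are invented for illustration) of how an "already enabled" error would appear if the model pre-enabled the PHYS CAM line before firmware's own enable call during XIVE reset.

```python
class ThreadContext:
    """Toy stand-in for a per-CPU XIVE thread context (hypothetical)."""

    def __init__(self, phys_cam_enabled_at_reset):
        # The question under discussion: should this start out False?
        self.phys_cam_enabled = phys_cam_enabled_at_reset
        self.errors = []

    def firmware_enable_phys_cam(self):
        # Mirrors the kind of check that would log the message seen
        # in the OPAL backtrace: enabling an already-enabled CAM line.
        if self.phys_cam_enabled:
            self.errors.append("Error enabling PHYS CAM already enabled")
        self.phys_cam_enabled = True


# Old behavior: CAM line starts disabled, firmware's enable is clean.
old = ThreadContext(phys_cam_enabled_at_reset=False)
old.firmware_enable_phys_cam()
print(old.errors)  # []

# Suspected new behavior: the model pre-enables it, firmware complains.
new = ThreadContext(phys_cam_enabled_at_reset=True)
new.firmware_enable_phys_cam()
print(new.errors)  # ['Error enabling PHYS CAM already enabled']
```

If that is indeed the sequence, the fix would be either to leave the CAM line disabled at model reset or to have the reset path disable it before re-enabling, but someone familiar with the model would need to confirm.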