On 14.05.19 11:03, David Hildenbrand wrote:
> On 14.05.19 11:00, Cornelia Huck wrote:
>> On Tue, 14 May 2019 10:56:43 +0200
>> Christian Borntraeger <borntrae...@de.ibm.com> wrote:
>>
>>> On 14.05.19 10:50, David Hildenbrand wrote:
>>
>>>> Another idea for temporary handling: Simply only indicate 240 CPUs to
>>>> the guest if the response does not fit into a page. Once we have that
>>>> SCLP thingy, this will be fixed. Guest migration back and forth should
>>>> work, as the VCPUs are fully functional (and initially always stopped),
>>>> the guest will simply not be able to detect them via SCLP when booting
>>>> up, and therefore not use them.
>>>
>>> Yes, that looks like a good temporary solution. In fact, if the guest
>>> relies on simply probing, it could even make use of the additional CPUs.
>>> It's just the SCLP response that is limited to 240 (or make it 247?).
>>
>> Where did the 240 come from - extra spare room? If so, 247 would
>> probably be all right?
>>
>
> +++ b/include/hw/s390x/sclp.h
> @@ -133,6 +133,8 @@ typedef struct ReadInfo {
>      uint16_t highest_cpu;
>      uint8_t _reserved5[124 - 122];     /* 122-123 */
>      uint32_t hmfai;
> +    uint8_t _reserved7[134 - 128];     /* 128-133 */
> +    uint8_t fac134;
>      struct CPUEntry entries[0];
>  } QEMU_PACKED ReadInfo;
>
> So we have "4096 - 135 + 1" bytes of memory. Each element is 16 bytes wide.
> -> 246 CPUs fit.
(I meant 247 :( )

--
Thanks,

David / dhildenb