[Qemu-devel] [PATCH] Move some struct definitions, such as QEMUTimer and QEMUClock, from .c to .h

2012-02-02 Thread Wei Yang
Tested on i386 platform.

Signed-off-by: Wei Yang
---
 qemu-timer.c |   40 ----------------------------------------
 qemu-timer.h |   41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/qemu-timer.c b/qemu-timer.c
index cd026c6..2b5cc48 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -46,46 +46,6 @@
 
 #include "qemu-timer.h"
 
-/***/
-/* timers */
-
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL  1
-#define QEMU_CLOCK_HOST 2
-
-struct QEMUClock {
-int type;
-int enabled;
-
-QEMUTimer *active_timers;
-
-NotifierList reset_notifiers;
-int64_t last;
-};
-
-struct QEMUTimer {
-QEMUClock *clock;
-int64_t expire_time;   /* in nanoseconds */
-int scale;
-QEMUTimerCB *cb;
-void *opaque;
-struct QEMUTimer *next;
-};
-
-struct qemu_alarm_timer {
-char const *name;
-int (*start)(struct qemu_alarm_timer *t);
-void (*stop)(struct qemu_alarm_timer *t);
-void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
-#if defined(__linux__)
-int fd;
-timer_t timer;
-#elif defined(_WIN32)
-HANDLE timer;
-#endif
-char expired;
-char pending;
-};
 
 static struct qemu_alarm_timer *alarm_timer;
 
diff --git a/qemu-timer.h b/qemu-timer.h
index 67ca72e..5bf2fc7 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -20,6 +20,47 @@
 typedef struct QEMUClock QEMUClock;
 typedef void QEMUTimerCB(void *opaque);
 
+/***/
+/* timers */
+
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL  1
+#define QEMU_CLOCK_HOST 2
+
+struct QEMUClock {
+int type;
+int enabled;
+
+QEMUTimer *active_timers;
+
+NotifierList reset_notifiers;
+int64_t last;
+};
+
+struct QEMUTimer {
+QEMUClock *clock;
+int64_t expire_time;   /* in nanoseconds */
+int scale;
+QEMUTimerCB *cb;
+void *opaque;
+struct QEMUTimer *next;
+};
+
+struct qemu_alarm_timer {
+char const *name;
+int (*start)(struct qemu_alarm_timer *t);
+void (*stop)(struct qemu_alarm_timer *t);
+void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
+#if defined(__linux__)
+int fd;
+timer_t timer;
+#elif defined(_WIN32)
+HANDLE timer;
+#endif
+char expired;
+char pending;
+};
+
 /* The real time clock should be used only for stuff which does not
change the virtual machine state, as it is run even if the virtual
machine is stopped. The real time clock has a frequency of 1000
-- 
1.7.4.1




[Qemu-devel] how can I analyze the trace log?

2012-02-12 Thread Wei Yang
All

I enabled tracing with --enable-trace-backend=simple and created the
events file like this:
g_realloc
g_malloc

Then I start QEMU with the following command.
./i386-softmmu/qemu-system-i386 -enable-kvm -drive
file=../../kvm/ubuntu.img -boot dc -m 512 -usb
 -monitor stdio -trace events=qemu_trace_events,file=qemu_trace.log

After some running time, I run the script like this:
./scripts/simpletrace.py qemu_trace_events_parse qemu_trace.log

The qemu_trace_events_parse file is:
g_realloc(addr)
g_malloc(addr)

The output looks like:
g_malloc 1.831 addr=0xb945d1f0
g_malloc 2.498 addr=0xb945d1f0
g_realloc 4.715 addr=0x10
g_realloc 1.520 addr=0xc
g_realloc 1.505 addr=0xc

Are the steps I used correct?
I just guessed the format of the input events file for simpletrace.py.
With so many available events, how can I specify the format of all
of them?

-- 
Richard Yang
Help You, Help Me



Re: [Qemu-devel] How to follow a child process created in the guest OS?

2012-02-12 Thread Wei Yang
2012/2/11 malc :
> On Sat, 11 Feb 2012, Andreas Färber wrote:
>
>> On 10.02.2012 11:26, ??? wrote:
>> > On Fri, Feb 10, 2012 at 08:14:41AM +, Stefan Hajnoczi wrote:
>> >> On Thu, Feb 09, 2012 at 06:33:16PM +0800, ??? wrote:
>> >>> I am running a tiny OS on QEMU and debugging it with gdbstub. The tiny 
>> >>> OS will
>> >>> fork process 1, 2, ... and so on. I want to follow the child process, 
>> >>> [...]
>> >>>
>> >>>   Is there a way to do what I'm trying to do? Thanks!
>>
>> > - Tiny OS code -
>> > void main(void)   /* This really IS void, no error here. */
>> > {
>> >   /* initialize enviroment */
>> >
>> >   sti();
>> >   move_to_user_mode();
>> >   if (!fork()) {    /* we count on this going ok */
>> >     init();         // task 1
>> >   }
>> >
>> >   for(;;) pause();  // task 0
>> > }
>> > 
>> >
>> >   I am running this tiny OS on QEMU then using GDB to connect it.
>> > I want to follow task 1 after the forking, [...]
>>

Can the QEMU gdbstub debug a user-space process?

-- 
Richard Yang
Help You, Help Me



Re: [Qemu-devel] how can I analyze the trace log?

2012-02-12 Thread Wei Yang
Hi, is this kind of tracing not commonly used?

2012/2/12 Wei Yang :
> All
>
> I enable the trace function with --enable-trace-backend=simple and I
> create the event file like this
> g_realloc
> g_malloc
>
> Then I start the qemu with following command.
> ./i386-softmmu/qemu-system-i386 -enable-kvm -drive
> file=../../kvm/ubuntu.img -boot dc -m 512 -usb
>  -monitor stdio -trace events=qemu_trace_events,file=qemu_trace.log
>
> After some run time, I run the script like:
> ./scripts/simpletrace.py qemu_trace_events_parse qemu_trace.log
>
> The qemu_trace_events_parse is :
> g_realloc(addr)
> g_malloc(addr)
>
> The output looks like:
> g_malloc 1.831 addr=0xb945d1f0
> g_malloc 2.498 addr=0xb945d1f0
> g_realloc 4.715 addr=0x10
> g_realloc 1.520 addr=0xc
> g_realloc 1.505 addr=0xc
>
> The steps I used is correct?
> I just guess the format of input events file of the simpletrace.py.
> For so many available events, how could I specify the format of all
> those events?
>
> --
> Richard Yang
> Help You, Help Me



-- 
Richard Yang
Help You, Help Me



[Qemu-devel] [PATCH] kvm: shorten the parameter list for get_real_device()

2013-08-18 Thread Wei Yang
get_real_device() has 5 parameters, and the last 4 are already contained in
the first one, a structure.

This patch removes the last 4 parameters and uses them directly from the
first parameter.

Signed-off-by: Wei Yang 
---
 hw/i386/kvm/pci-assign.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/hw/i386/kvm/pci-assign.c b/hw/i386/kvm/pci-assign.c
index 5618173..011764f 100644
--- a/hw/i386/kvm/pci-assign.c
+++ b/hw/i386/kvm/pci-assign.c
@@ -568,8 +568,7 @@ static int get_real_device_id(const char *devpath, uint16_t 
*val)
 return get_real_id(devpath, "device", val);
 }
 
-static int get_real_device(AssignedDevice *pci_dev, uint16_t r_seg,
-   uint8_t r_bus, uint8_t r_dev, uint8_t r_func)
+static int get_real_device(AssignedDevice *pci_dev)
 {
 char dir[128], name[128];
 int fd, r = 0, v;
@@ -582,7 +581,8 @@ static int get_real_device(AssignedDevice *pci_dev, 
uint16_t r_seg,
 dev->region_number = 0;
 
 snprintf(dir, sizeof(dir), "/sys/bus/pci/devices/%04x:%02x:%02x.%x/",
- r_seg, r_bus, r_dev, r_func);
+ pci_dev->host.domain, pci_dev->host.bus,
+ pci_dev->host.slot, pci_dev->host.function);
 
 snprintf(name, sizeof(name), "%sconfig", dir);
 
@@ -1769,8 +1769,7 @@ static int assigned_initfn(struct PCIDevice *pci_dev)
 memcpy(dev->emulate_config_write, dev->emulate_config_read,
sizeof(dev->emulate_config_read));
 
-if (get_real_device(dev, dev->host.domain, dev->host.bus,
-dev->host.slot, dev->host.function)) {
+if (get_real_device(dev)) {
 error_report("pci-assign: Error: Couldn't get real device (%s)!",
  dev->dev.qdev.id);
 goto out;
-- 
1.7.5.4




Re: [Qemu-devel] [PULL 9/9] kvm: shorten the parameter list for get_real_device()

2013-08-23 Thread Wei Yang
Paolo,

Is it necessary to add this?

From: Wei Yang 

On Fri, Aug 23, 2013 at 11:03:35AM +0200, Paolo Bonzini wrote:
>get_real_device() has 5 parameters with the last 4 is contained in the first
>structure.
>
>This patch removes the last 4 parameters and directly use them from the first
>parameter.
>
>Acked-by: Alex Williamson 
>Signed-off-by: Wei Yang 
>Signed-off-by: Paolo Bonzini 
>---
> hw/i386/kvm/pci-assign.c | 9 -
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/hw/i386/kvm/pci-assign.c b/hw/i386/kvm/pci-assign.c
>index ff33dc8..73941b2 100644
>--- a/hw/i386/kvm/pci-assign.c
>+++ b/hw/i386/kvm/pci-assign.c
>@@ -568,8 +568,7 @@ static int get_real_device_id(const char *devpath, 
>uint16_t *val)
> return get_real_id(devpath, "device", val);
> }
>
>-static int get_real_device(AssignedDevice *pci_dev, uint16_t r_seg,
>-   uint8_t r_bus, uint8_t r_dev, uint8_t r_func)
>+static int get_real_device(AssignedDevice *pci_dev)
> {
> char dir[128], name[128];
> int fd, r = 0, v;
>@@ -582,7 +581,8 @@ static int get_real_device(AssignedDevice *pci_dev, 
>uint16_t r_seg,
> dev->region_number = 0;
>
> snprintf(dir, sizeof(dir), "/sys/bus/pci/devices/%04x:%02x:%02x.%x/",
>- r_seg, r_bus, r_dev, r_func);
>+ pci_dev->host.domain, pci_dev->host.bus,
>+ pci_dev->host.slot, pci_dev->host.function);
>
> snprintf(name, sizeof(name), "%sconfig", dir);
>
>@@ -1769,8 +1769,7 @@ static int assigned_initfn(struct PCIDevice *pci_dev)
> memcpy(dev->emulate_config_write, dev->emulate_config_read,
>sizeof(dev->emulate_config_read));
>
>-if (get_real_device(dev, dev->host.domain, dev->host.bus,
>-dev->host.slot, dev->host.function)) {
>+if (get_real_device(dev)) {
> error_report("pci-assign: Error: Couldn't get real device (%s)!",
>  dev->dev.qdev.id);
> goto out;
>-- 
>1.8.3.1

-- 
Richard Yang
Help you, Help me




Re: [Qemu-devel] [PULL 9/9] kvm: shorten the parameter list for get_real_device()

2013-08-23 Thread Wei Yang
Paolo,

Sorry, maybe I am not familiar with the patch format on qemu-devel.

I didn't see the "From: Wei Yang" line in this one either.

On Fri, Aug 23, 2013 at 11:39:53AM +0200, Paolo Bonzini wrote:
>get_real_device() has 5 parameters with the last 4 is contained in the first
>structure.
>
>This patch removes the last 4 parameters and directly use them from the first
>parameter.
>
>Acked-by: Alex Williamson 
>Signed-off-by: Wei Yang 
>Signed-off-by: Paolo Bonzini 
>---
> hw/i386/kvm/pci-assign.c | 9 -
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>diff --git a/hw/i386/kvm/pci-assign.c b/hw/i386/kvm/pci-assign.c
>index ff33dc8..73941b2 100644
>--- a/hw/i386/kvm/pci-assign.c
>+++ b/hw/i386/kvm/pci-assign.c
>@@ -568,8 +568,7 @@ static int get_real_device_id(const char *devpath, 
>uint16_t *val)
> return get_real_id(devpath, "device", val);
> }
>
>-static int get_real_device(AssignedDevice *pci_dev, uint16_t r_seg,
>-   uint8_t r_bus, uint8_t r_dev, uint8_t r_func)
>+static int get_real_device(AssignedDevice *pci_dev)
> {
> char dir[128], name[128];
> int fd, r = 0, v;
>@@ -582,7 +581,8 @@ static int get_real_device(AssignedDevice *pci_dev, 
>uint16_t r_seg,
> dev->region_number = 0;
>
> snprintf(dir, sizeof(dir), "/sys/bus/pci/devices/%04x:%02x:%02x.%x/",
>- r_seg, r_bus, r_dev, r_func);
>+ pci_dev->host.domain, pci_dev->host.bus,
>+ pci_dev->host.slot, pci_dev->host.function);
>
> snprintf(name, sizeof(name), "%sconfig", dir);
>
>@@ -1769,8 +1769,7 @@ static int assigned_initfn(struct PCIDevice *pci_dev)
> memcpy(dev->emulate_config_write, dev->emulate_config_read,
>sizeof(dev->emulate_config_read));
>
>-if (get_real_device(dev, dev->host.domain, dev->host.bus,
>-dev->host.slot, dev->host.function)) {
>+if (get_real_device(dev)) {
> error_report("pci-assign: Error: Couldn't get real device (%s)!",
>  dev->dev.qdev.id);
> goto out;
>-- 
>1.8.3.1

-- 
Richard Yang
Help you, Help me




[Qemu-devel] Failed to set a breakpoint on start_kernel

2012-03-15 Thread Wei Yang
All

I like QEMU very much and know it can be used to debug the kernel.

I tried what I found on the web but couldn't stop at the breakpoint.
Below is what I did.

1. Both host and guest installed the same OS, Fedora16 x86_64.

2. Compile QEMU with
./configure --target-list=x86_64-softmmu --enable-kvm
--enable-debug-tcg --enable-debug --enable-trace-backend=simple

3. With this command I can boot up my guest.
./../qemu/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -smp 4 -m
1024  -boot dc fedora16.img -monitor stdio

4. I cloned the kernel source in the guest and built a new kernel and initrd.
I can start the guest with this new kernel successfully.

5. I copy the initrd.img and the kernel .config out to the host and
compile the kernel on the host.
The kernel source code is identical on host and guest.

6. I start the guest with the kernel and initrd on host
./../qemu/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -smp 4 -m
1024  -boot dc fedora16.img -monitor stdio -kernel
~/git/linux-yinghai/arch/x86_64/boot/bzImage -initrd
~/git/debug/initramfs-3.0.0.img -append
"root=/dev/mapper/vg_wizard-lv_root ro rd.lvm.lv=vg_wizard/lv_root
rd.md=0 rd.lvm.lv=vg_wizard/lv_swap"

This works fine.

7. Then I start the guest with gdbstub option
./../qemu/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -smp 4 -m
1024  -boot dc fedora16.img -monitor stdio -kernel
/home/ywywyang/git/linux-yinghai/arch/x86_64/boot/bzImage -initrd
/home/ywywyang/git/debug/initramfs-3.0.0.img -append
"root=/dev/mapper/vg_wizard-lv_root ro rd.lvm.lv=vg_wizard/lv_root
rd.md=0 rd.lvm.lv=vg_wizard/lv_swap" -S -gdb tcp::4321

Then the guest stops at the beginning.

8. Attach the gdb in the kernel source directory
gdb
file vmlinux
target remote localhost:4321
b start_kernel
c

   Then the guest just runs on happily without stopping.

Also, "info b" shows the breakpoint is set.

In which step did I make a mistake?


-- 
Wei Yang
Help You, Help Me



Re: [Qemu-devel] Failed to set a breakpoint on start_kernel

2012-03-17 Thread Wei Yang
2012/3/17 Jan Kiszka :
> [ re-added qemu-devel to CC ]
>
> On 2012-03-17 13:10, Wei Yang wrote:
>>> Two major issues with this procedure:
>>>
>>> 1. When using kvm, a soft breakpoint (as set by 'b') will inject a trap
>>> instruction into the guest image - which is not yet loaded after the
>>> bios ran. You need to use a hardware breakpoint in this case.
>>>
>>> 2. Due to gdb limitations, you cannot switch between 16/32-bit mode (the
>>> CPU starts in 16 bit) and the 64-bit mode of kernel within the same gdb
>>> session. Therefore:
>>>  - let the target run until Linux is active
>>>  - attach gdb
>>>  - issue "hw start_kernel"
>>>  - reboot (e.g. "monitor system_reset")
>>>  - you will hit the breakpoint, and gdb will be usable
>>>
>>> Jan
>>>
>>>
>> oh, so when qemu run with kvm enabled, I couldn't debug the kernel right?
>
> That's not what I said. You need to be aware of how it works. And, in
> contrast to pure emulation, kvm uses a non-transparent mechanism for
> injecting software breakpoints. Consider it the price for the gained speed.
>

Thanks :)
It works.  Though I don't understand it totally, I get the rough idea of it. :)

>>
>> I tried to run qemu with out -enable-kvm, kernel could stop at the break 
>> point.
>>
>> BTW, I tried "hw start_kernel", but it failed.
>> (gdb) hw start_kernel
>> Undefined command: "hw".  Try "help".
>
> Sorry, typo. Must be "hb".
>
> Jan
>



-- 
Richard Yang
Help You, Help Me



Re: [Qemu-devel] Failed to set a breakpoint on start_kernel

2012-03-17 Thread Wei Yang
>> You can also try my patch :
>>
>> http://patchwork.ozlabs.org/patch/137543/
>
> Unless there is a use case beyond this x86 band-aid, lets focus on
> getting gdb right. Reminds me that gdb folks asked me to file a bug
> about this - which I still need to do. :-/
>
> Jan
>

Jan, I haven't tried your patch yet.

You mean this hardware-assisted breakpoint is only supported on x86 for now?

-- 
Richard Yang
Help You, Help Me



[Qemu-devel] one question on the makefile

2012-06-12 Thread Wei Yang
All,

I saw a makefile rule which confused me.

This is in tests/Makefile:

.PHONY: $(patsubst %, check-qtest-%, $(QTEST_TARGETS))
$(patsubst %, check-qtest-%, $(QTEST_TARGETS)): check-qtest-%: $(check-qtest-y)
	$(call quiet-command,QTEST_QEMU_BINARY=$*-softmmu/qemu-system-$* \
	    gtester $(GTESTER_OPTIONS) -m=$(SPEED) $(check-qtest-$*-y),"GTESTER $@")

I know the general idea is to create a rule for targets such as
check-qtest-x86_64.

There are two colons, while usually there is only one colon in a dependency line.

And the resulting dependency is
check-qtest-x86_64: tests/fdc-test tests/rtc-test tests/cwd-test

It expands to the content of check-qtest-x86_64-y.

I searched Google and the make manual but did not find an answer.
Could someone give me a hint?
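For reference, the GNU Make manual calls this two-colon form a static pattern rule: `targets : target-pattern : prereq-pattern`. Each target in the list is matched against the target pattern, and the stem (available as $*) is substituted into the prerequisites and recipe. A minimal sketch with made-up file names:

```make
# Static pattern rule: build each object in OBJS from its .c file.
# foo.o matches %.o with stem "foo", so its prerequisite is foo.c.
OBJS = foo.o bar.o

$(OBJS): %.o: %.c
	$(CC) -c $< -o $@
```

In the rule quoted above, the prerequisite part, $(check-qtest-y), contains no '%', so it is used literally for every target, while the stem still lands in $* inside the recipe.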


Thanks a lot.

-- 
Richard Yang
Help You, Help Me



[Qemu-devel] Curious about this code in the ohci_service_td()

2012-02-18 Thread Wei Yang
I am reading the code in ohci_service_td().

There is a calculation of the length of the buffer.

if ((td.cbp & 0xf000) != (td.be & 0xf000)) {
len = (td.be & 0xfff) + 0x1001 - (td.cbp & 0xfff);
} else {
len = (td.be - td.cbp) + 1;
}

I am curious about the first case.
So td.be could be less than td.cbp?

It seems this is a buffer ring, with a size of around 4K?
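For what it's worth, the arithmetic of the two branches can be checked with a quick sketch (Python, modeling only the expression quoted above, not QEMU's actual code):

```python
# Sketch of the OHCI TD length calculation above. Offsets within a 4K
# page are the low 12 bits; cbp and be may sit in two physically
# disjoint pages, so their page numbers carry no ordering information.
def ohci_td_len(cbp: int, be: int) -> int:
    if (cbp & 0xf000) != (be & 0xf000):
        # Buffer spans two (possibly disjoint) pages: only the in-page
        # offsets matter, so be may well be numerically below cbp.
        return (be & 0xfff) + 0x1001 - (cbp & 0xfff)
    # Same page: plain inclusive distance.
    return (be - cbp) + 1

print(ohci_td_len(0x2000, 0x200f))  # 16: same page, 0x2000..0x200f
print(ohci_td_len(0x1ff0, 0x200f))  # 32: 16 bytes in each of two pages
print(ohci_td_len(0x3ff0, 0x100f))  # 32: disjoint pages, be < cbp
```

As the third call suggests, td.be can indeed be numerically smaller than td.cbp when the two pages are disjoint; the formula uses only the in-page offsets plus one full page.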

-- 
Richard Yang
Help You, Help Me



Re: [Qemu-devel] Curious about this code in the ohci_service_td()

2012-02-18 Thread Wei Yang
2012/2/19 Peter Maydell :
> On 18 February 2012 16:19, Wei Yang  wrote:
>> I am reading the code in ohci_service_td().
>>
>> There is a calculation of the length of the buffer.
>>
>>        if ((td.cbp & 0xf000) != (td.be & 0xf000)) {
>>            len = (td.be & 0xfff) + 0x1001 - (td.cbp & 0xfff);
>>        } else {
>>            len = (td.be - td.cbp) + 1;
>>        }
>>
>> I am curious about the first case.
>> So the td.be could be less than the tb.cbp?
>
Thanks Peter.

> Yes. See the OHCI spec:
> ftp://ftp.compaq.com/pub/supportinformation/papers/hcir1_0a.pdf
> and specifically section 4.3: the data buffer described by a
> TD may span two separate physically disjoint pages. The exact

I see this sentence. So what this sentence means is, for example, I have
page 1
page 2
page 3
which contain the content of the TD (in the guest, I think).

The TD buffer could be located in page 1 and page 3, without touching page 2?
And I am wondering whether the buffer in the guest is page aligned or not.

In struct OHCIState, there is usb_buf[8192].
I think this is the maximum packet size defined by the MPS field in the ED.
The size is exactly 2 pages, if the page size is 4K.

This usb_buf is the buffer in the HC, I think, so each time we use ohci_copy_td()
to get the content from the guest and put it in usb_buf.

Now, coming back to my previous question:
td.cbp and td.be point to addresses in the guest.
This means
td.be may point to page 1
while
td.cbp may point to page 3?

So td.cbp > td.be?

> page crossing behaviour is described more specifically in
> section 4.3.1.3.1. (We don't implement it quite as described
> there because we try to handle the whole data buffer at once
> rather than byte-by-byte, but the effect is the same. See
> also the code later on in the function under 'Transmission
> succeeded' which updates td.cbp to the right value if we only
> managed to send part of the data.)
>
I quoted it from the specification.
===
If during the data transfer the buffer address contained in the HC's
working copy of
CurrentBufferPointer crosses a 4K boundary, the upper 20 bits of
Buffer End are copied to the
working value of CurrentBufferPointer causing the next buffer address
to be the 0th byte in the
same 4K page that contains the last byte of the buffer (the 4K
boundary crossing may occur
within a data packet transfer.)
===

As I asked before, if the buffer is page aligned, the
CurrentBufferPointer points to the beginning of one page.
Since the MPS is 8K, Buffer End has just two cases.
 Page A   Page B
  |---||---|
  ^  ^  ^
  |   | case 1   | case 2
cbp   bebe

PageA and PageB are disjoint pages.

For case 1, it is easy to calculate the buffer size.

For case 2, since the pages may be disjoint, after copying the first page
cbp is set to the start of the next page.  If we just calculate
cbp = cbp + 0x1000,
this may not be the start address of Page B.

I think my assumption is based on:
1. the buffer size is less than the MPS, which is 8K.
2. the cbp is page aligned.

Hmm I looked at the code again.

len = (td.be & 0xfff) + 0x1001 - (td.cbp & 0xfff);
If cbp is page aligned, then there is no need to do (td.cbp & 0xfff)?

> -- PMM



-- 
Richard Yang
Help You, Help Me



[Qemu-devel] In OHCI, will the HC send a packet to each attached port?

2012-02-19 Thread Wei Yang
All

In the function ohci_service_td(), there is a loop.

for (i = 0; i < ohci->num_ports; i++) {
dev = ohci->rhport[i].port.dev;
if ((ohci->rhport[i].ctrl & OHCI_PORT_PES) == 0)
continue;

if (ohci->async_td) {
/* ??? The hardware should allow one active packet per
   endpoint.  We only allow one active packet per controller.
   This should be sufficient as long as devices respond in a
   timely manner.
 */
#ifdef DEBUG_PACKET
DPRINTF("Too many pending packets\n");
#endif
return 1;
}
usb_packet_setup(&ohci->usb_packet, pid,
 OHCI_BM(ed->flags, ED_FA),
 OHCI_BM(ed->flags, ED_EN));
usb_packet_addbuf(&ohci->usb_packet, ohci->usb_buf, pktlen);
ret = usb_handle_packet(dev, &ohci->usb_packet);
if (ret != USB_RET_NODEV)
break;
}

I searched the spec:
HcRhPortStatus.PES indicates whether the port is
enabled or disabled; 1 means enabled.

So for each TD, the HC will try to send the packet on each port?

-- 
Richard Yang
Help You, Help Me



Re: [Qemu-devel] [PATCH 2/2] qmp-spec: fix index in doc

2016-02-01 Thread Wei Yang
On Sat, Jan 30, 2016 at 12:46:58PM +0300, Michael Tokarev wrote:
>24.01.2016 17:09, Wei Yang wrote:
>> The index is duplicated. Just change it.
>
>It is indeed, with previous being 2.5 as well
>and the next being 3. Applying to -trivial.
>
>Please the next time send to qemu-devel@ and Cc to -trivial.
>

Thanks, I've got the correct procedure now.

>Thanks!
>
>/mjt
>
>> Signed-off-by: Wei Yang 
>> ---
>>  docs/qmp-spec.txt | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/docs/qmp-spec.txt b/docs/qmp-spec.txt
>> index 4fb10a5..8e4bc3d 100644
>> --- a/docs/qmp-spec.txt
>> +++ b/docs/qmp-spec.txt
>> @@ -180,7 +180,7 @@ Some events are rate-limited to at most one per second.  
>> If additional
>>  dropped, and the last one is delayed.  "Similar" normally means same
>>  event type.  See qmp-events.txt for details.
>>  
>> -2.5 QGA Synchronization
>> +2.6 QGA Synchronization
>>  ---
>>  
>>  When using QGA, an additional synchronization feature is built into
>> 

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 1/2] rdma: remove check on time_spent when calculating mbps

2016-02-01 Thread Wei Yang
On Sat, Jan 30, 2016 at 12:45:27PM +0300, Michael Tokarev wrote:
>24.01.2016 17:09, Wei Yang wrote:
>> Within the if statement, time_spent is assured to be non-zero.
>> 
This patch just removes the check on time_spent when calculating mbps.
>
>The if statement is this one:
>
>if (current_time >= initial_time + BUFFER_DELAY) {
>
>Note that we use time_spent as a divisor to calculate
>bandwidth too.
>
>This is indeed a trivial patch, I'm applying it to -trivial,
>but please the next time post it to qemu-devel (the main
>mailinglist), and Cc qemu-trivial if it is trivial.
>

To Amit and Michael,

Yep, thanks. I thought these two were too trivial to post to qemu-devel :)

Will do this next time.

>Thanks!
>
>/mjt
>
>> Signed-off-by: Wei Yang 
>> ---
>>  migration/migration.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/migration/migration.c b/migration/migration.c
>> index c842499..40b87f2 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -1677,8 +1677,8 @@ static void *migration_thread(void *opaque)
>>  double bandwidth = (double)transferred_bytes / time_spent;
>>  max_size = bandwidth * migrate_max_downtime() / 100;
>>  
>> -s->mbps = time_spent ? (((double) transferred_bytes * 8.0) /
>> -((double) time_spent / 1000.0)) / 1000.0 / 1000.0 : -1;
>> +s->mbps = (((double) transferred_bytes * 8.0) /
>> +((double) time_spent / 1000.0)) / 1000.0 / 1000.0;
>>  
>>  trace_migrate_transferred(transferred_bytes, time_spent,
>>bandwidth, max_size);
>> 
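As a side note, the expression being simplified computes megabits per second from bytes and milliseconds; a quick sketch of the same arithmetic (Python, assuming time_spent is in milliseconds, as the surrounding code suggests):

```python
# Mirror of the C expression: bytes -> bits, ms -> s, then bits/s -> Mbit/s.
def mbps(transferred_bytes: int, time_spent_ms: int) -> float:
    return ((transferred_bytes * 8.0)
            / (time_spent_ms / 1000.0)) / 1000.0 / 1000.0

# 1 MB transferred in 1 second is 8 megabits per second.
print(mbps(1_000_000, 1000))  # 8.0
```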

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH] migration: reorder code to make it symmetric

2016-02-04 Thread Wei Yang
In qemu_savevm_state_complete_precopy(), we iterate over each device to add
a JSON object and transfer the related state to the destination, but the
order of the last two steps could be refined.

Current order:

json_start_object()
save_section_header()
vmstate_save()
json_end_object()
save_section_footer()

After the change:

json_start_object()
save_section_header()
vmstate_save()
save_section_footer()
json_end_object()

This patch reorders the code to make it symmetric. No functional change.

Signed-off-by: Wei Yang 
---
 migration/savevm.c |5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index b9caf59..42bfab4 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1088,12 +1088,11 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, 
bool iterable_only)
 json_prop_int(vmdesc, "instance_id", se->instance_id);
 
 save_section_header(f, se, QEMU_VM_SECTION_FULL);
-
 vmstate_save(f, se, vmdesc);
-
-json_end_object(vmdesc);
 trace_savevm_section_end(se->idstr, se->section_id, 0);
 save_section_footer(f, se);
+
+json_end_object(vmdesc);
 }
 
 if (!in_postcopy) {
-- 
1.7.9.5
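The symmetry can be checked mechanically: treating json_start/json_end and save_section_header/save_section_footer as open/close pairs, the new order nests properly while the old one interleaves. A small sketch (Python; the names are illustrative, not QEMU code):

```python
# Verify that open/close events nest LIFO, like matched parentheses.
def properly_nested(ops):
    close_of = {"json_end": "json_start",
                "save_section_footer": "save_section_header"}
    opens = set(close_of.values())
    stack = []
    for op in ops:
        if op in opens:
            stack.append(op)
        elif op in close_of:
            # A close must match the most recently opened scope.
            if not stack or stack.pop() != close_of[op]:
                return False
    return not stack

before = ["json_start", "save_section_header", "vmstate_save",
          "json_end", "save_section_footer"]
after = ["json_start", "save_section_header", "vmstate_save",
         "save_section_footer", "json_end"]
print(properly_nested(before))  # False: json closes while the section is open
print(properly_nested(after))   # True: properly nested
```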




[Qemu-devel] [PATCH] vfio/pci: replace 1 with PCI_CAP_LIST_NEXT to make the code self-explanatory

2016-02-10 Thread Wei Yang
Use the macro PCI_CAP_LIST_NEXT instead of 1, so that the code is more
self-explanatory.

This patch makes this change and also fixes one typo in a comment.

Signed-off-by: Wei Yang 
---
 hw/vfio/pci.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 1fb868c..17d1462 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -1472,7 +1472,7 @@ static uint8_t vfio_std_cap_max_size(PCIDevice *pdev, 
uint8_t pos)
 uint8_t tmp, next = 0xff;
 
 for (tmp = pdev->config[PCI_CAPABILITY_LIST]; tmp;
- tmp = pdev->config[tmp + 1]) {
+ tmp = pdev->config[tmp + PCI_CAP_LIST_NEXT]) {
 if (tmp > pos && tmp < next) {
 next = tmp;
 }
@@ -1661,7 +1661,7 @@ static int vfio_add_std_cap(VFIOPCIDevice *vdev, uint8_t 
pos)
 int ret;
 
 cap_id = pdev->config[pos];
-next = pdev->config[pos + 1];
+next = pdev->config[pos + PCI_CAP_LIST_NEXT];
 
 /*
  * If it becomes important to configure capabilities to their actual
@@ -1675,7 +1675,7 @@ static int vfio_add_std_cap(VFIOPCIDevice *vdev, uint8_t 
pos)
  * pci_add_capability always inserts the new capability at the head
  * of the chain.  Therefore to end up with a chain that matches the
  * physical device, we insert from the end by making this recursive.
- * This is also why we pre-caclulate size above as cached config space
+ * This is also why we pre-calculate size above as cached config space
  * will be changed as we unwind the stack.
  */
 if (next) {
@@ -1691,7 +1691,7 @@ static int vfio_add_std_cap(VFIOPCIDevice *vdev, uint8_t 
pos)
 }
 
 /* Use emulated next pointer to allow dropping caps */
-pci_set_byte(vdev->emulated_config_bits + pos + 1, 0xff);
+pci_set_byte(vdev->emulated_config_bits + pos + PCI_CAP_LIST_NEXT, 0xff);
 
 switch (cap_id) {
 case PCI_CAP_ID_MSI:
-- 
2.5.0
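To illustrate what `pos + PCI_CAP_LIST_NEXT` walks over: each standard PCI capability entry stores its ID byte at `pos` and the next-capability pointer at `pos + 1`. A small sketch over a fake config space (all register contents below are invented for illustration; the offsets match the standard layout):

```python
# Standard PCI config space: config[0x34] points at the first
# capability; each capability entry is [ID byte, next pointer, ...].
PCI_CAPABILITY_LIST = 0x34
PCI_CAP_LIST_ID = 0
PCI_CAP_LIST_NEXT = 1

def walk_caps(config):
    # Follow the singly linked capability list; a 0 pointer ends it.
    pos = config[PCI_CAPABILITY_LIST]
    while pos:
        yield pos, config[pos + PCI_CAP_LIST_ID]
        pos = pos + 0  # placeholder removed below
        pos = config[(pos) + PCI_CAP_LIST_NEXT - PCI_CAP_LIST_NEXT + PCI_CAP_LIST_NEXT]

config = bytearray(256)
config[PCI_CAPABILITY_LIST] = 0x40
config[0x40], config[0x41] = 0x05, 0x60  # MSI capability, next at 0x60
config[0x60], config[0x61] = 0x11, 0x00  # MSI-X capability, end of list
print(list(walk_caps(config)))  # [(64, 5), (96, 17)]
```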




Re: [Qemu-devel] [PATCH] migration: reorder code to make it symmetric

2016-02-10 Thread Wei Yang
Hello everyone,

Is this one correct?

On Thu, Feb 04, 2016 at 10:50:30PM +, Wei Yang wrote:
>In qemu_savevm_state_complete_precopy(), it iterates on each device to add
>a json object and transfer related status to destination, while the order
>of the last two steps could be refined.
>
>Current order:
>
>json_start_object()
>   save_section_header()
>   vmstate_save()
>json_end_object()
>   save_section_footer()
>
>After the change:
>
>json_start_object()
>   save_section_header()
>   vmstate_save()
>   save_section_footer()
>json_end_object()
>
>This patch reorder the code to to make it symmetric. No functional change.
>
>Signed-off-by: Wei Yang 
>---
> migration/savevm.c |5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
>diff --git a/migration/savevm.c b/migration/savevm.c
>index b9caf59..42bfab4 100644
>--- a/migration/savevm.c
>+++ b/migration/savevm.c
>@@ -1088,12 +1088,11 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, 
>bool iterable_only)
> json_prop_int(vmdesc, "instance_id", se->instance_id);
> 
> save_section_header(f, se, QEMU_VM_SECTION_FULL);
>-
> vmstate_save(f, se, vmdesc);
>-
>-json_end_object(vmdesc);
> trace_savevm_section_end(se->idstr, se->section_id, 0);
> save_section_footer(f, se);
>+
>+json_end_object(vmdesc);
> }
> 
> if (!in_postcopy) {
>-- 
>1.7.9.5

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH 0/2] use PCI_MSIX_FLAGS when retrieving the MSI-X entries

2016-02-19 Thread Wei Yang
These two patches replace PCI_CAP_FLAGS with PCI_MSIX_FLAGS when retrieving
the MSI-X entries. The change is the same in both, but I put them in two
patches for different authors to review.

Wei Yang (2):
  vfio/pci: use PCI_MSIX_FLAGS on retrieving the MSIX entries
  s390x/pci: use PCI_MSIX_FLAGS on retrieving the MSIX entries

 hw/s390x/s390-pci-bus.c |2 +-
 hw/vfio/pci.c   |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

-- 
1.7.9.5
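For context, a sketch of why the change is value-neutral: in the standard PCI register layout, both macros are offset 2 from the capability start, but PCI_MSIX_FLAGS names the MSI-X message control register specifically. (Values below are reproduced from memory of Linux's pci_regs.h; verify against your headers.)

```python
# Offsets relative to the start of a capability, as in pci_regs.h.
PCI_CAP_FLAGS = 2    # generic "capability defined flags (16 bits)"
PCI_MSIX_FLAGS = 2   # MSI-X message control register
PCI_MSIX_TABLE = 4   # MSI-X table offset / table BIR
PCI_MSIX_PBA = 8     # MSI-X PBA offset / PBA BIR

# Same numeric value, so the patch changes no behavior; it only makes
# explicit that the code is reading the MSI-X message control word.
print(PCI_CAP_FLAGS == PCI_MSIX_FLAGS)  # True
```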




[Qemu-devel] [PATCH 1/2] vfio/pci: use PCI_MSIX_FLAGS when retrieving the MSI-X entries

2016-02-19 Thread Wei Yang
Even though PCI_CAP_FLAGS has the same value as PCI_MSIX_FLAGS, the latter
is the more appropriate one for retrieving the MSI-X entries.

This patch uses PCI_MSIX_FLAGS to retrieve the MSI-X entries.

Signed-off-by: Wei Yang 
---
 hw/vfio/pci.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index e66c47f..321423b 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -1210,7 +1210,7 @@ static int vfio_msix_early_setup(VFIOPCIDevice *vdev)
 }
 
 if (pread(fd, &ctrl, sizeof(ctrl),
-  vdev->config_offset + pos + PCI_CAP_FLAGS) != sizeof(ctrl)) {
+  vdev->config_offset + pos + PCI_MSIX_FLAGS) != sizeof(ctrl)) {
 return -errno;
 }
 
-- 
1.7.9.5




[Qemu-devel] [PATCH 2/2] s390x/pci: use PCI_MSIX_FLAGS when retrieving the MSI-X entries

2016-02-19 Thread Wei Yang
Even though PCI_CAP_FLAGS has the same value as PCI_MSIX_FLAGS, the latter
is the more appropriate one for retrieving the MSI-X entries.

This patch uses PCI_MSIX_FLAGS to retrieve the MSI-X entries.

Signed-off-by: Wei Yang 
CC: Cornelia Huck 
CC: Christian Borntraeger 
---
 hw/s390x/s390-pci-bus.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 132588b..9d40039 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -523,7 +523,7 @@ static int s390_pcihost_setup_msix(S390PCIBusDevice *pbdev)
 return 0;
 }
 
-ctrl = pci_host_config_read_common(pbdev->pdev, pos + PCI_CAP_FLAGS,
+ctrl = pci_host_config_read_common(pbdev->pdev, pos + PCI_MSIX_FLAGS,
  pci_config_size(pbdev->pdev), sizeof(ctrl));
 table = pci_host_config_read_common(pbdev->pdev, pos + PCI_MSIX_TABLE,
  pci_config_size(pbdev->pdev), sizeof(table));
-- 
1.7.9.5




Re: [Qemu-devel] [PATCH 1/2] vfio/pci: use PCI_MSIX_FLAGS when retrieving the MSI-X entries

2016-02-19 Thread Wei Yang
On Fri, Feb 19, 2016 at 09:45:32AM -0700, Alex Williamson wrote:
>On Fri, 19 Feb 2016 15:18:10 +0000
>Wei Yang  wrote:
>
>> Even PCI_CAP_FLAGS has the same value as PCI_MSIX_FLAGS, the later one is
>> the more proper on retrieving MSIX entries.
>> 
>> This patch uses PCI_MSIX_FLAGS to retrieve the MSIX entries.
>> 
>> Signed-off-by: Wei Yang 
>> ---
>>  hw/vfio/pci.c |2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
>> index e66c47f..321423b 100644
>> --- a/hw/vfio/pci.c
>> +++ b/hw/vfio/pci.c
>> @@ -1210,7 +1210,7 @@ static int vfio_msix_early_setup(VFIOPCIDevice *vdev)
>>  }
>>  
>>  if (pread(fd, &ctrl, sizeof(ctrl),
>> -  vdev->config_offset + pos + PCI_CAP_FLAGS) != sizeof(ctrl)) {
>> +  vdev->config_offset + pos + PCI_MSIX_FLAGS) != sizeof(ctrl)) {
>>  return -errno;
>>  }
>>  
>
>This is certainly trivial, I'll grab it for the respin of yesterday's
>pull request.  Thanks,

Thanks Alex, have a nice weekend :-)

>
>Alex

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH] migration: reorder code to make it symmetric

2016-02-19 Thread Wei Yang
Hi, Amit

Do you like this one?

On Thu, Feb 04, 2016 at 10:50:30PM +, Wei Yang wrote:
>In qemu_savevm_state_complete_precopy(), it iterates on each device to add
>a json object and transfer related status to destination, while the order
>of the last two steps could be refined.
>
>Current order:
>
>json_start_object()
>   save_section_header()
>   vmstate_save()
>json_end_object()
>   save_section_footer()
>
>After the change:
>
>json_start_object()
>   save_section_header()
>   vmstate_save()
>   save_section_footer()
>json_end_object()
>
>This patch reorders the code to make it symmetric. No functional change.
>
>Signed-off-by: Wei Yang 
>---
> migration/savevm.c |5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
>diff --git a/migration/savevm.c b/migration/savevm.c
>index b9caf59..42bfab4 100644
>--- a/migration/savevm.c
>+++ b/migration/savevm.c
>@@ -1088,12 +1088,11 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, 
>bool iterable_only)
> json_prop_int(vmdesc, "instance_id", se->instance_id);
> 
> save_section_header(f, se, QEMU_VM_SECTION_FULL);
>-
> vmstate_save(f, se, vmdesc);
>-
>-json_end_object(vmdesc);
> trace_savevm_section_end(se->idstr, se->section_id, 0);
> save_section_footer(f, se);
>+
>+json_end_object(vmdesc);
> }
> 
> if (!in_postcopy) {
>-- 
>1.7.9.5

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/2] s390x/pci: use PCI_MSIX_FLAGS on retrieving the MSIX entries

2016-02-23 Thread Wei Yang
On Tue, Feb 23, 2016 at 02:17:11PM +0800, Yi Min Zhao wrote:
>于 Mon, 22 Feb 2016 14:15:07 +0100
>Christian Borntraeger  写道:
>
>> On 02/19/2016 04:18 PM, Wei Yang wrote:
>> > Even though PCI_CAP_FLAGS has the same value as PCI_MSIX_FLAGS, the latter
>> > is the more appropriate one for retrieving MSIX entries.
>> > 
>> > This patch uses PCI_MSIX_FLAGS to retrieve the MSIX entries.
>> > 
>> > Signed-off-by: Wei Yang 
>> > CC: Cornelia Huck 
>> > CC: Christian Borntraeger 
>> > ---
>> >  hw/s390x/s390-pci-bus.c |2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> > 
>> > diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
>> > index 132588b..9d40039 100644
>> > --- a/hw/s390x/s390-pci-bus.c
>> > +++ b/hw/s390x/s390-pci-bus.c
>> > @@ -523,7 +523,7 @@ static int s390_pcihost_setup_msix(S390PCIBusDevice 
>> > *pbdev)
>> >  return 0;
>> >  }
>> > 
>> > -ctrl = pci_host_config_read_common(pbdev->pdev, pos + PCI_CAP_FLAGS,
>> > +ctrl = pci_host_config_read_common(pbdev->pdev, pos + PCI_MSIX_FLAGS,
>> >   pci_config_size(pbdev->pdev), sizeof(ctrl));
>> >  table = pci_host_config_read_common(pbdev->pdev, pos + PCI_MSIX_TABLE,
>> >   pci_config_size(pbdev->pdev), sizeof(table));
>> > 
>> 
>> looks sane.
>> Yi Min, can you ack/nack?
>> 
>> 
>
>It looks sane to me. A little change.

Thanks Yi Min and Christian.

So, will someone pick it up?

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/2] s390x/pci: use PCI_MSIX_FLAGS on retrieving the MSIX entries

2016-02-24 Thread Wei Yang
On Wed, Feb 24, 2016 at 10:40:15AM +0100, Cornelia Huck wrote:
>On Fri, 19 Feb 2016 15:18:11 +0000
>Wei Yang  wrote:
>
>> Even though PCI_CAP_FLAGS has the same value as PCI_MSIX_FLAGS, the latter
>> is the more appropriate one for retrieving MSIX entries.
>> 
>> This patch uses PCI_MSIX_FLAGS to retrieve the MSIX entries.
>> 
>> Signed-off-by: Wei Yang 
>> CC: Cornelia Huck 
>> CC: Christian Borntraeger 
>> ---
>>  hw/s390x/s390-pci-bus.c |2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>
>Thanks, applied to s390-next.

Thanks :-)

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH] bitmap: refine and move BITMAP_{FIRST/LAST}_WORD_MASK

2016-03-05 Thread Wei Yang
According to Linux kernel commit <89c1e79eb30> ("linux/bitmap.h: improve
BITMAP_{LAST,FIRST}_WORD_MASK"), these two macros can be improved.

This patch applies the same change and also moves both macros into the header file.

Signed-off-by: Wei Yang 
---
 include/qemu/bitmap.h | 7 ++-
 util/bitmap.c | 2 --
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
index 0e33fa5..864982d 100644
--- a/include/qemu/bitmap.h
+++ b/include/qemu/bitmap.h
@@ -58,11 +58,8 @@
  * find_next_bit(addr, nbits, bit) Position next set bit in *addr >= bit
  */
 
-#define BITMAP_LAST_WORD_MASK(nbits)\
-(   \
-((nbits) % BITS_PER_LONG) ? \
-(1UL<<((nbits) % BITS_PER_LONG))-1 : ~0UL   \
-)
+#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
+#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
 
 #define DECLARE_BITMAP(name,bits)  \
 unsigned long name[BITS_TO_LONGS(bits)]
diff --git a/util/bitmap.c b/util/bitmap.c
index 40aadfb..43ed011 100644
--- a/util/bitmap.c
+++ b/util/bitmap.c
@@ -157,8 +157,6 @@ int slow_bitmap_andnot(unsigned long *dst, const unsigned 
long *bitmap1,
 return result != 0;
 }
 
-#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) % BITS_PER_LONG))
-
 void bitmap_set(unsigned long *map, long start, long nr)
 {
 unsigned long *p = map + BIT_WORD(start);
-- 
2.5.0




[Qemu-devel] [PATCH] kvm/irqchip: use bitmap utility for gsi tracking

2016-03-05 Thread Wei Yang
By using the utilities in bitops and bitmap, this patch makes the code
easier for readers to follow. No functional change.

Signed-off-by: Wei Yang 
---
 kvm-all.c | 34 ++
 1 file changed, 10 insertions(+), 24 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index bd9e764..ed3f4a2 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -90,7 +90,7 @@ struct KVMState
 #ifdef KVM_CAP_IRQ_ROUTING
 struct kvm_irq_routing *irq_routes;
 int nr_allocated_irq_routes;
-uint32_t *used_gsi_bitmap;
+unsigned long *used_gsi_bitmap;
 unsigned int gsi_count;
 QTAILQ_HEAD(msi_hashtab, KVMMSIRoute) msi_hashtab[KVM_MSI_HASHTAB_SIZE];
 #endif
@@ -951,12 +951,12 @@ typedef struct KVMMSIRoute {
 
 static void set_gsi(KVMState *s, unsigned int gsi)
 {
-s->used_gsi_bitmap[gsi / 32] |= 1U << (gsi % 32);
+set_bit(gsi, s->used_gsi_bitmap);
 }
 
 static void clear_gsi(KVMState *s, unsigned int gsi)
 {
-s->used_gsi_bitmap[gsi / 32] &= ~(1U << (gsi % 32));
+clear_bit(gsi, s->used_gsi_bitmap);
 }
 
 void kvm_init_irq_routing(KVMState *s)
@@ -965,17 +965,9 @@ void kvm_init_irq_routing(KVMState *s)
 
 gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
 if (gsi_count > 0) {
-unsigned int gsi_bits, i;
-
 /* Round up so we can search ints using ffs */
-gsi_bits = ALIGN(gsi_count, 32);
-s->used_gsi_bitmap = g_malloc0(gsi_bits / 8);
+s->used_gsi_bitmap = bitmap_new(gsi_count);
 s->gsi_count = gsi_count;
-
-/* Mark any over-allocated bits as already in use */
-for (i = gsi_count; i < gsi_bits; i++) {
-set_gsi(s, i);
-}
 }
 
 s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
@@ -1105,9 +1097,7 @@ static void kvm_flush_dynamic_msi_routes(KVMState *s)
 
 static int kvm_irqchip_get_virq(KVMState *s)
 {
-uint32_t *word = s->used_gsi_bitmap;
-int max_words = ALIGN(s->gsi_count, 32) / 32;
-int i, zeroes;
+int next_virq;
 
 /*
  * PIC and IOAPIC share the first 16 GSI numbers, thus the available
@@ -1120,16 +1110,12 @@ static int kvm_irqchip_get_virq(KVMState *s)
 }
 
 /* Return the lowest unused GSI in the bitmap */
-for (i = 0; i < max_words; i++) {
-zeroes = ctz32(~word[i]);
-if (zeroes == 32) {
-continue;
-}
-
-return zeroes + i * 32;
+next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
+if (next_virq >= s->gsi_count) {
+return -ENOSPC;
+} else {
+return next_virq;
 }
-return -ENOSPC;
-
 }
 
 static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
-- 
2.5.0




Re: [Qemu-devel] [PATCH] kvm/irqchip: use bitmap utility for gsi tracking

2016-03-07 Thread Wei Yang
On Mon, Mar 07, 2016 at 10:47:53AM +0100, Paolo Bonzini wrote:
>On 06/03/2016 02:57, Wei Yang wrote:
>> By using the utilities in bitops and bitmap, this patch makes the code
>> easier for readers to follow. No functional change.
>> 
>> Signed-off-by: Wei Yang 
>> ---
>>  kvm-all.c | 34 ++
>>  1 file changed, 10 insertions(+), 24 deletions(-)
>> 
>> diff --git a/kvm-all.c b/kvm-all.c
>> index bd9e764..ed3f4a2 100644
>> --- a/kvm-all.c
>> +++ b/kvm-all.c
>> @@ -90,7 +90,7 @@ struct KVMState
>>  #ifdef KVM_CAP_IRQ_ROUTING
>>  struct kvm_irq_routing *irq_routes;
>>  int nr_allocated_irq_routes;
>> -uint32_t *used_gsi_bitmap;
>> +unsigned long *used_gsi_bitmap;
>>  unsigned int gsi_count;
>>  QTAILQ_HEAD(msi_hashtab, KVMMSIRoute) msi_hashtab[KVM_MSI_HASHTAB_SIZE];
>>  #endif
>> @@ -951,12 +951,12 @@ typedef struct KVMMSIRoute {
>>  
>>  static void set_gsi(KVMState *s, unsigned int gsi)
>>  {
>> -s->used_gsi_bitmap[gsi / 32] |= 1U << (gsi % 32);
>> +set_bit(gsi, s->used_gsi_bitmap);
>>  }
>>  
>>  static void clear_gsi(KVMState *s, unsigned int gsi)
>>  {
>> -s->used_gsi_bitmap[gsi / 32] &= ~(1U << (gsi % 32));
>> +clear_bit(gsi, s->used_gsi_bitmap);
>>  }
>>  
>>  void kvm_init_irq_routing(KVMState *s)
>> @@ -965,17 +965,9 @@ void kvm_init_irq_routing(KVMState *s)
>>  
>>  gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
>>  if (gsi_count > 0) {
>> -unsigned int gsi_bits, i;
>> -
>>  /* Round up so we can search ints using ffs */
>> -gsi_bits = ALIGN(gsi_count, 32);
>> -s->used_gsi_bitmap = g_malloc0(gsi_bits / 8);
>> +s->used_gsi_bitmap = bitmap_new(gsi_count);
>>  s->gsi_count = gsi_count;
>> -
>> -/* Mark any over-allocated bits as already in use */
>> -for (i = gsi_count; i < gsi_bits; i++) {
>> -set_gsi(s, i);
>> -}
>>  }
>>  
>>  s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
>> @@ -1105,9 +1097,7 @@ static void kvm_flush_dynamic_msi_routes(KVMState *s)
>>  
>>  static int kvm_irqchip_get_virq(KVMState *s)
>>  {
>> -uint32_t *word = s->used_gsi_bitmap;
>> -int max_words = ALIGN(s->gsi_count, 32) / 32;
>> -int i, zeroes;
>> +int next_virq;
>>  
>>  /*
>>   * PIC and IOAPIC share the first 16 GSI numbers, thus the available
>> @@ -1120,16 +1110,12 @@ static int kvm_irqchip_get_virq(KVMState *s)
>>  }
>>  
>>  /* Return the lowest unused GSI in the bitmap */
>> -for (i = 0; i < max_words; i++) {
>> -zeroes = ctz32(~word[i]);
>> -if (zeroes == 32) {
>> -    continue;
>> -}
>> -
>> -return zeroes + i * 32;
>> +next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
>> +if (next_virq >= s->gsi_count) {
>> +return -ENOSPC;
>> +} else {
>> +return next_virq;
>>  }
>> -return -ENOSPC;
>> -
>>  }
>>  
>>  static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
>> 
>
>Queued, thanks.  I will send a pull request today.
>

Thanks, glad to contribute :-)

>Paolo

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [RFC PATCH V2 00/10] Qemu: Add live migration support for SRIOV NIC

2016-03-18 Thread Wei Yang
Hi, Tianyu,

I am testing your V2 patch set in our environment and am facing two issues.
I have a workaround for the first one and hope you could shed some light
on the second one :-)

1. Mismatch for ram_block (Have a workaround)
---
Below is the error message on the destination:

qemu-system-x86_64: Length mismatch: : 0x3000 in != 0x4000: Invalid argument
qemu-system-x86_64: error while loading state for instance 0x0 of device 
'ram'
qemu-system-x86_64: load of migration failed: Invalid argument

With the following command line on source and destination respectively:

git/qemu/x86_64-softmmu/qemu-system-x86_64 --enable-kvm -m 4096 -smp 4 
--nographic -drive file=/root/nfs/rhel.img,format=raw,cache=none -device 
vfio-sriov,host=:03:10.0
git/qemu/x86_64-softmmu/qemu-system-x86_64 --enable-kvm -m 4096 -smp 4 
--nographic -drive file=/root/nfs/rhel.img,format=raw,cache=none -device 
vfio-sriov,host=:03:10.0 --incoming tcp:0:

After some debugging, I found the reason for this error: the ram_block->idstr
of the pass-through MMIO region is not set.

My workaround is to call vmstate_register_ram() in vfio_mmap_region() after
memory_region_init_ram_ptr() returns.

I don't think this is a good solution, since the ram_block->idstr is encoded
with the VF's BDF, so I guess it will not work when the VF has a different BDF
on the source and on the destination.

Or maybe my test steps are not correct?

2. Failed to migrate the MAC address
---

By adding some code to the VF's driver in the destination guest, I found that
the MAC information has been migrated to the destination in adapter->hw.mac.
However, it is then "reset" by the VF's driver when ixgbevf_migration_task is
invoked at the end of the migration process.

Below is what I have printed:

The ifconfig output from destination:

eth8  Link encap:Ethernet  HWaddr 52:54:00:81:39:F2  
  inet addr:9.31.210.106  Bcast:9.31.255.255  Mask:255.255.0.0
  inet6 addr: fe80::5054:ff:fe81:39f2/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:66 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:21840 (21.3 KiB)  TX bytes:920 (920.0 b)

The log message I printed in destination's VF driver:

ixgbevf: migration end -- 
ixgbevf: original mac:52:54:00:81:39:f2
ixgbevf: after reset mac:52:54:00:92:04:a3
ixgbevf: migration end  ==

I haven't taken a close look at the "reset" function, but it seems to retrieve
the MAC from the VF hardware. Hmm... is there some way to have the same MAC
on both the source and the destination?

Finally, I appreciate all your work and help; I have learned a lot from you.

On Tue, Nov 24, 2015 at 09:35:17PM +0800, Lan Tianyu wrote:
>This patchset is to propose a solution of adding live migration
>support for SRIOV NIC.
>
>During migration, Qemu needs to let VF driver in the VM to know
>migration start and end. Qemu adds faked PCI migration capability
>to help to sync status between two sides during migration.
>
>Qemu triggers VF's mailbox irq via sending MSIX msg when migration
>status is changed. VF driver tells Qemu its mailbox vector index
>via the new PCI capability. In some cases(NIC is suspended or closed),
>VF mailbox irq is freed and VF driver can disable irq injecting via
>new capability.   
>
>VF driver will put down nic before migration and put up again on
>the target machine.
>
>Lan Tianyu (10):
>  Qemu/VFIO: Create head file pci.h to share data struct.
>  Qemu/VFIO: Add new VFIO_GET_PCI_CAP_INFO ioctl cmd definition
>  Qemu/VFIO: Rework vfio_std_cap_max_size() function
>  Qemu/VFIO: Add vfio_find_free_cfg_reg() to find free PCI config space
>regs
>  Qemu/VFIO: Expose PCI config space read/write and msix functions
>  Qemu/PCI: Add macros for faked PCI migration capability
>  Qemu: Add post_load_state() to run after restoring CPU state
>  Qemu: Add save_before_stop callback to run just before stopping VCPU
>during migration
>  Qemu/VFIO: Add SRIOV VF migration support
>  Qemu/VFIO: Misc change for enable migration with VFIO
>
> hw/vfio/Makefile.objs   |   2 +-
> hw/vfio/pci.c   | 196 +---
> hw/vfio/pci.h   | 168 +
> hw/vfio/sriov.c | 178 
> include/hw/pci/pci_regs.h   |  19 +
> include/migration/vmstate.h |   5 ++
> include/sysemu/sysemu.h |   1 +
> linux-headers/linux/vfio.h  |  16 
> migration/migration.c   |   3 +-
> migration/savevm.c  |  28 +++
> 10 files changed, 459 insertions(+), 157 deletions(-)
> create mode 100644 hw/vfio/pci.h
> create mode 100644 hw/vfio/sriov.c
>
>-- 
>1.9.3
>
>

--

Re: [Qemu-devel] [PATCH] vfio/pci: replace 1 with PCI_CAP_LIST_NEXT to make code self-explain

2016-03-19 Thread Wei Yang
On Wed, Mar 16, 2016 at 12:52:52PM +0100, Paolo Bonzini wrote:
>
>
>On 16/03/2016 12:27, Michael Tokarev wrote:
>>> >  for (tmp = pdev->config[PCI_CAPABILITY_LIST]; tmp;
>>> > - tmp = pdev->config[tmp + 1]) {
>>> > + tmp = pdev->config[tmp + PCI_CAP_LIST_NEXT]) {
>>> > -next = pdev->config[pos + 1];
>>> > +next = pdev->config[pos + PCI_CAP_LIST_NEXT];
>> Hmm. I'm not sure the new version is better, to me "+1" reads
>> easier than the new symbolic constant variant.
>> 
>> If it were something like pdev->config[PCI_CAP_LIST_NEXT], that'd be
>> nice, but not "pos + PCI_CAP_LIST_NEXT".
>> 
>> But again, I'm not pci config space expert and don't understand
>> the basics :)

Thanks Michael for your comment. With the macro, it is easier for readers to
see that the code is fetching the position of the next capability.

>
>Each capability is a node of a linked list, and the position of the next
>capability is at offset 1 inside the capability (here it is at offset 1
>from the tmp or pos base).  I think the patch is an improvement.
>

Thanks Paolo for your reply. :-) 

>Paolo

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages

2016-03-22 Thread Wei Yang
Hi, Liang

This is very clear documentation of your work; I appreciate it a lot. Below
are some personal opinions and questions.

On Tue, Mar 22, 2016 at 03:43:49PM +0800, Liang Li wrote:
>I have sent the RFC version patch set for live migration optimization
>by skipping processing the free pages in the ram bulk stage and
>received a lot of comments. The related threads can be found at:
>
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00714.html
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00717.html
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00716.html
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00718.html
>
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00719.html 
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00720.html
>https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00721.html
>

Actually there are two threads, a QEMU one and a kernel one. It would be
clearer for the audience if you just listed the first mail of each of the
two threads.

>To make things easier, I wrote this doc about the possible designs
>and my choices. Comments are welcome! 
>
>Content
>===
>1. Background
>2. Why not use virtio-balloon
>3. Virtio interface
>4. Constructing free page bitmap
>5. Tighten free page bitmap
>6. Handling page cache in the guest
>7. APIs for live migration
>8. Pseudo code 
>
>Details
>===
>1. Background
>As we know, in the ram bulk stage of live migration, current QEMU live
>migration implementation mark the all guest's RAM pages as dirtied in
>the ram bulk stage, all these pages will be checked for zero page
>first, and the page content will be sent to the destination depends on
>the checking result, that process consumes quite a lot of CPU cycles
>and network bandwidth.
>
>>From guest's point of view, there are some pages currently not used by

I see that in both your original RFC patch and your RFC doc, this line starts
with a '>' character. Does it have a special purpose?

>the guest, guest doesn't care about the content in these pages. Free
>pages are this kind of pages which are not used by guest. We can make
>use of this fact and skip processing the free pages in the ram bulk
>stage, it can save a lot CPU cycles and reduce the network traffic
>while speed up the live migration process obviously.
>
>Usually, only the guest has the information of free pages. But it’s
>possible to let the guest tell QEMU it’s free page information by some
>mechanism. E.g. Through the virtio interface. Once QEMU get the free
>page information, it can skip processing these free pages in the ram
>bulk stage by clearing the corresponding bit of the migration bitmap. 
>
>2. Why not use virtio-balloon 
>Actually, the virtio-balloon can do the similar thing by inflating the
>balloon before live migration, but its performance is no good, for an
>8GB idle guest just boots, it takes about 5.7 Sec to inflate the
>balloon to 7GB, but it only takes 25ms to get a valid free page bitmap
>from the guest.  There are some of reasons for the bad performance of
>vitio-balloon:
>a. allocating pages (5%, 304ms)
>b. sending PFNs to host (71%, 4194ms)
>c. address translation and madvise() operation (24%, 1423ms)
>Debugging shows the time spends on these operations are listed in the
>brackets above. By changing the VIRTIO_BALLOON_ARRAY_PFNS_MAX to a
>large value, such as 16384, the time spends on sending the PFNs can be
>reduced to about 400ms, but it’s still too long.
>
>Obviously, the virtio-balloon mechanism has a bigger performance
>impact to the guest than the way we are trying to implement.
>
>3. Virtio interface
>There are three different ways of using the virtio interface to
>send the free page information.
>a. Extend the current virtio device
>The virtio spec has already defined some virtio devices, and we can
>extend one of these devices so as to use it to transport the free page
>information. It requires modifying the virtio spec.
>
>b. Implement a new virtio device
>Implementing a brand new virtio device to exchange information
>between host and guest is another choice. It requires modifying the
>virtio spec too.
>
>c. Make use of virtio-serial (Amit’s suggestion, my choice)
>It’s possible to make use the virtio-serial for communication between
>host and guest, the benefit of this solution is no need to modify the
>virtio spec. 
>
>4. Construct free page bitmap
>To minimize the space for saving free page information, it’s better to
>use a bitmap to describe the free pages. There are two ways to
>construct the free page bitmap.
>
>a. Construct free page bitmap when demand (My choice)
>Guest can allocate memory for the free page bitmap only when it
>receives the request from QEMU, and set the free page bitmap by
>traversing the free page list. The advantage of this way is that it’s
>quite simple and easy to implement. The disadvantage is that the
>traversing oper

Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages

2016-03-23 Thread Wei Yang
On Wed, Mar 23, 2016 at 07:18:57AM +, Li, Liang Z wrote:
>> Hi, Liang
>> 
>> This is a very clear documentation of your work, I appreciated it a lot. 
>> Below
>> are some of my personal opinion and question.
>> 
>
>Thanks for your comments!
>
>> On Tue, Mar 22, 2016 at 03:43:49PM +0800, Liang Li wrote:
>> >I have sent the RFC version patch set for live migration optimization
>> >by skipping processing the free pages in the ram bulk stage and
>> >received a lot of comments. The related threads can be found at:
>> >
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00714.html
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00717.html
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00716.html
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00718.html
>> >
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00719.html
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00720.html
>> >https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00721.html
>> >
>> 
>> Actually there are two threads, a QEMU one and a kernel one. It would be
>> clearer for the audience if you just listed the first mail of each of the
>> two threads.
>> 
>
>Indeed,  my original version has this kind of information, but I removed it.
>
>> >To make things easier, I wrote this doc about the possible designs
>> >and my choices. Comments are welcome!
>> >
>> >Content
>> >===
>> >1. Background
>> >2. Why not use virtio-balloon
>> >3. Virtio interface
>> >4. Constructing free page bitmap
>> >5. Tighten free page bitmap
>> >6. Handling page cache in the guest
>> >7. APIs for live migration
>> >8. Pseudo code
>> >
>> >Details
>> >===
>> >1. Background
>> >As we know, in the ram bulk stage of live migration, current QEMU live
>> >migration implementation mark the all guest's RAM pages as dirtied in
>> >the ram bulk stage, all these pages will be checked for zero page
>> >first, and the page content will be sent to the destination depends on
>> >the checking result, that process consumes quite a lot of CPU cycles
>> >and network bandwidth.
>> >
>> >>From guest's point of view, there are some pages currently not used by
>> 
>> I see that in both your original RFC patch and your RFC doc, this line
>> starts with a '>' character. Does it have a special purpose?
>> 
>
>No special purpose. Maybe it's caused by the email client. I didn't find the
>character in the original doc.
>

https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html

You could take a look at this link, there is a '>' before From.

>> >the guest, guest doesn't care about the content in these pages. Free
>> >pages are this kind of pages which are not used by guest. We can make
>> >use of this fact and skip processing the free pages in the ram bulk
>> >stage, it can save a lot CPU cycles and reduce the network traffic
>> >while speed up the live migration process obviously.
>> >
>> >Usually, only the guest has the information of free pages. But it’s
>> >possible to let the guest tell QEMU it’s free page information by some
>> >mechanism. E.g. Through the virtio interface. Once QEMU get the free
>> >page information, it can skip processing these free pages in the ram
>> >bulk stage by clearing the corresponding bit of the migration bitmap.
>> >
>> >2. Why not use virtio-balloon
>> >Actually, the virtio-balloon can do the similar thing by inflating the
>> >balloon before live migration, but its performance is no good, for an
>> >8GB idle guest just boots, it takes about 5.7 Sec to inflate the
>> >balloon to 7GB, but it only takes 25ms to get a valid free page bitmap
>> >from the guest.  There are some of reasons for the bad performance of
>> >vitio-balloon:
>> >a. allocating pages (5%, 304ms)
>> >b. sending PFNs to host (71%, 4194ms)
>> >c. address translation and madvise() operation (24%, 1423ms)
>> >Debugging shows the time spends on these operations are listed in the
>> >brackets above. By changing the VIRTIO_BALLOON_ARRAY_PFNS_MAX to a
>> >large value, such as 16384, the time spends on sending the PFNs can be
>> >reduced to about 400ms, but it’s still too long.
>> >
>> >Obviously, the virtio-balloon mechanism has a bigger performance
>> >impact to the guest than the way we are trying to implement.
>> >
>> >3. Virtio interface
>> >There are three different ways of using the virtio interface to
>> >send the free page information.
>> >a. Extend the current virtio device
>> >The virtio spec has already defined some virtio devices, and we can
>> >extend one of these devices so as to use it to transport the free page
>> >information. It requires modifying the virtio spec.
>> >
>> >b. Implement a new virtio device
>> >Implementing a brand new virtio device to exchange information
>> >between host and guest is another choice. It requires modifying the
>> >virtio spec too.
>> >
>> >c. Make use of virtio-serial (Amit’s sugges

Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages

2016-03-23 Thread Wei Yang
On Wed, Mar 23, 2016 at 10:53:42AM -0600, Eric Blake wrote:
>On 03/23/2016 01:18 AM, Li, Liang Z wrote:
>
>>>>
>>>> >From guest's point of view, there are some pages currently not used by
>>>
>>> I see in your original RFC patch and your RFC doc, this line starts with a
>>> character '>'. Not sure this one has a special purpose?
>>>
>> 
>> No special purpose. Maybe it's caused by the email client. I didn't find the
>> character in the original doc.
>> 
>
>Yes, it's an artifact used by many mailers so that mailboxes don't get
>confused by a bare "From" at the start of a line but in the middle
>rather than the start of a message.
>
>It's possible to avoid the artifact by using quoted-printable and
>escaping the 'F', and I'm honestly a bit surprised that git doesn't do
>it automatically.
>

Oh, this is the first time I've noticed this. Interesting~

>-- 
>Eric Blake   eblake redhat com+1-919-301-3266
>Libvirt virtualization library http://libvirt.org
>



-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages

2016-03-23 Thread Wei Yang
On Wed, Mar 23, 2016 at 02:35:42PM +, Li, Liang Z wrote:
>> >No special purpose. Maybe it's caused by the email client. I didn't
>> >find the character in the original doc.
>> >
>> 
>> https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
>> 
>> You could take a look at this link, there is a '>' before From.
>
>Yes, there is. 
>
>> >> >
>> >> >6. Handling page cache in the guest
>> >> >The memory used for page cache in the guest will change depends on
>> >> >the workload, if guest run some block IO intensive work load, there
>> >> >will
>> >>
>> >> Would this improvement benefit a lot when guest only has little free page?
>> >
>> >Yes, the improvement is very obvious.
>> >
>> 
>> Good to know this.
>> 
>> >> In your Performance data Case 2, I think it mimic this kind of case.
>> >> While the memory consuming task is stopped before migration. If it
>> >> continues, would we still perform better than before?
>> >
>> >Actually, my RFC patch didn't consider the page cache, Roman raised this
>> issue.
>> >so I add this part in this doc.
>> >
>> >Case 2 didn't mimic this kind of scenario, the work load is an memory
>> >consuming work load, not an block IO intensive work load, so there are
>> >not many page cache in this case.
>> >
>> >If the work load in case 2 continues, as long as it not write all the
>> >memory it allocates, we still can get benefits.
>> >
>> 
>> It sounds like I have little knowledge of the page cache and of its
>> relationship to free pages and I/O-intensive workloads.
>> 
>> Here is my personal understanding; I would appreciate it if you could
>> correct me.
>> 
>> +-+
>> |PageCache|
>> +-+
>>   +-+-+-+-+
>>   |Page |Page |Free Page|Page |
>>   +-+-+-+-+
>> 
>> Free Page is a page in the free_list, PageCache is some page cached in CPU's
>> cache line?
>
>No, page cache is quite different with CPU cache line.
>" In computing, a page cache, sometimes also called disk cache,[2] is a 
>transparent cache
> for the pages originating from a secondary storage device such as a hard disk 
> drive (HDD).
> The operating system keeps a page cache in otherwise unused portions of the 
> main
> memory (RAM), resulting in quicker access to the contents of cached pages and 
>overall performance improvements "
>you can refer to https://en.wikipedia.org/wiki/Page_cache
>for more details.
>

My knowledge was lacking~ I should have googled the terminology before
guessing at its meaning.

If my understanding is correct, page cache pages are counted as free pages,
while actually we should migrate them instead of filtering them out.

>
>> When memory consuming task runs, it leads to little Free Page in the whole
>> system. What's the consequence when I/O intensive work runs? I guess it
>> still leads to little Free Page. And will have some problem in sync on
>> PageCache?
>> 
>> >>
>> >> I am thinking is it possible to have a threshold or configurable
>> >> threshold to utilize free page bitmap optimization?
>> >>
>> >
>> >Could you elaborate your idea? How does it work?
>> >
>> 
>> Let's go back to Case 2. We run a memory-consuming task which leads to few
>> free pages in the whole system, which means that, from QEMU's perspective,
>> little of the dirty memory is filtered out by the free page list. My
>> original question was whether your solution still brings a benefit in this
>> scenario. As you mentioned, it works fine, so maybe this threshold is not
>> necessary.
>> 
>I didn't quite understand your question before. 
>The benefits we get depends on the  count of free pages we can filter out.
>This is always true.
>
>> My original idea was that QEMU could calculate the percentage of free pages
>> in the whole system. If it found only a small percentage of free pages,
>> then it would not need to bother using this method.
>> 
>
>I got you. The threshold can be used for optimization, but the effect is very 
>limited.
>If there are only a few of free pages, the process of constructing the free 
>page
>bitmap is very quick. 
>But we can stop doing the following things, e.g. sending the free page bitmap 
>and doing
>the bitmap operation, theoretically, that may help to save some time, maybe 
>several ms.
>

Ha, you got what I mean.

>I think a VM has no free pages at all is very rare, in the worst case, there 
>are still several
> MB of free pages. The proper threshold should be determined by comparing  the 
> extra
> time spends on processing the free page bitmap and the time spends on sending
>the several MB of free pages though the network. If the formal is longer, we 
>can stop
>using this method. So we should take the network bandwidth into consideration, 
>it's 
>too complicated and not worth to do.
>

Yes, after some thinking, this optimization may not be easy enough, or
beneficial enough, to be worth doing.
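The break-even reasoning from the quoted mail can be sketched numerically. This is only an illustration of the trade-off being discussed — none of these names or numbers exist in QEMU; the helper and its parameters are assumptions for the sketch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch only -- these helpers do not exist in QEMU.
 * The optimization pays off when the time saved on the wire (by not
 * sending the free pages) exceeds the extra time spent constructing
 * and sending the free-page bitmap. */
static bool free_page_opt_worthwhile(uint64_t free_bytes,
                                     uint64_t bandwidth_bytes_per_s,
                                     uint64_t bitmap_overhead_us)
{
    /* time it would take to send the free pages if we did NOT filter */
    uint64_t saved_us = free_bytes * 1000000ULL / bandwidth_bytes_per_s;

    return saved_us > bitmap_overhead_us;
}
```

With a few MB of free pages and a 10 Gbit/s link, the saved transfer time is a few milliseconds, so even a ~1 ms bitmap overhead still wins — which matches the observation above that the threshold buys very little.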

>Thanks
>
>Liang
>> Have a nice day~
>> 
>> >Liang
>> >
>> >>
>> >> --
>> >> Richard Yang
>> >> Help you, Help me
>> 

Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages

2016-03-23 Thread Wei Yang
On Wed, Mar 23, 2016 at 06:48:22AM +, Li, Liang Z wrote:
[...]
>> > 8. Pseudo code
>> > Dirty page logging should be enabled before getting the free page
>> > information from guest, this is important because during the process
>> > of getting free pages, some free pages may be used and written by the
>> > guest, dirty page logging can trace these pages. The pseudo code is
>> > like below:
>> >
>> > ---
>> > MigrationState *s = migrate_get_current();
>> > ...
>> >
>> > memory_global_dirty_log_start();
>> >
>> > if (get_guest_mem_info(&info)) {
>> > while (!get_free_page_bmap(free_page_bitmap,  drop_page_cache)
>> &&
>> >s->state != MIGRATION_STATUS_CANCELLING) {
>> > usleep(1000) // sleep for 1 ms
>> > }
>> >
>> > tighten_free_page_bmap =
>> tighten_guest_free_pages(free_page_bitmap);
>> > filter_out_guest_free_pages(tighten_free_page_bmap);
>> > }
>> 
>> Given the typical speed of networks; it wouldn't do too much harm to start
>> sending assuming all pages are dirty and then when the guest finally gets
>> around to finishing the bitmap then update, so it's asynchronous - and then 
>> if
>> the guest never responds we don't really care.
>
>Indeed, thanks!
>

This is interesting. By doing so, the threshold I mentioned in another mail is
not necessary, since we can do it in parallel.
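The asynchronous idea — start sending with all pages assumed dirty, then fold in the guest's free-page bitmap whenever it finally arrives — reduces to a bitmap AND-NOT over the dirty bitmap. A minimal sketch (the word layout and function name are my assumptions, not the actual migration code):

```c
#include <stddef.h>

/* Sketch: clear dirty bits for pages the guest reported as free.
 * Migration starts from an all-dirty bitmap; when the guest's
 * free-page bitmap shows up later, we simply mask it out, and if
 * the guest never responds, migration proceeds unchanged. */
static void filter_out_free_pages(unsigned long *dirty,
                                  const unsigned long *free_bmap,
                                  size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        dirty[i] &= ~free_bmap[i];
    }
}
```

Because the masking is cheap and can happen at any point during the iteration, there is no need to gate it on a free-page-percentage threshold.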

>Liang
>> 
>> Dave
>> 
>> >
>> > migration_bitmap_sync();
>> > ...
>> >
>> > ---
>> >
>> >
>> > --
>> > 1.9.1
>> >
>> --
>> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
-- 
Richard Yang
Help you, Help me



Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages

2016-03-23 Thread Wei Yang
On Thu, Mar 24, 2016 at 01:32:25AM +, Li, Liang Z wrote:
>> >> >> >
>> >> >> >6. Handling page cache in the guest The memory used for page
>> >> >> >cache in the guest will change depends on the workload, if guest
>> >> >> >run some block IO intensive work load, there will
>> >> >>
>> >> >> Would this improvement benefit a lot when guest only has little free
>> page?
>> >> >
>> >> >Yes, the improvement is very obvious.
>> >> >
>> >>
>> >> Good to know this.
>> >>
>> >> >> In your Performance data Case 2, I think it mimic this kind of case.
>> >> >> While the memory consuming task is stopped before migration. If it
>> >> >> continues, would we still perform better than before?
>> >> >
>> >> >Actually, my RFC patch didn't consider the page cache, Roman raised
>> >> >this
>> >> issue.
>> >> >so I add this part in this doc.
>> >> >
>> >> >Case 2 didn't mimic this kind of scenario, the work load is an
>> >> >memory consuming work load, not an block IO intensive work load, so
>> >> >there are not many page cache in this case.
>> >> >
>> >> >If the work load in case 2 continues, as long as it not write all
>> >> >the memory it allocates, we still can get benefits.
>> >> >
>> >>
>> >> Sounds I have little knowledge on page cache, and its relationship
>> >> between free page and I/O intensive work.
>> >>
>> >> Here is some personal understanding, I would appreciate if you could
>> >> correct me.
>> >>
>> >> +-+
>> >> |PageCache|
>> >> +-+
>> >>   +-+-+-+-+
>> >>   |Page |Page |Free Page|Page |
>> >>   +-+-+-+-+
>> >>
>> >> Free Page is a page in the free_list, PageCache is some page cached
>> >> in CPU's cache line?
>> >
>> >No, page cache is quite different with CPU cache line.
>> >" In computing, a page cache, sometimes also called disk cache,[2] is a
>> >transparent cache  for the pages originating from a secondary storage
>> device such as a hard disk drive (HDD).
>> > The operating system keeps a page cache in otherwise unused portions
>> >of the main  memory (RAM), resulting in quicker access to the contents
>> >of cached pages and overall performance improvements "
>> >you can refer to https://en.wikipedia.org/wiki/Page_cache
>> >for more details.
>> >
>> 
>> My poor knowledge~ Should google it before I imagine the meaning of the
>> terminology.
>> 
>> If my understanding is correct, the Page Cache is counted as Free Page, while
>> actually we should migrate them instead of filter them.
>
>No, the Page Cache is not counted as Free Page ...

OK, I misunderstood the concept from the wiki.

So the Page Cache is a trade-off between the Free Page percentage and the I/O
performance in the guest.

>
>Liang

-- 
Richard Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 0/3] Cleanup migration/ram.c

2019-05-01 Thread Wei Yang
On Wed, May 01, 2019 at 03:53:01PM -0700, no-re...@patchew.org wrote:
>Patchew URL: 
>https://patchew.org/QEMU/20190430034412.12935-1-richardw.y...@linux.intel.com/
>
>
>
>Hi,
>
>This series failed the docker-mingw@fedora build test. Please find the testing 
>commands and
>their output below. If you have Docker installed, you can probably reproduce it
>locally.
>
>=== TEST SCRIPT BEGIN ===
>#!/bin/bash
>time make docker-test-mingw@fedora SHOW_ENV=1 J=14 NETWORK=1
>=== TEST SCRIPT END ===
>
>
>
>
>The full log is available at
>http://patchew.org/logs/20190430034412.12935-1-richardw.y...@linux.intel.com/testing.docker-mingw@fedora/?type=message.

Where can I see the test case?

>---
>Email generated automatically by Patchew [https://patchew.org/].
>Please send your feedback to patchew-de...@redhat.com

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 0/3] Cleanup migration/ram.c

2019-05-01 Thread Wei Yang
On Wed, May 01, 2019 at 09:24:06PM -0700, no-re...@patchew.org wrote:
>Patchew URL: 
>https://patchew.org/QEMU/20190430034412.12935-1-richardw.y...@linux.intel.com/
>
>
>
>Hi,
>
>This series failed the asan build test. Please find the testing commands and
>their output below. If you have Docker installed, you can probably reproduce it
>locally.
>
>=== TEST SCRIPT BEGIN ===
>#!/bin/bash
>time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
>=== TEST SCRIPT END ===
>
>  COPYRUNNER
>RUN test-debug in qemu:fedora 
>container_linux.go:247: starting container process caused 
>"process_linux.go:258: applying cgroup configuration for process caused \"The 
>maximum number of active connections for UID 0 has been reached\""
>/usr/bin/docker-current: Error response from daemon: oci runtime error: The 
>maximum number of active connections for UID 0 has been reached.
>Traceback (most recent call last):
>  File "./tests/docker/docker.py", line 615, in 
>sys.exit(main())
>

May I ask how I can reproduce this?

>
>The full log is available at
>http://patchew.org/logs/20190430034412.12935-1-richardw.y...@linux.intel.com/testing.asan/?type=message.
>---
>Email generated automatically by Patchew [https://patchew.org/].
>Please send your feedback to patchew-de...@redhat.com

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 0/3] Cleanup migration/ram.c

2019-05-02 Thread Wei Yang
On Thu, May 02, 2019 at 09:35:50AM +0100, Dr. David Alan Gilbert wrote:
>* Wei Yang (richard.weiy...@gmail.com) wrote:
>> On Wed, May 01, 2019 at 09:24:06PM -0700, no-re...@patchew.org wrote:
>> >Patchew URL: 
>> >https://patchew.org/QEMU/20190430034412.12935-1-richardw.y...@linux.intel.com/
>> >
>> >
>> >
>> >Hi,
>> >
>> >This series failed the asan build test. Please find the testing commands and
>> >their output below. If you have Docker installed, you can probably 
>> >reproduce it
>> >locally.
>> >
>> >=== TEST SCRIPT BEGIN ===
>> >#!/bin/bash
>> >time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
>> >=== TEST SCRIPT END ===
>> >
>> >  COPYRUNNER
>> >RUN test-debug in qemu:fedora 
>> >container_linux.go:247: starting container process caused 
>> >"process_linux.go:258: applying cgroup configuration for process caused 
>> >\"The maximum number of active connections for UID 0 has been reached\""
>> >/usr/bin/docker-current: Error response from daemon: oci runtime error: The 
>> >maximum number of active connections for UID 0 has been reached.
>> >Traceback (most recent call last):
>> >  File "./tests/docker/docker.py", line 615, in 
>> >sys.exit(main())
>> >
>> 
>> May I ask how I can reproduce this?
>
>To me this just looks like patchew having a problem, I don't think it's
>a real qemu bug.

Ah, thanks. 

>
>Dave
>
>> >
>> >The full log is available at
>> >http://patchew.org/logs/20190430034412.12935-1-richardw.y...@linux.intel.com/testing.asan/?type=message.
>> >---
>> >Email generated automatically by Patchew [https://patchew.org/].
>> >Please send your feedback to patchew-de...@redhat.com
>> 
>> -- 
>> Wei Yang
>> Help you, Help me
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH] migration: don't set MIGRATION dirty range for ignored block

2019-05-03 Thread Wei Yang
Ignored blocks are not migrated, so their MIGRATION dirty ranges are never used.

Signed-off-by: Wei Yang 
---
 exec.c  | 4 +++-
 include/exec/ram_addr.h | 2 ++
 migration/ram.c | 2 +-
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/exec.c b/exec.c
index 86a38d3b3b..97da155c12 100644
--- a/exec.c
+++ b/exec.c
@@ -2192,6 +2192,8 @@ static void ram_block_add(RAMBlock *new_block, Error 
**errp, bool shared)
 RAMBlock *last_block = NULL;
 ram_addr_t old_ram_size, new_ram_size;
 Error *err = NULL;
+uint8_t dirty_memory_clients = ramblock_is_ignored(new_block) ?
+ DIRTY_CLIENTS_NOMIG : DIRTY_CLIENTS_ALL;
 
 old_ram_size = last_ram_page();
 
@@ -2252,7 +2254,7 @@ static void ram_block_add(RAMBlock *new_block, Error 
**errp, bool shared)
 
 cpu_physical_memory_set_dirty_range(new_block->offset,
 new_block->used_length,
-DIRTY_CLIENTS_ALL);
+dirty_memory_clients);
 
 if (new_block->host) {
 qemu_ram_setup_dump(new_block->host, new_block->max_length);
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index a7c81bdb32..4765435fb8 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -72,6 +72,7 @@ static inline unsigned long int 
ramblock_recv_bitmap_offset(void *host_addr,
 }
 
 bool ramblock_is_pmem(RAMBlock *rb);
+bool ramblock_is_ignored(RAMBlock *rb);
 
 long qemu_getrampagesize(void);
 
@@ -117,6 +118,7 @@ int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, 
Error **errp);
 
 #define DIRTY_CLIENTS_ALL ((1 << DIRTY_MEMORY_NUM) - 1)
 #define DIRTY_CLIENTS_NOCODE  (DIRTY_CLIENTS_ALL & ~(1 << DIRTY_MEMORY_CODE))
+#define DIRTY_CLIENTS_NOMIG   (DIRTY_CLIENTS_ALL & ~(1 << 
DIRTY_MEMORY_MIGRATION))
 
 void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end);
 
diff --git a/migration/ram.c b/migration/ram.c
index 1def8122e9..44525e3816 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -159,7 +159,7 @@ out:
 return ret;
 }
 
-static bool ramblock_is_ignored(RAMBlock *block)
+bool ramblock_is_ignored(RAMBlock *block)
 {
 return !qemu_ram_is_migratable(block) ||
(migrate_ignore_shared() && qemu_ram_is_shared(block));
-- 
2.19.1
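The mask arithmetic in the hunk above is easy to check in isolation. The client indices below are illustrative copies — in QEMU the real values come from the DIRTY_MEMORY_* enum in include/exec/cpu-all.h, assumed here to be VGA = 0, CODE = 1, MIGRATION = 2:

```c
/* Illustrative copies of the dirty-memory client indices; in QEMU
 * these live in include/exec/cpu-all.h. */
#define DIRTY_MEMORY_VGA       0
#define DIRTY_MEMORY_CODE      1
#define DIRTY_MEMORY_MIGRATION 2
#define DIRTY_MEMORY_NUM       3

/* Each mask selects which dirty-tracking clients a range is marked for. */
#define DIRTY_CLIENTS_ALL    ((1 << DIRTY_MEMORY_NUM) - 1)
#define DIRTY_CLIENTS_NOCODE (DIRTY_CLIENTS_ALL & ~(1 << DIRTY_MEMORY_CODE))
#define DIRTY_CLIENTS_NOMIG  (DIRTY_CLIENTS_ALL & ~(1 << DIRTY_MEMORY_MIGRATION))
```

So an ignored block marked with DIRTY_CLIENTS_NOMIG still gets VGA and CODE dirty tracking, only the MIGRATION bit is left clear.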




Re: [Qemu-devel] [PATCH v4 02/15] tests: acpi: make acpi_fetch_table() take size of fetched table pointer

2019-05-04 Thread Wei Yang
On Thu, May 02, 2019 at 04:51:50PM +0200, Igor Mammedov wrote:
>Currently acpi_fetch_table() assumes 32 bit size of table pointer
>in ACPI tables. However X_foo variants are 64 bit, prepare
>acpi_fetch_table() to handle both by adding an argument
>for addr_ptr pointed entry size. Follow up commits will use that
>to read XSDT and X_foo entries in ACPI tables.
>
>Signed-off-by: Igor Mammedov 

Reviewed-by: Wei Yang 

>---
> tests/acpi-utils.h   |  2 +-
> tests/acpi-utils.c   | 10 ++
> tests/bios-tables-test.c |  8 
> tests/vmgenid-test.c |  4 ++--
> 4 files changed, 13 insertions(+), 11 deletions(-)
>
>diff --git a/tests/acpi-utils.h b/tests/acpi-utils.h
>index 4cd5553..92285b7 100644
>--- a/tests/acpi-utils.h
>+++ b/tests/acpi-utils.h
>@@ -49,7 +49,7 @@ uint32_t acpi_find_rsdp_address(QTestState *qts);
> uint64_t acpi_get_xsdt_address(uint8_t *rsdp_table);
> void acpi_fetch_rsdp_table(QTestState *qts, uint32_t addr, uint8_t 
> *rsdp_table);
> void acpi_fetch_table(QTestState *qts, uint8_t **aml, uint32_t *aml_len,
>-  const uint8_t *addr_ptr, const char *sig,
>+  const uint8_t *addr_ptr, int addr_size, const char *sig,
>   bool verify_checksum);
> 
> #endif  /* TEST_ACPI_UTILS_H */
>diff --git a/tests/acpi-utils.c b/tests/acpi-utils.c
>index 633d8f5..644c87b 100644
>--- a/tests/acpi-utils.c
>+++ b/tests/acpi-utils.c
>@@ -91,13 +91,15 @@ void acpi_fetch_rsdp_table(QTestState *qts, uint32_t addr, 
>uint8_t *rsdp_table)
>  *  actual one.
>  */
> void acpi_fetch_table(QTestState *qts, uint8_t **aml, uint32_t *aml_len,
>-  const uint8_t *addr_ptr, const char *sig,
>+  const uint8_t *addr_ptr, int addr_size, const char *sig,
>   bool verify_checksum)
> {
>-uint32_t addr, len;
>+uint32_t len;
>+uint64_t addr = 0;
> 
>-memcpy(&addr, addr_ptr , sizeof(addr));
>-addr = le32_to_cpu(addr);
>+g_assert(addr_size == 4 || addr_size == 8);
>+memcpy(&addr, addr_ptr , addr_size);
>+addr = le64_to_cpu(addr);
> qtest_memread(qts, addr + 4, &len, 4); /* Length of ACPI table */
> *aml_len = le32_to_cpu(len);
> *aml = g_malloc0(*aml_len);
>diff --git a/tests/bios-tables-test.c b/tests/bios-tables-test.c
>index 6a678bf..86b592c 100644
>--- a/tests/bios-tables-test.c
>+++ b/tests/bios-tables-test.c
>@@ -114,14 +114,14 @@ static void test_acpi_rsdt_table(test_data *data)
> 
> /* read RSDT table */
> acpi_fetch_table(data->qts, &rsdt.aml, &rsdt.aml_len,
>- &data->rsdp_table[16 /* RsdtAddress */], "RSDT", true);
>+ &data->rsdp_table[16 /* RsdtAddress */], 4, "RSDT", 
>true);
> 
> /* Load all tables and add to test list directly RSDT referenced tables */
> ACPI_FOREACH_RSDT_ENTRY(rsdt.aml, rsdt.aml_len, ent, 4 /* Entry size */) {
> AcpiSdtTable ssdt_table = {};
> 
> acpi_fetch_table(data->qts, &ssdt_table.aml, &ssdt_table.aml_len, ent,
>- NULL, true);
>+ 4, NULL, true);
> /* Add table to ASL test tables list */
> g_array_append_val(data->tables, ssdt_table);
> }
>@@ -139,11 +139,11 @@ static void test_acpi_fadt_table(test_data *data)
> 
> /* Since DSDT/FACS isn't in RSDT, add them to ASL test list manually */
> acpi_fetch_table(data->qts, &table.aml, &table.aml_len,
>- fadt_aml + 36 /* FIRMWARE_CTRL */, "FACS", false);
>+ fadt_aml + 36 /* FIRMWARE_CTRL */, 4, "FACS", false);
> g_array_append_val(data->tables, table);
> 
> acpi_fetch_table(data->qts, &table.aml, &table.aml_len,
>- fadt_aml + 40 /* DSDT */, "DSDT", true);
>+ fadt_aml + 40 /* DSDT */, 4, "DSDT", true);
> g_array_append_val(data->tables, table);
> 
> memset(fadt_aml + 36, 0, 4); /* sanitize FIRMWARE_CTRL ptr */
>diff --git a/tests/vmgenid-test.c b/tests/vmgenid-test.c
>index f400184..85d8e64 100644
>--- a/tests/vmgenid-test.c
>+++ b/tests/vmgenid-test.c
>@@ -42,12 +42,12 @@ static uint32_t acpi_find_vgia(QTestState *qts)
> 
> acpi_fetch_rsdp_table(qts, rsdp_offset, rsdp_table);
> acpi_fetch_table(qts, &rsdt, &rsdt_len, &rsdp_table[16 /* RsdtAddress */],
>- "RSDT", true);
>+ 4, "RSDT", true);
> 
> ACPI_FOREACH_RSDT_ENTRY(rsdt, rsdt_len, ent, 4 /* Entry size */) {
> uint8_t *table_aml;
> 
>-acpi_fetch_table(qts, &table_aml, &table_length, ent, NULL, true);
>+acpi_fetch_table(qts, &table_aml, &table_length, ent, 4, NULL, true);
> if (!memcmp(table_aml + 16 /* OEM Table ID */, "VMGENID", 7)) {
> uint32_t vgia_val;
> uint8_t *aml = &table_aml[36 /* AML byte-code start */];
>-- 
>2.7.4
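The 4-byte vs 8-byte entry handling in the patch above relies on copying a little-endian pointer of either width into a zeroed uint64_t. A standalone sketch of that trick (the function name is mine, not the test suite's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read a 4- or 8-byte little-endian table pointer, mirroring what
 * acpi_fetch_table() now does with its addr_size argument. */
static uint64_t read_le_ptr(const uint8_t *p, int size)
{
    uint64_t addr = 0;

    assert(size == 4 || size == 8);
    /* Copying into the low bytes of a zeroed uint64_t only gives the
     * right value as-is on a little-endian host; the patch calls
     * le64_to_cpu() afterwards to stay portable. */
    memcpy(&addr, p, size);
    return addr;
}
```

Zero-initializing addr is what makes the 4-byte case safe: the upper 32 bits stay zero instead of holding stack garbage.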

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH v4 03/15] tests: acpi: make RSDT test routine handle XSDT

2019-05-04 Thread Wei Yang
On Thu, May 02, 2019 at 04:51:51PM +0200, Igor Mammedov wrote:
>If RSDP revision is more than 0 fetch table pointed by XSDT
>and fallback to legacy RSDT table otherwise.
>
>While at it drop unused acpi_get_xsdt_address().
>
>Signed-off-by: Igor Mammedov 

Reviewed-by: Wei Yang 

>---
>PS:
> it doesn't affect existing pc/q35 machines as they use RSDP.revision == 0
> but it will be used by followup patch to enable testing arm/virt
> board which uses provides XSDT table.
>
>v4:
> * move out acpi_parse_rsdp_table() hunk to
>   "tests: acpi: make pointer to RSDP  64bit"
>   where it belongs
>---
> tests/acpi-utils.h   |  1 -
> tests/acpi-utils.c   | 12 
> tests/bios-tables-test.c | 20 ++--
> 3 files changed, 14 insertions(+), 19 deletions(-)
>
>diff --git a/tests/acpi-utils.h b/tests/acpi-utils.h
>index 92285b7..f55ccf9 100644
>--- a/tests/acpi-utils.h
>+++ b/tests/acpi-utils.h
>@@ -46,7 +46,6 @@ typedef struct {
> 
> uint8_t acpi_calc_checksum(const uint8_t *data, int len);
> uint32_t acpi_find_rsdp_address(QTestState *qts);
>-uint64_t acpi_get_xsdt_address(uint8_t *rsdp_table);
> void acpi_fetch_rsdp_table(QTestState *qts, uint32_t addr, uint8_t 
> *rsdp_table);
> void acpi_fetch_table(QTestState *qts, uint8_t **aml, uint32_t *aml_len,
>   const uint8_t *addr_ptr, int addr_size, const char *sig,
>diff --git a/tests/acpi-utils.c b/tests/acpi-utils.c
>index 644c87b..a0d49c4 100644
>--- a/tests/acpi-utils.c
>+++ b/tests/acpi-utils.c
>@@ -51,18 +51,6 @@ uint32_t acpi_find_rsdp_address(QTestState *qts)
> return off;
> }
> 
>-uint64_t acpi_get_xsdt_address(uint8_t *rsdp_table)
>-{
>-uint64_t xsdt_physical_address;
>-uint8_t revision = rsdp_table[15 /* Revision offset */];
>-
>-/* We must have revision 2 if we're looking for an XSDT pointer */
>-g_assert(revision == 2);
>-
>-memcpy(&xsdt_physical_address, &rsdp_table[24 /* XsdtAddress offset */], 
>8);
>-return le64_to_cpu(xsdt_physical_address);
>-}
>-
> void acpi_fetch_rsdp_table(QTestState *qts, uint32_t addr, uint8_t 
> *rsdp_table)
> {
> uint8_t revision;
>diff --git a/tests/bios-tables-test.c b/tests/bios-tables-test.c
>index 86b592c..d6ab121 100644
>--- a/tests/bios-tables-test.c
>+++ b/tests/bios-tables-test.c
>@@ -107,21 +107,29 @@ static void test_acpi_rsdp_table(test_data *data)
> }
> }
> 
>-static void test_acpi_rsdt_table(test_data *data)
>+static void test_acpi_rxsdt_table(test_data *data)
> {
>+const char *sig = "RSDT";
> AcpiSdtTable rsdt = {};
>+int entry_size = 4;
>+int addr_off = 16 /* RsdtAddress */;
> uint8_t *ent;
> 
>-/* read RSDT table */
>+if (data->rsdp_table[15 /* Revision offset */] != 0) {
>+addr_off = 24 /* XsdtAddress */;
>+entry_size = 8;
>+sig = "XSDT";
>+}
>+/* read [RX]SDT table */
> acpi_fetch_table(data->qts, &rsdt.aml, &rsdt.aml_len,
>- &data->rsdp_table[16 /* RsdtAddress */], 4, "RSDT", 
>true);
>+ &data->rsdp_table[addr_off], entry_size, sig, true);
> 
> /* Load all tables and add to test list directly RSDT referenced tables */
>-ACPI_FOREACH_RSDT_ENTRY(rsdt.aml, rsdt.aml_len, ent, 4 /* Entry size */) {
>+ACPI_FOREACH_RSDT_ENTRY(rsdt.aml, rsdt.aml_len, ent, entry_size) {
> AcpiSdtTable ssdt_table = {};
> 
> acpi_fetch_table(data->qts, &ssdt_table.aml, &ssdt_table.aml_len, ent,
>- 4, NULL, true);
>+ entry_size, NULL, true);
> /* Add table to ASL test tables list */
> g_array_append_val(data->tables, ssdt_table);
> }
>@@ -521,7 +529,7 @@ static void test_acpi_one(const char *params, test_data 
>*data)
> data->tables = g_array_new(false, true, sizeof(AcpiSdtTable));
> test_acpi_rsdp_address(data);
> test_acpi_rsdp_table(data);
>-test_acpi_rsdt_table(data);
>+test_acpi_rxsdt_table(data);
> test_acpi_fadt_table(data);
> 
> if (iasl) {
>-- 
>2.7.4

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH v4 04/15] tests: acpi: make pointer to RSDP 64bit

2019-05-04 Thread Wei Yang
On Thu, May 02, 2019 at 04:51:52PM +0200, Igor Mammedov wrote:
>In case of UEFI, RSDP doesn't have to be located in lowmem,
>it could be placed at any address. Make sure that test won't
>break if it is placed above the first 4Gb of address space.
>
>PS:
>While at it cleanup some local variables as we don't really
>need them.
>
>Signed-off-by: Igor Mammedov 
>Reviewed-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 

>---
>v4:
> - move acpi_fetch_rsdp_table(s/uint32_t addr/uint64_t addr/) to
>   this patch where it belongs from
>   "tests: acpi: make RSDT test routine handle XSDT"
>   (Wei Yang )
>v2:
>  - s/In case of UEFI/In case of UEFI,/ (Laszlo Ersek )
>---
> tests/acpi-utils.h   |  2 +-
> tests/acpi-utils.c   |  2 +-
> tests/bios-tables-test.c | 10 --
> 3 files changed, 6 insertions(+), 8 deletions(-)
>
>diff --git a/tests/acpi-utils.h b/tests/acpi-utils.h
>index f55ccf9..1da6c10 100644
>--- a/tests/acpi-utils.h
>+++ b/tests/acpi-utils.h
>@@ -46,7 +46,7 @@ typedef struct {
> 
> uint8_t acpi_calc_checksum(const uint8_t *data, int len);
> uint32_t acpi_find_rsdp_address(QTestState *qts);
>-void acpi_fetch_rsdp_table(QTestState *qts, uint32_t addr, uint8_t 
>*rsdp_table);
>+void acpi_fetch_rsdp_table(QTestState *qts, uint64_t addr, uint8_t 
>*rsdp_table);
> void acpi_fetch_table(QTestState *qts, uint8_t **aml, uint32_t *aml_len,
>   const uint8_t *addr_ptr, int addr_size, const char *sig,
>   bool verify_checksum);
>diff --git a/tests/acpi-utils.c b/tests/acpi-utils.c
>index a0d49c4..c216b9e 100644
>--- a/tests/acpi-utils.c
>+++ b/tests/acpi-utils.c
>@@ -51,7 +51,7 @@ uint32_t acpi_find_rsdp_address(QTestState *qts)
> return off;
> }
> 
>-void acpi_fetch_rsdp_table(QTestState *qts, uint32_t addr, uint8_t 
>*rsdp_table)
>+void acpi_fetch_rsdp_table(QTestState *qts, uint64_t addr, uint8_t 
>*rsdp_table)
> {
> uint8_t revision;
> 
>diff --git a/tests/bios-tables-test.c b/tests/bios-tables-test.c
>index d6ab121..a164d27 100644
>--- a/tests/bios-tables-test.c
>+++ b/tests/bios-tables-test.c
>@@ -26,7 +26,7 @@
> typedef struct {
> const char *machine;
> const char *variant;
>-uint32_t rsdp_addr;
>+uint64_t rsdp_addr;
> uint8_t rsdp_table[36 /* ACPI 2.0+ RSDP size */];
> GArray *tables;
> uint32_t smbios_ep_addr;
>@@ -86,13 +86,11 @@ static void test_acpi_rsdp_address(test_data *data)
> 
> static void test_acpi_rsdp_table(test_data *data)
> {
>-uint8_t *rsdp_table = data->rsdp_table, revision;
>-uint32_t addr = data->rsdp_addr;
>+uint8_t *rsdp_table = data->rsdp_table;
> 
>-acpi_fetch_rsdp_table(data->qts, addr, rsdp_table);
>-revision = rsdp_table[15 /* Revision offset */];
>+acpi_fetch_rsdp_table(data->qts, data->rsdp_addr, rsdp_table);
> 
>-switch (revision) {
>+switch (rsdp_table[15 /* Revision offset */]) {
> case 0: /* ACPI 1.0 RSDP */
> /* With rev 1, checksum is only for the first 20 bytes */
> g_assert(!acpi_calc_checksum(rsdp_table,  20));
>-- 
>2.7.4
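The revision-dependent checksum check in the hunk above (first 20 bytes for an ACPI 1.0 RSDP, all 36 for rev 2) uses the standard ACPI rule that the bytes of a checksummed region, including the checksum byte itself, sum to zero modulo 256. A minimal sketch of that rule (not the test suite's actual helper):

```c
#include <stdint.h>

/* ACPI checksum rule: sum all bytes of the region; a result of 0
 * (mod 256) means the region is valid. */
static uint8_t acpi_checksum(const uint8_t *data, int len)
{
    uint8_t sum = 0;

    for (int i = 0; i < len; i++) {
        sum += data[i];
    }
    return sum;   /* 0 means valid */
}
```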

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH v4 05/15] tests: acpi: fetch X_DSDT if pointer to DSDT is 0

2019-05-04 Thread Wei Yang
On Thu, May 02, 2019 at 04:51:53PM +0200, Igor Mammedov wrote:
>that way it would be possible to test a DSDT pointed by
>64bit X_DSDT field in FADT.
>
>PS:
>it will allow to enable testing arm/virt board, which sets
>only newer X_DSDT field.
>
>Signed-off-by: Igor Mammedov 
>---
>v4:
> * dropping Reviewed-bys due to acpi_fetch_table() change
>   introduced by earlier patch:
>   "tests: acpi: make acpi_fetch_table() take size of fetched table pointer"
>v2:
>  add 'val = le32_to_cpu(val)' even if it doesn't necessary
>  it works as reminder that value copied from table is in
>  little-endian format (Philippe Mathieu-Daudé )
>---
> tests/bios-tables-test.c | 11 ++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
>diff --git a/tests/bios-tables-test.c b/tests/bios-tables-test.c
>index a164d27..d165a1b 100644
>--- a/tests/bios-tables-test.c
>+++ b/tests/bios-tables-test.c
>@@ -140,6 +140,9 @@ static void test_acpi_fadt_table(test_data *data)
> AcpiSdtTable table = g_array_index(data->tables, typeof(table), 0);
> uint8_t *fadt_aml = table.aml;
> uint32_t fadt_len = table.aml_len;
>+uint32_t val;
>+int dsdt_offset = 40 /* DSDT */;
>+int dsdt_entry_size = 4;
> 
> g_assert(compare_signature(&table, "FACP"));
> 
>@@ -148,8 +151,14 @@ static void test_acpi_fadt_table(test_data *data)
>  fadt_aml + 36 /* FIRMWARE_CTRL */, 4, "FACS", false);
> g_array_append_val(data->tables, table);
> 
>+memcpy(&val, fadt_aml + dsdt_offset, 4);
>+val = le32_to_cpu(val);
>+if (!val) {
>+dsdt_offset = 140 /* X_DSDT */;

It would help to point out where these offsets come from, e.g. ACPI 5,
Table 5-34 (FADT Format).

That would make the patch easier to review and maintain.

What do you think?

>+dsdt_entry_size = 8;
>+}
> acpi_fetch_table(data->qts, &table.aml, &table.aml_len,
>- fadt_aml + 40 /* DSDT */, 4, "DSDT", true);
>+ fadt_aml + dsdt_offset, dsdt_entry_size, "DSDT", true);
> g_array_append_val(data->tables, table);
> 
> memset(fadt_aml + 36, 0, 4); /* sanitize FIRMWARE_CTRL ptr */
>-- 
>2.7.4
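The fallback logic in the patch — use the 32-bit DSDT field at FADT offset 40, and only switch to the 64-bit X_DSDT field at offset 140 when DSDT is zero — can be sketched on its own. The helper name and out-parameters here are my assumptions; the offsets follow the FADT layout in the ACPI specification:

```c
#include <stdint.h>
#include <string.h>

/* Pick which FADT field holds the DSDT pointer.  Offset 40 is the
 * legacy 32-bit DSDT field, offset 140 the 64-bit X_DSDT field. */
static void choose_dsdt_field(const uint8_t *fadt_aml,
                              int *offset, int *entry_size)
{
    uint32_t val;

    *offset = 40;       /* DSDT */
    *entry_size = 4;
    memcpy(&val, fadt_aml + *offset, 4);  /* value is little-endian,
                                             but a zero-check needs
                                             no byte swap */
    if (val == 0) {
        *offset = 140;  /* X_DSDT */
        *entry_size = 8;
    }
}
```

This matches boards like arm/virt that set only the newer X_DSDT field and leave the legacy DSDT field zero.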

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH v4 14/15] tests: acpi: refactor rebuild-expected-aml.sh to dump ACPI tables for a specified list of targets

2019-05-04 Thread Wei Yang
On Thu, May 02, 2019 at 04:52:02PM +0200, Igor Mammedov wrote:
>Make initial list contain aarch64 and x86_64 targets.
>
>Signed-off-by: Igor Mammedov 
>Reviewed-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 

>---
>v4:
> * fix typo (Wei Yang )
>v2:
> * fix up error message (Philippe Mathieu-Daudé )
>---
> tests/data/acpi/rebuild-expected-aml.sh | 23 +++
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
>diff --git a/tests/data/acpi/rebuild-expected-aml.sh 
>b/tests/data/acpi/rebuild-expected-aml.sh
>index abdff70..07f7e3f 100755
>--- a/tests/data/acpi/rebuild-expected-aml.sh
>+++ b/tests/data/acpi/rebuild-expected-aml.sh
>@@ -7,21 +7,12 @@
> #
> # Authors:
> #  Marcel Apfelbaum 
>+#  Igor Mammedov 
> #
> # This work is licensed under the terms of the GNU GPLv2.
> # See the COPYING.LIB file in the top-level directory.
> 
>-qemu=
>-
>-if [ -e x86_64-softmmu/qemu-system-x86_64 ]; then
>-qemu="x86_64-softmmu/qemu-system-x86_64"
>-elif [ -e i386-softmmu/qemu-system-i386 ]; then
>-qemu="i386-softmmu/qemu-system-i386"
>-else
>-echo "Run 'make' to build the qemu exectutable!"
>-echo "Run this script from the build directory."
>-exit 1;
>-fi
>+qemu_bins="aarch64-softmmu/qemu-system-aarch64 
>x86_64-softmmu/qemu-system-x86_64"
> 
> if [ ! -e "tests/bios-tables-test" ]; then
> echo "Test: bios-tables-test is required! Run make check before this 
> script."
>@@ -29,6 +20,14 @@ if [ ! -e "tests/bios-tables-test" ]; then
> exit 1;
> fi
> 
>-TEST_ACPI_REBUILD_AML=y QTEST_QEMU_BINARY=$qemu tests/bios-tables-test
>+for qemu in $qemu_bins; do
>+if [ ! -e $qemu ]; then
>+echo "Run 'make' to build the following QEMU executables: $qemu_bins"
>+echo "Also, run this script from the build directory."
>+exit 1;
>+fi
>+TEST_ACPI_REBUILD_AML=y QTEST_QEMU_BINARY=$qemu tests/bios-tables-test
>+done
>+
> 
> echo "The files were rebuilt and can be added to git."
>-- 
>2.7.4

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH] migration: don't set MIGRATION dirty range for ignored block

2019-05-04 Thread Wei Yang
On Sat, May 04, 2019 at 01:42:55PM +0800, Wei Yang wrote:
>The ignored blocks are not migrated and those ranges are not used.

My bad, this change is not correct.

>
>Signed-off-by: Wei Yang 
>---
> exec.c  | 4 +++-
> include/exec/ram_addr.h | 2 ++
> migration/ram.c | 2 +-
> 3 files changed, 6 insertions(+), 2 deletions(-)
>
>diff --git a/exec.c b/exec.c
>index 86a38d3b3b..97da155c12 100644
>--- a/exec.c
>+++ b/exec.c
>@@ -2192,6 +2192,8 @@ static void ram_block_add(RAMBlock *new_block, Error 
>**errp, bool shared)
> RAMBlock *last_block = NULL;
> ram_addr_t old_ram_size, new_ram_size;
> Error *err = NULL;
>+uint8_t dirty_memory_clients = ramblock_is_ignored(new_block) ?
>+ DIRTY_CLIENTS_NOMIG : DIRTY_CLIENTS_ALL;
> 
> old_ram_size = last_ram_page();
> 
>@@ -2252,7 +2254,7 @@ static void ram_block_add(RAMBlock *new_block, Error 
>**errp, bool shared)
> 
> cpu_physical_memory_set_dirty_range(new_block->offset,
> new_block->used_length,
>-DIRTY_CLIENTS_ALL);
>+dirty_memory_clients);
> 
> if (new_block->host) {
> qemu_ram_setup_dump(new_block->host, new_block->max_length);
>diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
>index a7c81bdb32..4765435fb8 100644
>--- a/include/exec/ram_addr.h
>+++ b/include/exec/ram_addr.h
>@@ -72,6 +72,7 @@ static inline unsigned long int 
>ramblock_recv_bitmap_offset(void *host_addr,
> }
> 
> bool ramblock_is_pmem(RAMBlock *rb);
>+bool ramblock_is_ignored(RAMBlock *rb);
> 
> long qemu_getrampagesize(void);
> 
>@@ -117,6 +118,7 @@ int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, 
>Error **errp);
> 
> #define DIRTY_CLIENTS_ALL ((1 << DIRTY_MEMORY_NUM) - 1)
> #define DIRTY_CLIENTS_NOCODE  (DIRTY_CLIENTS_ALL & ~(1 << DIRTY_MEMORY_CODE))
>+#define DIRTY_CLIENTS_NOMIG   (DIRTY_CLIENTS_ALL & ~(1 << 
>DIRTY_MEMORY_MIGRATION))
> 
> void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end);
> 
>diff --git a/migration/ram.c b/migration/ram.c
>index 1def8122e9..44525e3816 100644
>--- a/migration/ram.c
>+++ b/migration/ram.c
>@@ -159,7 +159,7 @@ out:
> return ret;
> }
> 
>-static bool ramblock_is_ignored(RAMBlock *block)
>+bool ramblock_is_ignored(RAMBlock *block)
> {
> return !qemu_ram_is_migratable(block) ||
>(migrate_ignore_shared() && qemu_ram_is_shared(block));
>-- 
>2.19.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 1/5] hw/block/pflash_cfi01: Removed an unused timer

2019-05-05 Thread Wei Yang
On Sun, May 05, 2019 at 10:05:58PM +0200, Philippe Mathieu-Daudé wrote:
>The 'CFI01' NOR flash was introduced in commit 29133e9a0fff, with
>timing modelled. One year later, the CFI02 model was introduced
>(commit 05ee37ebf630) based on the CFI01 model. As noted in the
>header, "It does not support timings". 12 years later, we never
>had to model the device timings. Time to remove the unused timer,
>we can still add it back if required.
>
>Suggested-by: Laszlo Ersek 
>Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 

>---
>Yes, I plan to model those timings later. Actually I have a series
>working, but I'd rather first
>1/ refactor common code between the both CFI implementations,
>2/ discuss on list whether or not use timings for the Virt flash.
>---
> hw/block/pflash_cfi01.c | 15 ---
> 1 file changed, 15 deletions(-)
>
>diff --git a/hw/block/pflash_cfi01.c b/hw/block/pflash_cfi01.c
>index 16dfae14b80..6dc04f156a7 100644
>--- a/hw/block/pflash_cfi01.c
>+++ b/hw/block/pflash_cfi01.c
>@@ -42,7 +42,6 @@
> #include "hw/block/flash.h"
> #include "sysemu/block-backend.h"
> #include "qapi/error.h"
>-#include "qemu/timer.h"
> #include "qemu/bitops.h"
> #include "qemu/host-utils.h"
> #include "qemu/log.h"
>@@ -86,7 +85,6 @@ struct PFlashCFI01 {
> uint8_t cfi_table[0x52];
> uint64_t counter;
> unsigned int writeblock_size;
>-QEMUTimer *timer;
> MemoryRegion mem;
> char *name;
> void *storage;
>@@ -110,18 +108,6 @@ static const VMStateDescription vmstate_pflash = {
> }
> };
> 
>-static void pflash_timer (void *opaque)
>-{
>-PFlashCFI01 *pfl = opaque;
>-
>-trace_pflash_timer_expired(pfl->cmd);
>-/* Reset flash */
>-pfl->status ^= 0x80;
>-memory_region_rom_device_set_romd(&pfl->mem, true);
>-pfl->wcycle = 0;
>-pfl->cmd = 0;
>-}
>-
> /* Perform a CFI query based on the bank width of the flash.
>  * If this code is called we know we have a device_width set for
>  * this flash.
>@@ -771,7 +757,6 @@ static void pflash_cfi01_realize(DeviceState *dev, Error 
>**errp)
> pfl->max_device_width = pfl->device_width;
> }
> 
>-pfl->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pflash_timer, pfl);
> pfl->wcycle = 0;
> pfl->cmd = 0;
> pfl->status = 0;
>-- 
>2.20.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/5] hw/block/pflash_cfi01: Extract the pflash_reset() code

2019-05-05 Thread Wei Yang
On Sun, May 05, 2019 at 10:05:59PM +0200, Philippe Mathieu-Daudé wrote:
>The reset() code is used in various places, refactor it.
>
>Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 

>---
> hw/block/pflash_cfi01.c | 21 -
> 1 file changed, 12 insertions(+), 9 deletions(-)
>
>diff --git a/hw/block/pflash_cfi01.c b/hw/block/pflash_cfi01.c
>index 6dc04f156a7..073cd14978f 100644
>--- a/hw/block/pflash_cfi01.c
>+++ b/hw/block/pflash_cfi01.c
>@@ -108,6 +108,15 @@ static const VMStateDescription vmstate_pflash = {
> }
> };
> 
>+static void pflash_reset(PFlashCFI01 *pfl)
>+{
>+trace_pflash_reset();
>+pfl->wcycle = 0;
>+pfl->cmd = 0;
>+pfl->status = 0;
>+memory_region_rom_device_set_romd(&pfl->mem, true);
>+}
>+
> /* Perform a CFI query based on the bank width of the flash.
>  * If this code is called we know we have a device_width set for
>  * this flash.
>@@ -275,8 +284,7 @@ static uint32_t pflash_read(PFlashCFI01 *pfl, hwaddr offset,
> default:
> /* This should never happen : reset state & treat it as a read */
> DPRINTF("%s: unknown command state: %x\n", __func__, pfl->cmd);
>-pfl->wcycle = 0;
>-pfl->cmd = 0;
>+pflash_reset(pfl);
> /* fall through to read code */
> case 0x00:
> /* Flash area read */
>@@ -639,10 +647,7 @@ static void pflash_write(PFlashCFI01 *pfl, hwaddr offset,
>   "\n", __func__, offset, pfl->wcycle, pfl->cmd, value);
> 
>  reset_flash:
>-trace_pflash_reset();
>-memory_region_rom_device_set_romd(&pfl->mem, true);
>-pfl->wcycle = 0;
>-pfl->cmd = 0;
>+pflash_reset(pfl);
> }
> 
> 
>@@ -757,9 +762,7 @@ static void pflash_cfi01_realize(DeviceState *dev, Error **errp)
> pfl->max_device_width = pfl->device_width;
> }
> 
>-pfl->wcycle = 0;
>-pfl->cmd = 0;
>-pfl->status = 0;
>+pflash_reset(pfl);
> /* Hardcoded CFI table */
> /* Standard "QRY" string */
> pfl->cfi_table[0x10] = 'Q';
>-- 
>2.20.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 3/5] hw/block/pflash_cfi01: Add the DeviceReset() handler

2019-05-05 Thread Wei Yang
On Sun, May 05, 2019 at 10:06:00PM +0200, Philippe Mathieu-Daudé wrote:
>The pflash device is a child of TYPE_DEVICE, so it can implement
>the DeviceReset handler. Actually it has to implement it, else
>on machine reset it might stay in an incoherent state, as has been
>reported in the buglink listed below.
>
>Add the DeviceReset handler and remove its call from the realize()
>function.
>
>Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1678713
>Reported-by: Laszlo Ersek 
>Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 


-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 4/5] hw/block/pflash_cfi02: Extract the pflash_reset() code

2019-05-05 Thread Wei Yang
On Sun, May 05, 2019 at 10:06:01PM +0200, Philippe Mathieu-Daudé wrote:
>The reset() code is used in various places; refactor it.
>
>Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 

>---
> hw/block/pflash_cfi02.c | 25 +++--
> 1 file changed, 15 insertions(+), 10 deletions(-)
>
>diff --git a/hw/block/pflash_cfi02.c b/hw/block/pflash_cfi02.c
>index f2c6201f813..f321b74433c 100644
>--- a/hw/block/pflash_cfi02.c
>+++ b/hw/block/pflash_cfi02.c
>@@ -120,6 +120,17 @@ static void pflash_register_memory(PFlashCFI02 *pfl, int rom_mode)
> pfl->rom_mode = rom_mode;
> }
> 
>+static void pflash_reset(PFlashCFI02 *pfl)
>+{
>+trace_pflash_reset();
>+timer_del(&pfl->timer);
>+pfl->bypass = 0;
>+pfl->wcycle = 0;
>+pfl->cmd = 0;
>+pfl->status = 0;
>+pflash_register_memory(pfl, 1);
>+}
>+
> static void pflash_timer (void *opaque)
> {
> PFlashCFI02 *pfl = opaque;
>@@ -129,11 +140,10 @@ static void pflash_timer (void *opaque)
> pfl->status ^= 0x80;
> if (pfl->bypass) {
> pfl->wcycle = 2;
>+pfl->cmd = 0;
> } else {
>-pflash_register_memory(pfl, 1);
>-pfl->wcycle = 0;
>+pflash_reset(pfl);
> }
>-pfl->cmd = 0;
> }
> 
> static uint32_t pflash_read(PFlashCFI02 *pfl, hwaddr offset,
>@@ -481,10 +491,7 @@ static void pflash_write(PFlashCFI02 *pfl, hwaddr offset,
> 
> /* Reset flash */
>  reset_flash:
>-trace_pflash_reset();
>-pfl->bypass = 0;
>-pfl->wcycle = 0;
>-pfl->cmd = 0;
>+pflash_reset(pfl);
> return;
> 
>  do_bypass:
>@@ -588,9 +595,7 @@ static void pflash_cfi02_realize(DeviceState *dev, Error **errp)
> sysbus_init_mmio(SYS_BUS_DEVICE(dev), &pfl->mem);
> 
> timer_init_ns(&pfl->timer, QEMU_CLOCK_VIRTUAL, pflash_timer, pfl);
>-pfl->wcycle = 0;
>-pfl->cmd = 0;
>-pfl->status = 0;
>+pflash_reset(pfl);
> /* Hardcoded CFI table (mostly from SG29 Spansion flash) */
> /* Standard "QRY" string */
> pfl->cfi_table[0x10] = 'Q';
>-- 
>2.20.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 5/5] hw/block/pflash_cfi02: Add the DeviceReset() handler

2019-05-05 Thread Wei Yang
On Sun, May 05, 2019 at 10:06:02PM +0200, Philippe Mathieu-Daudé wrote:
>The pflash device is a child of TYPE_DEVICE, so it can implement
>the DeviceReset handler. Actually it has to implement it, else
>on machine reset it might stay in an incoherent state, as has been
>reported in the buglink listed below.
>
>Add the DeviceReset handler and remove its call from the realize()
>function.
>
>Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1678713
>Reported-by: Laszlo Ersek 
>Signed-off-by: Philippe Mathieu-Daudé 

Reviewed-by: Wei Yang 


-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH] migration/ram: leave RAMBlock->bmap blank on allocating

2019-05-06 Thread Wei Yang
During migration, we sync the dirty bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().

Since RAMBlock.bmap and ram_list.dirty_memory are both initialized to all
1s, the first round of this sync is meaningless, duplicated work.

Leaving RAMBlock->bmap blank on allocating would have a side effect on
migration_dirty_pages, since it is calculated from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
set migration_dirty_pages to 0 in ram_state_init().
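The reasoning above can be modeled with a toy sync function (an illustrative
sketch over a single 64-bit word, not the real
cpu_physical_memory_sync_dirty_bitmap()): with bmap preset to all ones the
first sync contributes nothing, so the counter had to be pre-seeded; with a
blank bmap the first sync itself accounts for every dirty page.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the sync step: merge the set bits of the global dirty
 * bitmap into the per-block bmap and return how many pages became
 * newly dirty in bmap (the amount added to migration_dirty_pages). */
static uint64_t sync_dirty_bitmap(uint64_t *bmap, uint64_t dirty)
{
    uint64_t newly = dirty & ~*bmap;   /* pages not yet tracked in bmap */

    *bmap |= dirty;
    return (uint64_t)__builtin_popcountll(newly);
}
```

With both words all ones (the old scheme) the sync returns 0, which is why
migration_dirty_pages had to be pre-seeded; starting bmap and the counter at
zero (the patch's scheme) yields the same final count from the first sync.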

Signed-off-by: Wei Yang 
---
 migration/ram.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 95c51109d2..417874707d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3151,12 +3151,7 @@ static int ram_state_init(RAMState **rsp)
 qemu_mutex_init(&(*rsp)->src_page_req_mutex);
 QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
-/*
- * Count the total number of pages used by ram blocks not including any
- * gaps due to alignment or unplugs.
- */
-(*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+(*rsp)->migration_dirty_pages = 0;
 ram_state_reset(*rsp);
 
 return 0;
@@ -3172,7 +3167,6 @@ static void ram_list_init_bitmaps(void)
 RAMBLOCK_FOREACH_NOT_IGNORED(block) {
 pages = block->max_length >> TARGET_PAGE_BITS;
 block->bmap = bitmap_new(pages);
-bitmap_set(block->bmap, 0, pages);
 if (migrate_postcopy_ram()) {
 block->unsentmap = bitmap_new(pages);
 bitmap_set(block->unsentmap, 0, pages);
-- 
2.19.1




Re: [Qemu-devel] [PATCH v4 05/15] tests: acpi: fetch X_DSDT if pointer to DSDT is 0

2019-05-07 Thread Wei Yang
On Tue, May 07, 2019 at 12:04:08PM +0200, Igor Mammedov wrote:
>On Sun, 5 May 2019 09:27:45 +0800
>Wei Yang  wrote:
>
>> On Thu, May 02, 2019 at 04:51:53PM +0200, Igor Mammedov wrote:
>> >that way it would be possible to test a DSDT pointed by
>> >64bit X_DSDT field in FADT.
>> >
>> >PS:
>> >it will allow to enable testing arm/virt board, which sets
>> >only newer X_DSDT field.
>> >
>> >Signed-off-by: Igor Mammedov 
>> >---
>> >v4:
>> > * dropping Reviewed-bys due to acpi_fetch_table() change
>> >   introduced by earlier patch:
>> >   "tests: acpi: make acpi_fetch_table() take size of fetched table pointer"
>> >v2:
>> >  add 'val = le32_to_cpu(val)' even if it doesn't necessary
>> >  it works as reminder that value copied from table is in
>> >  little-endian format (Philippe Mathieu-Daudé )
>> >---
>> > tests/bios-tables-test.c | 11 ++-
>> > 1 file changed, 10 insertions(+), 1 deletion(-)
>> >
>> >diff --git a/tests/bios-tables-test.c b/tests/bios-tables-test.c
>> >index a164d27..d165a1b 100644
>> >--- a/tests/bios-tables-test.c
>> >+++ b/tests/bios-tables-test.c
>> >@@ -140,6 +140,9 @@ static void test_acpi_fadt_table(test_data *data)
>> > AcpiSdtTable table = g_array_index(data->tables, typeof(table), 0);
>> > uint8_t *fadt_aml = table.aml;
>> > uint32_t fadt_len = table.aml_len;
>> >+uint32_t val;
>> >+int dsdt_offset = 40 /* DSDT */;
>> >+int dsdt_entry_size = 4;
>> > 
>> > g_assert(compare_signature(&table, "FACP"));
>> > 
>> >@@ -148,8 +151,14 @@ static void test_acpi_fadt_table(test_data *data)
>> >  fadt_aml + 36 /* FIRMWARE_CTRL */, 4, "FACS", false);
>> > g_array_append_val(data->tables, table);
>> > 
>> >+memcpy(&val, fadt_aml + dsdt_offset, 4);
>> >+val = le32_to_cpu(val);
>> >+if (!val) {
>> >+dsdt_offset = 140 /* X_DSDT */;
>> 
>> It may help to point out where this comes from, e.g. ACPI 5, Table 5-34
>> FADT Format.
>> 
>> This may be more helpful for reviewing and maintaining.
>
>for fields we typically use only the verbatim field name, so it is easy
>to find by searching for it in the spec. In this case it is obvious which
>table it applies to, so a spec reference for a field is probably excessive.
>
>A complete reference is necessary for tables and API functions that
>implement ACPI primitives.
>

That's fine.

Reviewed-by: Wei Yang 
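The fallback being tested can be sketched standalone (this is a model, not
the actual bios-tables-test code; FADT byte offsets 40 for the 32-bit DSDT
pointer and 140 for the 64-bit X_DSDT per the patch above, and a
little-endian host is assumed in place of the le32_to_cpu() conversion):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Return the DSDT address from a raw FADT blob: read the 32-bit DSDT
 * field first, and fall back to the 64-bit X_DSDT field when DSDT is
 * zero (as on arm/virt, which only sets X_DSDT). */
static uint64_t fadt_dsdt_addr(const uint8_t *fadt)
{
    uint32_t dsdt;
    uint64_t x_dsdt;

    memcpy(&dsdt, fadt + 40, sizeof(dsdt));       /* DSDT */
    if (dsdt) {
        return dsdt;
    }
    memcpy(&x_dsdt, fadt + 140, sizeof(x_dsdt));  /* X_DSDT */
    return x_dsdt;
}
```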

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH v4 0/6] Extract build_mcfg

2019-05-10 Thread Wei Yang
Hi, Igor

Would you take this one? Or what should I do next?

On Fri, Apr 19, 2019 at 08:30:47AM +0800, Wei Yang wrote:
>This patch set tries to generalize the MCFG table build process. It is
>based on one un-merged patch from Igor, which is included in this series.
>
>v3->v4:
>* adjust comment to give more information about MCFG table
>
>v2->v3:
>* Includes the un-merged patch from Igor
>* use build_append_foo() API to construct MCFG
>
>Igor Mammedov (1):
>  q35: acpi: do not create dummy MCFG table
>
>Wei Yang (5):
>  hw/arm/virt-acpi-build: remove unnecessary variable mcfg_start
>  i386, acpi: remove mcfg_ prefix in AcpiMcfgInfo members
>  hw/arm/virt-acpi-build: pass AcpiMcfgInfo to build_mcfg()
>  hw/acpi: Consolidate build_mcfg to pci.c
>  acpi: pci: use build_append_foo() API to construct MCFG
>
> default-configs/arm-softmmu.mak  |  1 +
> default-configs/i386-softmmu.mak |  1 +
> hw/acpi/Kconfig  |  4 +++
> hw/acpi/Makefile.objs|  1 +
> hw/acpi/pci.c| 55 
> hw/arm/virt-acpi-build.c | 31 +-
> hw/i386/acpi-build.c | 44 -
> include/hw/acpi/acpi-defs.h  | 18 ---
> include/hw/acpi/pci.h| 34 
> 9 files changed, 111 insertions(+), 78 deletions(-)
> create mode 100644 hw/acpi/pci.c
> create mode 100644 include/hw/acpi/pci.h
>
>-- 
>2.19.1
>

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH] migration/ram.c: fix typos in comments

2019-05-10 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 migration/ram.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 1def8122e9..720c2b73ca 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -888,7 +888,7 @@ struct {
  *- to make easier to know what to free at the end of migration
  *
  * This way we always know who is the owner of each "pages" struct,
- * and we don't need any loocking.  It belongs to the migration thread
+ * and we don't need any locking.  It belongs to the migration thread
  * or to the channel thread.  Switching is safe because the migration
  * thread is using the channel mutex when changing it, and the channel
  * have to had finish with its own, otherwise pending_job can't be
@@ -1594,7 +1594,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
  *
  * Called with rcu_read_lock() to protect migration_bitmap
  *
- * Returns the byte offset within memory region of the start of a dirty page
+ * Returns the page offset within memory region of the start of a dirty page
  *
  * @rs: current RAM state
  * @rb: RAMBlock where to search for dirty pages
@@ -2108,7 +2108,7 @@ retry:
  * find_dirty_block: find the next dirty page and update any state
  * associated with the search process.
  *
- * Returns if a page is found
+ * Returns true if a page is found
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
@@ -2204,7 +2204,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
  *
  * Skips pages that are already sent (!dirty)
  *
- * Returns if a queued page is found
+ * Returns true if a queued page is found
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
@@ -3411,7 +3411,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 
 /* we want to check in the 1st loop, just in case it was the 1st time
and we had to sync the dirty bitmap.
-   qemu_get_clock_ns() is a bit expensive, so we only check each some
+   qemu_clock_get_ns() is a bit expensive, so we only check each some
iterations
 */
 if ((i & 63) == 0) {
-- 
2.19.1




Re: [Qemu-devel] [PATCH v4 0/6] Extract build_mcfg

2019-05-10 Thread Wei Yang
On Fri, May 10, 2019 at 07:59:43PM -0400, Michael S. Tsirkin wrote:
>I merged this will send pull request soon.
>

Ah, Thanks :-)

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [RFC PATCH 0/9] hw/acpi: make build_madt arch agnostic

2019-05-12 Thread Wei Yang
Now the MADT is highly dependent on architecture and machine type, and this
leaves duplicated code across architectures. This series tries to generalize
it.

MADT contains one main table and several sub-tables. These sub-tables are
highly architecture specific. Here we introduce one method to make the build
process architecture agnostic:

  * each architecture defines its sub-table implementation functions in madt_sub
  * introduce struct madt_input to collect sub-table information and pass it
    to build_madt

By doing so, each architecture can prepare its own sub-table implementations
and madt_input, keeping build_madt architecture agnostic.
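The dispatch design described above (per-type handlers in madt_sub[] plus a
sentinel-terminated madt_input list walked by an arch-agnostic builder) can
be sketched minimally as follows; all names and the string "table" output are
simplified stand-ins for the series' actual types, not the real QEMU API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for the ACPI_APIC_* sub-table type IDs. */
enum { SUB_PROCESSOR, SUB_IO, SUB_END, SUB_MAX };

/* One handler per sub-table type; an architecture fills its own table. */
typedef void (*madt_sub_fn)(char *out, void *opaque);

/* One entry of the proposed struct madt_input: which sub-table to emit
 * and the per-entry argument forwarded to the handler. */
struct madt_input {
    int sub_id;
    void *opaque;
};

static void emit_processor(char *out, void *opaque)
{
    int id = *(int *)opaque;

    sprintf(out + strlen(out), "P%d;", id);
}

static void emit_io(char *out, void *opaque)
{
    (void)opaque;
    strcat(out, "IO;");
}

/* What an architecture would install, like i386_madt_sub in the series. */
static const madt_sub_fn madt_sub[SUB_MAX] = {
    [SUB_PROCESSOR] = emit_processor,
    [SUB_IO]        = emit_io,
};

/* Architecture-agnostic builder: walk the input list up to the end
 * marker and dispatch each entry through madt_sub[]. */
static void build_madt(char *out, const struct madt_input *in)
{
    for (int i = 0; in[i].sub_id != SUB_END; i++) {
        madt_sub[in[i].sub_id](out, in[i].opaque);
    }
}
```

The point of the split is that only the handler table and the input list are
architecture specific; build_madt itself never switches on the sub-table type.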

Wei Yang (9):
  hw/acpi: expand pc_madt_cpu_entry in place
  hw/acpi: implement madt_sub[ACPI_APIC_PROCESSOR]
  hw/acpi: implement madt_sub[ACPI_APIC_LOCAL_X2APIC]
  hw/acpi: implement madt_sub[ACPI_APIC_IO]
  hw/acpi: implement madt_sub[ACPI_APIC_XRUPT_OVERRIDE]
  hw/acpi: implement madt_sub[ACPI_APIC_LOCAL_X2APIC_NMI]
  hw/acpi: implement madt_sub[ACPI_APIC_LOCAL_NMI]
  hw/acpi: factor build_madt with madt_input
  hw/acpi: implement madt_main to manipulate main madt table

 hw/acpi/cpu.c|  14 +-
 hw/acpi/piix4.c  |   3 +-
 hw/i386/acpi-build.c | 265 +--
 hw/isa/lpc_ich9.c|   3 +-
 include/hw/acpi/acpi_dev_interface.h |  12 +-
 include/hw/i386/pc.h |   2 +
 6 files changed, 194 insertions(+), 105 deletions(-)

-- 
2.19.1




[Qemu-devel] [RFC PATCH 1/9] hw/acpi: expand pc_madt_cpu_entry in place

2019-05-12 Thread Wei Yang
This is a preparation for MADT refactor.

Signed-off-by: Wei Yang 
---
 hw/acpi/cpu.c| 33 +++--
 hw/acpi/piix4.c  |  1 -
 hw/i386/acpi-build.c | 71 
 hw/isa/lpc_ich9.c|  1 -
 include/hw/acpi/acpi_dev_interface.h |  6 ---
 5 files changed, 59 insertions(+), 53 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 7a90c8f82d..d786df1a2c 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -329,7 +329,7 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 char *cphp_res_path = g_strdup_printf("%s." CPUHP_RES_DEVICE, res_root);
 Object *obj = object_resolve_path_type("", TYPE_ACPI_DEVICE_IF, NULL);
 AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(obj);
-AcpiDeviceIf *adev = ACPI_DEVICE_IF(obj);
+uint32_t apic_id;
 
 cpu_ctrl_dev = aml_device("%s", cphp_res_path);
 {
@@ -522,8 +522,35 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 aml_append(dev, method);
 
 /* build _MAT object */
-assert(adevc && adevc->madt_cpu);
-adevc->madt_cpu(adev, i, arch_ids, madt_buf);
+assert(adevc);
+apic_id = arch_ids->cpus[i].arch_id;
+if (apic_id < 255) {
+AcpiMadtProcessorApic *apic = acpi_data_push(madt_buf,
+ sizeof *apic);
+
+apic->type = ACPI_APIC_PROCESSOR;
+apic->length = sizeof(*apic);
+apic->processor_id = i;
+apic->local_apic_id = apic_id;
+if (arch_ids->cpus[i].cpu != NULL) {
+apic->flags = cpu_to_le32(1);
+} else {
+apic->flags = cpu_to_le32(0);
+}
+} else {
+AcpiMadtProcessorX2Apic *apic = acpi_data_push(madt_buf,
+   sizeof *apic);
+
+apic->type = ACPI_APIC_LOCAL_X2APIC;
+apic->length = sizeof(*apic);
+apic->uid = cpu_to_le32(i);
+apic->x2apic_id = cpu_to_le32(apic_id);
+if (arch_ids->cpus[i].cpu != NULL) {
+apic->flags = cpu_to_le32(1);
+} else {
+apic->flags = cpu_to_le32(0);
+}
+}
 switch (madt_buf->data[0]) {
 case ACPI_APIC_PROCESSOR: {
 AcpiMadtProcessorApic *apic = (void *)madt_buf->data;
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 9c079d6834..76fde125a3 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -722,7 +722,6 @@ static void piix4_pm_class_init(ObjectClass *klass, void *data)
 hc->unplug = piix4_device_unplug_cb;
 adevc->ospm_status = piix4_ospm_status;
 adevc->send_event = piix4_send_gpe;
-adevc->madt_cpu = pc_madt_cpu_entry;
 }
 
 static const TypeInfo piix4_pm_info = {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 29980bb3f4..54b70e62ac 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -301,52 +301,12 @@ build_facs(GArray *table_data)
 facs->length = cpu_to_le32(sizeof(*facs));
 }
 
-void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
-   const CPUArchIdList *apic_ids, GArray *entry)
-{
-uint32_t apic_id = apic_ids->cpus[uid].arch_id;
-
-/* ACPI spec says that LAPIC entry for non present
- * CPU may be omitted from MADT or it must be marked
- * as disabled. However omitting non present CPU from
- * MADT breaks hotplug on linux. So possible CPUs
- * should be put in MADT but kept disabled.
- */
-if (apic_id < 255) {
-AcpiMadtProcessorApic *apic = acpi_data_push(entry, sizeof *apic);
-
-apic->type = ACPI_APIC_PROCESSOR;
-apic->length = sizeof(*apic);
-apic->processor_id = uid;
-apic->local_apic_id = apic_id;
-if (apic_ids->cpus[uid].cpu != NULL) {
-apic->flags = cpu_to_le32(1);
-} else {
-apic->flags = cpu_to_le32(0);
-}
-} else {
-AcpiMadtProcessorX2Apic *apic = acpi_data_push(entry, sizeof *apic);
-
-apic->type = ACPI_APIC_LOCAL_X2APIC;
-apic->length = sizeof(*apic);
-apic->uid = cpu_to_le32(uid);
-apic->x2apic_id = cpu_to_le32(apic_id);
-if (apic_ids->cpus[uid].cpu != NULL) {
-apic->flags = cpu_to_le32(1);
-} else {
-apic->flags = cpu_to_le32(0);
-}
-}
-}
-
 static void
 build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 {
 MachineClass *mc = MACHINE_GET_CLASS(pcms);
 const CPUArchIdList *ap

[Qemu-devel] [RFC PATCH 2/9] hw/acpi: implement madt_sub[ACPI_APIC_PROCESSOR]

2019-05-12 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 hw/acpi/cpu.c| 15 +++
 hw/acpi/piix4.c  |  1 +
 hw/i386/acpi-build.c | 39 +++-
 hw/isa/lpc_ich9.c|  1 +
 include/hw/acpi/acpi_dev_interface.h |  5 
 include/hw/i386/pc.h |  1 +
 6 files changed, 38 insertions(+), 24 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index d786df1a2c..35e57f9824 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -508,6 +508,7 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 Aml *uid = aml_int(i);
 GArray *madt_buf = g_array_new(0, 1, 1);
 int arch_id = arch_ids->cpus[i].arch_id;
+int processor_id = i;
 
 if (opts.acpi_1_compatible && arch_id < 255) {
 dev = aml_processor(i, 0, 0, CPU_NAME_FMT, i);
@@ -525,18 +526,8 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 assert(adevc);
 apic_id = arch_ids->cpus[i].arch_id;
 if (apic_id < 255) {
-AcpiMadtProcessorApic *apic = acpi_data_push(madt_buf,
- sizeof *apic);
-
-apic->type = ACPI_APIC_PROCESSOR;
-apic->length = sizeof(*apic);
-apic->processor_id = i;
-apic->local_apic_id = apic_id;
-if (arch_ids->cpus[i].cpu != NULL) {
-apic->flags = cpu_to_le32(1);
-} else {
-apic->flags = cpu_to_le32(0);
-}
+assert(adevc->madt_sub[ACPI_APIC_PROCESSOR]);
+adevc->madt_sub[ACPI_APIC_PROCESSOR](madt_buf, &processor_id);
 } else {
 AcpiMadtProcessorX2Apic *apic = acpi_data_push(madt_buf,
sizeof *apic);
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 76fde125a3..f4336b9238 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -722,6 +722,7 @@ static void piix4_pm_class_init(ObjectClass *klass, void *data)
 hc->unplug = piix4_device_unplug_cb;
 adevc->ospm_status = piix4_ospm_status;
 adevc->send_event = piix4_send_gpe;
+adevc->madt_sub = i386_madt_sub;
 }
 
 static const TypeInfo piix4_pm_info = {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 54b70e62ac..dfcdcaeaf7 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -301,12 +301,37 @@ build_facs(GArray *table_data)
 facs->length = cpu_to_le32(sizeof(*facs));
 }
 
+static void pc_madt_apic_entry(GArray *entry, void *opaque)
+{
+MachineState *machine = MACHINE(qdev_get_machine());
+MachineClass *mc = MACHINE_GET_CLASS(machine);
+const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
+int *processor_id = opaque;
+AcpiMadtProcessorApic *apic = acpi_data_push(entry, sizeof *apic);
+
+apic->type = ACPI_APIC_PROCESSOR;
+apic->length = sizeof(*apic);
+apic->processor_id = *processor_id;
+apic->local_apic_id = apic_ids->cpus[*processor_id].arch_id;
+if (apic_ids->cpus[*processor_id].cpu != NULL) {
+apic->flags = cpu_to_le32(1);
+} else {
+apic->flags = cpu_to_le32(0);
+}
+}
+
+madt_operations i386_madt_sub = {
+[ACPI_APIC_PROCESSOR] = pc_madt_apic_entry,
+};
+
 static void
 build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 {
 MachineClass *mc = MACHINE_GET_CLASS(pcms);
 const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
 int madt_start = table_data->len;
+AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(pcms->acpi_dev);
+
 bool x2apic_mode = false;
 
 AcpiMultipleApicTable *madt;
@@ -320,19 +345,9 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 
 for (i = 0; i < apic_ids->len; i++) {
 uint32_t apic_id = apic_ids->cpus[i].arch_id;
+int processor_id = i;
 if (apic_id < 255) {
-AcpiMadtProcessorApic *apic = acpi_data_push(table_data,
- sizeof *apic);
-
-apic->type = ACPI_APIC_PROCESSOR;
-apic->length = sizeof(*apic);
-apic->processor_id = i;
-apic->local_apic_id = apic_id;
-if (apic_ids->cpus[i].cpu != NULL) {
-apic->flags = cpu_to_le32(1);
-} else {
-apic->flags = cpu_to_le32(0);
-}
+adevc->madt_sub[ACPI_APIC_PROCESSOR](table_data, &processor_id);
 } else {
 AcpiMadtProcessorX2Apic *apic = acpi_data_push(table_data,
sizeof *apic);
diff --gi

[Qemu-devel] [RFC PATCH 3/9] hw/acpi: implement madt_sub[ACPI_APIC_LOCAL_X2APIC]

2019-05-12 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 hw/acpi/cpu.c| 14 ++
 hw/i386/acpi-build.c | 33 +
 2 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
index 35e57f9824..cb5970d659 100644
--- a/hw/acpi/cpu.c
+++ b/hw/acpi/cpu.c
@@ -529,18 +529,8 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
 assert(adevc->madt_sub[ACPI_APIC_PROCESSOR]);
 adevc->madt_sub[ACPI_APIC_PROCESSOR](madt_buf, &processor_id);
 } else {
-AcpiMadtProcessorX2Apic *apic = acpi_data_push(madt_buf,
-   sizeof *apic);
-
-apic->type = ACPI_APIC_LOCAL_X2APIC;
-apic->length = sizeof(*apic);
-apic->uid = cpu_to_le32(i);
-apic->x2apic_id = cpu_to_le32(apic_id);
-if (arch_ids->cpus[i].cpu != NULL) {
-apic->flags = cpu_to_le32(1);
-} else {
-apic->flags = cpu_to_le32(0);
-}
+adevc->madt_sub[ACPI_APIC_LOCAL_X2APIC](madt_buf,
+&processor_id);
 }
 switch (madt_buf->data[0]) {
 case ACPI_APIC_PROCESSOR: {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index dfcdcaeaf7..4b480efff9 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -320,8 +320,28 @@ static void pc_madt_apic_entry(GArray *entry, void *opaque)
 }
 }
 
+static void pc_madt_x2apic_entry(GArray *entry, void *opaque)
+{
+MachineState *machine = MACHINE(qdev_get_machine());
+MachineClass *mc = MACHINE_GET_CLASS(machine);
+const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
+int *processor_id = opaque;
+AcpiMadtProcessorX2Apic *apic = acpi_data_push(entry, sizeof *apic);
+
+apic->type = ACPI_APIC_LOCAL_X2APIC;
+apic->length = sizeof(*apic);
+apic->uid = cpu_to_le32(*processor_id);
+apic->x2apic_id = cpu_to_le32(apic_ids->cpus[*processor_id].arch_id);
+if (apic_ids->cpus[*processor_id].cpu != NULL) {
+apic->flags = cpu_to_le32(1);
+} else {
+apic->flags = cpu_to_le32(0);
+}
+}
+
 madt_operations i386_madt_sub = {
 [ACPI_APIC_PROCESSOR] = pc_madt_apic_entry,
+[ACPI_APIC_LOCAL_X2APIC] = pc_madt_x2apic_entry,
 };
 
 static void
@@ -349,18 +369,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 if (apic_id < 255) {
 adevc->madt_sub[ACPI_APIC_PROCESSOR](table_data, &processor_id);
 } else {
-AcpiMadtProcessorX2Apic *apic = acpi_data_push(table_data,
-   sizeof *apic);
-
-apic->type = ACPI_APIC_LOCAL_X2APIC;
-apic->length = sizeof(*apic);
-apic->uid = cpu_to_le32(i);
-apic->x2apic_id = cpu_to_le32(apic_id);
-if (apic_ids->cpus[i].cpu != NULL) {
-apic->flags = cpu_to_le32(1);
-} else {
-apic->flags = cpu_to_le32(0);
-}
+adevc->madt_sub[ACPI_APIC_LOCAL_X2APIC](table_data, &processor_id);
 }
 if (apic_id > 254) {
 x2apic_mode = true;
-- 
2.19.1




[Qemu-devel] [RFC PATCH 8/9] hw/acpi: factor build_madt with madt_input

2019-05-12 Thread Wei Yang
struct madt_input is introduced to represent one MADT sub-table.

With madt_sub[] providing the handler for each related sub-table,
build_madt can stay architecture agnostic.

Signed-off-by: Wei Yang 
---
 hw/i386/acpi-build.c | 103 +++
 1 file changed, 65 insertions(+), 38 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a7aeb215fc..74a34e297e 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -284,6 +284,54 @@ static void acpi_get_pci_holes(Range *hole, Range *hole64)
NULL));
 }
 
+struct madt_input {
+int sub_id;
+void *opaque;
+};
+
+int xrupt_override_idx[] = {0, 5, 9, 10, 11};
+static struct madt_input *
+acpi_get_madt_input(PCMachineState *pcms, int *processor_id)
+{
+MachineClass *mc = MACHINE_GET_CLASS(pcms);
+const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
+int i, sub_heads = 0;
+uint32_t apic_id;
+struct madt_input *input = NULL;
+
+sub_heads = apic_ids->len  /* PROCESSOR/X2APIC */
++ 1/* APIC_IO */
++ ARRAY_SIZE(xrupt_override_idx)   /* XRUPT_OVERRIDE */
++ 1/* NMI/X2APIC_NMI */
++ 1;   /* END MARK */
+input = g_new0(struct madt_input, sub_heads);
+for (i = 0, sub_heads = 0; i < apic_ids->len; i++, sub_heads++) {
+apic_id = apic_ids->cpus[i].arch_id;
+if (apic_id < 255) {
+input[sub_heads].sub_id = ACPI_APIC_PROCESSOR;
+} else {
+input[sub_heads].sub_id = ACPI_APIC_LOCAL_X2APIC;
+}
+input[sub_heads].opaque = processor_id;
+}
+input[sub_heads++].sub_id = ACPI_APIC_IO;
+for (i = 0; i < ARRAY_SIZE(xrupt_override_idx); i++, sub_heads++) {
+if (i == 0 && !pcms->apic_xrupt_override) {
+continue;
+}
+input[sub_heads].sub_id = ACPI_APIC_XRUPT_OVERRIDE;
+input[sub_heads].opaque = &xrupt_override_idx[i];
+}
+if (apic_id > 254) {
+input[sub_heads++].sub_id = ACPI_APIC_LOCAL_X2APIC_NMI;
+} else {
+input[sub_heads++].sub_id = ACPI_APIC_LOCAL_NMI;
+}
+input[sub_heads].sub_id = ACPI_APIC_RESERVED;
+
+return input;
+}
+
 static void acpi_align_size(GArray *blob, unsigned align)
 {
 /* Align size to multiple of given size. This reduces the chance
@@ -318,6 +366,7 @@ static void pc_madt_apic_entry(GArray *entry, void *opaque)
 } else {
 apic->flags = cpu_to_le32(0);
 }
+(*processor_id)++;
 }
 
 static void pc_madt_x2apic_entry(GArray *entry, void *opaque)
@@ -337,6 +386,7 @@ static void pc_madt_x2apic_entry(GArray *entry, void *opaque)
 } else {
 apic->flags = cpu_to_le32(0);
 }
+(*processor_id)++;
 }
 
 static void pc_madt_io_entry(GArray *entry, void *opaque)
@@ -405,54 +455,27 @@ madt_operations i386_madt_sub = {
 };
 
 static void
-build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
+build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms,
+   struct madt_input *input)
 {
-MachineClass *mc = MACHINE_GET_CLASS(pcms);
-const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(MACHINE(pcms));
 int madt_start = table_data->len;
 AcpiDeviceIfClass *adevc = ACPI_DEVICE_IF_GET_CLASS(pcms->acpi_dev);
 
-bool x2apic_mode = false;
-
 AcpiMultipleApicTable *madt;
-int i;
+int i, sub_id;
+void *opaque;
 
 madt = acpi_data_push(table_data, sizeof *madt);
 madt->local_apic_address = cpu_to_le32(APIC_DEFAULT_ADDRESS);
 madt->flags = cpu_to_le32(1);
 
-for (i = 0; i < apic_ids->len; i++) {
-uint32_t apic_id = apic_ids->cpus[i].arch_id;
-int processor_id = i;
-if (apic_id < 255) {
-adevc->madt_sub[ACPI_APIC_PROCESSOR](table_data, &processor_id);
-} else {
-adevc->madt_sub[ACPI_APIC_LOCAL_X2APIC](table_data, &processor_id);
+for (i = 0; ; i++) {
+sub_id = input[i].sub_id;
+if (sub_id == ACPI_APIC_RESERVED) {
+break;
 }
-if (apic_id > 254) {
-x2apic_mode = true;
-}
-}
-
-adevc->madt_sub[ACPI_APIC_IO](table_data, NULL);
-
-if (pcms->apic_xrupt_override) {
-i = 0;
-adevc->madt_sub[ACPI_APIC_XRUPT_OVERRIDE](table_data, &i);
-}
-for (i = 1; i < 16; i++) {
-#define ACPI_BUILD_PCI_IRQS ((1<<5) | (1<<9) | (1<<10) | (1<<11))
-if (!(ACPI_BUILD_PCI_IRQS & (1 << i))) {
-/* No need for a INT source override structure. */
-continue;
-}
-adevc->madt_sub[ACPI_APIC_XRUPT_OVERRIDE](table_data, &i);
-}
-
-if (x2apic_mode) {
-adevc->madt_sub[AC

[Qemu-devel] [RFC PATCH 5/9] hw/acpi: implement madt_sub[ACPI_APIC_XRUPT_OVERRIDE]

2019-05-12 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 hw/i386/acpi-build.c | 37 -
 1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a661fff51d..f48cc5b292 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -351,10 +351,31 @@ static void pc_madt_io_entry(GArray *entry, void *opaque)
 io_apic->interrupt = cpu_to_le32(0);
 }
 
+static void pc_madt_xrupt_override_entry(GArray *entry, void *opaque)
+{
+AcpiMadtIntsrcovr *intsrcovr;
+int *idx = opaque;
+
+intsrcovr = acpi_data_push(entry, sizeof *intsrcovr);
+intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
+intsrcovr->length = sizeof(*intsrcovr);
+intsrcovr->source = *idx;
+if (*idx == 0) {
+intsrcovr->gsi= cpu_to_le32(2);
+/* conforms to bus specifications */
+intsrcovr->flags = cpu_to_le16(0);
+} else {
+intsrcovr->gsi= cpu_to_le32(*idx);
+/* active high, level triggered */
+intsrcovr->flags = cpu_to_le16(0xd);
+}
+}
+
 madt_operations i386_madt_sub = {
 [ACPI_APIC_PROCESSOR] = pc_madt_apic_entry,
 [ACPI_APIC_LOCAL_X2APIC] = pc_madt_x2apic_entry,
 [ACPI_APIC_IO] = pc_madt_io_entry,
+[ACPI_APIC_XRUPT_OVERRIDE] = pc_madt_xrupt_override_entry,
 };
 
 static void
@@ -368,7 +389,6 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 bool x2apic_mode = false;
 
 AcpiMultipleApicTable *madt;
-AcpiMadtIntsrcovr *intsrcovr;
 int i;
 
 madt = acpi_data_push(table_data, sizeof *madt);
@@ -391,12 +411,8 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 adevc->madt_sub[ACPI_APIC_IO](table_data, NULL);
 
 if (pcms->apic_xrupt_override) {
-intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
-intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
-intsrcovr->length = sizeof(*intsrcovr);
-intsrcovr->source = 0;
-intsrcovr->gsi= cpu_to_le32(2);
-intsrcovr->flags  = cpu_to_le16(0); /* conforms to bus specifications */
+i = 0;
+adevc->madt_sub[ACPI_APIC_XRUPT_OVERRIDE](table_data, &i);
 }
 for (i = 1; i < 16; i++) {
 #define ACPI_BUILD_PCI_IRQS ((1<<5) | (1<<9) | (1<<10) | (1<<11))
@@ -404,12 +420,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 /* No need for a INT source override structure. */
 continue;
 }
-intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
-intsrcovr->type   = ACPI_APIC_XRUPT_OVERRIDE;
-intsrcovr->length = sizeof(*intsrcovr);
-intsrcovr->source = i;
-intsrcovr->gsi= cpu_to_le32(i);
-intsrcovr->flags  = cpu_to_le16(0xd); /* active high, level triggered */
+adevc->madt_sub[ACPI_APIC_XRUPT_OVERRIDE](table_data, &i);
 }
 
 if (x2apic_mode) {
-- 
2.19.1




[Qemu-devel] [RFC PATCH 9/9] hw/acpi: implement madt_main to manipulate main madt table

2019-05-12 Thread Wei Yang
Different architectures fill in the main MADT table differently. Add a
helper function so each architecture can provide its own implementation.

Signed-off-by: Wei Yang 
---
 hw/acpi/piix4.c  |  1 +
 hw/i386/acpi-build.c | 13 +++--
 hw/isa/lpc_ich9.c|  1 +
 include/hw/acpi/acpi_dev_interface.h |  1 +
 include/hw/i386/pc.h |  1 +
 5 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index f4336b9238..d7d097daee 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -723,6 +723,7 @@ static void piix4_pm_class_init(ObjectClass *klass, void *data)
 adevc->ospm_status = piix4_ospm_status;
 adevc->send_event = piix4_send_gpe;
 adevc->madt_sub = i386_madt_sub;
+adevc->madt_main = i386_madt_main;
 }
 
 static const TypeInfo piix4_pm_info = {
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 74a34e297e..e73e9fddee 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -454,6 +454,14 @@ madt_operations i386_madt_sub = {
 [ACPI_APIC_LOCAL_NMI] = pc_madt_nmi_entry,
 };
 
+void i386_madt_main(GArray *entry, void *opaque)
+{
+AcpiMultipleApicTable *madt = opaque;
+
+madt->local_apic_address = cpu_to_le32(APIC_DEFAULT_ADDRESS);
+madt->flags = cpu_to_le32(1);
+}
+
 static void
 build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms,
struct madt_input *input)
@@ -466,8 +474,9 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms,
 void *opaque;
 
 madt = acpi_data_push(table_data, sizeof *madt);
-madt->local_apic_address = cpu_to_le32(APIC_DEFAULT_ADDRESS);
-madt->flags = cpu_to_le32(1);
+if (adevc->madt_main) {
+adevc->madt_main(table_data, madt);
+}
 
 for (i = 0; ; i++) {
 sub_id = input[i].sub_id;
diff --git a/hw/isa/lpc_ich9.c b/hw/isa/lpc_ich9.c
index efb0fd8e94..21afd1a12d 100644
--- a/hw/isa/lpc_ich9.c
+++ b/hw/isa/lpc_ich9.c
@@ -812,6 +812,7 @@ static void ich9_lpc_class_init(ObjectClass *klass, void *data)
 adevc->ospm_status = ich9_pm_ospm_status;
 adevc->send_event = ich9_send_gpe;
 adevc->madt_sub = i386_madt_sub;
+adevc->madt_main = i386_madt_main;
 }
 
 static const TypeInfo ich9_lpc_info = {
diff --git a/include/hw/acpi/acpi_dev_interface.h b/include/hw/acpi/acpi_dev_interface.h
index 3a3a12d543..d3b216544d 100644
--- a/include/hw/acpi/acpi_dev_interface.h
+++ b/include/hw/acpi/acpi_dev_interface.h
@@ -52,6 +52,7 @@ typedef struct AcpiDeviceIfClass {
 /*  */
 void (*ospm_status)(AcpiDeviceIf *adev, ACPIOSTInfoList ***list);
 void (*send_event)(AcpiDeviceIf *adev, AcpiEventStatusBits ev);
+madt_operation madt_main;
 madt_operation *madt_sub;
 } AcpiDeviceIfClass;
 #endif
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index db4ec693d3..6b7dd060ac 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -282,6 +282,7 @@ void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
 void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
const CPUArchIdList *apic_ids, GArray *entry);
 extern madt_operations i386_madt_sub;
+void i386_madt_main(GArray *entry, void *opaque);
 
 /* e820 types */
 #define E820_RAM1
-- 
2.19.1




[Qemu-devel] [RFC PATCH 6/9] hw/acpi: implement madt_sub[ACPI_APIC_LOCAL_X2APIC_NMI]

2019-05-12 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 hw/i386/acpi-build.c | 22 ++
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index f48cc5b292..bec0bed53e 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -371,11 +371,24 @@ static void pc_madt_xrupt_override_entry(GArray *entry, void *opaque)
 }
 }
 
+static void pc_madt_x2apic_nmi_entry(GArray *entry, void *opaque)
+{
+AcpiMadtLocalX2ApicNmi *local_nmi;
+
+local_nmi = acpi_data_push(entry, sizeof *local_nmi);
+local_nmi->type   = ACPI_APIC_LOCAL_X2APIC_NMI;
+local_nmi->length = sizeof(*local_nmi);
+local_nmi->uid= 0xFFFFFFFF; /* all processors */
+local_nmi->flags  = cpu_to_le16(0);
+local_nmi->lint   = 1; /* ACPI_LINT1 */
+}
+
 madt_operations i386_madt_sub = {
 [ACPI_APIC_PROCESSOR] = pc_madt_apic_entry,
 [ACPI_APIC_LOCAL_X2APIC] = pc_madt_x2apic_entry,
 [ACPI_APIC_IO] = pc_madt_io_entry,
 [ACPI_APIC_XRUPT_OVERRIDE] = pc_madt_xrupt_override_entry,
+[ACPI_APIC_LOCAL_X2APIC_NMI] = pc_madt_x2apic_nmi_entry,
 };
 
 static void
@@ -424,14 +437,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 }
 
 if (x2apic_mode) {
-AcpiMadtLocalX2ApicNmi *local_nmi;
-
-local_nmi = acpi_data_push(table_data, sizeof *local_nmi);
-local_nmi->type   = ACPI_APIC_LOCAL_X2APIC_NMI;
-local_nmi->length = sizeof(*local_nmi);
-local_nmi->uid= 0xFFFFFFFF; /* all processors */
-local_nmi->flags  = cpu_to_le16(0);
-local_nmi->lint   = 1; /* ACPI_LINT1 */
+adevc->madt_sub[ACPI_APIC_LOCAL_X2APIC_NMI](table_data, NULL);
 } else {
 AcpiMadtLocalNmi *local_nmi;
 
-- 
2.19.1




[Qemu-devel] [RFC PATCH 4/9] hw/acpi: implement madt_sub[ACPI_APIC_IO]

2019-05-12 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 hw/i386/acpi-build.c | 21 ++---
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index 4b480efff9..a661fff51d 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -339,9 +339,22 @@ static void pc_madt_x2apic_entry(GArray *entry, void *opaque)
 }
 }
 
+static void pc_madt_io_entry(GArray *entry, void *opaque)
+{
+AcpiMadtIoApic *io_apic;
+
+io_apic = acpi_data_push(entry, sizeof *io_apic);
+io_apic->type = ACPI_APIC_IO;
+io_apic->length = sizeof(*io_apic);
+io_apic->io_apic_id = ACPI_BUILD_IOAPIC_ID;
+io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
+io_apic->interrupt = cpu_to_le32(0);
+}
+
 madt_operations i386_madt_sub = {
 [ACPI_APIC_PROCESSOR] = pc_madt_apic_entry,
 [ACPI_APIC_LOCAL_X2APIC] = pc_madt_x2apic_entry,
+[ACPI_APIC_IO] = pc_madt_io_entry,
 };
 
 static void
@@ -355,7 +368,6 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 bool x2apic_mode = false;
 
 AcpiMultipleApicTable *madt;
-AcpiMadtIoApic *io_apic;
 AcpiMadtIntsrcovr *intsrcovr;
 int i;
 
@@ -376,12 +388,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 }
 }
 
-io_apic = acpi_data_push(table_data, sizeof *io_apic);
-io_apic->type = ACPI_APIC_IO;
-io_apic->length = sizeof(*io_apic);
-io_apic->io_apic_id = ACPI_BUILD_IOAPIC_ID;
-io_apic->address = cpu_to_le32(IO_APIC_DEFAULT_ADDRESS);
-io_apic->interrupt = cpu_to_le32(0);
+adevc->madt_sub[ACPI_APIC_IO](table_data, NULL);
 
 if (pcms->apic_xrupt_override) {
 intsrcovr = acpi_data_push(table_data, sizeof *intsrcovr);
-- 
2.19.1




[Qemu-devel] [RFC PATCH 7/9] hw/acpi: implement madt_sub[ACPI_APIC_LOCAL_NMI]

2019-05-12 Thread Wei Yang
Signed-off-by: Wei Yang 
---
 hw/i386/acpi-build.c | 22 ++
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index bec0bed53e..a7aeb215fc 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -383,12 +383,25 @@ static void pc_madt_x2apic_nmi_entry(GArray *entry, void *opaque)
 local_nmi->lint   = 1; /* ACPI_LINT1 */
 }
 
+static void pc_madt_nmi_entry(GArray *entry, void *opaque)
+{
+AcpiMadtLocalNmi *local_nmi;
+
+local_nmi = acpi_data_push(entry, sizeof *local_nmi);
+local_nmi->type = ACPI_APIC_LOCAL_NMI;
+local_nmi->length   = sizeof(*local_nmi);
+local_nmi->processor_id = 0xff; /* all processors */
+local_nmi->flags= cpu_to_le16(0);
+local_nmi->lint = 1; /* ACPI_LINT1 */
+}
+
 madt_operations i386_madt_sub = {
 [ACPI_APIC_PROCESSOR] = pc_madt_apic_entry,
 [ACPI_APIC_LOCAL_X2APIC] = pc_madt_x2apic_entry,
 [ACPI_APIC_IO] = pc_madt_io_entry,
 [ACPI_APIC_XRUPT_OVERRIDE] = pc_madt_xrupt_override_entry,
 [ACPI_APIC_LOCAL_X2APIC_NMI] = pc_madt_x2apic_nmi_entry,
+[ACPI_APIC_LOCAL_NMI] = pc_madt_nmi_entry,
 };
 
 static void
@@ -439,14 +452,7 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
 if (x2apic_mode) {
 adevc->madt_sub[ACPI_APIC_LOCAL_X2APIC_NMI](table_data, NULL);
 } else {
-AcpiMadtLocalNmi *local_nmi;
-
-local_nmi = acpi_data_push(table_data, sizeof *local_nmi);
-local_nmi->type = ACPI_APIC_LOCAL_NMI;
-local_nmi->length   = sizeof(*local_nmi);
-local_nmi->processor_id = 0xff; /* all processors */
-local_nmi->flags= cpu_to_le16(0);
-local_nmi->lint = 1; /* ACPI_LINT1 */
+adevc->madt_sub[ACPI_APIC_LOCAL_NMI](table_data, NULL);
 }
 
 build_header(linker, table_data,
-- 
2.19.1




[Qemu-devel] [PATCH 2/6] migration/multifd: notify channels_ready when send thread starts

2019-06-06 Thread Wei Yang
multifd_send_state->channels_ready is initialized to 0, so the main
thread should learn that a send thread is ready as soon as that thread
starts running.

The current implementation works because ram_save_setup() calls
multifd_send_sync_main(), which wakes up the send threads and posts
channels_ready. But this behavior introduces unpredictable situations
and disturbs the semaphore's value.

This is a preparation patch for a new send-thread synchronization
mechanism that avoids posting channels_ready in that case. Post
channels_ready when each send thread starts running instead.
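The idea can be sketched with plain POSIX threads and semaphores (the names and structure here are illustrative, not the real multifd code): each worker posts the shared channels_ready semaphore once as it starts, and the main thread waits once per channel before relying on them:

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#define NUM_CHANNELS 4

static sem_t channels_ready;            /* counts send threads that are up */

static void *send_thread(void *opaque)
{
    (void)opaque;
    /* ...per-thread setup (e.g. the initial packet) would go here... */
    sem_post(&channels_ready);          /* tell the main thread we are ready */
    /* ...the main send loop would follow... */
    return NULL;
}

/* Main thread: block until every channel has come up. */
static void wait_all_channels_ready(void)
{
    for (int i = 0; i < NUM_CHANNELS; i++) {
        sem_wait(&channels_ready);
    }
}
```

With this, readiness is signaled exactly once per thread, independently of when the first sync request arrives.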

Signed-off-by: Wei Yang 
---
 migration/ram.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index a4e7587648..f9e53ac413 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1093,6 +1093,8 @@ static void *multifd_send_thread(void *opaque)
 }
 /* initial packet */
 p->num_packets = 1;
+/* let main thread know we are ready */
+qemu_sem_post(&multifd_send_state->channels_ready);
 
 while (true) {
 qemu_sem_wait(&p->sem);
-- 
2.19.1




[Qemu-devel] [PATCH 4/6] migration/multifd: used must not be 0 for a pending job

2019-06-06 Thread Wei Yang
Now that the thread synchronization request is handled in a separate
case, a send thread only sees a pending job when there are used pages.
Assert this instead of checking it.

Signed-off-by: Wei Yang 
---
 migration/ram.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 9982930392..3e48795608 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1118,12 +1118,11 @@ static void *multifd_send_thread(void *opaque)
 break;
 }
 
-if (used) {
-ret = qio_channel_writev_all(p->c, p->pages->iov,
+assert(used);
+ret = qio_channel_writev_all(p->c, p->pages->iov,
  used, &local_err);
-if (ret != 0) {
-break;
-}
+if (ret != 0) {
+break;
 }
 
 qemu_mutex_lock(&p->mutex);
-- 
2.19.1




[Qemu-devel] [PATCH 6/6] migration/multifd: there is no spurious wakeup now

2019-06-06 Thread Wei Yang
With the new synchronization mechanism there are no spurious wakeups
left, so drop the handling for them.

Signed-off-by: Wei Yang 
---
 migration/ram.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 831b15833b..2490631d52 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1153,9 +1153,6 @@ static void *multifd_send_thread(void *opaque)
 } else if (p->quit) {
 qemu_mutex_unlock(&p->mutex);
 break;
-} else {
-qemu_mutex_unlock(&p->mutex);
-/* sometimes there are spurious wakeups */
 }
 }
 
-- 
2.19.1




[Qemu-devel] [PATCH 3/6] migration/multifd: use sync field to synchronize send threads

2019-06-06 Thread Wei Yang
Add a field to MultiFDSendParams to indicate that a request to
synchronize the send threads is pending.

With this, the send thread just posts sem_sync on a synchronization
request, and channels_ready no longer *overflows*.

Signed-off-by: Wei Yang 
---
 migration/ram.c | 30 --
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index f9e53ac413..9982930392 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -640,6 +640,8 @@ typedef struct {
 QemuMutex mutex;
 /* is this channel thread running */
 bool running;
+/* should sync this channel */
+bool sync;
 /* should this thread finish */
 bool quit;
 /* thread has work to do */
@@ -1065,8 +1067,7 @@ static void multifd_send_sync_main(void)
 qemu_mutex_lock(&p->mutex);
 
 p->packet_num = multifd_send_state->packet_num++;
-p->flags |= MULTIFD_FLAG_SYNC;
-p->pending_job++;
+p->sync = true;
 qemu_mutex_unlock(&p->mutex);
 qemu_sem_post(&p->sem);
 }
@@ -1129,10 +1130,27 @@ static void *multifd_send_thread(void *opaque)
 p->pending_job--;
 qemu_mutex_unlock(&p->mutex);
 
-if (flags & MULTIFD_FLAG_SYNC) {
-qemu_sem_post(&multifd_send_state->sem_sync);
-}
 qemu_sem_post(&multifd_send_state->channels_ready);
+} else if (p->sync) {
+uint64_t packet_num = p->packet_num;
+uint32_t flags = p->flags;
+assert(!p->pages->used);
+
+p->flags |= MULTIFD_FLAG_SYNC;
+multifd_send_fill_packet(p);
+p->sync = false;
+qemu_mutex_unlock(&p->mutex);
+
+trace_multifd_send(p->id, packet_num, 0, flags | MULTIFD_FLAG_SYNC,
+   p->next_packet_size);
+
+ret = qio_channel_write_all(p->c, (void *)p->packet,
+p->packet_len, &local_err);
+if (ret != 0) {
+break;
+}
+
+qemu_sem_post(&multifd_send_state->sem_sync);
 } else if (p->quit) {
 qemu_mutex_unlock(&p->mutex);
 break;
@@ -1196,7 +1214,7 @@ int multifd_save_setup(void)
 
 qemu_mutex_init(&p->mutex);
 qemu_sem_init(&p->sem, 0);
-p->quit = false;
+p->sync = p->quit = false;
 p->pending_job = 0;
 p->id = i;
 p->pages = multifd_pages_init(page_count);
-- 
2.19.1




[Qemu-devel] [PATCH 0/6] multifd: a new mechanism for send thread sync

2019-06-06 Thread Wei Yang
The current send threads work, but the sync mechanism has some problems:

  * there are spurious wakeups
  * channels_ready can be posted more times than there are real channels,
    i.e. it can *overflow*

The reason is:

  * if MULTIFD_FLAG_SYNC is set while a send thread is running, there is
    one extra spurious wakeup
  * if MULTIFD_FLAG_SYNC is set while a send thread is not running,
    channels_ready is posted one extra time

To solve this, a new mechanism is introduced to synchronize the send
threads. The idea is simple: a new field, *sync*, indicates that a
synchronization is required.
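The per-wakeup decision can be sketched as a small state machine (a simplified model, not the actual MultiFDSendParams or its locking): each wakeup handles exactly one of pending_job, sync, or quit, so every semaphore post is matched by exactly one action and nothing overflows:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified per-channel state, mirroring MultiFDSendParams. */
typedef struct {
    bool pending_job; /* pages queued for sending */
    bool sync;        /* main thread requested synchronization */
    bool quit;        /* thread should exit */
} Channel;

typedef enum { ACT_SEND_PAGES, ACT_SEND_SYNC, ACT_QUIT, ACT_NONE } Action;

/* One iteration of the send loop: exactly one action per wakeup, so the
 * sem / channels_ready / sem_sync accounting stays balanced. */
static Action send_thread_step(Channel *p)
{
    if (p->pending_job) {
        p->pending_job = false;
        return ACT_SEND_PAGES;   /* write pages, then post channels_ready */
    } else if (p->sync) {
        p->sync = false;
        return ACT_SEND_SYNC;    /* write SYNC packet, then post sem_sync */
    } else if (p->quit) {
        return ACT_QUIT;
    }
    return ACT_NONE;             /* spurious wakeup (none under new scheme) */
}
```

Separating the sync request from pending_job is what lets the thread post sem_sync without also posting channels_ready.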

Wei Yang (6):
  migration/multifd: move MultiFDSendParams handling into
multifd_send_fill_packet()
  migration/multifd: notify channels_ready when send thread starts
  migration/multifd: use sync field to synchronize send threads
  migration/multifd: used must not be 0 for a pending job
  migration/multifd: use boolean for pending_job is enough
  migration/multifd: there is no spurious wakeup now

 migration/ram.c | 62 +++--
 1 file changed, 39 insertions(+), 23 deletions(-)

-- 
2.19.1




[Qemu-devel] [PATCH 5/6] migration/multifd: use boolean for pending_job is enough

2019-06-06 Thread Wei Yang
Now that the synchronization request is handled in a separate case,
each send thread can have at most one pending job.

A boolean is therefore enough to represent it.

Signed-off-by: Wei Yang 
---
 migration/ram.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 3e48795608..831b15833b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -645,7 +645,7 @@ typedef struct {
 /* should this thread finish */
 bool quit;
 /* thread has work to do */
-int pending_job;
+bool pending_job;
 /* array of pages to sent */
 MultiFDPages_t *pages;
 /* packet allocated len */
@@ -942,7 +942,7 @@ static void multifd_send_pages(void)
 
 qemu_mutex_lock(&p->mutex);
 if (!p->pending_job) {
-p->pending_job++;
+p->pending_job = true;
 next_channel = (i + 1) % migrate_multifd_channels();
 break;
 }
@@ -1126,7 +1126,7 @@ static void *multifd_send_thread(void *opaque)
 }
 
 qemu_mutex_lock(&p->mutex);
-p->pending_job--;
+p->pending_job = false;
 qemu_mutex_unlock(&p->mutex);
 
 qemu_sem_post(&multifd_send_state->channels_ready);
@@ -1213,8 +1213,7 @@ int multifd_save_setup(void)
 
 qemu_mutex_init(&p->mutex);
 qemu_sem_init(&p->sem, 0);
-p->sync = p->quit = false;
-p->pending_job = 0;
+p->sync = p->quit = p->pending_job = false;
 p->id = i;
 p->pages = multifd_pages_init(page_count);
 p->packet_len = sizeof(MultiFDPacket_t)
-- 
2.19.1




[Qemu-devel] [PATCH 1/6] migration/multifd: move MultiFDSendParams handling into multifd_send_fill_packet()

2019-06-06 Thread Wei Yang
Currently there is only one user of multifd_send_fill_packet(), so
enlarge its responsibility to also adjust MultiFDSendParams.
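The shape of the change can be sketched as follows (with simplified fields, not the real MultiFDSendParams): the bookkeeping the caller previously repeated after filling the packet moves into the fill function itself:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for MultiFDSendParams: just the bookkeeping fields. */
typedef struct {
    uint32_t flags;
    uint64_t num_packets;
    uint64_t num_pages;
    uint32_t used;          /* pages queued for this packet */
} Params;

/* After the patch, filling the packet also resets/advances the
 * per-channel state, so callers no longer repeat this bookkeeping. */
static void fill_packet(Params *p)
{
    /* ...serialize p->used page offsets into the packet here... */
    p->flags = 0;
    p->num_packets++;
    p->num_pages += p->used;
    p->used = 0;
}
```

With a single call site today this is a pure move, but it keeps future callers from forgetting one of the updates.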

Signed-off-by: Wei Yang 
---
 migration/ram.c | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index bd356764ff..a4e7587648 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -787,9 +787,11 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
 static void multifd_send_fill_packet(MultiFDSendParams *p)
 {
 MultiFDPacket_t *packet = p->packet;
+MultiFDPages_t *pages = p->pages;
 uint32_t page_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
 int i;
 
+p->next_packet_size = pages->used * qemu_target_page_size();
 packet->magic = cpu_to_be32(MULTIFD_MAGIC);
 packet->version = cpu_to_be32(MULTIFD_VERSION);
 packet->flags = cpu_to_be32(p->flags);
@@ -805,6 +807,10 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
 for (i = 0; i < p->pages->used; i++) {
 packet->offset[i] = cpu_to_be64(p->pages->offset[i]);
 }
+p->flags = 0;
+p->num_packets++;
+p->num_pages += pages->used;
+p->pages->used = 0;
 }
 
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
@@ -1097,12 +1103,7 @@ static void *multifd_send_thread(void *opaque)
 uint64_t packet_num = p->packet_num;
 uint32_t flags = p->flags;
 
-p->next_packet_size = used * qemu_target_page_size();
 multifd_send_fill_packet(p);
-p->flags = 0;
-p->num_packets++;
-p->num_pages += used;
-p->pages->used = 0;
 qemu_mutex_unlock(&p->mutex);
 
 trace_multifd_send(p->id, packet_num, used, flags,
-- 
2.19.1




Re: [Qemu-devel] [PATCH 1/2] migration/xbzrle: update cache and current_data in one place

2019-06-09 Thread Wei Yang
On Fri, Jun 07, 2019 at 07:57:34PM +0100, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> When we are not in the last_stage, we need to update the cache if page
>> is not the same.
>> 
>> Currently this procedure is scattered in two places and mixed with
>> encoding status check.
>> 
>> This patch extract this general step out to make the code a little bit
>> easy to read.
>> 
>> Signed-off-by: Wei Yang 
>
>This function is actually quite subtle, and I think your change will
>work, but it has changed the behaviour slightly.
>
>When we enter the function the *current_data is pointing at actual guest
>RAM and is changing as we're running.
>It's critical that the contents of the cache match what was actually
>sent, so that in the next cycle the correct delta is generated;
>thus the reason for the two memcpy's is actually different.
>
>> ---
>>  migration/ram.c | 19 +--
>>  1 file changed, 9 insertions(+), 10 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index e9b40d636d..878cd8de7a 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1594,25 +1594,24 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>>  encoded_len = xbzrle_encode_buffer(prev_cached_page, XBZRLE.current_buf,
>> TARGET_PAGE_SIZE, XBZRLE.encoded_buf,
>> TARGET_PAGE_SIZE);
>> +
>> +/*
>> + * we need to update the data in the cache, in order to get the same data
>> + */
>> +if (!last_stage && encoded_len != 0) {
>> +memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
>> +*current_data = prev_cached_page;
>> +}
>> +
>>  if (encoded_len == 0) {
>>  trace_save_xbzrle_page_skipping();
>>  return 0;
>>  } else if (encoded_len == -1) {
>>  trace_save_xbzrle_page_overflow();
>>  xbzrle_counters.overflow++;
>> -/* update data in the cache */
>> -if (!last_stage) {
>> -memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
>> -*current_data = prev_cached_page;
>> -}
>>  return -1;
>
>In this case, we've not managed to compress, so we're going to have to
>transmit the whole page; but remember the guest is still writing. So we
>update the cache, and then update *current_data to point to the cache
>so that the caller sends from the cache directly rather than the guest
>RAM, this ensures that the thing that's sent matches the cache contents.
>
>However, note the memcpy is from *current_data, not XBZRLE.current_buf,
>so it might be slightly newer  - this is the first change you have made;
>you might be sending data that's slightly older; but it's safe because
>the data sent does match the cache contents.
>
>>  }
>>  
>> -/* we need to update the data in the cache, in order to get the same data */
>> -if (!last_stage) {
>> -memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
>> -}
>> -
>
>This memcpy is slightly different.
>Here we've managed to do the compress; so now we update the cache based
>on what was compressed (current_buf).  *current_data isn't used by the
>caller in this case because it's actually sending the compressed data.
>So it's safe for you to update it.

Yes, you are right. My change alters the behavior a little. To be
specific, when we don't manage to compress the content, the content we
send may be slightly *older*.

>
>So I think we need to add comments like that, how about:
>
>   /*
>* Update the cache contents, so that it corresponds to the data
>* sent, in all cases except where we skip the page.
>*/
>> +if (!last_stage && encoded_len != 0) {
>> +memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
>   /*
>* In the case where we couldn't compress, ensure that the caller
>        * sends the data from the cache, since the guest might have
>* changed the RAM since we copied it.
>*/
>
>> +*current_data = prev_cached_page;
>> +}
>>  /* Send XBZRLE based compressed page */
>>  bytes_xbzrle = save_page_header(rs, rs->f, block,
>>  offset | RAM_SAVE_FLAG_XBZRLE);

Yep, I agree with this comment.

Let me add this in v2.

Thanks :-)

>
>Dave
>
>> -- 
>> 2.19.1
>> 
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/2] migration/xbzrle: cleanup the handling cache miss condition

2019-06-09 Thread Wei Yang
On Fri, Jun 07, 2019 at 08:01:14PM +0100, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> For cache miss condition not in last_stage, we need to insert data into
>> cache. When this step succeed, current_data should be updated. While no
>> matter these checks pass or not, -1 is returned.
>> 
>> Based on this, the logic in cache miss handling could be simplified a
>> little.
>> 
>> Signed-off-by: Wei Yang 
>> ---
>>  migration/ram.c | 17 -
>>  1 file changed, 8 insertions(+), 9 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 878cd8de7a..67ba075cc4 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1572,15 +1572,14 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>>  if (!cache_is_cached(XBZRLE.cache, current_addr,
>>   ram_counters.dirty_sync_count)) {
>>  xbzrle_counters.cache_miss++;
>> -if (!last_stage) {
>> -if (cache_insert(XBZRLE.cache, current_addr, *current_data,
>> - ram_counters.dirty_sync_count) == -1) {
>> -return -1;
>> -} else {
>> -/* update *current_data when the page has been
>> -   inserted into cache */
>> -*current_data = get_cached_data(XBZRLE.cache, current_addr);
>> -}
>> +if (!last_stage &&
>> +!cache_insert(XBZRLE.cache, current_addr, *current_data,
>> +  ram_counters.dirty_sync_count)) {
>> +/*
>> + * update *current_data when the page has been inserted into
>> + * cache
>> + */
>> +*current_data = get_cached_data(XBZRLE.cache, current_addr);
>
>No this change doesn't gain anything and I find the original easier to
>read.
>
>This function is really subtle, every time I do anything with it I have
>to think really hard about it, so ease of reading is more important.
>

Yep, I agree ease of reading is more important.

Since the original version looks better, I will keep current code. :-)

>Dave
>
>>  }
>>  return -1;
>>  }
>> -- 
>> 2.19.1
>> 
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH v2] migration/xbzrle: update cache and current_data in one place

2019-06-09 Thread Wei Yang
When we are not in the last_stage, we need to update the cache if the
page is not the same.

Currently this procedure is scattered across two places and mixed with
the encoding status check.

This patch extracts this general step to make the code a little easier
to read.

Signed-off-by: Wei Yang 

---
v2: give more comment on the behavior
---
 migration/ram.c | 25 +++--
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index e9b40d636d..17cc9b2b44 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1594,25 +1594,30 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
 encoded_len = xbzrle_encode_buffer(prev_cached_page, XBZRLE.current_buf,
TARGET_PAGE_SIZE, XBZRLE.encoded_buf,
TARGET_PAGE_SIZE);
+
+/*
+ * Update the cache contents, so that it corresponds to the data
+ * sent, in all cases except where we skip the page.
+ */
+if (!last_stage && encoded_len != 0) {
+memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
+/*
+ * In the case where we couldn't compress, ensure that the caller
+ * sends the data from the cache, since the guest might have
+ * changed the RAM since we copied it.
+ */
+*current_data = prev_cached_page;
+}
+
 if (encoded_len == 0) {
 trace_save_xbzrle_page_skipping();
 return 0;
 } else if (encoded_len == -1) {
 trace_save_xbzrle_page_overflow();
 xbzrle_counters.overflow++;
-/* update data in the cache */
-if (!last_stage) {
-memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
-*current_data = prev_cached_page;
-}
 return -1;
 }
 
-/* we need to update the data in the cache, in order to get the same data */
-if (!last_stage) {
-memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
-}
-
 /* Send XBZRLE based compressed page */
 bytes_xbzrle = save_page_header(rs, rs->f, block,
 offset | RAM_SAVE_FLAG_XBZRLE);
-- 
2.19.1




[Qemu-devel] [PATCH v7] hw/acpi: extract acpi_add_rom_blob()

2019-06-09 Thread Wei Yang
arm and i386 have almost the same function acpi_add_rom_blob(); they
differ only in the FWCfgCallback they pass.

This patch moves acpi_add_rom_blob() to utils.c and makes the callers
pass their FWCfgCallback to it.
Signed-off-by: Wei Yang 
Reviewed-by: Igor Mammedov 

v7:
  * rebase on top of current master because of conflict
v6:
  * change author from Igor to Michael
v5:
  * remove unnecessary header glib/gprintf.h
  * rearrange include header to make it more suitable
v4:
  * extract -> moves
  * adjust comment in source to make checkpatch happy
v3:
  * put acpi_add_rom_blob() to hw/acpi/utils.c
v2:
  * remove unused header in original source file
---
 hw/acpi/Makefile.objs|  2 +-
 hw/acpi/utils.c  | 35 +++
 hw/arm/virt-acpi-build.c | 26 ++
 hw/i386/acpi-build.c | 26 ++
 include/hw/acpi/utils.h  |  9 +
 5 files changed, 65 insertions(+), 33 deletions(-)
 create mode 100644 hw/acpi/utils.c
 create mode 100644 include/hw/acpi/utils.h

diff --git a/hw/acpi/Makefile.objs b/hw/acpi/Makefile.objs
index 661a9b8c2f..9bb2101e3b 100644
--- a/hw/acpi/Makefile.objs
+++ b/hw/acpi/Makefile.objs
@@ -10,7 +10,7 @@ common-obj-$(call lnot,$(CONFIG_ACPI_X86)) += acpi-stub.o
 
 common-obj-y += acpi_interface.o
 common-obj-y += bios-linker-loader.o
-common-obj-y += aml-build.o
+common-obj-y += aml-build.o utils.o
 common-obj-$(CONFIG_ACPI_PCI) += pci.o
 common-obj-$(CONFIG_TPM) += tpm.o
 
diff --git a/hw/acpi/utils.c b/hw/acpi/utils.c
new file mode 100644
index 00..a134a4d554
--- /dev/null
+++ b/hw/acpi/utils.c
@@ -0,0 +1,35 @@
+/*
+ * Utilities for generating ACPI tables and passing them to Guests
+ *
+ * Copyright (C) 2019 Intel Corporation
+ * Copyright (C) 2019 Red Hat Inc
+ *
+ * Author: Wei Yang 
+ * Author: Michael S. Tsirkin 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/acpi/aml-build.h"
+#include "hw/acpi/utils.h"
+#include "hw/loader.h"
+
+MemoryRegion *acpi_add_rom_blob(FWCfgCallback update, void *opaque,
+GArray *blob, const char *name,
+uint64_t max_size)
+{
+return rom_add_blob(name, blob->data, acpi_data_len(blob), max_size, -1,
+name, update, opaque, NULL, true);
+}
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index 4a64f9985c..e3353de9e4 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -37,9 +37,9 @@
 #include "hw/acpi/acpi.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/acpi/bios-linker-loader.h"
-#include "hw/loader.h"
 #include "hw/hw.h"
 #include "hw/acpi/aml-build.h"
+#include "hw/acpi/utils.h"
 #include "hw/acpi/pci.h"
 #include "hw/pci/pcie_host.h"
 #include "hw/pci/pci.h"
@@ -866,14 +866,6 @@ static void virt_acpi_build_reset(void *build_opaque)
 build_state->patched = false;
 }
 
-static MemoryRegion *acpi_add_rom_blob(AcpiBuildState *build_state,
-   GArray *blob, const char *name,
-   uint64_t max_size)
-{
-return rom_add_blob(name, blob->data, acpi_data_len(blob), max_size, -1,
-name, virt_acpi_build_update, build_state, NULL, true);
-}
-
 static const VMStateDescription vmstate_virt_acpi_build = {
 .name = "virt_acpi_build",
 .version_id = 1,
@@ -905,20 +897,22 @@ void virt_acpi_setup(VirtMachineState *vms)
 virt_acpi_build(vms, &tables);
 
 /* Now expose it all to Guest */
-build_state->table_mr = acpi_add_rom_blob(build_state, tables.table_data,
-   ACPI_BUILD_TABLE_FILE,
-   ACPI_BUILD_TABLE_MAX_SIZE);
+build_state->table_mr = acpi_add_rom_blob(virt_acpi_build_update,
+  build_state, tables.table_data,
+  ACPI_BUILD_TABLE_FILE,
+  ACPI_BUILD_TABLE_MAX_SIZE);
 assert(build_state->table_mr != NULL);
 
 build_state->linker_mr =
-acpi_add_rom_blob(build_state, tables.linker->cmd_blob,
-

[Qemu-devel] [PATCH 0/2] migration/xbzrle: make xbzrle_encode_buffer little easier

2019-06-09 Thread Wei Yang
Currently xbzrle_encode_buffer() does everything in one big loop, which
makes the logic a little difficult to follow.

We can restructure the logic as:

  * get the length of a run
  * encode it

While doing this, I also found that the encoding and decoding functions
do some unnecessary pointer arithmetic. Removing it saves some code
space.

Wei Yang (2):
  cutils: remove one unnecessary pointer operation
  migration/xbzrle: make xbzrle_encode_buffer little easier to read

 migration/xbzrle.c | 153 +
 util/cutils.c  |   8 +--
 2 files changed, 77 insertions(+), 84 deletions(-)

-- 
2.19.1




[Qemu-devel] [PATCH 1/2] cutils: remove one unnecessary pointer operation

2019-06-09 Thread Wei Yang
Since we never operate on the next address pointed to by out (or in)
after the final access, the post-increment on that access is
unnecessary.

After removing the operation, the function size is reduced by 16/18
bytes.
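For reference, here is the resulting encode/decode pair as it reads after the patch (glib's g_assert replaced by plain assert so the snippet is self-contained); a uleb128 byte carries 7 payload bits, with bit 7 set when another byte follows:

```c
#include <assert.h>
#include <stdint.h>

/* uleb128 encoding of a small (<= 0x3fff) value, as in util/cutils.c. */
static int uleb128_encode_small(uint8_t *out, uint32_t n)
{
    assert(n <= 0x3fff);
    if (n < 0x80) {
        *out = n;               /* no post-increment: out is not reused */
        return 1;
    } else {
        *out++ = (n & 0x7f) | 0x80;
        *out = n >> 7;          /* last write, pointer need not advance */
        return 2;
    }
}

static int uleb128_decode_small(const uint8_t *in, uint32_t *n)
{
    if (!(*in & 0x80)) {
        *n = *in;
        return 1;
    } else {
        *n = *in++ & 0x7f;
        if (*in & 0x80) {       /* a 2-byte value must terminate here */
            return -1;
        }
        *n |= *in << 7;
        return 2;
    }
}
```

The dropped increments are on the last access in each branch, so the change cannot alter behavior, only code size.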

Signed-off-by: Wei Yang 
---
 util/cutils.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/util/cutils.c b/util/cutils.c
index 9aacc422ca..1933a68da5 100644
--- a/util/cutils.c
+++ b/util/cutils.c
@@ -754,11 +754,11 @@ int uleb128_encode_small(uint8_t *out, uint32_t n)
 {
 g_assert(n <= 0x3fff);
 if (n < 0x80) {
-*out++ = n;
+*out = n;
 return 1;
 } else {
 *out++ = (n & 0x7f) | 0x80;
-*out++ = n >> 7;
+*out = n >> 7;
 return 2;
 }
 }
@@ -766,7 +766,7 @@ int uleb128_encode_small(uint8_t *out, uint32_t n)
 int uleb128_decode_small(const uint8_t *in, uint32_t *n)
 {
 if (!(*in & 0x80)) {
-*n = *in++;
+*n = *in;
 return 1;
 } else {
 *n = *in++ & 0x7f;
@@ -774,7 +774,7 @@ int uleb128_decode_small(const uint8_t *in, uint32_t *n)
 if (*in & 0x80) {
 return -1;
 }
-*n |= *in++ << 7;
+*n |= *in << 7;
 return 2;
 }
 }
-- 
2.19.1




[Qemu-devel] [PATCH 2/2] migration/xbzrle: make xbzrle_encode_buffer little easier to read

2019-06-09 Thread Wei Yang
The encoding process can be described as below:

for [content]
get length of a run
encode this run

By refactoring the code with this logic, it may be a little easier to
understand.

Signed-off-by: Wei Yang 
---
 migration/xbzrle.c | 153 +
 1 file changed, 73 insertions(+), 80 deletions(-)

diff --git a/migration/xbzrle.c b/migration/xbzrle.c
index 1ba482ded9..25c69708ec 100644
--- a/migration/xbzrle.c
+++ b/migration/xbzrle.c
@@ -14,6 +14,59 @@
 #include "qemu/cutils.h"
 #include "xbzrle.h"
 
+static int next_run(uint8_t *old_buf, uint8_t *new_buf, int off, int slen,
+bool zrun)
+{
+uint32_t len = 0;
+long res;
+
+res = (slen - off) % sizeof(long);
+
+/* first unaligned bytes */
+while (res) {
+uint8_t xor = old_buf[off + len] ^ new_buf[off + len];
+
+if (!(zrun ^ !!xor)) {
+break;
+}
+len++;
+res--;
+}
+
+if (res) {
+return len;
+}
+
+/* word at a time for speed, use of 32-bit long okay */
+while (off + len < slen) {
+/* truncation to 32-bit long okay */
+unsigned long mask = (unsigned long)0x0101010101010101ULL;
+long xor = (*(long *)(old_buf + off + len)) ^
+   (*(long *)(new_buf + off + len));
+
+if (zrun && !(zrun ^ !!xor)) {
+break;
+} else if (!zrun && ((xor - mask) & ~xor & (mask << 7))) {
+break;
+}
+
+len += sizeof(long);
+}
+
+/* go over the rest */
+while (off + len < slen) {
+uint8_t xor = old_buf[off + len] ^ new_buf[off + len];
+
+if (!(zrun ^ !!xor)) {
+break;
+}
+
+len++;
+}
+
+return len;
+}
+
 /*
   page = zrun nzrun
| zrun nzrun page
@@ -27,103 +80,43 @@
 int xbzrle_encode_buffer(uint8_t *old_buf, uint8_t *new_buf, int slen,
  uint8_t *dst, int dlen)
 {
-uint32_t zrun_len = 0, nzrun_len = 0;
-int d = 0, i = 0;
-long res;
-uint8_t *nzrun_start = NULL;
+bool zrun = true;
+int len, src_off = 0, dst_off = 0;
 
 g_assert(!(((uintptr_t)old_buf | (uintptr_t)new_buf | slen) %
sizeof(long)));
 
-while (i < slen) {
+for (; src_off < slen; src_off += len, zrun = !zrun) {
 /* overflow */
-if (d + 2 > dlen) {
+if (dst_off + 2 > dlen) {
 return -1;
 }
 
-/* not aligned to sizeof(long) */
-res = (slen - i) % sizeof(long);
-while (res && old_buf[i] == new_buf[i]) {
-zrun_len++;
-i++;
-res--;
-}
+len = next_run(old_buf, new_buf, src_off, slen, zrun);
 
-/* word at a time for speed */
-if (!res) {
-while (i < slen &&
-   (*(long *)(old_buf + i)) == (*(long *)(new_buf + i))) {
-i += sizeof(long);
-zrun_len += sizeof(long);
+if (zrun) {
+/* buffer unchanged */
+if (len == slen) {
+return 0;
 }
-
-/* go over the rest */
-while (i < slen && old_buf[i] == new_buf[i]) {
-zrun_len++;
-i++;
+/* skip last zero run */
+if (src_off + len == slen) {
+return dst_off;
 }
 }
 
-/* buffer unchanged */
-if (zrun_len == slen) {
-return 0;
-}
-
-/* skip last zero run */
-if (i == slen) {
-return d;
-}
-
-d += uleb128_encode_small(dst + d, zrun_len);
-
-zrun_len = 0;
-nzrun_start = new_buf + i;
-
-/* overflow */
-if (d + 2 > dlen) {
-return -1;
-}
-/* not aligned to sizeof(long) */
-res = (slen - i) % sizeof(long);
-while (res && old_buf[i] != new_buf[i]) {
-i++;
-nzrun_len++;
-res--;
-}
-
-/* word at a time for speed, use of 32-bit long okay */
-if (!res) {
-/* truncation to 32-bit long okay */
-unsigned long mask = (unsigned long)0x0101010101010101ULL;
-while (i < slen) {
-unsigned long xor;
-xor = *(unsigned long *)(old_buf + i)
-^ *(unsigned long *)(new_buf + i);
-if ((xor - mask) & ~xor & (mask << 7)) {
-/* found the end of an nzrun within the current long */
-while (old_buf[i] != new_buf[i]) {
-nzrun_len++;
-i++;
-}
-break;
-} else {
-i += sizeof(long);
-nzrun_len += sizeof(long);
-}
+dst_off

Re: [Qemu-devel] [PATCH] migration: remove unused field bytes_xfer

2019-06-11 Thread Wei Yang
On Tue, Apr 02, 2019 at 08:31:06AM +0800, Wei Yang wrote:
>MigrationState->bytes_xfer is only set to 0 in migrate_init().
>
>Remove this unnecessary field.
>
>Signed-off-by: Wei Yang 

Hi, David

Are you willing to pick up this one?

>---
> migration/migration.c | 1 -
> migration/migration.h | 1 -
> 2 files changed, 2 deletions(-)
>
>diff --git a/migration/migration.c b/migration/migration.c
>index dea7078bf4..c929cf8d0f 100644
>--- a/migration/migration.c
>+++ b/migration/migration.c
>@@ -1681,7 +1681,6 @@ void migrate_init(MigrationState *s)
>  * parameters/capabilities that the user set, and
>  * locks.
>  */
>-s->bytes_xfer = 0;
> s->cleanup_bh = 0;
> s->to_dst_file = NULL;
> s->rp_state.from_dst_file = NULL;
>diff --git a/migration/migration.h b/migration/migration.h
>index 852eb3c4e9..b9efbe9168 100644
>--- a/migration/migration.h
>+++ b/migration/migration.h
>@@ -116,7 +116,6 @@ struct MigrationState
> DeviceState parent_obj;
> 
> /*< public >*/
>-size_t bytes_xfer;
> QemuThread thread;
> QEMUBH *cleanup_bh;
> QEMUFile *to_dst_file;
>-- 
>2.19.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH] migration: cleanup check on ops in savevm.handlers iteration

2019-06-11 Thread Wei Yang
On Mon, Apr 01, 2019 at 02:14:57PM +0800, Wei Yang wrote:
>During migration, there are several places to iterate on
>savevm.handlers. And on each iteration, we need to check its ops and
>related callbacks before invoke it.
>
>Generally, ops is the first element to check, and it is only necessary
>to check it once.
>
>This patch clean all the related part in savevm.c to check ops only once
>in those iterations.
>
>Signed-off-by: Wei Yang 

Hi, David

Are you willing to pick up this one?

>---
> migration/savevm.c | 35 ++-
> 1 file changed, 14 insertions(+), 21 deletions(-)
>
>diff --git a/migration/savevm.c b/migration/savevm.c
>index 5f0ca7fac2..92af2471cd 100644
>--- a/migration/savevm.c
>+++ b/migration/savevm.c
>@@ -1096,10 +1096,9 @@ void qemu_savevm_state_setup(QEMUFile *f)
> if (!se->ops || !se->ops->save_setup) {
> continue;
> }
>-if (se->ops && se->ops->is_active) {
>-if (!se->ops->is_active(se->opaque)) {
>+if (se->ops->is_active &&
>+!se->ops->is_active(se->opaque)) {
> continue;
>-}
> }
> save_section_header(f, se, QEMU_VM_SECTION_START);
> 
>@@ -1127,10 +1126,9 @@ int qemu_savevm_state_resume_prepare(MigrationState *s)
> if (!se->ops || !se->ops->resume_prepare) {
> continue;
> }
>-if (se->ops && se->ops->is_active) {
>-if (!se->ops->is_active(se->opaque)) {
>+if (se->ops->is_active &&
>+!se->ops->is_active(se->opaque)) {
> continue;
>-}
> }
> ret = se->ops->resume_prepare(s, se->opaque);
> if (ret < 0) {
>@@ -1223,10 +1221,9 @@ void qemu_savevm_state_complete_postcopy(QEMUFile *f)
> if (!se->ops || !se->ops->save_live_complete_postcopy) {
> continue;
> }
>-if (se->ops && se->ops->is_active) {
>-if (!se->ops->is_active(se->opaque)) {
>+if (se->ops->is_active &&
>+!se->ops->is_active(se->opaque)) {
> continue;
>-}
> }
> trace_savevm_section_start(se->idstr, se->section_id);
> /* Section type */
>@@ -1265,18 +1262,16 @@ int qemu_savevm_state_complete_precopy(QEMUFile *f, 
>bool iterable_only,
> cpu_synchronize_all_states();
> 
> QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
>-if (!se->ops ||
>+if (!se->ops || !se->ops->save_live_complete_precopy ||
> (in_postcopy && se->ops->has_postcopy &&
>  se->ops->has_postcopy(se->opaque)) ||
>-(in_postcopy && !iterable_only) ||
>-!se->ops->save_live_complete_precopy) {
>+(in_postcopy && !iterable_only)) {
> continue;
> }
> 
>-if (se->ops && se->ops->is_active) {
>-if (!se->ops->is_active(se->opaque)) {
>+if (se->ops->is_active &&
>+!se->ops->is_active(se->opaque)) {
> continue;
>-}
> }
> trace_savevm_section_start(se->idstr, se->section_id);
> 
>@@ -1377,10 +1372,9 @@ void qemu_savevm_state_pending(QEMUFile *f, uint64_t 
>threshold_size,
> if (!se->ops || !se->ops->save_live_pending) {
> continue;
> }
>-if (se->ops && se->ops->is_active) {
>-if (!se->ops->is_active(se->opaque)) {
>+if (se->ops->is_active &&
>+!se->ops->is_active(se->opaque)) {
> continue;
>-}
> }
> se->ops->save_live_pending(f, se->opaque, threshold_size,
>res_precopy_only, res_compatible,
>@@ -2276,10 +2270,9 @@ static int qemu_loadvm_state_setup(QEMUFile *f)
> if (!se->ops || !se->ops->load_setup) {
> continue;
> }
>-if (se->ops && se->ops->is_active) {
>-if (!se->ops->is_active(se->opaque)) {
>+if (se->ops->is_active &&
>+!se->ops->is_active(se->opaque)) {
> continue;
>-}
> }
> 
> ret = se->ops->load_setup(f, se->opaque);
>-- 
>2.19.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH] migration: remove unused field bytes_xfer

2019-06-11 Thread Wei Yang
On Tue, Jun 11, 2019 at 10:33:29AM +0200, Juan Quintela wrote:
>Wei Yang  wrote:
>> On Tue, Apr 02, 2019 at 08:31:06AM +0800, Wei Yang wrote:
>>>MigrationState->bytes_xfer is only set to 0 in migrate_init().
>>>
>>>Remove this unnecessary field.
>>>
>>>Signed-off-by: Wei Yang 
>>
>> Hi, David
>
>Hi
>
>I am on duty this week, will get it.

Thanks :-)

>
>>
>> Are you willing to pick up this one?
>>
>>>---
>>> migration/migration.c | 1 -
>>> migration/migration.h | 1 -
>>> 2 files changed, 2 deletions(-)
>>>
>>>diff --git a/migration/migration.c b/migration/migration.c
>>>index dea7078bf4..c929cf8d0f 100644
>>>--- a/migration/migration.c
>>>+++ b/migration/migration.c
>>>@@ -1681,7 +1681,6 @@ void migrate_init(MigrationState *s)
>>>  * parameters/capabilities that the user set, and
>>>  * locks.
>>>  */
>>>-s->bytes_xfer = 0;
>>> s->cleanup_bh = 0;
>>> s->to_dst_file = NULL;
>>> s->rp_state.from_dst_file = NULL;
>>>diff --git a/migration/migration.h b/migration/migration.h
>>>index 852eb3c4e9..b9efbe9168 100644
>>>--- a/migration/migration.h
>>>+++ b/migration/migration.h
>>>@@ -116,7 +116,6 @@ struct MigrationState
>>> DeviceState parent_obj;
>>> 
>>> /*< public >*/
>>>-size_t bytes_xfer;
>>> QemuThread thread;
>>> QEMUBH *cleanup_bh;
>>> QEMUFile *to_dst_file;
>>>-- 
>>>2.19.1

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 1/2] cutils: remove one unnecessary pointer operation

2019-06-11 Thread Wei Yang
On Tue, Jun 11, 2019 at 06:49:54PM +0100, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> Since we will not operate on the next address pointed by out, it is not
>> necessary to do addition on it.
>> 
>> After removing the operation, the function size reduced 16/18 bytes.
>
>For me with a -O3 it didn't make any difference - the compiler was
>already smart enough to spot it, but it is correct.
>

Ah, you are right.

>
>Reviewed-by: Dr. David Alan Gilbert 
>
>> Signed-off-by: Wei Yang 
>> ---
>>  util/cutils.c | 8 
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>> 
>> diff --git a/util/cutils.c b/util/cutils.c
>> index 9aacc422ca..1933a68da5 100644
>> --- a/util/cutils.c
>> +++ b/util/cutils.c
>> @@ -754,11 +754,11 @@ int uleb128_encode_small(uint8_t *out, uint32_t n)
>>  {
>>  g_assert(n <= 0x3fff);
>>  if (n < 0x80) {
>> -*out++ = n;
>> +*out = n;
>>  return 1;
>>  } else {
>>  *out++ = (n & 0x7f) | 0x80;
>> -*out++ = n >> 7;
>> +*out = n >> 7;
>>  return 2;
>>  }
>>  }
>> @@ -766,7 +766,7 @@ int uleb128_encode_small(uint8_t *out, uint32_t n)
>>  int uleb128_decode_small(const uint8_t *in, uint32_t *n)
>>  {
>>  if (!(*in & 0x80)) {
>> -*n = *in++;
>> +*n = *in;
>>  return 1;
>>  } else {
>>  *n = *in++ & 0x7f;
>> @@ -774,7 +774,7 @@ int uleb128_decode_small(const uint8_t *in, uint32_t *n)
>>  if (*in & 0x80) {
>>  return -1;
>>  }
>> -*n |= *in++ << 7;
>> +*n |= *in << 7;
>>  return 2;
>>  }
>>  }
>> -- 
>> 2.19.1
>> 
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/2] migration/xbzrle: make xbzrle_encode_buffer little easier to read

2019-06-11 Thread Wei Yang
On Tue, Jun 11, 2019 at 06:59:26PM +0100, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> The encoding process could be described below:
>> 
>> for [content]
>> get length of a run
>> encode this run
>> 
>> By refactoring the code with this logic, it maybe a little easier to
>> understand.
>> 
>> Signed-off-by: Wei Yang 
>> ---
>>  migration/xbzrle.c | 153 +
>>  1 file changed, 73 insertions(+), 80 deletions(-)
>> 
>> diff --git a/migration/xbzrle.c b/migration/xbzrle.c
>> index 1ba482ded9..25c69708ec 100644
>> --- a/migration/xbzrle.c
>> +++ b/migration/xbzrle.c
>> @@ -14,6 +14,59 @@
>>  #include "qemu/cutils.h"
>>  #include "xbzrle.h"
>>  
>> +static int next_run(uint8_t *old_buf, uint8_t *new_buf, int off, int slen,
>> +bool zrun)
>> +{
>> +uint32_t len = 0;
>> +long res;
>> +
>> +res = (slen - off) % sizeof(long);
>> +
>> +/* first unaligned bytes */
>> +while (res) {
>> +uint8_t xor = old_buf[off + len] ^ new_buf[off + len];
>> +
>> +if (!(zrun ^ !!xor)) {
>> +break;
>> +}
>> +len++;
>> +res--;
>> +}
>> +
>> +if (res) {
>> +return len;
>> +}
>> +
>> +/* word at a time for speed, use of 32-bit long okay */
>> +while (off + len < slen) {
>> +/* truncation to 32-bit long okay */
>> +unsigned long mask = (unsigned long)0x0101010101010101ULL;
>> +long xor = (*(long *)(old_buf + off + len)) ^
>> +   (*(long *)(new_buf + off + len));
>> +
>> +if (zrun && !(zrun ^ !!xor)) {
>
>Are lines like this really making it easier to read?
>

Yep, this may take some time to understand. Let me put up a chart to explain.

We have two choices for both zrun and xor, so we have 4 combinations:

 zrun \ xor |  0 (no change) | !0 (changed)
 -----------+----------------+--------------
      1     |        *       |      x
 -----------+----------------+--------------
      0     |        x       |      *

We can see that the situations marked with '*' are the ones we are looking
for, and they can be expressed as (zrun ^ xor). Since we need to convert
xor to a boolean, xor becomes !!xor.

But yes, you are right. This part is not as easy as the original one. In
case you don't like this, we can change it back to the original version.
  
>Juan: Opinion?
>
>Dave
>
-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/2] migration/xbzrle: make xbzrle_encode_buffer little easier to read

2019-06-11 Thread Wei Yang
On Tue, Jun 11, 2019 at 06:59:26PM +0100, Dr. David Alan Gilbert wrote:
>* Wei Yang (richardw.y...@linux.intel.com) wrote:
>> The encoding process could be described below:
>> 
>> for [content]
>> get length of a run
>> encode this run
>> 
>> By refactoring the code with this logic, it maybe a little easier to
>> understand.
>> 
>> Signed-off-by: Wei Yang 
>> ---
>>  migration/xbzrle.c | 153 +
>>  1 file changed, 73 insertions(+), 80 deletions(-)
>> 
>> diff --git a/migration/xbzrle.c b/migration/xbzrle.c
>> index 1ba482ded9..25c69708ec 100644
>> --- a/migration/xbzrle.c
>> +++ b/migration/xbzrle.c
>> @@ -14,6 +14,59 @@
>>  #include "qemu/cutils.h"
>>  #include "xbzrle.h"
>>  
>> +static int next_run(uint8_t *old_buf, uint8_t *new_buf, int off, int slen,
>> +bool zrun)
>> +{
>> +uint32_t len = 0;
>> +long res;
>> +
>> +res = (slen - off) % sizeof(long);
>> +
>> +/* first unaligned bytes */
>> +while (res) {
>> +uint8_t xor = old_buf[off + len] ^ new_buf[off + len];
>> +
>> +if (!(zrun ^ !!xor)) {
>> +break;
>> +}
>> +len++;
>> +res--;
>> +}
>> +
>> +if (res) {
>> +return len;
>> +}
>> +
>> +/* word at a time for speed, use of 32-bit long okay */
>> +while (off + len < slen) {
>> +/* truncation to 32-bit long okay */
>> +unsigned long mask = (unsigned long)0x0101010101010101ULL;
>> +    long xor = (*(long *)(old_buf + off + len)) ^
>> +   (*(long *)(new_buf + off + len));
>> +
>> +if (zrun && !(zrun ^ !!xor)) {
>
>Are lines like this really making it easier to read?
>

BTW, this line could be simplified to 

if (!(zrun ^ !!xor)) {

>Juan: Opinion?
>
>Dave
>

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH] migration/multifd: call multifd_send_sync_main when sending RAM_SAVE_FLAG_EOS

2019-06-11 Thread Wei Yang
On receiving RAM_SAVE_FLAG_EOS, multifd_recv_sync_main() is called to
synchronize the receive threads. The current synchronization mechanism is
to wait on each channel's sem_sync semaphore, which is posted when a
packet with the MULTIFD_FLAG_SYNC flag arrives. But in the current
implementation, we don't call multifd_send_sync_main() to send such a
packet when blk_mig_bulk_active() is true.

This leads to the receive threads never notifying multifd_recv_sync_main()
via sem_sync, so multifd_recv_sync_main() will wait there forever.

[Note]: a normal migration test works, but the blk_mig_bulk_active() case
has not been tested, since I am not sure how to reproduce that situation.

Signed-off-by: Wei Yang 
---
 migration/ram.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 3354944f39..bd356764ff 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3458,8 +3458,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
  */
 ram_control_after_iterate(f, RAM_CONTROL_ROUND);
 
-multifd_send_sync_main();
 out:
+multifd_send_sync_main();
 qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
 qemu_fflush(f);
 ram_counters.transferred += 8;
-- 
2.19.1




Re: [Qemu-devel] [PATCH] migration/multifd: call multifd_send_sync_main when sending RAM_SAVE_FLAG_EOS

2019-06-12 Thread Wei Yang
On Wed, Jun 12, 2019 at 12:14:14PM +0200, Juan Quintela wrote:
>Wei Yang  wrote:
>> On receiving RAM_SAVE_FLAG_EOS, multifd_recv_sync_main() is called to
>> synchronize receive threads. Current synchronization mechanism is to wait
>> for each channel's sem_sync semaphore. This semaphore is triggered by a
>> packet with MULTIFD_FLAG_SYNC flag. While in current implementation, we
>> don't do multifd_send_sync_main() to send such packet when
>> blk_mig_bulk_active() is true.
>>
>> This will leads to the receive threads won't notify
>> multifd_recv_sync_main() by sem_sync. And multifd_recv_sync_main() will
>> always wait there.
>>
>> [Note]: normal migration test works, while didn't test the
>> blk_mig_bulk_active() case. Since not sure how to produce this
>> situation.
>>
>> Signed-off-by: Wei Yang 
>
>Reviewed-by: Juan Quintela 
>
>Block migration is weird.
>Block migration is weird.
>

Block migration means migrating a whole disk?

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH 2/2] migration/xbzrle: make xbzrle_encode_buffer little easier to read

2019-06-12 Thread Wei Yang
On Wed, Jun 12, 2019 at 12:29:27PM +0200, Juan Quintela wrote:
>"Dr. David Alan Gilbert"  wrote:
>> * Wei Yang (richardw.y...@linux.intel.com) wrote:
>>> The encoding process could be described below:
>>> 
>>> for [content]
>>> get length of a run
>>> encode this run
>>> 
>>> By refactoring the code with this logic, it maybe a little easier to
>>> understand.
>>> 
>>
>> Are lines like this really making it easier to read?
>>
>> Juan: Opinion?
>
>Code is easier to understand .
>
>For values that I understand the code.  I need to take a big breath and
>will take a deep look at it and see if it really does the same.
>

Haha, thanks for your feedback.

>Later, Juan.

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH] bitmap: get last word mask from nr directly

2019-06-13 Thread Wei Yang
On Thu, Apr 25, 2019 at 11:28:31AM +0800, Wei Yang wrote:
>The value left in nr is the number of bits in the last word, from which
>the last word mask can be calculated directly.
>
>Remove the unnecessary size variable.

Ping...

>
>Signed-off-by: Wei Yang 
>---
> util/bitmap.c | 6 ++
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
>diff --git a/util/bitmap.c b/util/bitmap.c
>index cb618c65a5..5aa60b8717 100644
>--- a/util/bitmap.c
>+++ b/util/bitmap.c
>@@ -160,7 +160,6 @@ int slow_bitmap_andnot(unsigned long *dst, const unsigned 
>long *bitmap1,
> void bitmap_set(unsigned long *map, long start, long nr)
> {
> unsigned long *p = map + BIT_WORD(start);
>-const long size = start + nr;
> int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
> unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
> 
>@@ -174,7 +173,7 @@ void bitmap_set(unsigned long *map, long start, long nr)
> p++;
> }
> if (nr) {
>-mask_to_set &= BITMAP_LAST_WORD_MASK(size);
>+mask_to_set &= BITMAP_LAST_WORD_MASK(nr);
> *p |= mask_to_set;
> }
> }
>@@ -221,7 +220,6 @@ void bitmap_set_atomic(unsigned long *map, long start, 
>long nr)
> void bitmap_clear(unsigned long *map, long start, long nr)
> {
> unsigned long *p = map + BIT_WORD(start);
>-const long size = start + nr;
> int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
> unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
> 
>@@ -235,7 +233,7 @@ void bitmap_clear(unsigned long *map, long start, long nr)
> p++;
> }
> if (nr) {
>-mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
>+mask_to_clear &= BITMAP_LAST_WORD_MASK(nr);
> *p &= ~mask_to_clear;
> }
> }
>-- 
>2.19.1

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH v2] exec.c: refactor function flatview_add_to_dispatch()

2019-03-10 Thread Wei Yang
flatview_add_to_dispatch() registers pages based on the layout of
*section*, which may look like this:

|s|PPP|s|

where s stands for subpage and P for page.

The procedure of this function could be described as:

- register first subpage
- register page
- register last subpage

This means the procedure could be simplified into these three steps
instead of a loop iteration.

This patch refactors the function into the three corresponding steps and
adds some comments to clarify it.

Signed-off-by: Wei Yang 

---
v2:
  * removes the loop iteration as suggested by Paolo
---
 exec.c | 50 +-
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/exec.c b/exec.c
index 0959b00c06..160b8587d4 100644
--- a/exec.c
+++ b/exec.c
@@ -1598,34 +1598,50 @@ static void register_multipage(FlatView *fv,
 phys_page_set(d, start_addr >> TARGET_PAGE_BITS, num_pages, section_index);
 }
 
+/*
+ * The range in *section* may look like this:
+ *
+ *  |s|PPP|s|
+ *
+ * where s stands for subpage and P for page.
+ *
+ * The procedure in following function could be described as:
+ *
+ * - register first subpage
+ * - register page
+ * - register last subpage
+ */
 void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section)
 {
-MemoryRegionSection now = *section, remain = *section;
+MemoryRegionSection now, remain = *section;
 Int128 page_size = int128_make64(TARGET_PAGE_SIZE);
 
-if (now.offset_within_address_space & ~TARGET_PAGE_MASK) {
-uint64_t left = TARGET_PAGE_ALIGN(now.offset_within_address_space)
-   - now.offset_within_address_space;
+/* register first subpage */
+if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
+uint64_t left = TARGET_PAGE_ALIGN(remain.offset_within_address_space)
+- remain.offset_within_address_space;
 
+now = remain;
 now.size = int128_min(int128_make64(left), now.size);
+remain.size = int128_sub(remain.size, now.size);
+remain.offset_within_address_space += int128_get64(now.size);
+remain.offset_within_region += int128_get64(now.size);
 register_subpage(fv, &now);
-} else {
-now.size = int128_zero();
 }
-while (int128_ne(remain.size, now.size)) {
+
+/* register page */
+if (int128_ge(remain.size, page_size)) {
+now = remain;
+now.size = int128_and(now.size, int128_neg(page_size));
 remain.size = int128_sub(remain.size, now.size);
 remain.offset_within_address_space += int128_get64(now.size);
 remain.offset_within_region += int128_get64(now.size);
-now = remain;
-if (int128_lt(remain.size, page_size)) {
-register_subpage(fv, &now);
-} else if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
-now.size = page_size;
-register_subpage(fv, &now);
-} else {
-now.size = int128_and(now.size, int128_neg(page_size));
-register_multipage(fv, &now);
-}
+register_multipage(fv, &now);
+}
+
+/* register last subpage */
+if (int128_gt(remain.size, 0)) {
+register_subpage(fv, &remain);
 }
 }
 
-- 
2.19.1




Re: [Qemu-devel] [PATCH v2] exec.c: refactor function flatview_add_to_dispatch()

2019-03-10 Thread Wei Yang
On Sun, Mar 10, 2019 at 09:54:54PM -0700, no-re...@patchew.org wrote:
>Patchew URL: 
>https://patchew.org/QEMU/20190311013016.14411-1-richardw.y...@linux.intel.com/
>
>
>
>Hi,
>
>This series failed the docker-mingw@fedora build test. Please find the testing 
>commands and
>their output below. If you have Docker installed, you can probably reproduce it
>locally.
>
>=== TEST SCRIPT BEGIN ===
>#!/bin/bash
>time make docker-test-mingw@fedora SHOW_ENV=1 J=14 NETWORK=1
>=== TEST SCRIPT END ===
>
>  CC  x86_64-softmmu/accel/tcg/tcg-runtime.o
>  CC  aarch64-softmmu/hw/cpu/realview_mpcore.o
>/tmp/qemu-test/src/exec.c: In function 'flatview_add_to_dispatch':
>/tmp/qemu-test/src/exec.c:1644:32: error: incompatible type for argument 2 of 
>'int128_gt'
> if (int128_gt(remain.size, 0)) {
>^
>In file included from /tmp/qemu-test/src/include/exec/memory.h:24,
>---
>  CC  aarch64-softmmu/hw/display/vga.o
>  CC  aarch64-softmmu/hw/display/bcm2835_fb.o
>/tmp/qemu-test/src/exec.c: In function 'flatview_add_to_dispatch':
>/tmp/qemu-test/src/exec.c:1644:32: error: incompatible type for argument 2 of 
>'int128_gt'
> if (int128_gt(remain.size, 0)) {
>^

Not sure why this worked on my side.

Fixed this with 

  if (int128_gt(remain.size, int128_make64(0))) {

>In file included from /tmp/qemu-test/src/include/exec/memory.h:24,
>
>
>The full log is available at
>http://patchew.org/logs/20190311013016.14411-1-richardw.y...@linux.intel.com/testing.docker-mingw@fedora/?type=message.
>---
>Email generated automatically by Patchew [http://patchew.org/].
>Please send your feedback to patchew-de...@redhat.com

-- 
Wei Yang
Help you, Help me



[Qemu-devel] [PATCH v3] exec.c: refactor function flatview_add_to_dispatch()

2019-03-10 Thread Wei Yang
flatview_add_to_dispatch() registers pages based on the layout of
*section*, which may look like this:

|s|PPP|s|

where s stands for subpage and P for page.

The procedure of this function could be described as:

- register first subpage
- register page
- register last subpage

This means the procedure could be simplified into these three steps
instead of a loop iteration.

This patch refactors the function into the three corresponding steps and
adds some comments to clarify it.

Signed-off-by: Wei Yang 

---
v3:
  * pass int128_make64() instead of 0 to int128_gt()
v2:
  * removes the loop iteration as suggested by Paolo
---
 exec.c | 50 +-
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/exec.c b/exec.c
index 0959b00c06..79cd561b3b 100644
--- a/exec.c
+++ b/exec.c
@@ -1598,34 +1598,50 @@ static void register_multipage(FlatView *fv,
 phys_page_set(d, start_addr >> TARGET_PAGE_BITS, num_pages, section_index);
 }
 
+/*
+ * The range in *section* may look like this:
+ *
+ *  |s|PPP|s|
+ *
+ * where s stands for subpage and P for page.
+ *
+ * The procedure in following function could be described as:
+ *
+ * - register first subpage
+ * - register page
+ * - register last subpage
+ */
 void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section)
 {
-MemoryRegionSection now = *section, remain = *section;
+MemoryRegionSection now, remain = *section;
 Int128 page_size = int128_make64(TARGET_PAGE_SIZE);
 
-if (now.offset_within_address_space & ~TARGET_PAGE_MASK) {
-uint64_t left = TARGET_PAGE_ALIGN(now.offset_within_address_space)
-   - now.offset_within_address_space;
+/* register first subpage */
+if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
+uint64_t left = TARGET_PAGE_ALIGN(remain.offset_within_address_space)
+- remain.offset_within_address_space;
 
+now = remain;
 now.size = int128_min(int128_make64(left), now.size);
+remain.size = int128_sub(remain.size, now.size);
+remain.offset_within_address_space += int128_get64(now.size);
+remain.offset_within_region += int128_get64(now.size);
 register_subpage(fv, &now);
-} else {
-now.size = int128_zero();
 }
-while (int128_ne(remain.size, now.size)) {
+
+/* register page */
+if (int128_ge(remain.size, page_size)) {
+now = remain;
+now.size = int128_and(now.size, int128_neg(page_size));
 remain.size = int128_sub(remain.size, now.size);
 remain.offset_within_address_space += int128_get64(now.size);
 remain.offset_within_region += int128_get64(now.size);
-now = remain;
-if (int128_lt(remain.size, page_size)) {
-register_subpage(fv, &now);
-} else if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
-now.size = page_size;
-register_subpage(fv, &now);
-} else {
-now.size = int128_and(now.size, int128_neg(page_size));
-register_multipage(fv, &now);
-}
+register_multipage(fv, &now);
+}
+
+/* register last subpage */
+if (int128_gt(remain.size, int128_make64(0))) {
+register_subpage(fv, &remain);
 }
 }
 
-- 
2.19.1




[Qemu-devel] [PATCH 2/2] vl.c: allocate TYPE_MACHINE list once during bootup

2019-03-10 Thread Wei Yang
Now all the functions used to select machine is local and the call flow
looks like below:

select_machine()
find_default_machine()
machine_parse()
find_machine()

All these related functions need a GSList of TYPE_MACHINE classes.
Currently we allocate this list each time we use it, which is unnecessary
because we never modify it.

This patch allocates the TYPE_MACHINE list once in select_machine() and
passes it down to its callees.

Signed-off-by: Wei Yang 
---
 vl.c | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/vl.c b/vl.c
index 3688e2bc98..cf08d96ce4 100644
--- a/vl.c
+++ b/vl.c
@@ -1418,9 +1418,9 @@ static int usb_parse(const char *cmdline)
 
 MachineState *current_machine;
 
-static MachineClass *find_machine(const char *name)
+static MachineClass *find_machine(const char *name, GSList *machines)
 {
-GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
+GSList *el;
 MachineClass *mc = NULL;
 
 for (el = machines; el; el = el->next) {
@@ -1437,13 +1437,12 @@ static MachineClass *find_machine(const char *name)
 }
 }
 
-g_slist_free(machines);
 return mc;
 }
 
-static MachineClass *find_default_machine(void)
+static MachineClass *find_default_machine(GSList *machines)
 {
-GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
+GSList *el;
 MachineClass *mc = NULL;
 
 for (el = machines; el; el = el->next) {
@@ -1455,7 +1454,6 @@ static MachineClass *find_default_machine(void)
 }
 }
 
-g_slist_free(machines);
 return mc;
 }
 
@@ -2538,16 +2536,15 @@ static gint machine_class_cmp(gconstpointer a, 
gconstpointer b)
   object_class_get_name(OBJECT_CLASS(mc1)));
 }
 
-static MachineClass *machine_parse(const char *name)
+static MachineClass *machine_parse(const char *name, GSList *machines)
 {
 MachineClass *mc = NULL;
-GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
+GSList *el;
 
 if (name) {
-mc = find_machine(name);
+mc = find_machine(name, machines);
 }
 if (mc) {
-g_slist_free(machines);
 return mc;
 }
 if (name && !is_help_option(name)) {
@@ -2567,7 +2564,6 @@ static MachineClass *machine_parse(const char *name)
 }
 }
 
-g_slist_free(machines);
 exit(!name || !is_help_option(name));
 }
 
@@ -2659,7 +2655,8 @@ static const QEMUOption *lookup_opt(int argc, char **argv,
 
 static MachineClass *select_machine(void)
 {
-MachineClass *machine_class = find_default_machine();
+GSList *machines = object_class_get_list(TYPE_MACHINE, false);
+MachineClass *machine_class = find_default_machine(machines);
 const char *optarg;
 QemuOpts *opts;
 Location loc;
@@ -2671,7 +2668,7 @@ static MachineClass *select_machine(void)
 
 optarg = qemu_opt_get(opts, "type");
 if (optarg) {
-machine_class = machine_parse(optarg);
+machine_class = machine_parse(optarg, machines);
 }
 
 if (!machine_class) {
@@ -2681,6 +2678,7 @@ static MachineClass *select_machine(void)
 }
 
 loc_pop(&loc);
+g_slist_free(machines);
 return machine_class;
 }
 
-- 
2.19.1




[Qemu-devel] [PATCH 1/2] vl.c: make find_default_machine() local

2019-03-10 Thread Wei Yang
Function find_default_machine() was made externally visible by commit
2c8cffa599b7 ("vl: make find_default_machine externally visible"), but it
seems nothing outside this file uses it.

This patch makes it local again.

Signed-off-by: Wei Yang 
---
 include/hw/boards.h | 1 -
 vl.c| 4 ++--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index 21212f0859..e911d56c28 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -57,7 +57,6 @@ void memory_region_allocate_system_memory(MemoryRegion *mr, 
Object *owner,
 #define MACHINE_CLASS(klass) \
 OBJECT_CLASS_CHECK(MachineClass, (klass), TYPE_MACHINE)
 
-MachineClass *find_default_machine(void);
 extern MachineState *current_machine;
 
 void machine_run_board_init(MachineState *machine);
diff --git a/vl.c b/vl.c
index 502857a176..3688e2bc98 100644
--- a/vl.c
+++ b/vl.c
@@ -1441,7 +1441,7 @@ static MachineClass *find_machine(const char *name)
 return mc;
 }
 
-MachineClass *find_default_machine(void)
+static MachineClass *find_default_machine(void)
 {
 GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
 MachineClass *mc = NULL;
@@ -2538,7 +2538,7 @@ static gint machine_class_cmp(gconstpointer a, gconstpointer b)
   object_class_get_name(OBJECT_CLASS(mc1)));
 }
 
- static MachineClass *machine_parse(const char *name)
+static MachineClass *machine_parse(const char *name)
 {
 MachineClass *mc = NULL;
 GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
-- 
2.19.1




[Qemu-devel] [PATCH 0/2] cleanup select_machine

2019-03-10 Thread Wei Yang
Here are two simple changes related to select_machine():

[1]: make find_default_machine() local since nothing outside the file uses it
[2]: allocate the TYPE_MACHINE list only once when selecting the machine type

Wei Yang (2):
  vl.c: make find_default_machine() local
  vl.c: allocate TYPE_MACHINE list once during bootup

 include/hw/boards.h |  1 -
 vl.c                | 24 +++++++++++-------------
 2 files changed, 11 insertions(+), 14 deletions(-)

-- 
2.19.1




[Qemu-devel] [PATCH] qom: use object_new_with_type in object_new_with_propv

2019-03-11 Thread Wei Yang
Function object_new_with_propv() already gets the Type of the object, so we
could leverage object_new_with_type() here.

[make check tests pass]

Signed-off-by: Wei Yang 
---
 qom/object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/qom/object.c b/qom/object.c
index 05a8567041..76d2f1eb2f 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -640,7 +640,7 @@ Object *object_new_with_propv(const char *typename,
 error_setg(errp, "object type '%s' is abstract", typename);
 return NULL;
 }
-obj = object_new(typename);
+obj = object_new_with_type(klass->type);
 
 if (object_set_propv(obj, &local_err, vargs) < 0) {
 goto error;
-- 
2.19.1




Re: [Qemu-devel] [PATCH v3] exec.c: refactor function flatview_add_to_dispatch()

2019-03-11 Thread Wei Yang
On Mon, Mar 11, 2019 at 02:42:58PM +0100, Paolo Bonzini wrote:
>On 11/03/19 06:42, Wei Yang wrote:
>> flatview_add_to_dispatch() registers page based on the condition of
>> *section*, which may looks like this:
>> 
>> |s|PPP|s|
>> 
>> where s stands for subpage and P for page.
>> 
>> The procedure of this function could be described as:
>> 
>> - register first subpage
>> - register page
>> - register last subpage
>> 
>> This means the procedure could be simplified into these three steps
>> instead of a loop iteration.
>> 
>> This patch refactors the function into three corresponding steps and
>> adds some comment to clarify it.
>> 
>> Signed-off-by: Wei Yang 
>> 
>> ---
>> v3:
>>   * pass int128_make64() instead of 0 to int128_gt()
>> v2:
>>   * removes the loop iteration as suggested by Paolo
>> ---
>>  exec.c | 50 +-
>>  1 file changed, 33 insertions(+), 17 deletions(-)
>> 
>> diff --git a/exec.c b/exec.c
>> index 0959b00c06..79cd561b3b 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -1598,34 +1598,50 @@ static void register_multipage(FlatView *fv,
>>  phys_page_set(d, start_addr >> TARGET_PAGE_BITS, num_pages, section_index);
>>  }
>>  
>> +/*
>> + * The range in *section* may look like this:
>> + *
>> + *  |s|PPP|s|
>> + *
>> + * where s stands for subpage and P for page.
>> + *
>> + * The procedure in following function could be described as:
>> + *
>> + * - register first subpage
>> + * - register page
>> + * - register last subpage
>
>The last paragraph is a duplicate of the comments in the function, so it
>can be deleted.  I did that and queued the patch.
>

Thanks :-)

>Paolo
>
>> + */
>>  void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section)
>>  {
>> -MemoryRegionSection now = *section, remain = *section;
>> +MemoryRegionSection now, remain = *section;
>>  Int128 page_size = int128_make64(TARGET_PAGE_SIZE);
>>  
>> -if (now.offset_within_address_space & ~TARGET_PAGE_MASK) {
>> -uint64_t left = TARGET_PAGE_ALIGN(now.offset_within_address_space)
>> -   - now.offset_within_address_space;
>> +/* register first subpage */
>> +if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
>> +uint64_t left = TARGET_PAGE_ALIGN(remain.offset_within_address_space)
>> +- remain.offset_within_address_space;
>>  
>> +now = remain;
>>  now.size = int128_min(int128_make64(left), now.size);
>> +remain.size = int128_sub(remain.size, now.size);
>> +remain.offset_within_address_space += int128_get64(now.size);
>> +remain.offset_within_region += int128_get64(now.size);
>>  register_subpage(fv, &now);
>> -} else {
>> -now.size = int128_zero();
>>  }
>> -while (int128_ne(remain.size, now.size)) {
>> +
>> +/* register page */
>> +if (int128_ge(remain.size, page_size)) {
>> +now = remain;
>> +now.size = int128_and(now.size, int128_neg(page_size));
>>  remain.size = int128_sub(remain.size, now.size);
>>  remain.offset_within_address_space += int128_get64(now.size);
>>  remain.offset_within_region += int128_get64(now.size);
>> -now = remain;
>> -if (int128_lt(remain.size, page_size)) {
>> -    register_subpage(fv, &now);
>> -} else if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
>> -now.size = page_size;
>> -register_subpage(fv, &now);
>> -} else {
>> -now.size = int128_and(now.size, int128_neg(page_size));
>> -register_multipage(fv, &now);
>> -}
>> +register_multipage(fv, &now);
>> +}
>> +
>> +/* register last subpage */
>> +if (int128_gt(remain.size, int128_make64(0))) {
>> +register_subpage(fv, &remain);
>>  }
>>  }
>>  
>> 

-- 
Wei Yang
Help you, Help me



Re: [Qemu-devel] [PATCH] qom: use object_new_with_type in object_new_with_propv

2019-03-11 Thread Wei Yang
On Mon, Mar 11, 2019 at 06:08:10PM +0100, Marc-André Lureau wrote:
>Hi
>
>On Mon, Mar 11, 2019 at 9:34 AM Wei Yang  wrote:
>>
>> Function object_new_with_propv already get the Type of the object, so we
>> could leverage object_new_with_type here.
>>
>> [make check test pass]
>>
>> Signed-off-by: Wei Yang 
>> ---
>>  qom/object.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/qom/object.c b/qom/object.c
>> index 05a8567041..76d2f1eb2f 100644
>> --- a/qom/object.c
>> +++ b/qom/object.c
>> @@ -640,7 +640,7 @@ Object *object_new_with_propv(const char *typename,
>>  error_setg(errp, "object type '%s' is abstract", typename);
>>  return NULL;
>>  }
>> -obj = object_new(typename);
>> +obj = object_new_with_type(klass->type);
>>
>>  if (object_set_propv(obj, &local_err, vargs) < 0) {
>>  goto error;
>> --
>> 2.19.1
>>
>>
>
>Reviewed-by: Marc-André Lureau 
>

Thanks :-)

>-- 
>Marc-André Lureau

-- 
Wei Yang
Help you, Help me


