Re: [Qemu-devel] [PATCH v4 00/10] curl: fix curl read
On Wed, May 22, 2013 at 12:12:24PM +0100, Richard W.M. Jones wrote:

  On Wed, May 22, 2013 at 01:04:51PM +0200, Paolo Bonzini wrote:

    Something is trying to write, but there's no write operation defined for CURL. I guess curl (and other backends too) should reject being opened for write. Alternatively, block.c could do that for them.

  Yes, I'd just got to that conclusion as well :-) The attached patch fixes the crash for me.

Please post a top-level thread so that your patch gets picked up by scripts and noticed by humans too :). Alternatively Fam can include it in the next revision of this series.

Stefan
Re: [Qemu-devel] [PATCH 0/9 v3] Make monitor command 'dump-guest-memory' dump in kdump-compressed format
Am 17.05.2013 05:24, schrieb Qiao Nuohan:

  Qiao Nuohan (9):
    Add API to manipulate dump_bitmap
    Add API to manipulate cache_data
    Move includes and struct definition to dump.h
    Add API to create header of vmcore
    Add API to create data of dump bitmap
    Add API to create page
    Add API to free memory used by creating header, bitmap and page
    Add API to write header, bitmap and page into vmcore
    Make monitor command 'dump-guest-memory' dump in kdump-compressed format

All these subjects should get a "dump:" prefix (or so) please, to show what subsystem they are touching. If the subject gets line-wrapped it indicates it's too long; please limit to 76 chars (check `git log`).

Andreas
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
Re: [Qemu-devel] RFC: Full introspection support for QMP
Am 22.05.2013 um 15:40 hat Amos Kong geschrieben:

  Hi all,

  We already have query-command-line-options to query details of command-line options. As we discussed on the list, we also need full introspection of QMP (commands). The qmp-events also need to be dumped; we can define events in qapi-schema.json. We can also dump QMP errors in the future if needed.

  Command name: query-qmp-schema
  Return: returns the contents of qapi-schema.json in JSON format.

  Solution to query JSON content from C code: qapi-schema.json is processed by the qapi Python scripts to generate C files. I found the content is good enough for libvirt to know the QMP command schema. We can change the qapi scripts to generate a table/list to record the raw string, then we can return the raw string in qmp_query_qmp_schema().

Yes, the schema as defined in qapi-schema.json should be good to be sent over the wire.

Maybe we should already now consider that we'll want to have a dynamic schema eventually: depending on which modules are compiled in (or even which modules are loaded, when we go forward with shared libraries), some types, commands or enum values may be available or not. For example, libvirt wants to query which block drivers it can use. It doesn't really matter for which drivers we had the source initially, but only which drivers are compiled in (possibly loaded) and can actually be used.

  By default, return the complete schema in one go. Also support querying an unknown type in the new command:

  -> { "execute": "query-qmp-schema", "arguments": { "command": "query-status" } }
  <- { "return": { "command": "query-status", "returns": "StatusInfo" } }

  -> { "execute": "query-qmp-schema", "arguments": { "type": "StatusInfo" } }
  <- { "return": { "type": "StatusInfo", "data": { "running": "bool", "singlestep": "bool", "status": "RunState" } } }

  -> { "execute": "query-qmp-schema", "arguments": { "event": "RX-FILTER-CHANGE" } }

Maybe a (shortened) example for the complete schema version? I assume it will be just an array of the structs you showed here?

Kevin
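As an aside, the raw-string table Amos proposes (qapi scripts emitting the schema text so qmp_query_qmp_schema() can return it verbatim) could look roughly like the sketch below. The entry layout, names and contents are illustrative assumptions, not the actual qapi generator output.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical generated table: each schema entity kept as its raw
 * qapi-schema.json text, keyed by name, so the QMP command can return
 * it without re-parsing. */
typedef struct QmpSchemaEntry {
    const char *name;
    const char *raw;
} QmpSchemaEntry;

static const QmpSchemaEntry qmp_schema_table[] = {
    { "query-status",
      "{ 'command': 'query-status', 'returns': 'StatusInfo' }" },
    { "StatusInfo",
      "{ 'type': 'StatusInfo', 'data': { 'running': 'bool', "
      "'singlestep': 'bool', 'status': 'RunState' } }" },
    { NULL, NULL }  /* sentinel */
};

/* Look up the raw schema text for one entity; NULL if unknown. */
static const char *qmp_schema_lookup(const char *name)
{
    const QmpSchemaEntry *e;

    for (e = qmp_schema_table; e->name != NULL; e++) {
        if (strcmp(e->name, name) == 0) {
            return e->raw;
        }
    }
    return NULL;
}
```

Returning the whole table in one go (the "complete schema" case) would then just concatenate the raw strings into a JSON array.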
Re: [Qemu-devel] New targets (was: [PATCH] qapi-schema.json: Reformat TargetType enum to one-per-line)
Andreas Färber afaer...@suse.de writes:

Am 22.05.2013 16:28, schrieb Anthony Liguori:

Andreas Färber afaer...@suse.de writes:

  More common is, however, that people start writing a new target and don't submit it yet (ahem!) while another target gets added, and the current form of rebreaking this block of enum values causes more conflicts.

Sounds like a good reason to submit the target upstream to me...

So are incompletely implemented targets (wrt instruction set) eligible for upstream these days?

Aren't most of our targets incomplete by some standard? I thought the typical approach for a target was to get linux-user working first, and then go for softmmu. As long as linux-user can run applications, I think it's fair game for merging.

I thought they'd need to be able to run at least some small image successfully, preferably Linux where possible.

I think that's far too high of a hurdle.

Regards, Anthony Liguori

Regards, Andreas
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
Re: [Qemu-devel] [PATCH] hw/9pfs: Use O_NOFOLLOW when opening files on server
On Wed, May 22, 2013 at 04:52:54PM +0530, Aneesh Kumar K.V wrote:

diff --git a/hw/9pfs/virtio-9p-handle.c b/hw/9pfs/virtio-9p-handle.c
index fe8e0ed..e2a89e3 100644
--- a/hw/9pfs/virtio-9p-handle.c
+++ b/hw/9pfs/virtio-9p-handle.c
@@ -608,7 +608,7 @@ static int handle_init(FsContext *ctx)
     struct file_handle fh;
     struct handle_data *data = g_malloc(sizeof(struct handle_data));

-    data->mountfd = open(ctx->fs_root, O_DIRECTORY);
+    data->mountfd = open(ctx->fs_root, O_DIRECTORY | O_NOFOLLOW);

Why is the root path not allowed to be a symlink? And if so, it would be more user-friendly to resolve the path before the open. That way we don't need to bug the user with an error here.
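The reviewer's suggestion — resolve the path first, then open with O_NOFOLLOW so only an unexpected symlink at open time is rejected — could be sketched like this. open_fs_root() is a hypothetical helper, not a function in the 9pfs code.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <limits.h>
#include <stdlib.h>
#include <unistd.h>

/* Resolve any symlinks in fs_root up front with realpath(), so a
 * symlinked export root still works, while O_NOFOLLOW still guards
 * against the final component being replaced by a symlink between
 * resolution and open. Returns an fd, or -1 on error. */
static int open_fs_root(const char *fs_root)
{
    char resolved[PATH_MAX];

    if (realpath(fs_root, resolved) == NULL) {
        return -1;
    }
    return open(resolved, O_DIRECTORY | O_NOFOLLOW);
}
```

Note the residual TOCTOU window between realpath() and open(); a fully robust version would re-verify the opened directory (e.g. via fstat).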
[Qemu-devel] [PATCH] block/curl.c: Refuse to open the handle for writes.
From: Richard W.M. Jones rjo...@redhat.com
---
 block/curl.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block/curl.c b/block/curl.c
index b8935fd..f1e302b 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -406,6 +406,10 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     static int inited = 0;

+    if (flags & BDRV_O_RDWR) {
+        return -ENOTSUP;
+    }
+
     opts = qemu_opts_create_nofail(&runtime_opts);
     qemu_opts_absorb_qdict(opts, options, &local_err);
     if (error_is_set(&local_err)) {
--
1.8.2.1
Re: [Qemu-devel] [PATCH v3 0/8] block: drive-backup live backup command
That way you can also implement async replication to a remote site (like MS do).

Sounds like rsync.

Yes, but we need 'snapshots' and something more optimized (rsync compares whole files).

I think this can be implemented using the backup job with a specialized backup driver.
[Qemu-devel] [PATCH] monitor: allow to disable the default monitor
Signed-off-by: Luiz Capitulino lcapitul...@redhat.com
---
 qemu-options.hx | 1 +
 vl.c            | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index fb3961d..bf94862 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2528,6 +2528,7 @@
 Redirect the monitor to host device @var{dev} (same devices as the serial port).
 The default device is @code{vc} in graphical mode and @code{stdio} in
 non graphical mode.
+Use @code{-monitor none} to disable the default monitor.
 ETEXI
 DEF("qmp", HAS_ARG, QEMU_OPTION_qmp, \
     "-qmp dev        like -monitor but opens in 'control' mode\n",
diff --git a/vl.c b/vl.c
index 59dc0b4..510d2c2 100644
--- a/vl.c
+++ b/vl.c
@@ -3366,8 +3366,10 @@ int main(int argc, char **argv, char **envp)
                 break;
             }
             case QEMU_OPTION_monitor:
-                monitor_parse(optarg, "readline");
                 default_monitor = 0;
+                if (strncmp(optarg, "none", 4)) {
+                    monitor_parse(optarg, "readline");
+                }
                 break;
             case QEMU_OPTION_qmp:
                 monitor_parse(optarg, "control");
--
1.8.1.4
Re: [Qemu-devel] [libvirt] [qemu-devel] Default machine type setting for ppc64
On 2013-05-22 04:01, Anthony Liguori wrote:

Daniel P. Berrange berra...@redhat.com writes:

On Tue, May 21, 2013 at 11:12:26AM -0600, Eric Blake wrote:

  I have also argued in the past that it would be useful for libvirt to support the idea of a template, where you can specify a domain XML that inherits defaults from the template. We've already done things like this for networking, nwfilter, and even secret management (in domain XML, you declare that you are using a named network object, and that network object serves as the template instead of you having to hard-code all the elements into your domain XML), so we have a design to base it on. But until someone adds such a feature to libvirt, OpenStack should be passing explicit XML to libvirt, and tracking defaults at the OpenStack layer.

I don't think the idea of a template belongs in libvirt.

This is fine. But the conversation started with a statement that it's QEMU's job to define reasonable defaults and libvirt just exposes those. But in QEMU, we punt this problem by letting a user globally override this default. libvirt hides this ability from the end user. So either it's libvirt's problem to solve, or you should expose the ability to set the global setting within QEMU. We can't just point our fingers at each other and hope the problem goes away :-)

Hi Anthony, currently, to resolve this problem, can we set 'pseries' as the default? Because mac99 doesn't work at all on our platform. Thanks. :)

--
Li

Regards, Anthony Liguori

Creating a basic XML structure with relevant defaults pre-filled for a particular use case is something that the libvirt-designer library is aiming to take care of for applications.

Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [Qemu-devel] New targets (was: [PATCH] qapi-schema.json: Reformat TargetType enum to one-per-line)
On 22 May 2013 15:48, Anthony Liguori aligu...@us.ibm.com wrote:

Andreas Färber afaer...@suse.de writes:

Am 22.05.2013 16:28, schrieb Anthony Liguori:

  So are incompletely implemented targets (wrt instruction set) eligible for upstream these days?

Aren't most of our targets incomplete by some standard? I thought the typical approach for a target was to get linux-user working first, and then go for softmmu. As long as linux-user can run applications, I think it's fair game for merging.

I thought they'd need to be able to run at least some small image successfully, preferably Linux where possible.

I think that's far too high of a hurdle.

I think the bar should probably be at about either "can run some plausible set of Linux binaries in user mode" or "can run some softmmu image (eg Linux with a minimal config)", whichever the target submitter prefers. Basically something plausibly useful to somebody other than the developer.

thanks
--
PMM
Re: [Qemu-devel] [PATCH v3 0/8] block: drive-backup live backup command
That sounds like more work than a persistent dirty bitmap.

The advantage is that while dirty bitmaps are consumed by a single user, the Merkle tree can be used to sync up any number of replicas. I also consider it safer, because you make sure the data exists (using hash keys like SHA-1). I am unsure how you can check whether a dirty bitmap contains errors or is out of date. Also, you can compare arbitrary Merkle trees, whereas a dirty bitmap is always related to a single image (consider the user removing the latest backup from the backup target).
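The Merkle-tree comparison described above can be sketched in a few lines: hash fixed-size blocks of an image into leaves, hash the leaves into a root, and compare roots to decide whether two replicas diverge, descending into leaves to find which blocks to resync. This toy sketch uses FNV-1a as a stand-in for a real cryptographic hash such as SHA-1, and a single-level tree; block size, names and layout are all illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BLOCKS 4
#define BLOCK_SIZE 8

/* Toy 64-bit FNV-1a hash; a real implementation would use SHA-1 or
 * similar, as discussed in the thread. */
static uint64_t fnv1a(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint64_t h = 1469598103934665603ULL;

    while (len--) {
        h = (h ^ *p++) * 1099511628211ULL;
    }
    return h;
}

/* Fill leaf hashes (one per block) and return the root hash over them. */
static uint64_t merkle_root(unsigned char image[BLOCKS][BLOCK_SIZE],
                            uint64_t leaves[BLOCKS])
{
    int i;

    for (i = 0; i < BLOCKS; i++) {
        leaves[i] = fnv1a(image[i], BLOCK_SIZE);
    }
    return fnv1a(leaves, sizeof(uint64_t) * BLOCKS);
}

/* Compare two replicas: -1 if the roots match, else the index of the
 * first differing block, i.e. what needs resyncing. */
static int merkle_diff(unsigned char a[BLOCKS][BLOCK_SIZE],
                       unsigned char b[BLOCKS][BLOCK_SIZE])
{
    uint64_t la[BLOCKS], lb[BLOCKS];
    int i;

    if (merkle_root(a, la) == merkle_root(b, lb)) {
        return -1;
    }
    for (i = 0; i < BLOCKS; i++) {
        if (la[i] != lb[i]) {
            return i;
        }
    }
    return -1;
}
```

The point from the thread holds even in this toy: any two replicas that keep such a tree can be compared pairwise, unlike a dirty bitmap, which is tied to one image and one consumer.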
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 15, 2013 at 7:54 AM, Paolo Bonzini pbonz...@redhat.com wrote:

  But does this really cover all use cases a real synchronous active mirror would provide? I understood that Wolf wants to get every single guest request exposed e.g. on an NBD connection.

  He can use throttling to limit the guest's I/O speed to the size of the asynchronous mirror's buffer.

Throttling is fine for me, and actually what I do today (this is the highest source of overhead for a system that wants to see everything), just with the tracing framework.

--
Wolf
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Thu, May 16, 2013 at 9:44 AM, Richard W.M. Jones rjo...@redhat.com wrote:

  Ideally I'd like to issue some QMP commands which would set up the point-in-time snapshot, and then connect to this snapshot over (eg) NBD, then when I'm done, send some more QMP commands to tear down the snapshot.

This is actually interesting. Does the QEMU nbd server support multiple readers? Essentially, if you're RWMJ (not me), and you're keeping a full mirror, it's clear that the mirror write stream goes to an nbd server, but is it possible to attach a reader to that same nbd server and read things back (read-only)? I know it's possible to name the volumes you attach to, so I think conceptually with the nbd protocol this should work.

I think this document would be better with one or more examples showing how this would be used.

I think the thread now has me looking at making the mirror command 'active' :-) rather than having a new QMP command.

--
Wolf
[Qemu-devel] [PATCH v2 0/2] ps2 auto-repeat
When I execute the sendkey command, the hold parameter doesn't work: no key is repeated in the guest, because ps2 doesn't support the auto-repeat feature. This patchset adds this support.

V2:
 - ignore repeat events from host, re-implement in ps2 device
 - move repeat state to PS2KbdState, fix repeat timer
 - support migration of repeat state

Amos Kong (2):
  ps2: add support of auto-repeat
  ps2: preserve repeat state on migration

 hw/input/ps2.c | 63 ++
 1 file changed, 63 insertions(+)

--
1.8.1.4
[Qemu-devel] [PATCH v2 2/2] ps2: preserve repeat state on migration
Use a subsection to migrate repeat state (repeat period and first delay).

Signed-off-by: Amos Kong ak...@redhat.com
---
 hw/input/ps2.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/hw/input/ps2.c b/hw/input/ps2.c
index 8adbb4a..cdb18e6 100644
--- a/hw/input/ps2.c
+++ b/hw/input/ps2.c
@@ -611,6 +611,13 @@ static const VMStateDescription vmstate_ps2_common = {
     }
 };

+static bool ps2_keyboard_repeatstate_needed(void *opaque)
+{
+    PS2KbdState *s = opaque;
+
+    return s->repeat_period || s->repeat_delay;
+}
+
 static bool ps2_keyboard_ledstate_needed(void *opaque)
 {
     PS2KbdState *s = opaque;
@@ -626,6 +633,18 @@ static int ps2_kbd_ledstate_post_load(void *opaque, int version_id)
     return 0;
 }

+static const VMStateDescription vmstate_ps2_keyboard_repeatstate = {
+    .name = "ps2kbd/repeatstate",
+    .version_id = 3,
+    .minimum_version_id = 2,
+    .minimum_version_id_old = 2,
+    .fields = (VMStateField[]) {
+        VMSTATE_INT32(repeat_period, PS2KbdState),
+        VMSTATE_INT32(repeat_delay, PS2KbdState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription vmstate_ps2_keyboard_ledstate = {
     .name = "ps2kbd/ledstate",
     .version_id = 3,
@@ -665,6 +684,9 @@ static const VMStateDescription vmstate_ps2_keyboard = {
             .vmsd = &vmstate_ps2_keyboard_ledstate,
             .needed = ps2_keyboard_ledstate_needed,
         }, {
+            .vmsd = &vmstate_ps2_keyboard_repeatstate,
+            .needed = ps2_keyboard_repeatstate_needed,
+        }, {
             /* empty */
         }
     }
--
1.8.1.4
[Qemu-devel] [PATCH v2 1/2] ps2: add support of auto-repeat
The guest driver sets the repeat rate and delay time via KBD_CMD_SET_RATE, but the ps2 backend doesn't process it and there is no auto-repeat implementation. This patch adds support for the auto-repeat feature. Repeated events from the host are ignored and auto-repeat is re-implemented in the ps2 device.

The guest ps2 driver sets autorepeat to the fastest possible at reset: period 33ms, delay 250ms.

Tested with the 'sendkey' monitor command. Tested with Linux and Windows guests with SDL, VNC, SPICE and GTK+.

Reference: http://www.computer-engineering.org/ps2keyboard/

Signed-off-by: Amos Kong ak...@redhat.com
---
 hw/input/ps2.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/hw/input/ps2.c b/hw/input/ps2.c
index 3412079..8adbb4a 100644
--- a/hw/input/ps2.c
+++ b/hw/input/ps2.c
@@ -94,6 +94,10 @@ typedef struct {
     int translate;
     int scancode_set; /* 1=XT, 2=AT, 3=PS/2 */
     int ledstate;
+    int repeat_period; /* typematic period, ms */
+    int repeat_delay;  /* typematic delay, ms */
+    int repeat_key;    /* keycode to repeat */
+    QEMUTimer *repeat_timer;
 } PS2KbdState;

 typedef struct {
@@ -146,6 +150,15 @@ void ps2_queue(void *opaque, int b)
     s->update_irq(s->update_arg, 1);
 }

+static void repeat_ps2_queue(void *opaque)
+{
+    PS2KbdState *s = opaque;
+
+    qemu_mod_timer(s->repeat_timer, qemu_get_clock_ns(vm_clock) +
+                   muldiv64(get_ticks_per_sec(), s->repeat_period, 1000));
+    ps2_queue(&s->common, s->repeat_key);
+}
+
 /* keycode is expressed as follow:
    bit 7 - 0 key pressed, 1 = key released */
@@ -167,7 +180,23 @@ static void ps2_put_keycode(void *opaque, int keycode)
             keycode = ps2_raw_keycode_set3[keycode & 0x7f];
         }
     }
+
+    /* ignore repeated events from host, re-implement it */
+    if (keycode == s->repeat_key) {
+        return;
+    }
     ps2_queue(&s->common, keycode);
+    qemu_del_timer(s->repeat_timer);
+
+    /* only auto-repeat press events */
+    if (!(keycode & 0x80)) {
+        s->repeat_key = keycode;
+        /* delay a while before the first repeat */
+        qemu_mod_timer(s->repeat_timer, qemu_get_clock_ns(vm_clock) +
+                       muldiv64(get_ticks_per_sec(), s->repeat_delay, 1000));
+    } else {
+        s->repeat_key = -1;
+    }
 }

 uint32_t ps2_read_data(void *opaque)
@@ -213,6 +242,11 @@ static void ps2_reset_keyboard(PS2KbdState *s)

 void ps2_write_keyboard(void *opaque, int val)
 {
+    /* repeat period/delay table from kernel (drivers/input/keyboard/atkbd.c) */
+    const short period[32] = { 33, 37, 42, 46, 50, 54, 58, 63, 67, 75,
+        83, 92, 100, 109, 116, 125, 133, 149, 167, 182, 200, 217, 232,
+        250, 270, 303, 333, 370, 400, 435, 470, 500 };
+    const short delay[4] = { 250, 500, 750, 1000 };
     PS2KbdState *s = (PS2KbdState *)opaque;

     switch(s->common.write_cmd) {
@@ -288,6 +322,10 @@ void ps2_write_keyboard(void *opaque, int val)
         s->common.write_cmd = -1;
         break;
     case KBD_CMD_SET_RATE:
+        /* Bits 0-4 specify the repeat rate */
+        s->repeat_period = period[val & 0x1f];
+        /* Bits 5-6 specify the delay time */
+        s->repeat_delay = delay[(val >> 5) & 0x3];
         ps2_queue(&s->common, KBD_REPLY_ACK);
         s->common.write_cmd = -1;
         break;
@@ -536,6 +574,9 @@ static void ps2_kbd_reset(void *opaque)
     s->scan_enabled = 0;
     s->translate = 0;
     s->scancode_set = 0;
+    s->repeat_period = 92;
+    s->repeat_delay = 500;
+    s->repeat_timer = qemu_new_timer_ns(vm_clock, repeat_ps2_queue, s);
 }

 static void ps2_mouse_reset(void *opaque)
--
1.8.1.4
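The KBD_CMD_SET_RATE decoding in the patch packs two fields into one argument byte: bits 0-4 index the typematic period table, bits 5-6 the delay table. A worked example of that decoding, with the tables copied from the patch (originally from the Linux atkbd driver) and a hypothetical helper name:

```c
#include <assert.h>

/* Tables as in the patch: period in ms indexed by bits 0-4, first-repeat
 * delay in ms indexed by bits 5-6 of the SET_RATE argument byte. */
static const short period[32] = { 33, 37, 42, 46, 50, 54, 58, 63, 67, 75,
    83, 92, 100, 109, 116, 125, 133, 149, 167, 182, 200, 217, 232,
    250, 270, 303, 333, 370, 400, 435, 470, 500 };
static const short delay[4] = { 250, 500, 750, 1000 };

/* Decode one KBD_CMD_SET_RATE argument byte into period/delay in ms. */
static void decode_set_rate(int val, int *period_ms, int *delay_ms)
{
    *period_ms = period[val & 0x1f];        /* bits 0-4 */
    *delay_ms = delay[(val >> 5) & 0x3];    /* bits 5-6 */
}
```

For example, 0x0b selects period index 11 (92ms, the reset default in the patch) with delay index 0 (250ms), while 0x2b keeps the same period but selects delay index 1 (500ms).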
Re: [Qemu-devel] New targets (was: [PATCH] qapi-schema.json: Reformat TargetType enum to one-per-line)
Peter Maydell peter.mayd...@linaro.org writes:

On 22 May 2013 15:48, Anthony Liguori aligu...@us.ibm.com wrote:

Andreas Färber afaer...@suse.de writes:

Am 22.05.2013 16:28, schrieb Anthony Liguori:

  So are incompletely implemented targets (wrt instruction set) eligible for upstream these days?

Aren't most of our targets incomplete by some standard? I thought the typical approach for a target was to get linux-user working first, and then go for softmmu. As long as linux-user can run applications, I think it's fair game for merging.

I thought they'd need to be able to run at least some small image successfully, preferably Linux where possible.

I think that's far too high of a hurdle.

I think the bar should probably be at about either "can run some plausible set of Linux binaries in user mode" or "can run some softmmu image (eg Linux with a minimal config)", whichever the target submitter prefers. Basically something plausibly useful to somebody other than the developer.

Another way to put this, and this is true for *all* contributions, is that the bar for acceptance is whether something is useful even if the submitter immediately stopped contributing after the patch was merged.

Regards, Anthony Liguori

thanks
--
PMM
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
Il 22/05/2013 17:51, Wolfgang Richter ha scritto:

  On Thu, May 16, 2013 at 9:44 AM, Richard W.M. Jones rjo...@redhat.com wrote:

    Ideally I'd like to issue some QMP commands which would set up the point-in-time snapshot, and then connect to this snapshot over (eg) NBD, then when I'm done, send some more QMP commands to tear down the snapshot.

  This is actually interesting. Does the QEMU nbd server support multiple readers?

Yes.

  Essentially, if you're RWMJ (not me), and you're keeping a full mirror, it's clear that the mirror write stream goes to an nbd server, but is it possible to attach a reader to that same nbd server and read things back (read-only)?

Yes, it can be done with both qemu-nbd and the QEMU nbd server commands.

Paolo
[Qemu-devel] [Bug 1182344] Re: ARM: invalid code execution after subs instruction
Thanks a lot for your help! It is not a QEMU problem. I ported the code from a pre-ARMv7 environment. In the assembler code I have this:

  msr spsr, r5

GNU as translates this to:

  msr SPSR_fc, r5

Correct is this:

  msr SPSR_fsxc, r5

I fixed the assembler source and now all SPSR fields are updated on exception return.

** Changed in: qemu
   Status: New => Invalid

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1182344

Title: ARM: invalid code execution after subs instruction

Status in QEMU: Invalid

Bug description:
I use QEMU (Git 1239b472bb0dba8060f1af29d40dafbc1b2860d4) to test an SMP application on ARM. I use the following command line:

  qemu-system-arm -S -s -net none -nographic -M realview-pbx-a9 -kernel app.exe -m 256M -no-reboot -smp 2 -icount 8

The application crashes because QEMU executes the B.N instruction wrong after a SUBS (return from interrupt):

  0x00010428 thread_dispatch_done+12:  vldmia r1!, {d16-d31}
  0x0001042c thread_dispatch_done+16:  ldr    r0, [r1]
  0x00010430 thread_dispatch_done+20:  add    sp, sp, #200  ; 0xc8
  0x00010434 thread_dispatch_done+24:  vmsr   fpscr, r0
  0x00010438 thread_dispatch_done+28:  ldmfd  sp!, {lr}
  0x0001043c thread_dispatch_done+32:  mov    r0, sp
  0x00010440 thread_dispatch_done+36:  add    sp, sp, #28
  0x00010444 thread_dispatch_done+40:  mrs    r1, CPSR
  0x00010448 thread_dispatch_done+44:  bic    r1, r1, #1
  0x0001044c thread_dispatch_done+48:  msr    CPSR_fc, r1
  0x00010450 thread_dispatch_done+52:  push   {r4, r5}
  0x00010454 thread_dispatch_done+56:  ldm    r0, {r0, r1, r2, r3, r4, r5, r12}
  0x00010458 thread_dispatch_done+60:  mov    lr, r4
  0x0001045c thread_dispatch_done+64:  msr    SPSR_fc, r5
  0x00010460 thread_dispatch_done+68:  pop    {r4, r5}
  0x00010464 thread_dispatch_done+72:  subs   pc, lr, #4
  0x00000690 _Thread_Idle_body+8:      b.n    0x690 _Thread_Idle_body+8
  0x00000692:                          nop
  0x00000694 __getreent+0:             push   {r7, lr}
  0x00000696 __getreent+2:             add    r7, sp, #0
  0x00000698 __getreent+4:             bl     0x1cf0 bsp_smp_processor_id
  0x00001cf0 bsp_smp_processor_id+0:   push   {r7, lr}

This instruction trace was generated with the attached patch. The

  0x00000690 _Thread_Idle_body+8:  b.n  0x690 _Thread_Idle_body+8

should jump to itself. Instead the next instruction is executed:

  0x00000692:  nop

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1182344/+subscriptions
Re: [Qemu-devel] 'qemu-nbd' explicit flush
Thank you guys for responding!!

  1. Add a signal handler (like SIGHUP or SIGUSR1) to qemu-nbd which flushes all exports. That would be a useful addition anyway.

  Paolo

This is exactly what I was going to try today. I'm just getting familiar with the QEMU source. I'll let you know how it goes. Thanks again Paolo and Stefan.

Regards,
Mark Trumpold

-----Original Message-----
From: Paolo Bonzini [mailto:pbonz...@redhat.com]
Sent: Wednesday, May 22, 2013 04:07 AM
To: 'Stefan Hajnoczi'
Cc: 'Mark Trumpold', qemu-devel@nongnu.org, ma...@tachyon.net
Subject: Re: 'qemu-nbd' explicit flush

Il 22/05/2013 11:47, Stefan Hajnoczi ha scritto:

  On Tue, May 21, 2013 at 08:01:10PM +0000, Mark Trumpold wrote:

    Linux kernel 3.3.1 with Qemu patch to enable kernel flushing: http://thread.gmane.org/gmane.linux.drivers.nbd.general/1108

  Did you check that the kernel is sending NBD_FLUSH commands? You can use tcpdump and then check the captured network traffic.

    Usage example:
      'qemu-nbd --cache=writeback -c /dev/nbd0 /images/my-qcow.img'
      'mount /dev/nbd0 /my-mount-point'

    Everything does flush correctly when I first unmount and then disconnect the device; however, in my case I am not able to unmount things before snapshotting. I tried several approaches externally to flush the device. For example:
      'mount -o remount,ro /dev/nbd0'
      'blockdev --flushbufs /dev/nbd0'

Did you try plain old sync(1)? This could also work:

  dd if=/dev/zero of=dummy oflag=sync bs=512 count=1

1. Add a signal handler (like SIGHUP or SIGUSR1) to qemu-nbd which flushes all exports. That would be a useful addition anyway.

Paolo
Re: [Qemu-devel] RFC: Full introspection support for QMP
Kevin Wolf kw...@redhat.com writes:

Am 22.05.2013 um 15:40 hat Amos Kong geschrieben:

  Hi all,

  We already have query-command-line-options to query details of command-line options. As we discussed on the list, we also need full introspection of QMP (commands). The qmp-events also need to be dumped; we can define events in qapi-schema.json. We can also dump QMP errors in the future if needed.

  Command name: query-qmp-schema
  Return: returns the contents of qapi-schema.json in JSON format.

  Solution to query JSON content from C code: qapi-schema.json is processed by the qapi Python scripts to generate C files. I found the content is good enough for libvirt to know the QMP command schema. We can change the qapi scripts to generate a table/list to record the raw string, then we can return the raw string in qmp_query_qmp_schema().

Yes, the schema as defined in qapi-schema.json should be good to be sent over the wire. Maybe we should already now consider that we'll want to have a dynamic schema eventually: depending on which modules are compiled in (or even which modules are loaded, when we go forward with shared libraries), some types, commands or enum values may be available or not. For example, libvirt wants to query which block drivers it can use. It doesn't really matter for which drivers we had the source initially, but only which drivers are compiled in (possibly loaded) and can actually be used.

The schema is the wrong place to discover this. Loading a module wouldn't add an enumeration value. The enumeration values are fixed. We should introduce commands to query this kind of information.

Schema introspection is primarily useful for dynamic languages to autogenerate bindings. It's not terribly useful for querying capabilities/features.

Regards, Anthony Liguori

  By default, return the complete schema in one go. Also support querying an unknown type in the new command:
  -> { "execute": "query-qmp-schema", "arguments": { "command": "query-status" } }
  <- { "return": { "command": "query-status", "returns": "StatusInfo" } }

  -> { "execute": "query-qmp-schema", "arguments": { "type": "StatusInfo" } }
  <- { "return": { "type": "StatusInfo", "data": { "running": "bool", "singlestep": "bool", "status": "RunState" } } }

  -> { "execute": "query-qmp-schema", "arguments": { "event": "RX-FILTER-CHANGE" } }

Maybe a (shortened) example for the complete schema version? I assume it will be just an array of the structs you showed here?

Kevin
Re: [Qemu-devel] [RFC] reverse execution.
On 18/05/2013 20:52, Blue Swirl wrote:

On Fri, May 17, 2013 at 5:23 PM, KONRAD Frédéric fred.kon...@greensocs.com wrote:

On 09/05/2013 19:54, Blue Swirl wrote:

On Tue, May 7, 2013 at 6:27 PM, KONRAD Frédéric fred.kon...@greensocs.com wrote:

  Hi,

  We are trying to find a way to make reverse execution happen with QEMU. Currently it is possible to debug the guest through the gdbstub; we want to make reverse execution possible with GDB as well.

  How we are trying to make that work (basically, without optimisation):

  - QEMU takes regular snapshots of the VM: that can be done with the savevm code, without optimisation first.
  - When the VM is stopped and GDB requests a reverse-step: load the last snapshot and replay to one instruction before the current PC.

  There is one issue with that for now (for basic running reverse execution):

  - How to stop one instruction before the actual PC.

Add a special translation mode for reverse execution where the next PC is checked after each instruction. Alternatively, you could make temporary snapshots during this mode (first 1s intervals, then 0.1s etc.) which could be used to find the location. I think this way was discussed briefly earlier on the list, please check the archives.

Hi, thanks for your answer! I didn't find the discussion in the archive. Do you have a clue? (Title or sender?)

Paul Brook (long-time QEMU developer) wrote a paper about this together with Daniel Jacobowitz: http://www.linuxsymposium.org/archives/GCC/Reprints-2007/jacobowitz-reprint.pdf

IIRC Paul also mentioned some techniques on the list at that time but I couldn't find that in the archives. Other related discussions:

http://article.gmane.org/gmane.comp.emulators.qemu/88447
http://article.gmane.org/gmane.comp.emulators.qemu/94861
http://article.gmane.org/gmane.comp.emulators.qemu/154572

Also this site contains an overview of reverse debugging: http://jakob.engbloms.se/archives/1554

Thanks for your help :).
For now we tried some other things, which are not working very well. It appeared that the replay is not deterministic even with icount:

- the whole icount mechanism is not saved with save_vm (which can be fixed by moving qemu_icount to TimerState, according to Paolo)
- replaying the same thing twice and stopping at a specific breakpoint shows two different vmclock values, so replaying the same amount of time doesn't work, and we abandoned this idea.

We tried to count the number of times tcg_qemu_tb_exec exited having executed some TB, and stopped one before during the replay. This is nearly working, but:

- tcg_qemu_tb_exec exits a few more times during the first replay; it seems the TB linked list is split dynamically?
- this works for subsequent replays (reverse-stepi), but we can't stop at the exact PC instruction with this method.

So we will try to add an instruction counter in the CPUState and increment it after each instruction in the translated code, which I think is approximately what you suggest. Then, when replaying the code from the snapshot, we will check the number of executed instructions and stop one instruction before. Maybe we can re-use the icount mechanism, but this might be a lot more complicated as it is a down-counter. Can this work? Maybe we will need to trace the PC from the snapshot to the exact location? That should be easy, but not the fastest way. Or use both mechanisms to get to the right location?

Yes, you could load the VM from the previous snapshot and then use icount or just a host timer to get approximately halfway. Make a new snapshot and then try again, starting from that snapshot. When you get close enough, single-step to the final instruction.

Well, finally we plan to do it approximately this way: we added a translation block counter, which is incremented by TCG code the same way as icount, and a debug exception is raised when we are at the right location. Unfortunately this is not sufficient in terms of precision; we can jump back one TB.
We have two choices:

a/ keep this executed-translation-block counter and go step by step to the right location.
b/ transform this counter into an executed-instruction counter like icount, and do step-by-step execution when we are replaying.

The first is a bit difficult, as we don't have the exact PC location where to stop, and the second can be really slow (I don't have performance measurements at the moment). Maybe we can try mixing both: replaying to the start of the right TB, then going step by step to the right PC.

Fred

Thanks, Fred

We thought that using -icount and stopping the guest a little before the actual position would give us the right behavior (we use a qemu_timer with vm_clock to stop the VM at the right time), but it seems that this is not deterministic and not reproducible. Is that normal? We don't make any input during the replay, and we thought it could be caused by some timer interruption, but -icount is using a virtual timer as I
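The replay scheme discussed in this thread — re-execute deterministically from a snapshot, counting instructions, and stop exactly one instruction before the target so GDB's reverse-step lands in the right place — can be modeled with a toy, purely illustrative "CPU" (no TCG involved; all names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Toy deterministic CPU: each step executes one fixed-size instruction
 * and bumps the instruction counter, mirroring the per-instruction
 * counter proposed for CPUState. */
typedef struct {
    uint64_t icount;  /* instructions executed since the snapshot */
    uint64_t pc;      /* toy program counter */
} ToyCPU;

static void step(ToyCPU *cpu)
{
    cpu->pc += 4;     /* pretend every instruction is 4 bytes */
    cpu->icount++;
}

/* Replay from the snapshot (state at icount 0) and stop exactly one
 * instruction before the target count -- the reverse-step destination. */
static ToyCPU replay_to(uint64_t target)
{
    ToyCPU cpu = { 0, 0x10000 };  /* restored snapshot state */

    while (cpu.icount + 1 < target) {
        step(&cpu);
    }
    return cpu;
}
```

Because the toy execution is fully deterministic, replaying to the same target always reaches the same (icount, pc) pair — exactly the property the thread says real replay lacks unless the icount state is saved with the snapshot.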
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 22, 2013 at 12:11 PM, Paolo Bonzini pbonz...@redhat.com wrote:

  Essentially, if you're RWMJ (not me), and you're keeping a full mirror, it's clear that the mirror write stream goes to an nbd server, but is it possible to attach a reader to that same nbd server and read things back (read-only)?

  Yes, it can be done with both qemu-nbd and the QEMU nbd server commands.

Then this means, if there was an active mirror (or a snapshot being created), it would be easy to attach an nbd client as a reader to it even as it is being synchronized (perhaps dangerous?).

--
Wolf
[Qemu-devel] [PATCH v2] wdt_i6300esb: fix vmstate versioning
When this VMSD was introduced, its version fields were set to sizeof(I6300State), making them essentially random from build to build, version to version. To fix this, we lock in a high version id and a low minimum version id to support old-to-new migration from all prior versions of this device's state. This should work since the device state has not changed since its introduction.

This potentially breaks migration from 1.5+ to 1.5, but since the versioning was essentially random prior to this patch, new-to-old migration was not consistently functional to begin with.

Reported-by: Nicholas Thomas n...@bytemark.co.uk
Suggested-by: Peter Maydell peter.mayd...@linaro.org
Cc: qemu-sta...@nongnu.org
Signed-off-by: Michael Roth mdr...@linux.vnet.ibm.com
---
v2:
 * Fixed s/except/accept/ typo (Laszlo)

 hw/watchdog/wdt_i6300esb.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/hw/watchdog/wdt_i6300esb.c b/hw/watchdog/wdt_i6300esb.c
index 1407fba..05af0b1 100644
--- a/hw/watchdog/wdt_i6300esb.c
+++ b/hw/watchdog/wdt_i6300esb.c
@@ -374,9 +374,22 @@ static const MemoryRegionOps i6300esb_ops = {
 static const VMStateDescription vmstate_i6300esb = {
     .name = "i6300esb_wdt",
-    .version_id = sizeof(I6300State),
-    .minimum_version_id = sizeof(I6300State),
-    .minimum_version_id_old = sizeof(I6300State),
+    /* With this VMSD's introduction, version_id/minimum_version_id were
+     * erroneously set to sizeof(I6300State), causing a somewhat random
+     * version_id to be set for every build. This eventually broke
+     * migration.
+     *
+     * To correct this without breaking old-new migration for older versions
+     * of QEMU, we've set version_id to a value high enough to exceed all past
+     * values of sizeof(I6300State) across various build environments, and have
+     * reset minimum_version_id_old/minimum_version_id to 1, since this VMSD
+     * has never changed and thus can accept all past versions.
+     *
+     * For future changes we can treat these values as we normally would.
+     */
+    .version_id = 10000,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
     .fields = (VMStateField []) {
         VMSTATE_PCI_DEVICE(dev, I6300State),
         VMSTATE_INT32(reboot_enabled, I6300State),
--
1.7.9.5
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 22, 2013 at 11:51:16AM -0400, Wolfgang Richter wrote: This is actually interesting. Does the QEMU nbd server support multiple readers?

Yes. qemu-nbd has a -e/--shared=N option which appears to do exactly what it says in the man page.

  $ guestfish -N fs exit
  $ ls -lh test1.img
  -rw-rw-r--. 1 rjones rjones 100M May 22 17:37 test1.img
  $ qemu-nbd -e 3 -r -t test1.img

From another shell:

  $ guestfish --format=raw -a nbd://localhost

  Welcome to guestfish, the guest filesystem shell for
  editing virtual machine filesystems and disk images.

  Type: 'help' for help on commands
        'man' to read the manual
        'quit' to quit the shell

  ><fs> run
  ><fs> list-filesystems
  /dev/sda1: ext2

Run up to two extra guestfish instances, with the same result. The fourth guestfish instance hangs at the 'run' command until one of the first three is told to exit.

Rich.

-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones virt-top is 'top' for virtual machines. Tiny program with many powerful monitoring features, net stats, disk stats, logging, etc. http://people.redhat.com/~rjones/virt-top
Re: [Qemu-devel] [PATCH v2] wdt_i6300esb: fix vmstate versioning
On Wed, May 22, 2013 at 11:32:51AM -0500, Michael Roth wrote: When this VMSD was introduced it's version fields were set to sizeof(I6300State), making them essentially random from build to build, version to version. To fix this, we lock in a high version id and low minimum version id to support old-new migration from all prior versions of this device's state. This should work since the device state has not changed since its introduction. The potentially breaks migration from 1.5+ to 1.5, but since the versioning was essentially random prior to this patch, new-old migration was not consistently functional to begin with. Reported-by: Nicholas Thomas n...@bytemark.co.uk Suggested-by: Peter Maydell peter.mayd...@linaro.org Cc: qemu-sta...@nongnu.org Signed-off-by: Michael Roth mdr...@linux.vnet.ibm.com ACK. I guess no one uses watchdog much, or they don't try to migrate those guests :-( Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v
Re: [Qemu-devel] RFC: Full introspection support for QMP
On Wed, 22 May 2013 21:40:07 +0800 Amos Kong ak...@redhat.com wrote:

Hi all,

We already have query-command-line-options to query details of command-line options. As we discussed on the list, we also need full introspection of QMP (commands). The qmp-events also need to be dumped; we can define events in qapi-schema.json. We can also dump QMP errors in the future if it's needed.

Command name: query-qmp-schema
Return: returns the contents of qapi-schema.json in JSON format.

Solution to query the JSON content from C code: qapi-schema.json is processed by the qapi python scripts to generate C files; I found the content is good enough for libvirt to know the QMP command schema. We can change the qapi scripts to generate a table/list recording the raw string, then we can return the raw string in qmp_query_qmp_schema().

By default, return the complete schema in one go, and support querying an unknown type in the new command:

-> { 'execute': 'query-qmp-schema', 'arguments': { 'command': 'query-status' } }
<- { 'return': { 'data': { 'command': 'query-status', 'returns': 'StatusInfo' } } }

-> { 'execute': 'query-qmp-schema', 'arguments': { 'type': 'StatusInfo' } }
<- { 'return': { 'data': { 'type': 'StatusInfo', 'data': { 'running': 'bool', 'singlestep': 'bool', 'status': 'RunState' } } } }

Looks good, but as Kevin said an example of the schema output would be good. Feel free to post patches along :)

-> { 'execute': 'query-qmp-schema', 'arguments': { 'event': 'RX-FILTER-CHANGE' } }

You'll need to add the events to the schema some way. I'm not sure I'd mix the projects in the same series; maybe you could add event support to the schema after query-qmp-schema gets merged.

Welcome your comments, thanks!

Target: 1.6
Related RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=557939
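For what it's worth, a shortened, purely illustrative sketch of what the complete-schema reply could look like (the command is only proposed at this point, so the exact shape is an assumption; the entries follow the per-item examples in the thread, i.e. an array of the same structs):

```
-> { 'execute': 'query-qmp-schema' }
<- { 'return': [
       { 'command': 'query-status', 'returns': 'StatusInfo' },
       { 'type': 'StatusInfo',
         'data': { 'running': 'bool', 'singlestep': 'bool',
                   'status': 'RunState' } },
       { 'enum': 'RunState', 'data': [ 'running', 'paused', ... ] },
       ...
     ] }
```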
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 22, 2013 at 12:42 PM, Richard W.M. Jones rjo...@redhat.com wrote: Run up to two extra guestfish instances, with the same result. The fourth guestfish instance hangs at the 'run' command until one of the first three is told to exit. Are you interested in being notified when a snapshot is safe to read from? Or is it valuable to try reading immediately? -- Wolf
[Qemu-devel] June 3rd Workshop in Pittsburgh, PA, USA
I am in charge of a workshop happening at CMU with 21 guests currently registered. It will be on using QEMU/KVM, coding inside those codebases, using libvirt, and possibly OpenStack. We will have several talks during the day on how people have used QEMU + KVM in their own research, tips and tricks, best practices they've come across, and any stumbling blocks encountered. At the end of the workshop we will have tutorial sessions on just using QEMU/KVM (possibly in conjunction with libvirt) and also benchmarking with these systems etc. If you're in the Pittsburgh area, and would like to attend, please feel free to contact me. Breakfast and lunch would be included, and currently registration is free. -- Wolf
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 22, 2013 at 02:32:37PM -0400, Wolfgang Richter wrote: On Wed, May 22, 2013 at 12:42 PM, Richard W.M. Jones rjo...@redhat.comwrote: Run up to two extra guestfish instances, with the same result. The fourth guestfish instance hangs at the 'run' command until one of the first three is told to exit. And your interested on being notified when a snapshot is safe to read from? Or is it valuable to try reading immediately? I'm not sure I understand the question. I assumed (maybe wrongly) that if we had an NBD address (ie. Unix socket or IP:port) then we'd just connect to that and go. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW
Re: [Qemu-devel] [PATCH v2] wdt_i6300esb: fix vmstate versioning
On 05/22/13 18:32, Michael Roth wrote: When this VMSD was introduced it's version fields were set to sizeof(I6300State), making them essentially random from build to build, version to version. To fix this, we lock in a high version id and low minimum version id to support old-new migration from all prior versions of this device's state. This should work since the device state has not changed since its introduction. The potentially breaks migration from 1.5+ to 1.5, but since the versioning was essentially random prior to this patch, new-old migration was not consistently functional to begin with. Reported-by: Nicholas Thomas n...@bytemark.co.uk Suggested-by: Peter Maydell peter.mayd...@linaro.org Cc: qemu-sta...@nongnu.org Signed-off-by: Michael Roth mdr...@linux.vnet.ibm.com --- v2: * Fixed s/except/accept/ typo (Laszlo) hw/watchdog/wdt_i6300esb.c | 19 --- 1 file changed, 16 insertions(+), 3 deletions(-) Always alert to make a difference :), Reviewed-by: Laszlo Ersek ler...@redhat.com
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 22, 2013 at 3:26 PM, Richard W.M. Jones rjo...@redhat.com wrote: On Wed, May 22, 2013 at 02:32:37PM -0400, Wolfgang Richter wrote: On Wed, May 22, 2013 at 12:42 PM, Richard W.M. Jones rjo...@redhat.com wrote: Run up to two extra guestfish instances, with the same result. The fourth guestfish instance hangs at the 'run' command until one of the first three is told to exit. And your interested on being notified when a snapshot is safe to read from? Or is it valuable to try reading immediately? I'm not sure I understand the question. I assumed (maybe wrongly) that if we had an NBD address (ie. Unix socket or IP:port) then we'd just connect to that and go. I meant: is there interest in reading from a disk that isn't yet fully synchronized with the original disk (it might have old blocks)? Or would you only want to connect once a (complete) snapshot is available (synchronized completely to some point-in-time)? -- Wolf
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
On Wed, May 22, 2013 at 03:38:33PM -0400, Wolfgang Richter wrote: On Wed, May 22, 2013 at 3:26 PM, Richard W.M. Jones rjo...@redhat.comwrote: On Wed, May 22, 2013 at 02:32:37PM -0400, Wolfgang Richter wrote: On Wed, May 22, 2013 at 12:42 PM, Richard W.M. Jones rjo...@redhat.com wrote: Run up to two extra guestfish instances, with the same result. The fourth guestfish instance hangs at the 'run' command until one of the first three is told to exit. And your interested on being notified when a snapshot is safe to read from? Or is it valuable to try reading immediately? I'm not sure I understand the question. I assumed (maybe wrongly) that if we had an NBD address (ie. Unix socket or IP:port) then we'd just connect to that and go. I meant if there was interest in reading from a disk that isn't fully synchronized (yet) to the original disk (it might have old blocks). Or would you only want to connect once a (complete) snapshot is available (synchronized completely to some point-in. IIUC a disk which wasn't fully synchronized wouldn't necessarily be interpretable by libguestfs, so I guess we would need the complete snapshot. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org
[Qemu-devel] [Bug 661696] Re: incomplete emulation of fstenv under TCG
I've had quite some problems with this bug as well. It would be really nice if it could be fixed. I have ported Chalkerx's patch to QEMU-1.5.0. The patch is attached. // MOKI ** Patch added: patch-qemu-1.5.0-fpip.diff https://bugs.launchpad.net/qemu/+bug/661696/+attachment/3683865/+files/patch-qemu-1.5.0-fpip.diff -- You received this bug notification because you are a member of qemu- devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/661696 Title: incomplete emulation of fstenv under TCG Status in QEMU: New Bug description: Steps to reproduce:
1) Install Windows (tried XP and 7) in qemu (tried qemu without kvm and qemu-kvm).
2) Get OllyDbg ( http://ollydbg.de/odbg200.zip ).
3) Use some Metasploit-encoded file, example included. It is not a virus! The file was generated with Metasploit, command (if I remember it right): `msfpayload windows/exec cmd=notepad R | msfencode -e x86/shikata_ga_nai -t exe -o cmd_exec_notepad.shikata_ga_nai.exe`.
4) Launch the file under Windows in qemu, make sure it opens a notepad.
5) Open the file under OllyDbg, run (F9) it there. It opens a notepad. Close OllyDbg.
6) Open the file under OllyDbg, trace over (Ctrl+F12) it there. It fails with `Access violation when writing to [some address]`. Command: 316A 13, XOR DWORD PTR DS:[EDX+13],EBP
Under native Windows, the trace-over command works fine. Under VMware the trace works fine. Under VirtualBox it also fails (checked in the spring). $ qemu-kvm --version QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5), Copyright (c) 2003-2008 Fabrice Bellard To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/661696/+subscriptions
[Qemu-devel] [Bug 1183083] [NEW] Unable to install Windows 2012 Server in qemu VM
Public bug reported: Trying to use qemu to create a new Windows 2012 Server virtual system on an Ubuntu 32-bit host. The VM boots, runs a progress-bar through twice, displays the Windows logo for a few seconds, then displays the error message... Your PC needs to restart. Please hold down the power button. Error Code: 0x005D Parameters: 0x078BFBF9 0x2191ABF9 0c 0c Complete command line is qemu-system-x86_64 -no-kvm -m 1478 -vnc :29 -boot order=cd -hda win2k12-C.bin -cdrom win2k12.iso -name win2k12 -vga vmware -rtc base=localtime -net nic,vlan=1,model=e1000 -net user,vlan=1,net=10.10.199.0/24,host=10.10.199.78,hostfwd=tcp::32922-10.10.199.15:22,hostfwd=tcp::32980-10.10.199.15:80 -pidfile win2k12.pid -daemonize where file win2k12.iso is a symlink to file en_windows_server_2012_x64_dvd_915478.iso with MD5 fingerprint da91135483e24689bfdaf05d40301506. ** Affects: qemu (Ubuntu) Importance: Undecided Status: New ** Project changed: qemu => qemu (Ubuntu) -- You received this bug notification because you are a member of qemu- devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/1183083 Title: Unable to install Windows 2012 Server in qemu VM Status in “qemu” package in Ubuntu: New Bug description: Trying to use qemu to create a new Windows 2012 Server virtual system on an Ubuntu 32-bit host. The VM boots, runs a progress-bar through twice, displays the Windows logo for a few seconds, then displays the error message...
Your PC needs to restart. Please hold down the power button. Error Code: 0x005D Parameters: 0x078BFBF9 0x2191ABF9 0c 0c Complete command line is qemu-system-x86_64 -no-kvm -m 1478 -vnc :29 -boot order=cd -hda win2k12-C.bin -cdrom win2k12.iso -name win2k12 -vga vmware -rtc base=localtime -net nic,vlan=1,model=e1000 -net user,vlan=1,net=10.10.199.0/24,host=10.10.199.78,hostfwd=tcp::32922-10.10.199.15:22,hostfwd=tcp::32980-10.10.199.15:80 -pidfile win2k12.pid -daemonize where file win2k12.iso is a symlink to file en_windows_server_2012_x64_dvd_915478.iso with MD5 fingerprint da91135483e24689bfdaf05d40301506. To manage notifications about this bug go to: https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1183083/+subscriptions
Re: [Qemu-devel] [RFC] block-trace Low Level Command Supporting Disk Introspection
Il 22/05/2013 22:47, Richard W.M. Jones ha scritto: I meant if there was interest in reading from a disk that isn't fully synchronized (yet) to the original disk (it might have old blocks). Or would you only want to connect once a (complete) snapshot is available (synchronized completely to some point-in. IIUC a disk which wasn't fully synchronized wouldn't necessarily be interpretable by libguestfs, so I guess we would need the complete snapshot. In the case of point-in-time backups (Stefan's block-backup) the plan is to have the snapshot complete from the beginning. Paolo
[Qemu-devel] [Bug 1177774] Re: Gtk+ frontend fails to build
So the *BSD build has been fixed, but someone needs to look into fixing the Gtk+ backend on Solaris. -- You received this bug notification because you are a member of qemu- devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/114 Title: Gtk+ frontend fails to build Status in QEMU: New Bug description: The QEMU Gtk+ frontend fails to build.. cc -I/home/ports/pobj/qemu-1.5.0-rc0/qemu-1.5.0-rc0/tcg -I/home/ports/pobj/qemu-1.5.0-rc0/qemu-1.5.0-rc0/tcg/i386 -I. -I/home/ports/pobj/qemu-1.5.0-rc0/qemu-1.5.0-rc0 -I/home/ports/pobj/qemu-1.5.0-rc0/qemu-1.5.0-rc0/include -Iui -Iui -fPIE -DPIE -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -I/usr/local/include -I/usr/X11R6/include -Wno-redundant-decls -DTIME_MAX=INT_MAX -Wendif-labels -Wmissing-include-dirs -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wold-style-definition -fstack-protector-all -I/usr/local/include -I/usr/local/include/p11-kit-1 -I/usr/include -I/usr/local/include/libpng -I/usr/local/include -I/usr/include -I/usr/X11R6/include/pixman-1 -I/home/ports/pobj/qemu-1.5.0-rc0/qemu-1.5.0-rc0/dtc/libfdt -pthread -I/usr/local/include/glib-2.0 -I/usr/local/lib/glib-2.0/include -I/usr/local/include -I/home/ports/pobj/qemu-1.5.0-rc0/qemu-1.5.0-rc0/tests -I/usr/local/include/gtk-2.0 -I/usr/local/lib/gtk-2.0/include -I/usr/local/include/pango-1.0 -I/usr/local/include/gio-unix-2.0/ -I/usr/X11R6/include -I/usr/local/include/cairo -I/usr/local/include/atk-1.0 -I/usr/X11R6/include/pixman-1 -I/usr/local/include/libpng -I/usr/local/include/gdk-pixbuf-2.0 -I/usr/local/include/harfbuzz -pthread -I/usr/local/include/glib-2.0 -I/usr/local/lib/glib-2.0/include -I/usr/local/include -I/usr/X11R6/include/freetype2 -I/usr/local/include/vte-0.0 -I/usr/local/include/gtk-2.0 -I/usr/local/lib/gtk-2.0/include -I/usr/local/include/pango-1.0 -I/usr/X11R6/include -I/usr/local/include/atk-1.0 
-I/usr/local/include/gdk-pixbuf-2.0 -I/usr/local/include/harfbuzz -I/usr/local/include/gio-unix-2.0/ -pthread -I/usr/local/include/cairo -I/usr/local/include/glib-2.0 -I/usr/local/lib/glib-2.0/include -I/usr/local/include -I/usr/X11R6/include/pixman-1 -I/usr/X11R6/include/freetype2 -I/usr/local/include/libpng -MMD -MP -MT ui/gtk.o -MF ui/gtk.d -O2 -pipe -c -o ui/gtk.o ui/gtk.c In file included from /usr/local/include/gtk-2.0/gtk/gtk.h:234, from ui/gtk.c:44: /usr/local/include/gtk-2.0/gtk/gtkitemfactory.h:47: warning: function declaration isn't a prototype ui/gtk.c:58:17: warning: pty.h: No such file or directory ui/gtk.c: In function 'gd_vc_init': ui/gtk.c:1142: error: storage size of 'tty' isn't known ui/gtk.c:1162: warning: implicit declaration of function 'openpty' ui/gtk.c:1162: warning: nested extern declaration of 'openpty' ui/gtk.c:1166: warning: implicit declaration of function 'tcgetattr' ui/gtk.c:1166: warning: nested extern declaration of 'tcgetattr' ui/gtk.c:1167: warning: implicit declaration of function 'cfmakeraw' ui/gtk.c:1167: warning: nested extern declaration of 'cfmakeraw' ui/gtk.c:1168: warning: implicit declaration of function 'tcsetattr' ui/gtk.c:1168: warning: nested extern declaration of 'tcsetattr' ui/gtk.c:1168: error: 'TCSAFLUSH' undeclared (first use in this function) ui/gtk.c:1168: error: (Each undeclared identifier is reported only once ui/gtk.c:1168: error: for each function it appears in.) ui/gtk.c:1142: warning: unused variable 'tty' To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/114/+subscriptions
Re: [Qemu-devel] [GIT PULL for-1.5] Trivial patches for 2013-05-18
Pulled. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH 0/2 v2] Chardev related fixes
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH for-1.5] Revert migration: don't account sleep time for calculating bandwidth
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH for-1.5] virtio: add virtio_bus_get_dev_path.
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH] ui/gtk.c: Fix *BSD build of Gtk+ UI
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH v2 1/1] qom/object: Don't poll cast cache for NULL objects
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH] chardev: Get filename for new qapi backend
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH for-1.5 0/2] main-loop: fix slirp on win32
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH for-1.5] osdep: fix qemu_anon_ram_free trace (+ fix compilation on 32 bit hosts)
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH for-1.5 resend] vl: new runstate transition: RUN_STATE_GUEST_PANICKED - RUN_STATE_FINISH_MIGRATE
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH v2] pci-assign: Add MSI affinity support
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH v5] virtio-net: dynamic network offloads configuration
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH 0/2] pci-assign: MSI affinity support
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH v1 1/1] glib: Fix some misuses of gsize/size_t types
Applied. Thanks. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH arm-devs v1 1/5] sd/sd.c: Fix inquiry ACMD41
Hi Igor, On Wed, May 22, 2013 at 11:37 PM, Igor Mitsyanko i.mitsya...@gmail.com wrote: On 05/21/2013 10:50 AM, peter.crosthwa...@xilinx.com wrote: From: Peter Crosthwaite peter.crosthwa...@xilinx.com

The SD command ACMD41 can be used in a read-only mode to query device state without doing the SD card initialisation. This is valid even when the device is already initialised. Fix the command to be responsive in the ready state accordingly.

Signed-off-by: Peter Crosthwaite peter.crosthwa...@xilinx.com
---
 hw/sd/sd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/sd/sd.c b/hw/sd/sd.c
index 2e0ef3e..89bfb7a 100644
--- a/hw/sd/sd.c
+++ b/hw/sd/sd.c
@@ -1277,6 +1277,7 @@ static sd_rsp_type_t sd_app_command(SDState *sd,
     }
     switch (sd->state) {
     case sd_idle_state:
+    case sd_ready_state:
         /* We accept any voltage.  1 V is nothing. */
         if (req.arg)
             sd->state = sd_ready_state;

I couldn't find any info in the SD specification that would confirm this change's correctness; what about Table 4-29 (Card State Transition Table), which states that ACMD41 is illegal in the ready state?

By the letter of the spec I think you are right, although this patch is needed to make my QEMU consistent with my real hardware. I'll dig deeper. Regards, Peter -- Best wishes, Igor Mitsyanko email: i.mitsya...@gmail.com
[Qemu-devel] qemu seabios issue with vhost-scsi
Hi,

While testing vhost-scsi in the current qemu git, I ran into an earlier issue with seabios. I had to disable scsi support in seabios to get it working. I was hoping this issue got resolved when vhost-scsi support got merged into qemu. Is this still being worked on?

Thanks,
Badari

[root ~]# gdb /root/qemu/x86_64-softmmu/qemu-system-x86_64
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type show copying and show warranty for details.
This GDB was configured as x86_64-redhat-linux-gnu.
For bug reporting instructions, please see: http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /root/qemu/x86_64-softmmu/qemu-system-x86_64...done.
(gdb) run --cpu qemu64 --enable-kvm -m 4096 -drive file=/var/lib/libvirt/images/lnx.img,if=ide,cache=writethrough -device vhost-scsi-pci,wwpn=naa.6001405bd4e8476d,event_idx=off -vnc :10 -boot d
Starting program: /root/qemu/x86_64-softmmu/qemu-system-x86_64 --cpu qemu64 --enable-kvm -m 4096 -drive file=/var/lib/libvirt/images/window.img,if=ide,cache=writethrough -device vhost-scsi-pci,wwpn=naa.6001405bd4e8476d,event_idx=off -vnc :10 -boot d
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x77ffa000
[Thread debugging using libthread_db enabled]
[New Thread 0x71c1c700 (LWP 4725)]
[New Thread 0x71239700 (LWP 4726)]
[New Thread 0x7fffeb7ff700 (LWP 4729)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x71239700 (LWP 4726)]
0x556b3191 in scsi_device_find (bus=0x565abb50, channel=0, id=0, lun=0) at hw/scsi/scsi-bus.c:1744
1744        QTAILQ_FOREACH_REVERSE(kid, bus->qbus.children, ChildrenHead, sibling) {
Missing separate debuginfos, use: debuginfo-install cyrus-sasl-gssapi-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-lib-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-md5-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-plain-2.1.23-13.el6_3.1.x86_64 db4-4.7.25-17.el6.x86_64 glib2-2.22.5-7.el6.x86_64 glibc-2.12-1.107.el6.x86_64 gnutls-2.8.5-10.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.10.3-10.el6.x86_64 libcom_err-1.41.12-14.el6.x86_64 libcurl-7.19.7-35.el6.x86_64 libgcrypt-1.4.5-9.el6_2.2.x86_64 libgpg-error-1.7-4.el6.x86_64 libidn-1.18-2.el6.x86_64 libpng-1.2.49-1.el6_2.x86_64 libselinux-2.0.94-5.3.el6.x86_64 libssh2-1.4.2-1.el6.x86_64 libtasn1-2.3-3.el6_2.1.x86_64 ncurses-libs-5.7-3.20090208.el6.x86_64 nspr-4.9.2-1.el6.x86_64 nss-3.14.0.0-12.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64 nss-util-3.14.0.0-2.el6.x86_64 openldap-2.4.23-31.el6.x86_64 openssl-1.0.0-27.el6.x86_64 pixman-0.26.2-4.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  0x556b3191 in scsi_device_find (bus=0x565abb50, channel=0, id=0, lun=0) at hw/scsi/scsi-bus.c:1744
#1  0x557a59f0 in virtio_scsi_device_find (vdev=0x565aba38, vq=0x565d1150) at /root/qemu/hw/scsi/virtio-scsi.c:56
#2  virtio_scsi_handle_cmd (vdev=0x565aba38, vq=0x565d1150) at /root/qemu/hw/scsi/virtio-scsi.c:376
#3  0x557b3410 in access_with_adjusted_size (addr=16, value=0x71238b78, size=2, access_size_min=<value optimized out>, access_size_max=<value optimized out>, access=0x557b4b80 <memory_region_write_accessor>, opaque=0x565ab8f0) at /root/qemu/memory.c:364
#4  0x557b3a3b in memory_region_iorange_write (iorange=<value optimized out>, offset=<value optimized out>, width=<value optimized out>, data=2) at /root/qemu/memory.c:439
#5  0x557b29a6 in kvm_handle_io (env=0x56520aa0) at /root/qemu/kvm-all.c:1485
#6  kvm_cpu_exec (env=0x56520aa0) at /root/qemu/kvm-all.c:1634
#7  0x5576108e in qemu_kvm_cpu_thread_fn (arg=0x56520aa0) at /root/qemu/cpus.c:759
#8  0x76059851 in start_thread () from /lib64/libpthread.so.0
#9  0x75da790d in clone () from /lib64/libc.so.6
Re: [Qemu-devel] [PATCH v4 00/10] curl: fix curl read
On Wed, 05/22 16:39, Stefan Hajnoczi wrote: On Wed, May 22, 2013 at 12:12:24PM +0100, Richard W.M. Jones wrote: On Wed, May 22, 2013 at 01:04:51PM +0200, Paolo Bonzini wrote: Something is trying to write, but there's no write operation defined for CURL. I guess curl (and other backends too) should reject being opened for write. Alternatively, block.c could do that for them. Yes, I'd just got to that conclusion as well :-) The attached patch fixes the crash for me. Please post a top-level thread so that your patch gets picked up by scripts and noticed by humans too :). Alternatively Fam can include it in the next revision of this series. Stefan No problem, I'll post a new version to include it. -- Fam
Re: [Qemu-devel] [PATCH] block/curl.c: Refuse to open the handle for writes.
On Wed, 05/22 16:01, Richard W.M. Jones wrote: From: Richard W.M. Jones rjo...@redhat.com --- block/curl.c | 4 1 file changed, 4 insertions(+) diff --git a/block/curl.c b/block/curl.c index b8935fd..f1e302b 100644 --- a/block/curl.c +++ b/block/curl.c @@ -406,6 +406,10 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags) static int inited = 0; +if (flags BDRV_O_RDWR) { +return -ENOTSUP; +} + opts = qemu_opts_create_nofail(runtime_opts); qemu_opts_absorb_qdict(opts, options, local_err); if (error_is_set(local_err)) { -- 1.8.2.1 Thanks, I'll include this in the new version of my series. -- Fam
Re: [Qemu-devel] qemu seabios issue with vhost-scsi
On Wed, May 22, 2013 at 05:36:08PM -0700, Badari wrote: Hi, While testing vhost-scsi in the current qemu git, ran into an earlier issue with seabios. I had to disable scsi support in seabios to get it working. I was hoping this issue got resolved when vhost-scsi support got merged into qemu. Is this still being worked on ? Hmm, can you try seabios.git? Not sure if seabios shipped by qemu picked up the fixes for vhost-scsi. Thanks, Badari [root ~]# gdb /root/qemu/x86_64-softmmu/qemu-system-x86_64 GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6) Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type show copying and show warranty for details. This GDB was configured as x86_64-redhat-linux-gnu. For bug reporting instructions, please see: http://www.gnu.org/software/gdb/bugs/... Reading symbols from /root/qemu/x86_64-softmmu/qemu-system-x86_64...done. (gdb) run --cpu qemu64 --enable-kvm -m 4096 -drive file=/var/lib/libvirt/images/lnx.img,if=ide,cache=writethrough -device vhost-scsi-pci,wwpn=naa.6001405bd4e8476d,event_idx=off -vnc :10 -boot d Starting program: /root/qemu/x86_64-softmmu/qemu-system-x86_64 --cpu qemu64 --enable-kvm -m 4096 -drive file=/var/lib/libvirt/images/window.img,if=ide,cache=writethrough -device vhost-scsi-pci,wwpn=naa.6001405bd4e8476d,event_idx=off -vnc :10 -boot d warning: no loadable sections found in added symbol-file system-supplied DSO at 0x77ffa000 [Thread debugging using libthread_db enabled] [New Thread 0x71c1c700 (LWP 4725)] [New Thread 0x71239700 (LWP 4726)] [New Thread 0x7fffeb7ff700 (LWP 4729)] Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 0x71239700 (LWP 4726)] 0x556b3191 in scsi_device_find (bus=0x565abb50, channel=0, id=0, lun=0) at hw/scsi/scsi-bus.c:1744 1744QTAILQ_FOREACH_REVERSE(kid, bus-qbus.children, ChildrenHead, sibling) { Missing separate debuginfos, use: debuginfo-install cyrus-sasl-gssapi-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-lib-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-md5-2.1.23-13.el6_3.1.x86_64 cyrus-sasl-plain-2.1.23-13.el6_3.1.x86_64 db4-4.7.25-17.el6.x86_64 glib2-2.22.5-7.el6.x86_64 glibc-2.12-1.107.el6.x86_64 gnutls-2.8.5-10.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.10.3-10.el6.x86_64 libcom_err-1.41.12-14.el6.x86_64 libcurl-7.19.7-35.el6.x86_64 libgcrypt-1.4.5-9.el6_2.2.x86_64 libgpg-error-1.7-4.el6.x86_64 libidn-1.18-2.el6.x86_64 libpng-1.2.49-1.el6_2.x86_64 libselinux-2.0.94-5.3.el6.x86_64 libssh2-1.4.2-1.el6.x86_64 libtasn1-2.3-3.el6_2.1.x86_64 ncurses-libs-5.7-3.20090208.el6.x86_64 nspr-4.9.2-1.el6.x86_64 nss-3.14.0.0-12.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64 nss-util-3.14.0.0-2.el6.x86_64 openldap-2.4.23-31.el6.x86_64 openssl-1.0.0-27.el6.x86_64 pixman-0.26.2-4.el6.x86_64 zlib-1.2.3-29.el6.x86_64 (gdb) bt #0 0x556b3191 in scsi_device_find (bus=0x565abb50, channel=0, id= 0, lun=0) at hw/scsi/scsi-bus.c:1744 #1 0x557a59f0 in virtio_scsi_device_find (vdev=0x565aba38, vq= 0x565d1150) at /root/qemu/hw/scsi/virtio-scsi.c:56 #2 virtio_scsi_handle_cmd (vdev=0x565aba38, vq=0x565d1150) at /root/qemu/hw/scsi/virtio-scsi.c:376 #3 0x557b3410 in access_with_adjusted_size (addr=16, value= 0x71238b78, size=2, access_size_min=value optimized out, access_size_max=value optimized out, access= 0x557b4b80 memory_region_write_accessor, opaque=0x565ab8f0) at /root/qemu/memory.c:364 #4 0x557b3a3b in memory_region_iorange_write ( iorange=value optimized out, offset=value optimized out, width=value optimized out, data=2) at /root/qemu/memory.c:439 #5 0x557b29a6 in kvm_handle_io (env=0x56520aa0) at /root/qemu/kvm-all.c:1485 #6 kvm_cpu_exec (env=0x56520aa0) at 
/root/qemu/kvm-all.c:1634 #7 0x5576108e in qemu_kvm_cpu_thread_fn (arg=0x56520aa0) at /root/qemu/cpus.c:759 #8 0x76059851 in start_thread () from /lib64/libpthread.so.0 #9 0x75da790d in clone () from /lib64/libc.so.6 -- Asias
[Qemu-devel] [PATCH] qemu-kvm: fix unmatched RAM allocation/free
mmap is used in the qemu_vmalloc function instead of qemu_memalign (commit 7dda5dc8), so qemu_ram_free should call munmap instead of qemu_vfree to match. This issue appears when a PCI device is being assigned to a KVM guest: a failure to read the PCI ROM file triggers freeing of the RAM, and the mismatched qemu_vfree call then causes a segmentation fault.

Signed-off-by: Xudong Hao xudong@intel.com
---
 exec.c | 6 +-
 1 files changed, 1 insertions(+), 5 deletions(-)

diff --git a/exec.c b/exec.c
index fa1e0c3..d40d237 100644
--- a/exec.c
+++ b/exec.c
@@ -1152,15 +1152,11 @@ void qemu_ram_free(ram_addr_t addr)
             abort();
 #endif
         } else {
-#if defined(TARGET_S390X) && defined(CONFIG_KVM)
-            munmap(block->host, block->length);
-#else
             if (xen_enabled()) {
                 xen_invalidate_map_cache_entry(block->host);
             } else {
-                qemu_vfree(block->host);
+                munmap(block->host, block->length);
             }
-#endif
         }
         g_free(block);
         break;
-- 
1.5.6
Re: [Qemu-devel] [RFC] reverse execution.
On Fri, May 17, 2013 at 09:16:06PM +0200, Mark Burton wrote: I wish I could say I understood it better, but at this point any insight would be gratefully received. However, what does seem clear is that the intent and purpose of icount is subtly different, and possibly orthogonal, to what we're trying to achieve. And - actually - determinism (or the lack of it) is definitely an issue, but - for now - we have spent much of this week finding a bit of code that avoids any non-deterministic behavior, simply so we can make sure the mechanisms work - THEN we will tackle the thorny subject of what is causing non-deterministic behavior (by which, I _suspect_ I mean: what devices or timers are not adhering to the icount mechanism). To recap, as I understand things, setting the icount value on the command line is intended to give a rough instructions-per-second mechanism. One of the effects of that is to make things more deterministic. Our overall intent is to allow a user who has hit a bug to step backwards. After much discussion (!) I'm convinced by the argument that I might in the end want both of these things. I might want to set some sort of instructions-per-second value (and change it between runs), and if/when I hit a bug, go backwards. Thus far, so good. Underneath the hood, icount keeps a counter in the TCG environment which is decremented (as Fred says), and the icount mechanism plays with it as it sees fit. The bottom line is that, orthogonal to this, we need a separate 'counter' which is almost identical to the icount counter, in order to count instructions for the reverse execution mechanism. We have looked at re-using the icount counter, as Fred said, but that soon lands you in a whole heap of pain. Our conclusion: it would be much cleaner to have a separate dedicated counter; then you can simply use either mechanism independently of the other. On this subject I would like to hear any and all views. 
Having said all of that, in BOTH cases we need determinism. In our case, determinism is very tightly defined (which, I suspect, may not be the case for icount). In our case, having returned to a snapshot, the subsequent execution must follow the EXACT SAME path that it did last time. No ifs, no buts. No IO, no income tax, no VAT, no money back, no guarantee…. Right now, what Fred has found is that sometimes things 'drift'… we will (of course) be looking into that. But, for now, our principal concern is to take a simple bit of code, with no IO and nothing that causes non-determinism - save a snapshot at the beginning of the sequence, run, hit a breakpoint, return to the breakpoint, and be able to _exactly_ return to the place we came from. As Fred said, we imagined that we could do this based on TBs, at least at a 'block' level (which actually may be good enough for us). However, our mechanism for counting TBs was badly broken. Nonetheless, we learnt a lot about TBs - and about some of the non-deterministic behavior that will come back to haunt us later. We also concluded that counting TBs is always going to be second rate, and if we're going to do this properly, we need to count instructions. Finally, we have concluded that re-using the icount counters is going to be very painful; we need to re-use the same mechanism, but we need dedicated counters… Again, please, all - pitch in and say what you think. Fred and I have been scratching our heads all week on this, and I'm not convinced we have come up with the right answers, so any input would be most welcome. Hi, This was a long time ago, but I recall having issues with determinism when hacking on TLMu. Ditching the display timer helped. IIRC, I was getting 100% reproducible runs after that. Cheers, Edgar
Re: [Qemu-devel] [PATCH v3 1/3] debugcon: fix always print addr=0x0, val=0x0 bug
ping ... again.

On Mon, 2013-04-22 at 14:18 +0800, li guang wrote: ping ...

On Tue, 2013-04-16 at 11:53 +0800, liguang wrote:

When DEBUG_DEBUGCON is enabled, the screen spits:

debugcon: write addr=0x val=0x00
Rdebugcon: write addr=0x val=0x00
udebugcon: write addr=0x val=0x00
ndebugcon: write addr=0x val=0x00
ndebugcon: write addr=0x val=0x00
idebugcon: write addr=0x val=0x00
ndebugcon: write addr=0x val=0x00
gdebugcon: write addr=0x val=0x00
debugcon: write addr=0x val=0x00
odebugcon: write addr=0x val=0x00
pdebugcon: write addr=0x val=0x00
tdebugcon: write addr=0x val=0x00
idebugcon: write addr=0x val=0x00
odebugcon: write addr=0x val=0x00
ndebugcon: write addr=0x val=0x00
debugcon: write addr=0x val=0x00
rdebugcon: write addr=0x val=0x00
odebugcon: write addr=0x val=0x00
mdebugcon: write addr=0x val=0x00
debugcon: write addr=0x val=0x00
adebugcon: write addr=0x val=0x00
tdebugcon: write addr=0x val=0x00
debugcon: write addr=0x val=0x00

Oh, that's wrong: val is not always 0. This bug is caused by the lack of a length modifier for the 'x' specifier.

Signed-off-by: liguang lig.f...@cn.fujitsu.com
---
 hw/char/debugcon.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/hw/char/debugcon.c b/hw/char/debugcon.c
index 0588eeb..44c93e1 100644
--- a/hw/char/debugcon.c
+++ b/hw/char/debugcon.c
@@ -55,7 +55,7 @@ static void debugcon_ioport_write(void *opaque, hwaddr addr, uint64_t val,
     unsigned char ch = val;
 
 #ifdef DEBUG_DEBUGCON
-    printf("debugcon: write addr=0x%04x val=0x%02x\n", addr, val);
+    printf("debugcon: write addr=0x%04" HWADDR_PRIx " val=0x%02x\n", addr, val);
 #endif
 
     qemu_chr_fe_write(s->chr, &ch, 1);
Re: [Qemu-devel] [PATCH 4/4][seabios] ec: add ASL for ACPI Embedded Controller
Add seabios mail-list 在 2013-05-22三的 11:46 +0800,liguang写道: Signed-off-by: liguang lig.f...@cn.fujitsu.com --- src/acpi-dsdt.dsl |1 + src/ec.dsl| 51 + src/q35-acpi-dsdt.dsl |1 + 3 files changed, 53 insertions(+), 0 deletions(-) create mode 100644 src/ec.dsl diff --git a/src/acpi-dsdt.dsl b/src/acpi-dsdt.dsl index 158f6b4..4fa2b9a 100644 --- a/src/acpi-dsdt.dsl +++ b/src/acpi-dsdt.dsl @@ -112,6 +112,7 @@ DefinitionBlock ( } Name(FDEN, 1) } +#include ec.dsl } #include acpi-dsdt-isa.dsl diff --git a/src/ec.dsl b/src/ec.dsl new file mode 100644 index 000..c8be420 --- /dev/null +++ b/src/ec.dsl @@ -0,0 +1,51 @@ +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; version 2 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + */ + +Device (EC0) +{ + Name(_HID, EISAID (PNP0C09)) + Name(_UID, 1) + + Method(_CRS, 0) + { + Name(BFFR, ResourceTemplate() + { + IO(Decode16, 0x62, 0x62, 0, 1) // ACPI DATA IN/OUT + IO(Decode16, 0x66, 0x66, 0, 1) // CMD/STS + }) + Return(BFFR) + } + + OperationRegion(ECFD, EmbeddedControl, 0, 0xFF) + Field(ECFD, ByteAcc, Lock, Preserve) + { Offset(1), + CPUS, 8, // 1, CPU status + CPUN, 8, // 2, CPU index + } + + Name(_GPE, 17) + + Method(_Q01) + { + If(LEqual(1, CPUS)) + { + Store(0, CPUS) + } + Else + { + Store(1, CPUS) + } + } +} diff --git a/src/q35-acpi-dsdt.dsl b/src/q35-acpi-dsdt.dsl index c031d83..056b4f8 100644 --- a/src/q35-acpi-dsdt.dsl +++ b/src/q35-acpi-dsdt.dsl @@ -167,6 +167,7 @@ DefinitionBlock ( FDEN, 1 } } +#include ec.dsl } #include acpi-dsdt-isa.dsl
[Qemu-devel] [PATCH v5 00/11] curl: fix curl read
The CURL library API has changed, and the current curl driver is not working. This series rewrites the use of the API as well as the structure of the internal states. BDRVCURLState holds the pointer to the curl multi interface (man 3 libcurl-multi), and 4 lists for internal states:
 - CURLState holds state for a libcurl connection (man 3 libcurl-easy)
 - CURLSockInfo holds information for the libcurl socket interface (man 3 curl_multi_socket_action).
 - CURLDataCache holds the user data read from libcurl; it sits in a list ordered by access, with a used cache entry moved to the list head on access, so the tail element is freed first. BDRVCURLState.cache_quota is the threshold to start freeing cache.
 - CURLAIOCB holds ongoing aio information.

Changes from v4:
 11: Added: block/curl.c: Refuse to open the handle for writes.

Changes from v3:
 01, 06, 07: Add QLIST_INIT in qemu_open to initialize each list.
 07: Move clean up for s->acbs from later patch to here. Use qemu_aio_release instead of g_free on acb. Fix use-after-free bug. [Rich]

Changes from v2:
 00: Removed previous 09, since changes are included in previous commits. previous 09: curl: release everything on curl close.
 01: Add commit message for introducing CURLSockInfo. Remove unnecessary pointer type cast.
 02: Use sizeof instead of macro for s->range len.
 04: Use pstrcpy and strncasecmp.
 06: Add commit message for why there is a cache in curl.c.

Changes from v1:
 Split to multiple patches. The final status should be identical to v1.

Fam Zheng (10):
  curl: introduce CURLSockInfo to BDRVCURLState.
  curl: change magic number to sizeof
  curl: change curl_multi_do to curl_fd_handler
  curl: fix curl_open
  curl: add timer to BDRVCURLState
  curl: introduce CURLDataCache
  curl: make use of CURLDataCache.
  curl: use list to store CURLState
  curl: add cache quota.
  curl: introduce ssl_no_cert runtime option.

Richard W.M. Jones (1):
  block/curl.c: Refuse to open the handle for writes.

 block/curl.c | 578 +--
 1 file changed, 365 insertions(+), 213 deletions(-)

-- 
1.8.2.3
[Qemu-devel] [PATCH v5 02/11] curl: change magic number to sizeof
String field length is duplicated in two places. Make it a sizeof.

Signed-off-by: Fam Zheng f...@redhat.com
---
 block/curl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/curl.c b/block/curl.c
index 2e42cd1..d0bc66f 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -569,7 +569,7 @@ static void curl_readv_bh_cb(void *p)
     state->orig_buf = g_malloc(state->buf_len);
     state->acb[0] = acb;
 
-    snprintf(state->range, 127, "%zd-%zd", start, end);
+    snprintf(state->range, sizeof(state->range) - 1, "%zd-%zd", start, end);
     DPRINTF("CURL (AIO): Reading %d at %zd (%s)\n",
             (acb->nb_sectors * SECTOR_SIZE), start, state->range);
     curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
-- 
1.8.2.3
[Qemu-devel] [PATCH v5 04/11] curl: fix curl_open
Change curl_size_cb to curl_header_cb, matching what the function really does. Fix the registration: CURLOPT_WRITEFUNCTION is apparently wrong and should be CURLOPT_HEADERFUNCTION. Parsing the size from the header is not necessary, since we use curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d). The validity of d is version dependent, per man 3 curl_easy_getinfo:

    ...
    CURLINFO_CONTENT_LENGTH_DOWNLOAD
        Pass a pointer to a double to receive the content-length of the
        download. This is the value read from the Content-Length: field.
        Since 7.19.4, this returns -1 if the size isn't known.
    ...

And Accept-Ranges is essential for curl to fetch the right data from http/https servers, so we check for it in the header and fail the open if it's not supported.

Signed-off-by: Fam Zheng f...@redhat.com
---
 block/curl.c | 44
 1 file changed, 32 insertions(+), 12 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index 4a56cfd..fc464ad 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -89,6 +89,8 @@ typedef struct BDRVCURLState {
     QLIST_HEAD(, CURLSockInfo) socks;
     char *url;
     size_t readahead_size;
+    /* Whether http server accept range in header */
+    bool accept_range;
 } BDRVCURLState;
 
 static void curl_clean_state(CURLState *s);
@@ -133,14 +135,14 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     return 0;
 }
 
-static size_t curl_size_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
+static size_t curl_header_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
 {
-    CURLState *s = ((CURLState*)opaque);
+    BDRVCURLState *s = (BDRVCURLState *)opaque;
     size_t realsize = size * nmemb;
-    size_t fsize;
+    const char *accept_line = "Accept-Ranges: bytes";
 
-    if (sscanf(ptr, "Content-Length: %zd", &fsize) == 1) {
-        s->s->len = fsize;
+    if (strncmp((char *)ptr, accept_line, strlen(accept_line)) == 0) {
+        s->accept_range = true;
     }
 
     return realsize;
@@ -428,6 +430,7 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     Error *local_err = NULL;
     const char *file;
     double d;
+    int running;
 
     static int inited = 0;
 
@@ -469,16 +472,29 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     // Get file size
 
     curl_easy_setopt(state->curl, CURLOPT_NOBODY, 1);
-    curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_size_cb);
-    if (curl_easy_perform(state->curl))
+    curl_easy_setopt(state->curl, CURLOPT_HEADERFUNCTION,
+                     curl_header_cb);
+    curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s);
+    if (curl_easy_perform(state->curl)) {
         goto out;
+    }
     curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d);
-    curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
     curl_easy_setopt(state->curl, CURLOPT_NOBODY, 0);
-    if (d)
+#if LIBCURL_VERSION_NUM >= 0x071304
+    if (d != -1) {
+#else
+    if (d) {
+#endif
         s->len = (size_t)d;
-    else if(!s->len)
+    } else if (!s->len) {
+        goto out;
+    }
+    if ((!strncasecmp(s->url, "http://", strlen("http://"))
+         || !strncasecmp(s->url, "https://", strlen("https://")))
+        && !s->accept_range) {
+        pstrcpy(state->errmsg, CURL_ERROR_SIZE, "Server not supporting range.");
         goto out;
+    }
     DPRINTF("CURL: Size = %zd\n", s->len);
 
     curl_clean_state(state);
@@ -487,10 +503,13 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     // Now we know the file exists and its size, so let's
     // initialize the multi interface!
-
     s->multi = curl_multi_init();
+    if (!s->multi) {
+        goto out_noclean;
+    }
     curl_multi_setopt(s->multi, CURLMOPT_SOCKETDATA, s);
     curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
+    curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
 
     qemu_opts_del(opts);
     return 0;
@@ -500,8 +519,9 @@ out:
     curl_easy_cleanup(state->curl);
     state->curl = NULL;
 out_noclean:
-    g_free(s->url);
     qemu_opts_del(opts);
+    g_free(s->url);
+    s->url = NULL;
     return -EINVAL;
 }
-- 
1.8.2.3
[Qemu-devel] [PATCH v5 05/11] curl: add timer to BDRVCURLState
libcurl uses a timer to manage ongoing sockets and needs us to supply one. This patch introduces a QEMUTimer to BDRVCURLState and handles timeouts as libcurl expects (curl_multi_timer_cb sets the given timeout value on the timer, and curl_timer_cb calls curl_multi_socket_action when it triggers).

Signed-off-by: Fam Zheng f...@redhat.com
---
 block/curl.c | 42 ++
 1 file changed, 42 insertions(+)

diff --git a/block/curl.c b/block/curl.c
index fc464ad..4fd5bb9 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -89,6 +89,7 @@ typedef struct BDRVCURLState {
     QLIST_HEAD(, CURLSockInfo) socks;
     char *url;
     size_t readahead_size;
+    QEMUTimer *timer;
     /* Whether http server accept range in header */
     bool accept_range;
 } BDRVCURLState;
@@ -148,6 +149,38 @@ static size_t curl_header_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
     return realsize;
 }
 
+static void curl_timer_cb(void *opaque)
+{
+    int running;
+    BDRVCURLState *bs = (BDRVCURLState *)opaque;
+    DPRINTF("curl timeout!\n");
+    curl_multi_socket_action(bs->multi, CURL_SOCKET_TIMEOUT, 0, &running);
+}
+
+/* Call back for curl_multi interface */
+static int curl_multi_timer_cb(CURLM *multi, long timeout_ms, void *s)
+{
+    BDRVCURLState *bs = (BDRVCURLState *)s;
+    DPRINTF("curl multi timer cb, timeout: %ld (ms)\n", timeout_ms);
+    if (timeout_ms < 0) {
+        if (bs->timer) {
+            qemu_del_timer(bs->timer);
+            qemu_free_timer(bs->timer);
+            bs->timer = NULL;
+        }
+    } else if (timeout_ms == 0) {
+        curl_timer_cb(bs);
+    } else {
+        if (!bs->timer) {
+            bs->timer = qemu_new_timer_ms(host_clock, curl_timer_cb, s);
+            assert(bs->timer);
+        }
+        qemu_mod_timer(bs->timer, qemu_get_clock_ms(host_clock) + timeout_ms);
+    }
+
+    return 0;
+}
+
 static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
 {
     CURLState *s = ((CURLState*)opaque);
@@ -509,6 +542,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     }
     curl_multi_setopt(s->multi, CURLMOPT_SOCKETDATA, s);
     curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
+    curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
+    curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, curl_multi_timer_cb);
     curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
 
     qemu_opts_del(opts);
@@ -634,6 +669,13 @@ static void curl_close(BlockDriverState *bs)
     int i;
 
     DPRINTF("CURL: Close\n");
+
+    if (s->timer) {
+        qemu_del_timer(s->timer);
+        qemu_free_timer(s->timer);
+        s->timer = NULL;
+    }
+
     for (i=0; i<CURL_NUM_STATES; i++) {
         if (s->states[i].in_use)
             curl_clean_state(&s->states[i]);
-- 
1.8.2.3
[Qemu-devel] [PATCH v5 01/11] curl: introduce CURLSockInfo to BDRVCURLState.
We use the sockets provided by curl in the driver. The libcurl multi interface has the CURLMOPT_SOCKETFUNCTION option for socket notifications. Per man 3 curl_multi_setopt:

    ...
    CURLMOPT_SOCKETFUNCTION
        Pass a pointer to a function matching the curl_socket_callback
        prototype. The curl_multi_socket_action(3) function informs the
        application about updates in the socket (file descriptor) status
        by doing none, one, or multiple calls to the curl_socket_callback
        given in the param argument. They update the status with changes
        since the previous time a curl_multi_socket(3) function was
        called. If the given callback pointer is NULL, no callback will
        be called. Set the callback's userp argument with
        CURLMOPT_SOCKETDATA. See curl_multi_socket(3) for more callback
        details.
    ...

The added structure stores information for each socket returned by libcurl. The most important field is action, which records what actions are needed when this socket's fd handler is called. CURLSockInfo is added here and used in later commits to save the socket actions. 
Signed-off-by: Fam Zheng f...@redhat.com
---
 block/curl.c | 31 +--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index b8935fd..2e42cd1 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -75,10 +75,18 @@ typedef struct CURLState
     char in_use;
 } CURLState;
 
+typedef struct CURLSockInfo {
+    curl_socket_t fd;
+    int action;
+    struct BDRVCURLState *s;
+    QLIST_ENTRY(CURLSockInfo) next;
+} CURLSockInfo;
+
 typedef struct BDRVCURLState {
     CURLM *multi;
     size_t len;
     CURLState states[CURL_NUM_STATES];
+    QLIST_HEAD(, CURLSockInfo) socks;
     char *url;
     size_t readahead_size;
 } BDRVCURLState;
@@ -90,7 +98,16 @@ static int curl_aio_flush(void *opaque);
 static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
                         void *s, void *sp)
 {
+    BDRVCURLState *bs = s;
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
+    CURLSockInfo *sock = sp;
+    if (!sp) {
+        sock = g_malloc0(sizeof(CURLSockInfo));
+        sock->fd = fd;
+        sock->s = bs;
+        QLIST_INSERT_HEAD(&bs->socks, sock, next);
+        curl_multi_assign(bs->multi, fd, sock);
+    }
     switch (action) {
         case CURL_POLL_IN:
             qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
@@ -433,6 +450,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
         inited = 1;
     }
 
+    QLIST_INIT(&s->socks);
+
     DPRINTF("CURL: Opening %s\n", file);
     s->url = g_strdup(file);
     state = curl_init_state(s);
@@ -462,8 +481,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     // initialize the multi interface!
 
     s->multi = curl_multi_init();
-    curl_multi_setopt( s->multi, CURLMOPT_SOCKETDATA, s);
-    curl_multi_setopt( s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb );
+    curl_multi_setopt(s->multi, CURLMOPT_SOCKETDATA, s);
+    curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
     curl_multi_do(s);
 
     qemu_opts_del(opts);
@@ -603,6 +622,14 @@ static void curl_close(BlockDriverState *bs)
     }
     if (s->multi)
         curl_multi_cleanup(s->multi);
+
+    while (!QLIST_EMPTY(&s->socks)) {
+        CURLSockInfo *sock = QLIST_FIRST(&s->socks);
+        QLIST_REMOVE(sock, next);
+        g_free(sock);
+        sock = NULL;
+    }
+
     g_free(s->url);
 }
-- 
1.8.2.3
[Qemu-devel] [PATCH v5 07/11] curl: make use of CURLDataCache.
Make subsequecial changes to make use of introduced CURLDataCache. Moved acb struct from CURLState to BDRVCURLState, and changed to list. Signed-off-by: Fam Zheng f...@redhat.com --- block/curl.c | 168 --- 1 file changed, 90 insertions(+), 78 deletions(-) diff --git a/block/curl.c b/block/curl.c index e387ae1..4c4752b 100644 --- a/block/curl.c +++ b/block/curl.c @@ -39,7 +39,6 @@ CURLPROTO_TFTP) #define CURL_NUM_STATES 8 -#define CURL_NUM_ACB8 #define SECTOR_SIZE 512 #define READ_AHEAD_SIZE (256 * 1024) @@ -52,9 +51,7 @@ typedef struct CURLAIOCB { int64_t sector_num; int nb_sectors; - -size_t start; -size_t end; +QLIST_ENTRY(CURLAIOCB) next; } CURLAIOCB; typedef struct CURLDataCache { @@ -70,14 +67,10 @@ typedef struct CURLDataCache { typedef struct CURLState { struct BDRVCURLState *s; -CURLAIOCB *acb[CURL_NUM_ACB]; CURL *curl; -char *orig_buf; -size_t buf_start; -size_t buf_off; -size_t buf_len; char range[128]; char errmsg[CURL_ERROR_SIZE]; +CURLDataCache *cache; char in_use; } CURLState; @@ -92,6 +85,7 @@ typedef struct BDRVCURLState { CURLM *multi; size_t len; CURLState states[CURL_NUM_STATES]; +QLIST_HEAD(, CURLAIOCB) acbs; QLIST_HEAD(, CURLSockInfo) socks; char *url; size_t readahead_size; @@ -221,31 +215,35 @@ static void curl_complete_io(BDRVCURLState *bs, CURLAIOCB *acb, static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque) { -CURLState *s = ((CURLState*)opaque); +CURLState *s = (CURLState *)opaque; +CURLDataCache *c = s-cache; size_t realsize = size * nmemb; -int i; - -DPRINTF(CURL: Just reading %zd bytes\n, realsize); +CURLAIOCB *acb; -if (!s || !s-orig_buf) +if (!c || !c-data) { goto read_end; +} +if (c-write_pos = c-data_len) { +goto read_end; +} +memcpy(c-data + c-write_pos, ptr, + MIN(realsize, c-data_len - c-write_pos)); +c-write_pos += realsize; +if (c-write_pos = c-data_len) { +c-write_pos = c-data_len; +} -memcpy(s-orig_buf + s-buf_off, ptr, realsize); -s-buf_off += realsize; - -for(i=0; iCURL_NUM_ACB; i++) { -CURLAIOCB 
*acb = s-acb[i]; - -if (!acb) -continue; - -if ((s-buf_off = acb-end)) { -qemu_iovec_from_buf(acb-qiov, 0, s-orig_buf + acb-start, -acb-end - acb-start); -acb-common.cb(acb-common.opaque, 0); -qemu_aio_release(acb); -s-acb[i] = NULL; +acb = QLIST_FIRST(s-s-acbs); +while (acb) { +size_t aio_base = acb-sector_num * SECTOR_SIZE; +size_t aio_len = acb-nb_sectors * SECTOR_SIZE; +CURLAIOCB *next = QLIST_NEXT(acb, next); +if (aio_base = c-base_pos +aio_base + aio_len = c-base_pos + c-write_pos) { +QLIST_REMOVE(acb, next); +curl_complete_io(s-s, acb, c); } +acb = next; } read_end: @@ -275,10 +273,12 @@ static void curl_fd_handler(void *arg) CURLMsg *msg; msg = curl_multi_info_read(s-multi, msgs_in_queue); -if (!msg) +if (!msg) { break; -if (msg-msg == CURLMSG_NONE) +} +if (msg-msg == CURLMSG_NONE) { break; +} switch (msg-msg) { case CURLMSG_DONE: @@ -288,19 +288,17 @@ static void curl_fd_handler(void *arg) CURLINFO_PRIVATE, (char **)state); -/* ACBs for successful messages get completed in curl_read_cb */ +/* ACBs for successful messages get completed in curl_read_cb, + * fail existing acbs for now */ if (msg-data.result != CURLE_OK) { -int i; -for (i = 0; i CURL_NUM_ACB; i++) { -CURLAIOCB *acb = state-acb[i]; - -if (acb == NULL) { -continue; -} - +CURLAIOCB *acb = QLIST_FIRST(s-acbs); +while (acb) { +CURLAIOCB *next = QLIST_NEXT(acb, next); +DPRINTF(EIO, %s\n, state-errmsg); acb-common.cb(acb-common.opaque, -EIO); +QLIST_REMOVE(acb, next); qemu_aio_release(acb); -state-acb[i] = NULL; +acb = next; } } @@ -317,13 +315,10 @@ static void curl_fd_handler(void *arg) static CURLState *curl_init_state(BDRVCURLState *s) { CURLState *state = NULL; -int i, j; +int i; do { for (i=0; iCURL_NUM_STATES; i++) { -for (j=0; jCURL_NUM_ACB; j++) -
[Qemu-devel] [PATCH v5 06/11] curl: introduce CURLDataCache
Data buffer was contained by CURLState, they are allocated and freed together. This patch try to isolate them, by introducing a dedicated cache list to BDRVCURLState. The benifit is we can now release the CURLState (and associated sockets) while keep the fetched data for later use, and simplies the prefetch and buffer logic for some degree. Note: There's already page cache in guest kernel, why cache data here? Since we don't want to submit http/ftp/* request for every 2KB in sequencial read, but there are crude guest that sends small IO reqs, which will result in horrible performance. GRUB/isolinux loading kernel is a typical case and we workaround this by prefetch cache. This is what curl.c has been doing along. This patch just refectors the buffer. Signed-off-by: Fam Zheng f...@redhat.com --- block/curl.c | 136 +-- 1 file changed, 67 insertions(+), 69 deletions(-) diff --git a/block/curl.c b/block/curl.c index 4fd5bb9..e387ae1 100644 --- a/block/curl.c +++ b/block/curl.c @@ -43,10 +43,6 @@ #define SECTOR_SIZE 512 #define READ_AHEAD_SIZE (256 * 1024) -#define FIND_RET_NONE 0 -#define FIND_RET_OK 1 -#define FIND_RET_WAIT 2 - struct BDRVCURLState; typedef struct CURLAIOCB { @@ -61,6 +57,16 @@ typedef struct CURLAIOCB { size_t end; } CURLAIOCB; +typedef struct CURLDataCache { +char *data; +size_t base_pos; +size_t data_len; +size_t write_pos; +/* Ref count for CURLState */ +int use_count; +QLIST_ENTRY(CURLDataCache) next; +} CURLDataCache; + typedef struct CURLState { struct BDRVCURLState *s; @@ -90,6 +96,8 @@ typedef struct BDRVCURLState { char *url; size_t readahead_size; QEMUTimer *timer; +/* List of data cache ordered by access, freed from tail */ +QLIST_HEAD(, CURLDataCache) cache; /* Whether http server accept range in header */ bool accept_range; } BDRVCURLState; @@ -98,6 +106,19 @@ static void curl_clean_state(CURLState *s); static void curl_fd_handler(void *arg); static int curl_aio_flush(void *opaque); +static CURLDataCache *curl_find_cache(BDRVCURLState 
*bs, + size_t start, size_t len) +{ +CURLDataCache *c; +QLIST_FOREACH(c, bs-cache, next) { +if (start = c-base_pos +start + len = c-base_pos + c-write_pos) { +return c; +} +} +return NULL; +} + static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action, void *s, void *sp) { @@ -181,6 +202,23 @@ static int curl_multi_timer_cb(CURLM *multi, long timeout_ms, void *s) return 0; } +static void curl_complete_io(BDRVCURLState *bs, CURLAIOCB *acb, + CURLDataCache *cache) +{ +size_t aio_base = acb-sector_num * SECTOR_SIZE; +size_t aio_bytes = acb-nb_sectors * SECTOR_SIZE; +size_t off = aio_base - cache-base_pos; + +qemu_iovec_from_buf(acb-qiov, 0, cache-data + off, aio_bytes); +acb-common.cb(acb-common.opaque, 0); +DPRINTF(AIO Request OK: %10zd %10zd\n, aio_base, aio_bytes); +qemu_aio_release(acb); +acb = NULL; +/* Move cache next in the list */ +QLIST_REMOVE(cache, next); +QLIST_INSERT_HEAD(bs-cache, cache, next); +} + static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque) { CURLState *s = ((CURLState*)opaque); @@ -214,59 +252,6 @@ read_end: return realsize; } -static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len, - CURLAIOCB *acb) -{ -int i; -size_t end = start + len; - -for (i=0; iCURL_NUM_STATES; i++) { -CURLState *state = s-states[i]; -size_t buf_end = (state-buf_start + state-buf_off); -size_t buf_fend = (state-buf_start + state-buf_len); - -if (!state-orig_buf) -continue; -if (!state-buf_off) -continue; - -// Does the existing buffer cover our section? 
-if ((start = state-buf_start) -(start = buf_end) -(end = state-buf_start) -(end = buf_end)) -{ -char *buf = state-orig_buf + (start - state-buf_start); - -qemu_iovec_from_buf(acb-qiov, 0, buf, len); -acb-common.cb(acb-common.opaque, 0); - -return FIND_RET_OK; -} - -// Wait for unfinished chunks -if ((start = state-buf_start) -(start = buf_fend) -(end = state-buf_start) -(end = buf_fend)) -{ -int j; - -acb-start = start - state-buf_start; -acb-end = acb-start + len; - -for (j=0; jCURL_NUM_ACB; j++) { -if (!state-acb[j]) { -state-acb[j] = acb; -return FIND_RET_WAIT; -} -} -} -} - -return FIND_RET_NONE; -} - static void curl_fd_handler(void *arg) {
[Qemu-devel] [PATCH v5 11/11] block/curl.c: Refuse to open the handle for writes.
From: Richard W.M. Jones rjo...@redhat.com

Signed-off-by: Richard W.M. Jones rjo...@redhat.com
Signed-off-by: Fam Zheng f...@redhat.com
---
 block/curl.c | 4
 1 file changed, 4 insertions(+)

diff --git a/block/curl.c b/block/curl.c
index b6cc5a0..4499445 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -454,6 +454,10 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
 
     static int inited = 0;
 
+    if (flags & BDRV_O_RDWR) {
+        return -ENOTSUP;
+    }
+
     opts = qemu_opts_create_nofail(&runtime_opts);
     qemu_opts_absorb_qdict(opts, options, &local_err);
     if (error_is_set(&local_err)) {
-- 
1.8.2.3
[Qemu-devel] [PATCH v5 03/11] curl: change curl_multi_do to curl_fd_handler
The driver calls curl_multi_do to take action at several points, while it's also registered as socket fd handler. This patch removes internal call of curl_multi_do because they are not necessary when handler can be called by socket data update. Since curl_multi_do becomes a pure fd handler, the function is renamed. It takes a pointer to CURLSockInfo now, instead of pointer to BDRVCURLState. Signed-off-by: Fam Zheng f...@redhat.com --- block/curl.c | 28 +--- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/block/curl.c b/block/curl.c index d0bc66f..4a56cfd 100644 --- a/block/curl.c +++ b/block/curl.c @@ -92,7 +92,7 @@ typedef struct BDRVCURLState { } BDRVCURLState; static void curl_clean_state(CURLState *s); -static void curl_multi_do(void *arg); +static void curl_fd_handler(void *arg); static int curl_aio_flush(void *opaque); static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action, @@ -110,17 +110,23 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action, } switch (action) { case CURL_POLL_IN: -qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s); +qemu_aio_set_fd_handler(fd, curl_fd_handler, NULL, +curl_aio_flush, sock); +sock-action |= CURL_CSELECT_IN; break; case CURL_POLL_OUT: -qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s); +qemu_aio_set_fd_handler(fd, NULL, curl_fd_handler, curl_aio_flush, +sock); +sock-action |= CURL_CSELECT_OUT; break; case CURL_POLL_INOUT: -qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do, -curl_aio_flush, s); +qemu_aio_set_fd_handler(fd, curl_fd_handler, curl_fd_handler, +curl_aio_flush, sock); +sock-action |= CURL_CSELECT_IN | CURL_CSELECT_OUT; break; case CURL_POLL_REMOVE: qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL); +sock-action = 0; break; } @@ -226,9 +232,10 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len, return FIND_RET_NONE; } -static void curl_multi_do(void *arg) +static void curl_fd_handler(void *arg) { -BDRVCURLState 
*s = (BDRVCURLState *)arg; +CURLSockInfo *sock = (CURLSockInfo *)arg; +BDRVCURLState *s = sock-s; int running; int r; int msgs_in_queue; @@ -237,7 +244,9 @@ static void curl_multi_do(void *arg) return; do { -r = curl_multi_socket_all(s-multi, running); +r = curl_multi_socket_action(s-multi, +sock-fd, sock-action, +running); } while(r == CURLM_CALL_MULTI_PERFORM); /* Try to find done transfers, so we can free the easy @@ -302,7 +311,6 @@ static CURLState *curl_init_state(BDRVCURLState *s) } if (!state) { g_usleep(100); -curl_multi_do(s); } } while(!state); @@ -483,7 +491,6 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags) s-multi = curl_multi_init(); curl_multi_setopt(s-multi, CURLMOPT_SOCKETDATA, s); curl_multi_setopt(s-multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb); -curl_multi_do(s); qemu_opts_del(opts); return 0; @@ -575,7 +582,6 @@ static void curl_readv_bh_cb(void *p) curl_easy_setopt(state-curl, CURLOPT_RANGE, state-range); curl_multi_add_handle(s-multi, state-curl); -curl_multi_do(s); } -- 1.8.2.3
[Qemu-devel] [PATCH v5 09/11] curl: add cache quota.
Introduce a cache quota: BDRVCURLState.cache_quota. When adding a new
CURLDataCache to BDRVCURLState, if the number of existing CURLDataCache
entries exceeds CURL_CACHE_QUOTA, try to release some first to limit the
in-memory cache size. A least-recently-used entry with no active users is
selected for release.

Signed-off-by: Fam Zheng <f...@redhat.com>
---
 block/curl.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/block/curl.c b/block/curl.c
index 3ea2552..5adbc84 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -40,6 +40,7 @@
 
 #define SECTOR_SIZE 512
 #define READ_AHEAD_SIZE (256 * 1024)
+#define CURL_CACHE_QUOTA 10
 
 struct BDRVCURLState;
 
@@ -90,6 +91,8 @@ typedef struct BDRVCURLState {
     QEMUTimer *timer;
     /* List of data cache ordered by access, freed from tail */
     QLIST_HEAD(, CURLDataCache) cache;
+    /* Threshold to release unused cache when cache list is longer than it */
+    int cache_quota;
     /* Whether http server accept range in header */
     bool accept_range;
 } BDRVCURLState;
@@ -526,6 +529,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
 
     qemu_opts_del(opts);
+    s->cache_quota = CURL_CACHE_QUOTA;
+
     return 0;
 
 out:
@@ -595,6 +600,27 @@ static void curl_readv_bh_cb(void *p)
     cache->data_len = aio_bytes + s->readahead_size;
     cache->write_pos = 0;
     cache->data = g_malloc(cache->data_len);
+    /* Try to release some cache */
+    while (s->cache_quota <= 0) {
+        CURLDataCache *p;
+        CURLDataCache *q = NULL;
+        assert(!QLIST_EMPTY(&s->cache));
+        for (p = QLIST_FIRST(&s->cache);
+             p; p = QLIST_NEXT(p, next)) {
+            if (p->use_count == 0) {
+                q = p;
+            }
+        }
+        if (!q) {
+            break;
+        }
+        QLIST_REMOVE(q, next);
+        g_free(q->data);
+        q->data = NULL;
+        g_free(q);
+        q = NULL;
+        s->cache_quota++;
+    }
 
     QLIST_INSERT_HEAD(&s->acbs, acb, next);
     snprintf(state->range, sizeof(state->range) - 1, "%zd-%zd", cache->base_pos,
@@ -605,6 +631,7 @@ static void curl_readv_bh_cb(void *p)
     QLIST_INSERT_HEAD(&s->cache, cache, next);
     state->cache = cache;
     cache->use_count++;
+    s->cache_quota--;
     curl_multi_add_handle(s->multi, state->curl);
     /* kick off curl to start the action */
     curl_multi_socket_action(s->multi, 0, CURL_SOCKET_TIMEOUT, &running);
-- 
1.8.2.3
[Qemu-devel] [PATCH v5 08/11] curl: use list to store CURLState
Use a QLIST to store CURLState, consistent with the other structures. This
also simplifies initialization and release of the data.

Signed-off-by: Fam Zheng <f...@redhat.com>
---
 block/curl.c | 96 ++++++++++++++++++++++++++++++-----------------------------
 1 file changed, 51 insertions(+), 45 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index 4c4752b..3ea2552 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -38,7 +38,6 @@
                    CURLPROTO_FTP | CURLPROTO_FTPS | \
                    CURLPROTO_TFTP)
 
-#define CURL_NUM_STATES 8
 #define SECTOR_SIZE 512
 #define READ_AHEAD_SIZE (256 * 1024)
 
@@ -64,14 +63,13 @@ typedef struct CURLDataCache {
     QLIST_ENTRY(CURLDataCache) next;
 } CURLDataCache;
 
-typedef struct CURLState
-{
+typedef struct CURLState {
     struct BDRVCURLState *s;
     CURL *curl;
     char range[128];
     char errmsg[CURL_ERROR_SIZE];
     CURLDataCache *cache;
-    char in_use;
+    QLIST_ENTRY(CURLState) next;
 } CURLState;
 
 typedef struct CURLSockInfo {
@@ -84,7 +82,7 @@ typedef struct BDRVCURLState {
     CURLM *multi;
     size_t len;
-    CURLState states[CURL_NUM_STATES];
+    QLIST_HEAD(, CURLState) curl_states;
     QLIST_HEAD(, CURLAIOCB) acbs;
     QLIST_HEAD(, CURLSockInfo) socks;
     char *url;
@@ -303,6 +301,9 @@ static void curl_fd_handler(void *arg)
                 }
                 curl_clean_state(state);
+                QLIST_REMOVE(state, next);
+                g_free(state);
+                state = NULL;
                 break;
             }
             default:
@@ -314,29 +315,17 @@ static void curl_fd_handler(void *arg)
 
 static CURLState *curl_init_state(BDRVCURLState *s)
 {
-    CURLState *state = NULL;
-    int i;
-
-    do {
-        for (i = 0; i < CURL_NUM_STATES; i++) {
-            if (s->states[i].in_use)
-                continue;
-
-            state = &s->states[i];
-            state->in_use = 1;
-            break;
-        }
-        if (!state) {
-            g_usleep(100);
-        }
-    } while (!state);
-
-    if (state->curl)
-        goto has_curl;
+    CURLState *state;
+    state = g_malloc0(sizeof(CURLState));
+    state->s = s;
 
     state->curl = curl_easy_init();
-    if (!state->curl)
-        return NULL;
+    if (!state->curl) {
+        DPRINTF("CURL: curl_easy_init failed\n");
+        g_free(state);
+        state = NULL;
+        goto out;
+    }
 
     curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
     curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
     curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
@@ -362,19 +351,19 @@ static CURLState *curl_init_state(BDRVCURLState *s)
 #ifdef DEBUG_VERBOSE
     curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
 #endif
-
-has_curl:
-
-    state->s = s;
-
+out:
     return state;
 }
 
 static void curl_clean_state(CURLState *s)
 {
-    if (s->s->multi)
-        curl_multi_remove_handle(s->s->multi, s->curl);
-    s->in_use = 0;
+    if (s->curl) {
+        if (s->s->multi) {
+            curl_multi_remove_handle(s->s->multi, s->curl);
+        }
+        curl_easy_cleanup(s->curl);
+        s->curl = NULL;
+    }
     if (s->cache) {
         s->cache->use_count--;
         assert(s->cache->use_count >= 0);
@@ -483,6 +472,7 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     QLIST_INIT(&s->socks);
     QLIST_INIT(&s->cache);
     QLIST_INIT(&s->acbs);
+    QLIST_INIT(&s->curl_states);
 
     DPRINTF("CURL: Opening %s\n", file);
     s->url = g_strdup(file);
@@ -520,7 +510,8 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
     curl_clean_state(state);
     curl_easy_cleanup(state->curl);
-    state->curl = NULL;
+    g_free(state);
+    state = NULL;
 
     // Now we know the file exists and its size, so let's
     // initialize the multi interface!
@@ -610,6 +601,7 @@ static void curl_readv_bh_cb(void *p)
              cache->base_pos + cache->data_len);
     DPRINTF("Reading range: %s\n", state->range);
     curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
+    QLIST_INSERT_HEAD(&s->curl_states, state, next);
     QLIST_INSERT_HEAD(&s->cache, cache, next);
     state->cache = cache;
     cache->use_count++;
@@ -631,7 +623,6 @@ err_release:
     qemu_aio_release(acb);
 
     return;
-
 }
 
 static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
@@ -660,9 +651,13 @@ static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
 
 static void curl_close(BlockDriverState *bs)
 {
     BDRVCURLState *s = bs->opaque;
-    int i;
 
     DPRINTF("CURL: Close\n");
+    if (s->timer) {
+        qemu_del_timer(s->timer);
+        qemu_free_timer(s->timer);
+        s->timer = NULL;
+    }
 
     if (s->timer) {
         qemu_del_timer(s->timer);
@@ -670,16 +665,26 @@ static void curl_close(BlockDriverState *bs)
         s->timer = NULL;
     }
 
-    for (i = 0; i < CURL_NUM_STATES; i++) {
-        if (s->states[i].in_use)
-            curl_clean_state(&s->states[i]);
-        if (s->states[i].curl) {
-
[Qemu-devel] [PATCH v5 10/11] curl: introduce ssl_no_cert runtime option.
Add an option to let curl disable the SSL certificate check.

Signed-off-by: Fam Zheng <f...@redhat.com>
---
 block/curl.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/block/curl.c b/block/curl.c
index 5adbc84..b6cc5a0 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -95,6 +95,8 @@ typedef struct BDRVCURLState {
     int cache_quota;
     /* Whether http server accept range in header */
     bool accept_range;
+    /* Whether certificated ssl only */
+    bool ssl_no_cert;
 } BDRVCURLState;
 
 static void curl_clean_state(CURLState *s);
@@ -339,6 +341,8 @@ static CURLState *curl_init_state(BDRVCURLState *s)
     curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
     curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
     curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
+    curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
+                     s->ssl_no_cert ? 0 : 1);
 
     /* Restrict supported protocols to avoid security issues in the more
      * obscure protocols. For example, do not allow POP3/SMTP/IMAP see
@@ -429,7 +433,12 @@ static QemuOptsList runtime_opts = {
             .type = QEMU_OPT_SIZE,
             .help = "Readahead size",
         },
-        { /* end of list */ }
+        {
+            .name = "ssl_no_cert",
+            .type = QEMU_OPT_BOOL,
+            .help = "SSL certificate check",
+        },
+        { /* End of list */ }
     },
 };
 
@@ -467,6 +476,7 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags)
         goto out_noclean;
     }
 
+    s->ssl_no_cert = qemu_opt_get_bool(opts, "ssl_no_cert", true);
     if (!inited) {
         curl_global_init(CURL_GLOBAL_ALL);
         inited = 1;
-- 
1.8.2.3
Re: [Qemu-devel] [PATCH 10/30] memory: make memory_global_sync_dirty_bitmap take an AddressSpace
On Tue, May 21, 2013 at 12:57:11PM +0200, Paolo Bonzini wrote:
> Suggested-by: Peter Maydell <peter.mayd...@linaro.org>
> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>

Hrm. This could surely do with some sort of rationale in the comment.

-- 
David Gibson                   | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
                               | _way_ _around_!
http://www.ozlabs.org/~dgibson
Re: [Qemu-devel] [PATCH v2 2/2] net: introduce command to query mac-table information
On Thu, May 16, 2013 at 09:38:01AM -0600, Eric Blake wrote:
> On 05/16/2013 05:07 AM, Amos Kong wrote:
> > We want to implement mac programming over macvtap through Libvirt. The
> > previous patch adds a QMP event to notify management of mac-table
> > changes. This patch adds a monitor command to query rx mode
> > information of mac-tables.
>
> Focusing my review on just the QMP interface this go-around:
>
> > +++ b/qapi-schema.json
> > @@ -3619,3 +3619,60 @@
> >      '*cpuid-input-ecx': 'int',
> >      'cpuid-register': 'X86CPURegister32',
> >      'features': 'int' } }
> > +
> > +# @MacTableInfo:
> > +#
> > +# Rx-mode information of mac-table for a net client.
> > +#
> > +# @name: the net client name
> > +#
> > +# @promisc: #optional promiscuous mode (default: false)
> > +#
> > +# @allmulti: #optional all multicast mode (default: false)
> > +#
> > +# @alluni: #optional all unicast mode (default: false)
> > +#
> > +# @nomulti: #optional no multicast mode (default: false)
>
> Are allmulti and nomulti orthogonal (all four pairings possible), or
> are they tied together (tri-state)?

allmulti: receive all multicast packets
nomulti:  don't receive multicast packets
normal:   only receive multicast (in mac-table) packets

> If the latter, then maybe it's better to have an enum value for the
> three states (none, normal, all) and a single @multi that resolves to
> one of those three states.

Sounds good. I frankly output the raw RX filter controls without
processing them.

> We don't necessarily need to abbreviate; would it be better giving
> these fields a longer name, such as no-multicast?

I used the abbreviations because they are the same as the parameters of
the config tool. Management doesn't need this reminder; I will use
no-multicast.

> > +#
> > +# @nouni: #optional no unicast mode (default: false)
>
> Same orthogonal vs. tri-state question about alluni/nouni.
>
> > +#
> > +# @nobcast: #optional no broadcast mode (default: false)
>
> Again, if we don't abbreviate, should this be no-broadcast? Double
> negatives can be awkward to work with; is it better to name this
> 'broadcast-allowed' with true being the default?

Ok.

> Is the point of labeling these fields #optional so that you can avoid
> emitting them if they are in their default state?

No. Currently we only use rx-filter for the virtio nic; some rx-filter
flags may not be used by other emulated nics, so I made them optional.

option 1: nic doesn't have the rx filter, nothing emitted (default: false)
          nic has the rx filter, emit the real value no matter whether
          it's true/false
option 2: only emit when nic has the rx filter and value is true

option 2 should be right.

> Does it hurt to always list all fields, instead of omitting ones in
> their default state?
>
> > +# @multi_overflow: #optional multicast overflow mode (default: false)
> ...
> > +- promisc: promiscuous mode (json-bool, optional)
> > +- allmulti: all multicast mode (json-bool, optional)
> > +- alluni: all unicast mode (json-bool, optional)
> > +- nomulti: no multicast mode (json-bool, optional)
> > +- nouni: no unicast mode (json-bool, optional)
> > +- nobcast: no broadcast mode (json-bool, optional)
> > +- multi-overflow: multicast overflow mode (json-bool, optional)
> > +- uni-overflow: unicast overflow mode (json-bool, optional)
>
> For all of the optional bools, please document what the default is if
> the field is omitted. Or maybe we should just always emit them.

Yes, emit them all the time. If the nic doesn't have the rx-filter, just
set it to the default 'false'.

> > +- unicast: a jason-array of unicast macaddr string (optional)
>
> s/jason/json/
>
> Isn't it better to emit a 0-length array when there is explicitly no
> unicast MAC registered, rather than omitting the field? An omitted
> field implies that there is not enough information available to decide
> how many unicast addresses are registered.

So will remove the optional for unicast/multicast.

> > +- multicast: a jason-array of multicast macaddr string (optional)
>
> Likewise on preferring a 0-length array rather than omitting the field
> altogether.
>
> Looking forward to v2.

Thanks.

-- 
Amos.