Re: [Qemu-devel] [PATCH v2 0/9] target-openrisc: Corrections and speed improvements

2013-10-25 Thread Jia Liu
On Fri, Oct 25, 2013 at 7:23 AM, Sebastian Macke sebast...@macke.de wrote:
 On 22/10/2013 8:47 PM, Jia Liu wrote:

 Hi Sebastian,

 On Tue, Oct 22, 2013 at 8:12 AM, Sebastian Macke sebast...@macke.de
 wrote:

 This series is the first part of an effort to make the OpenRISC target
 more reliable and faster.
 It corrects several severe problems that prevented the OpenRISC
 emulation from being useful in the past.

 The patchset was tested with
- the tests/tcg/openrisc tests
- booting Linux 3.11
- running configure + make + gcc for a simple terminal graphics demo
  called cmatrix
- running the benchmark tool nbench in qemu-user mode and in softmmu mode

 The speed improvement is less than 10% because the overhead is still too
 high: the OpenRISC target does not yet support translation block
 chaining. That will be added in one of the future patches.
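
For readers unfamiliar with translation block chaining, here is a minimal
sketch of the usual TCG pattern, modeled on what other targets (e.g.
target-arm) do; the cpu_pc global and the DisasContext layout are
assumptions about the OpenRISC translator, not quotes from it:

/* Sketch of the standard TCG goto_tb pattern for direct block chaining.
 * A direct jump may only be chained when the destination lies on the
 * same guest page as the current TB, so the patched jump stays valid.
 */
static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
{
    TranslationBlock *tb = dc->tb;

    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
        tcg_gen_goto_tb(n);                  /* emit a patchable jump slot */
        tcg_gen_movi_tl(cpu_pc, dest);
        tcg_gen_exit_tb((uintptr_t)tb + n);  /* TB pointer + slot index */
    } else {
        tcg_gen_movi_tl(cpu_pc, dest);       /* cross-page: no chaining */
        tcg_gen_exit_tb(0);
    }
}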

 Only the patch that removes the npc and ppc variables takes away a small
 feature from the OpenRISC target, but it does not break the
 specification and will lead to a significant speed improvement.

 For v2 0/9 - 9/9
 Acked-by: Jia Liu pro...@gmail.com

 I'll add some comments to the code to explain why we separate flags from
 sr, and send a pull request if nobody raises an objection.


 Ok great, the next bunch of patches is already in development.

Then I'll make one pull request when you finish all your work.
Please let me know when your last patch is ready, OK?




 Sebastian Macke (9):
target-openrisc: Speed up move instruction
target-openrisc: Remove unnecessary code generated by jump
  instructions
target-openrisc: Remove executable flag for every page
target-openrisc: Correct wrong epcr register in interrupt handler
openrisc-timer: Reduce overhead, Separate clock update functions
target-openrisc: Correct memory bounds checking for the tlb buffers
target-openrisc: Separate branch flag from Supervision register
    target-openrisc: Completely remove npc and ppc variables
target-openrisc: Correct carry flag check of l.addc and l.addic test
  cases

   hw/openrisc/cputimer.c             |  29 --
   target-openrisc/cpu.c              |   1 +
   target-openrisc/cpu.h              |  16 ++-
   target-openrisc/gdbstub.c          |  20 +---
   target-openrisc/interrupt.c        |  27 ++---
   target-openrisc/interrupt_helper.c |   3 +-
   target-openrisc/machine.c          |   3 +-
   target-openrisc/mmu.c              |   4 +-
   target-openrisc/sys_helper.c       |  74 ++
   target-openrisc/translate.c        | 201 -
   tests/tcg/openrisc/test_addc.c     |   8 +-
   tests/tcg/openrisc/test_addic.c    |  10 +-
   12 files changed, 175 insertions(+), 221 deletions(-)

 --
 1.8.4.1

 Regards,
 Jia





[Qemu-devel] migration: question about buggy implementation of traditional live migration with storage that migrating the storage in iteration way

2013-10-25 Thread Zhanghaoyu (A)
Hi, all

Could someone explain in detail why the traditional implementation of live
migration with storage, which migrates the storage iteratively, is
considered buggy?

Thanks,
Zhang Haoyu

 Hi Michal,

 I used libvirt-1.0.3 and ran the command below to perform a live
 migration. Why is no progress shown?
 virsh migrate --live --verbose --copy-storage-all domain
 qemu+tcp://dest ip/system

 If I replace libvirt-1.0.3 with libvirt-1.0.2, the migration progress
 shows up; if I perform the migration without --copy-storage-all, the
 progress shows up, too.

 Thanks,
 Zhang Haoyu


 Because since 1.0.3 we have been using NBD to migrate storage. The truth
 is, qemu reports the progress of the storage migration, but there is no
 generic formula to combine storage migration and internal-state migration
 into one number. With NBD the process is something like this:
 
 How is NBD used to migrate storage?
 Does the NBD server on the destination start automatically as soon as the
 migration is initiated, or is some other configuration needed?
 What are the advantages of using NBD to migrate storage over the
 traditional method of migrating the storage iteratively, the way the
 memory is migrated?
 Sorry for my poor knowledge of NBD, which I have only used to implement
 shared storage for live migration without storage copying.

NBD is used whenever both the source and destination of the migration are
new enough to use it, that is, libvirt >= 1.0.3 and qemu >= 1.0.3. NBD is
turned on by libvirt whenever these conditions are met; the user has no
control over this.
The advantages are: only specified disks can be transferred (currently not
supported in libvirt); the previous implementation was buggy (according to
some qemu developers); and the storage is migrated via a separate channel
(a new connection), so it will be possible (in the future) to split the
migration of RAM + internal state from that of the storage.

So frankly speaking, there's no real advantage for users right now, besides
not using the buggy implementation.

Michal
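
As a side note, the same --copy-storage-all migration can be driven through
the libvirt C API; a minimal sketch, using the public flags that virsh maps
to (error handling kept to a minimum):

#include <libvirt/libvirt.h>

/* Equivalent of:
 *   virsh migrate --live --copy-storage-all <domain> qemu+tcp://<dest>/system
 * VIR_MIGRATE_NON_SHARED_DISK is what --copy-storage-all maps to; whether
 * NBD is used underneath is decided by libvirt, not by the caller.
 */
int migrate_with_storage(const char *domname, const char *dest_uri)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        return -1;
    }
    virDomainPtr dom = virDomainLookupByName(conn, domname);
    if (!dom) {
        virConnectClose(conn);
        return -1;
    }
    /* Peer-to-peer live migration, copying all storage. */
    unsigned long flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER |
                          VIR_MIGRATE_NON_SHARED_DISK;
    int ret = virDomainMigrateToURI(dom, dest_uri, flags, NULL, 0);
    virDomainFree(dom);
    virConnectClose(conn);
    return ret;
}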



Re: [Qemu-devel] About VM fork in QEMU

2013-10-25 Thread Eric Blake
On 10/23/2013 03:36 PM, Xinyang Ge wrote:
 Live cloning is a disaster waiting to happen if not done in a very
 carefully controlled environment (I could maybe see it useful across two
 private networks for forensic analysis or running what-if scenarios,
 but never for provisioning enterprise-quality public-facing servers).
 Remember, if you ever expose both forks of a live clone to the same
 network at the same time, you have a security vulnerability if you did
 not manage to scrub the random pools of the two guests so that they
 differ: the crypto behavior of the second guest can be guessed by
 observing the behavior of the first. But scrubbing memory correctly
 requires knowing EXACTLY where in memory the random pool is stored,
 which is highly guest-dependent, and may be spread across multiple guest
 locations.  With offline disk images, the set of information to scrub is
 a bit easier, and in fact, 'virt-sysprep' from the libguestfs tools can
 do it for a number of guests, but virt-sysprep (rightfully) refuses to
 try to scrub a live image.  Do your forked guests really have to run in
 parallel, or is it sufficient to serialize the running of one variation
 followed by the other variation?
 
 It's better to have them run in parallel since our project doesn't
 have any network stuff.

Good, then it sounds like you are being careful about avoiding the worst
aspect of live cloning (as long as the two guests are never visible to the
same network, you aren't exposing security risks over that network).

 However, running each variation sequentially
 is also sufficient for us. What we are most concerned about is whether
 we can take a snapshot in milliseconds, because we don't really need to
 save the memory state to disk for future reversion. Could you let me
 know if that's possible for qemu or qemu-kvm with minor changes?

External snapshots (via the blockdev-snapshot-sync QMP command) can be
taken in a matter of milliseconds if you only care about disk state.
Furthermore, if you want to take a snapshot of both memory and disk
state, such that the clone can be resumed from the same time, you can do
that with a guest downtime that only lasts as long as the
blockdev-snapshot-sync, by first doing a migrate to file then doing the
disk snapshot when the VM pauses at the end of migration.  Resuming the
original guest is fast; resuming from the migration file is a bit
longer, but it is still the fastest way possible to resume from a
memory+disk snapshot.  If you need anything faster, then yes, you would
have to write patches to qemu that attempt cloning via fork() and make
sure the fork child switches its active disk so as not to interfere with
the fork parent.
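
To make that last idea concrete, here is a minimal and entirely hypothetical
sketch of the fork()-based approach; create_private_overlay() is an invented
placeholder, not an existing qemu function:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Invented helper standing in for "create a qcow2 overlay on top of the
 * current disk and reopen the guest's drive on it". */
extern int create_private_overlay(const char *base, const char *overlay);

static pid_t clone_vm(const char *disk_path)
{
    pid_t pid = fork();  /* guest RAM is duplicated copy-on-write */
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {
        /* Child (the clone): stop writing to the parent's disk by
         * redirecting all further guest writes into a private overlay. */
        if (create_private_overlay(disk_path, "/tmp/clone-overlay.qcow2") < 0) {
            _exit(1);
        }
        /* ... resume guest execution in the clone ... */
    }
    /* Parent keeps running on the original disk, unmodified. */
    return pid;
}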

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: [Qemu-devel] [Qemu-ppc] [PULL 00/29] ppc patch queue 2013-10-25

2013-10-25 Thread Alexander Graf
Hey Mark,

On 25.10.2013 at 23:59, Mark Cave-Ayland mark.cave-ayl...@ilande.co.uk wrote:

 On 25/10/13 22:27, Alexander Graf wrote:
 
 Hi Blue / Aurelien / Anthony,
 
 This is my current patch queue for ppc.  Please pull.
 
 Alex
 
 Hi Alex,
 
 Did you get my repost of the PPC PCI configuration space patch to qemu-devel 
 here: http://lists.gnu.org/archive/html/qemu-devel/2013-10/msg01491.html? Or 
 should that go via someone else's tree?

Thanks a lot for the reminder. There is absolutely nothing wrong with the 
patch, but I wanted to make sure that I have a fully autotested tree synced 
out before the hard freeze. I'll send it together with the next SLOF update 
as soon as the SLOF git tree is synchronized.

Since this is a genuine bugfix, we can always get it into QEMU after the hard 
freeze deadline.


Alex




Re: [Qemu-devel] [PATCHv1 0/4] Timers: add timer debugging through -timer-debug-log

2013-10-25 Thread Alex Bligh

On 26 Oct 2013, at 00:00, Paolo Bonzini wrote:

 This is a bug in the distro, if it is Linux.  There is no reason not to
 enable the stap trace format when running on Linux (Fedora does so for
 packages other than QEMU, too, most notably glib and glibc).
 
 If it is useful, adding debugging information to timer_new_ns (please
 make file and line two separate arguments, though) can definitely be
 done unconditionally and added to the traces.  I think adding a
 tracepoint in timerlist_run_timers would provide very similar
 information to that in your file.

I read the tracepoint information. Doesn't that require the end
user to have far more skills (and far more stuff installed) to
get things like the average expiry delta? Especially as that's
not something we'd normally calculate, since we don't record the
clock value when setting a timer.
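
For concreteness, here is a rough sketch of the kind of tracepoint Paolo
suggests; the trace_* stub would be generated by tracetool from a
trace-events entry, and the timer-list field names are assumptions based on
qemu-timer.c of this period, so treat it as a sketch rather than a patch:

/* Inside timerlist_run_timers(), record how late each timer fires.
 * Assumes a trace-events entry along the lines of:
 *   timerlist_run_timers(void *cb, int64_t late_ns) "cb %p late %"PRId64" ns"
 * from which tracetool generates trace_timerlist_run_timers().
 */
current_time = qemu_clock_get_ns(timer_list->clock->type);
for (;;) {
    QEMUTimer *ts = timer_list->active_timers;
    if (!ts || !timer_expired_ns(ts, current_time)) {
        break;  /* no more expired timers */
    }
    /* Expiry delta: how far past its deadline the timer actually ran.
     * Averaging these in the trace consumer yields an average expiry delta. */
    trace_timerlist_run_timers(ts->cb, current_time - ts->expire_time);
    /* ... unlink the timer and invoke ts->cb(ts->opaque) as today ... */
}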

-- 
Alex Bligh






