RE: [Qemu-devel] Re: AltGr keystrokes

2006-08-01 Thread Gaetano Sferra

Wait!
This doesn't answer my question; I never talked about VNC servers or
clients.
If you want replies about a similar but quite different matter, post a
new topic; don't take over mine.


Thank you,
--
Gaetano Sferra







Re: [Qemu-devel] Ensuring data is written to disk

2006-08-01 Thread Jamie Lokier
Armistead, Jason wrote:
> I've been following the thread about disk data consistency with some
> interest.  Given that many IDE disk drives may choose to hold data in their
> write buffers before actually writing it to disk, and given that the
> ordering of the writes may not be the same as the OS or application expects,
> the only obvious way I can see to overcome this, and ensure the data is
> truly written to the physical platters without disabling write caching, is
> to overwhelm the disk drive with more data than can fit in its internal
> write buffer.
>
> So, if you have an IDE disk with an 8MB cache, guess what, send it an 8MB
> chunk of random data to write out when you do an fsync().  Better still,
> locate this 8MB as close to the middle of the travel of its heads, so that
> performance is not affected any more than necessary.  If the drive firmware
> uses a LILO or LRU policy to determine when to do its disk writes,
> overwhelming its buffers should ensure that the actual data you sent to it
> gets written out 

It doesn't work.

I thought that too, for a while, as a way to avoid sending CACHEFLUSH
commands for fs journal ordering when there is a lot of data being
written anyway.

But there is no guarantee that the drive uses a LILO or LRU policy,
and if the firmware is optimised for cache performance over a range of
benchmarks, it won't use those - there are better strategies.

You could write 8MB to the drive, but it could easily pass through the
cache without evicting some of the other data you want written.
_Especially_ if the 8MB is written to an area in the middle of the
head sweep.

> Of course, guessing the disk drive write buffer size and trying not to kill
> system I/O performance with all these writes is another question entirely
> ... sigh !!!

If you just want to evict all data from the drive's cache, and don't
actually have other data to write, there is a CACHEFLUSH command you
can send to the drive which will be more dependable than writing as
much data as the cache size.
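
For illustration, a minimal sketch of issuing that flush from userspace on
Linux through the legacy IDE ioctl interface (0xE7 is the ATA FLUSH CACHE
opcode; this assumes a drive behind the old IDE driver and needs root):

    /* flushcache.c - ask an IDE drive to flush its onboard write cache.
       Sketch only: assumes the legacy Linux IDE driver and root. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/hdreg.h>

    int main(int argc, char **argv)
    {
        /* args[0] is the ATA command; the other bytes are parameters
           (unused for FLUSH CACHE) */
        unsigned char args[4] = { 0xE7 /* ATA FLUSH CACHE */, 0, 0, 0 };
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s /dev/hdX\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDONLY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }
        if (ioctl(fd, HDIO_DRIVE_CMD, args) < 0)  /* returns when the flush completes */
            perror("HDIO_DRIVE_CMD");
        close(fd);
        return 0;
    }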

-- Jamie




Re: [Qemu-devel] Ensuring data is written to disk

2006-08-01 Thread Jens Axboe
On Tue, Aug 01 2006, Jamie Lokier wrote:
> > Of course, guessing the disk drive write buffer size and trying not to kill
> > system I/O performance with all these writes is another question entirely
> > ... sigh !!!
>
> If you just want to evict all data from the drive's cache, and don't
> actually have other data to write, there is a CACHEFLUSH command you
> can send to the drive which will be more dependable than writing as
> much data as the cache size.

Exactly, and this is what the OS fsync() should do once the drive has
acknowledged that the data has been written (to cache). At least
reiserfs w/barriers on Linux does this.

Random write tricks are worthless, as you cannot make any assumptions
about what the drive firmware will do.
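
From the application side, the pattern under discussion is simply
write-then-fsync; the whole thread is about whether the kernel turns the
fsync() into a drive-level cache flush. A sketch:

    /* Write a buffer and ask for durability.  fsync() only reaches the
       platter if the filesystem issues a drive cache flush (e.g. reiserfs
       with barriers), which is exactly the behaviour discussed here. */
    #include <fcntl.h>
    #include <unistd.h>

    int write_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }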

-- 
Jens Axboe





Re: [Qemu-devel] Run Real Time Guest OS?

2006-08-01 Thread Brad Campbell

Steve Ellenoff wrote:
> Is it possible to run a real time OS under qemu? What changes would need
> to be made?
>
> Can it even be done?
>
> The guest OS I'm trying to run sets the RTC System Timer 0 to a 0.25ms
> interval (~4000Hz)!! The program I'm trying to run on it expects this
> timer to be accurate, and as such, visually the program seems to be 4-5x
> too slow in qemu, which makes sense given that it's delivering only a
> 1024Hz timer irq.
>
> I've noticed in the source code that qemu sets this max value of 1024Hz
> (1ms) for the timer, which from what I understand is a limit of the
> Linux kernel itself, ie, that's the most the kernel can support.


Not at all.. for a single qemu instance on linux it tries to use the
periodic timer in the RTC, and I've seen this run up to 8192 Hz. Why not
crank it up in the qemu source to 4096 and see what happens? It's not
going to hurt anything in any case.

You would most certainly want an HZ value of 1000 to try this.
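
If anyone wants to verify what their host can deliver, here is a small
sketch against the standard /dev/rtc interface (rates above 64 Hz need
root; 8192 Hz is the interface's documented maximum):

    /* rtctest.c - request 4096 Hz periodic interrupts from /dev/rtc and
       wait for one second's worth of ticks.  Sketch; run as root. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    int main(void)
    {
        unsigned long data;
        int i, fd = open("/dev/rtc", O_RDONLY);

        if (fd < 0) { perror("/dev/rtc"); return 1; }
        if (ioctl(fd, RTC_IRQP_SET, 4096) < 0) { perror("RTC_IRQP_SET"); return 1; }
        if (ioctl(fd, RTC_PIE_ON, 0) < 0) { perror("RTC_PIE_ON"); return 1; }
        for (i = 0; i < 4096; i++)
            read(fd, &data, sizeof(data));   /* blocks until the next tick */
        ioctl(fd, RTC_PIE_OFF, 0);
        close(fd);
        printf("received 4096 ticks\n");
        return 0;
    }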

Brad
--
Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so. -- Douglas Adams




[Qemu-devel] More about i386 mmu

2006-08-01 Thread Alessandro Corradi
Hi all,

Can I have some additional info regarding MMU emulation in i386? In
particular, the tech docs say that qemu uses the mmap system call to
emulate the CPU MMU; can you help me understand this point? For
instance, how does it translate virtual addresses into I/O addresses
and access emulated devices?
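
Not the actual QEMU code, but the basic idea the docs hint at can be
sketched like this (names are illustrative): guest RAM is one big
mmap()ed region, a guest physical address is an offset into it, and
anything outside RAM is dispatched to device-emulation callbacks. Guest
virtual addresses are first translated to physical ones by walking the
emulated page tables in software.

    /* Sketch of the concept only; these names are not QEMU's. */
    #include <stdint.h>
    #include <sys/mman.h>

    #define GUEST_RAM_SIZE (128 * 1024 * 1024)

    static uint8_t *guest_ram;                    /* host view of guest RAM */

    extern uint32_t mmio_read32(uint32_t paddr);  /* hypothetical device hook */

    int guest_ram_init(void)
    {
        guest_ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return guest_ram == MAP_FAILED ? -1 : 0;
    }

    uint32_t guest_phys_read32(uint32_t paddr)
    {
        if (paddr < GUEST_RAM_SIZE)               /* plain RAM: direct load */
            return *(uint32_t *)(guest_ram + paddr);
        return mmio_read32(paddr);                /* I/O range: emulated device */
    }
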
Thanks,
Ale


Re: [Qemu-devel] Ensuring data is written to disk

2006-08-01 Thread Jamie Lokier
Jens Axboe wrote:
> On Tue, Aug 01 2006, Jamie Lokier wrote:
> > > Of course, guessing the disk drive write buffer size and trying not to
> > > kill system I/O performance with all these writes is another question
> > > entirely ... sigh !!!
> >
> > If you just want to evict all data from the drive's cache, and don't
> > actually have other data to write, there is a CACHEFLUSH command you
> > can send to the drive which will be more dependable than writing as
> > much data as the cache size.
>
> Exactly, and this is what the OS fsync() should do once the drive has
> acknowledged that the data has been written (to cache). At least
> reiserfs w/barriers on Linux does this.

1. Are you sure this happens, w/ reiserfs on Linux, even if the disk
   is an SATA or SCSI type that supports ordered tagged commands?  My
   understanding is that barriers force an ordering between write
   commands, and that CACHEFLUSH is used only with disks that don't have
   more sophisticated write ordering commands.  Is the data still
   committed to the disk platter before fsync() returns on those?

2. Do you know if ext3 (in ordered mode) w/barriers on Linux does it too,
   for in-place writes which don't modify the inode and therefore don't
   have a journal entry?

On Darwin, fsync() does not issue CACHEFLUSH to the drive.  Instead,
it has an fcntl F_FULLFSYNC which does that, which is documented in
Darwin's fsync() man page as working with all Darwin's filesystems,
provided the hardware honours CACHEFLUSH or the equivalent.
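
In code, the Darwin-only strengthening looks like this (a sketch;
F_FULLFSYNC lives in <fcntl.h> on Darwin, and on other systems the code
falls back to a plain fsync()):

    /* Push file data as close to the platter as the OS will let us. */
    #include <fcntl.h>
    #include <unistd.h>

    int full_sync(int fd)
    {
    #ifdef F_FULLFSYNC
        /* Darwin: flush the file and ask the drive to drain its write cache */
        if (fcntl(fd, F_FULLFSYNC) == 0)
            return 0;
        /* some filesystems reject F_FULLFSYNC; fall back to plain fsync() */
    #endif
        return fsync(fd);
    }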

From what little documentation I've found, on Linux it appears to be
much less predictable.  It seems that some filesystems, with some
kernel versions, and some mount options, on some types of disk, with
some drive settings, will commit data to a platter before fsync()
returns, and others won't.  And an application calling fsync() has no
easy way to find out.  Have I got this wrong?

ps. (An aside question): do you happen to know of a good patch which
implements IDE barriers w/ ext3 on 2.4 kernels?  I found a patch by
googling, but it seemed that the ext3 parts might not be finished, so
I don't trust it.  I've found turning off the IDE write cache makes
writes safe, but with a huge performance cost.

Thanks,
-- Jamie




[Qemu-devel] qemu osdep.c

2006-08-01 Thread Fabrice Bellard
CVSROOT:        /sources/qemu
Module name:    qemu
Changes by:     Fabrice Bellard <bellard>       06/08/01 15:50:07

Modified files:
.  : osdep.c 

Log message:
removed unused code

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/qemu/osdep.c?cvsroot=qemu&r1=1.11&r2=1.12




[Qemu-devel] qemu osdep.h

2006-08-01 Thread Fabrice Bellard
CVSROOT:        /sources/qemu
Module name:    qemu
Changes by:     Fabrice Bellard <bellard>       06/08/01 15:50:14

Modified files:
.  : osdep.h 

Log message:
removed unused code

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/qemu/osdep.h?cvsroot=qemu&r1=1.6&r2=1.7




[Qemu-devel] qemu qemu-img.c

2006-08-01 Thread Fabrice Bellard
CVSROOT:        /sources/qemu
Module name:    qemu
Changes by:     Fabrice Bellard <bellard>       06/08/01 15:51:11

Modified files:
.  : qemu-img.c 

Log message:
show backing file name

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/qemu/qemu-img.c?cvsroot=qemu&r1=1.10&r2=1.11




[Qemu-devel] qemu monitor.c

2006-08-01 Thread Fabrice Bellard
CVSROOT:        /sources/qemu
Module name:    qemu
Changes by:     Fabrice Bellard <bellard>       06/08/01 15:52:40

Modified files:
.  : monitor.c 

Log message:
commit to specific devices

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/qemu/monitor.c?cvsroot=qemu&r1=1.54&r2=1.55




[Qemu-devel] qemu Changelog Makefile Makefile.target block-b...

2006-08-01 Thread Fabrice Bellard
CVSROOT:        /sources/qemu
Module name:    qemu
Changes by:     Fabrice Bellard <bellard>       06/08/01 16:21:11

Modified files:
.  : Changelog Makefile Makefile.target 
 block-bochs.c block-cloop.c block-cow.c 
 block-dmg.c block-qcow.c block-vmdk.c 
 block-vpc.c block-vvfat.c block.c block_int.h 
 vl.c vl.h 
Added files:
.  : block-raw.c 

Log message:
async file I/O API

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/qemu/Changelog?cvsroot=qemu&r1=1.121&r2=1.122
http://cvs.savannah.gnu.org/viewcvs/qemu/Makefile?cvsroot=qemu&r1=1.104&r2=1.105
http://cvs.savannah.gnu.org/viewcvs/qemu/Makefile.target?cvsroot=qemu&r1=1.121&r2=1.122
http://cvs.savannah.gnu.org/viewcvs/qemu/block-bochs.c?cvsroot=qemu&r1=1.1&r2=1.2
http://cvs.savannah.gnu.org/viewcvs/qemu/block-cloop.c?cvsroot=qemu&r1=1.3&r2=1.4
http://cvs.savannah.gnu.org/viewcvs/qemu/block-cow.c?cvsroot=qemu&r1=1.6&r2=1.7
http://cvs.savannah.gnu.org/viewcvs/qemu/block-dmg.c?cvsroot=qemu&r1=1.4&r2=1.5
http://cvs.savannah.gnu.org/viewcvs/qemu/block-qcow.c?cvsroot=qemu&r1=1.7&r2=1.8
http://cvs.savannah.gnu.org/viewcvs/qemu/block-vmdk.c?cvsroot=qemu&r1=1.8&r2=1.9
http://cvs.savannah.gnu.org/viewcvs/qemu/block-vpc.c?cvsroot=qemu&r1=1.3&r2=1.4
http://cvs.savannah.gnu.org/viewcvs/qemu/block-vvfat.c?cvsroot=qemu&r1=1.6&r2=1.7
http://cvs.savannah.gnu.org/viewcvs/qemu/block.c?cvsroot=qemu&r1=1.28&r2=1.29
http://cvs.savannah.gnu.org/viewcvs/qemu/block_int.h?cvsroot=qemu&r1=1.5&r2=1.6
http://cvs.savannah.gnu.org/viewcvs/qemu/vl.c?cvsroot=qemu&r1=1.202&r2=1.203
http://cvs.savannah.gnu.org/viewcvs/qemu/vl.h?cvsroot=qemu&r1=1.136&r2=1.137
http://cvs.savannah.gnu.org/viewcvs/qemu/block-raw.c?cvsroot=qemu&rev=1.1




Re: [Qemu-devel] Ensuring data is written to disk

2006-08-01 Thread Jens Axboe
On Tue, Aug 01 2006, Jamie Lokier wrote:
> Jens Axboe wrote:
> > On Tue, Aug 01 2006, Jamie Lokier wrote:
> > > > Of course, guessing the disk drive write buffer size and trying not
> > > > to kill system I/O performance with all these writes is another
> > > > question entirely ... sigh !!!
> > >
> > > If you just want to evict all data from the drive's cache, and don't
> > > actually have other data to write, there is a CACHEFLUSH command you
> > > can send to the drive which will be more dependable than writing as
> > > much data as the cache size.
> >
> > Exactly, and this is what the OS fsync() should do once the drive has
> > acknowledged that the data has been written (to cache). At least
> > reiserfs w/barriers on Linux does this.
>
> 1. Are you sure this happens, w/ reiserfs on Linux, even if the disk
>    is an SATA or SCSI type that supports ordered tagged commands?  My
>    understanding is that barriers force an ordering between write
>    commands, and that CACHEFLUSH is used only with disks that don't have
>    more sophisticated write ordering commands.  Is the data still
>    committed to the disk platter before fsync() returns on those?

No SATA drive supports ordered tags; that is a SCSI-only property.
Barrier writes are a separate thing; probably reiser ties the two
together because it needs to know if the flush cache command works as
expected. Drives are funny sometimes...

For SATA you always need at least one cache flush (you need one if you
have the FUA/Forced Unit Access write available, you need two if not).

> 2. Do you know if ext3 (in ordered mode) w/barriers on Linux does it too,
>    for in-place writes which don't modify the inode and therefore don't
>    have a journal entry?

I don't think that it does, however it may have changed. A quick grep
would seem to indicate that it has not changed.

> On Darwin, fsync() does not issue CACHEFLUSH to the drive.  Instead,
> it has an fcntl F_FULLFSYNC which does that, which is documented in
> Darwin's fsync() man page as working with all Darwin's filesystems,
> provided the hardware honours CACHEFLUSH or the equivalent.

That seems somewhat strange to me, I'd much rather be able to say that
fsync() itself is safe. An added fcntl hack doesn't really help the
applications that already rely on the correct behaviour.

> From what little documentation I've found, on Linux it appears to be
> much less predictable.  It seems that some filesystems, with some
> kernel versions, and some mount options, on some types of disk, with
> some drive settings, will commit data to a platter before fsync()
> returns, and others won't.  And an application calling fsync() has no
> easy way to find out.  Have I got this wrong?

Nope, I'm afraid that is pretty much true... reiser and (it looks like,
I just grepped) XFS have the best support for this. Unfortunately I
don't think the user can actually tell if the OS does the right thing,
outside of running a blktrace and verifying that it actually sends a
flush cache down the queue.

> ps. (An aside question): do you happen to know of a good patch which
> implements IDE barriers w/ ext3 on 2.4 kernels?  I found a patch by
> googling, but it seemed that the ext3 parts might not be finished, so
> I don't trust it.  I've found turning off the IDE write cache makes
> writes safe, but with a huge performance cost.

The hard part (the IDE code) can be grabbed from the SLES8 latest
kernels, I developed and tested the code there. That also has the ext3
bits, IIRC.

-- 
Jens Axboe





[Qemu-devel] Interesting QEMU + OpenVPN

2006-08-01 Thread Jonathan Kalbfeld

I have an instance of NetBSD 3.0.1 that runs inside of QEMU emulating an i386.

On the parent system, whether it is Windows, Linux, Solaris, or *BSD,
you can run an OpenVPN instance and set up a tunnel.

On the guest system, you can then run OpenVPN and connect to the other
end of the tunnel.

Voila!  Now, from the parent system, you can connect directly to your
QEMU instance by using the tunnel.

I set this up on my Sun Blade and am using it to test protocols.  My
next task will be setting up a Virtual Ethernet Bridge between an
emulated i386 on QEMU here in LA and a real-live NetBSD brick at my
parents' home in Detroit.

Interesting stuff!

jonathan
--
--
Jonathan Kalbfeld
+1 323 620 6682




Re: [Qemu-devel] Ensuring data is written to disk

2006-08-01 Thread Jamie Lokier
Jens Axboe wrote:
> > > > If you just want to evict all data from the drive's cache, and don't
> > > > actually have other data to write, there is a CACHEFLUSH command you
> > > > can send to the drive which will be more dependable than writing as
> > > > much data as the cache size.
> > >
> > > Exactly, and this is what the OS fsync() should do once the drive has
> > > acknowledged that the data has been written (to cache). At least
> > > reiserfs w/barriers on Linux does this.
> >
> > 1. Are you sure this happens, w/ reiserfs on Linux, even if the disk
> >    is an SATA or SCSI type that supports ordered tagged commands?  My
> >    understanding is that barriers force an ordering between write
> >    commands, and that CACHEFLUSH is used only with disks that don't have
> >    more sophisticated write ordering commands.  Is the data still
> >    committed to the disk platter before fsync() returns on those?
>
> No SATA drive supports ordered tags; that is a SCSI-only property.
> Barrier writes are a separate thing; probably reiser ties the two
> together because it needs to know if the flush cache command works as
> expected. Drives are funny sometimes...
>
> For SATA you always need at least one cache flush (you need one if you
> have the FUA/Forced Unit Access write available, you need two if not).

Well my question wasn't intended to be specific to ATA (sorry if that
wasn't clear), but a general question about writing to disks on Linux.

And I don't understand your answer.  Are you saying that reiserfs on
Linux (presumably 2.6) commits data (and file metadata) to disk
platters before returning from fsync(), for all types of disk
including PATA, SATA and SCSI?  Or if not, is that a known property of
PATA only, or PATA and SATA only?  (And in all cases, presumably only
ordinary controllers can be depended on, not RAID controllers or
USB/Firewire bridges which ignore cache flushes for no good reason).

> > 2. Do you know if ext3 (in ordered mode) w/barriers on Linux does it too,
> >    for in-place writes which don't modify the inode and therefore don't
> >    have a journal entry?
>
> I don't think that it does, however it may have changed. A quick grep
> would seem to indicate that it has not changed.

Ew.  What do databases do to be reliable then?  Or aren't they, on Linux?

> > On Darwin, fsync() does not issue CACHEFLUSH to the drive.  Instead,
> > it has an fcntl F_FULLFSYNC which does that, which is documented in
> > Darwin's fsync() man page as working with all Darwin's filesystems,
> > provided the hardware honours CACHEFLUSH or the equivalent.
>
> That seems somewhat strange to me, I'd much rather be able to say that
> fsync() itself is safe. An added fcntl hack doesn't really help the
> applications that already rely on the correct behaviour.

The Darwin fsync(2) man page claims Darwin is the only OS which has a
facility to commit the data to disk platters.  (And it claims to do
this with IDE, SCSI and FibreChannel.  With journalling filesystems, it
requests the journal to do the commit, but the cache flush still
ultimately reaches the disk.  Sounds like a good implementation to me.)

SQLite (a nice open source database) will use F_FULLFSYNC on Darwin to
do this, and it appears to add a large performance penalty relative to
using fsync() alone.  People noticed and wondered why.
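
(For reference, SQLite exposes this as a pragma; a sketch of enabling it
from C through the public sqlite3_exec() API:)

    /* Ask SQLite to use F_FULLFSYNC on Darwin; a no-op on other systems. */
    #include <sqlite3.h>

    int enable_fullfsync(sqlite3 *db)
    {
        return sqlite3_exec(db, "PRAGMA fullfsync = ON;", NULL, NULL, NULL);
    }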

Other OSes show performance similar to Darwin's with fsync() alone.

So it looks like the man page is probably accurate: other OSes,
particularly including Linux, don't commit the data reliably to disk
platters when using fsync().

In which case, I'd imagine that's why Darwin has a separate option,
because if Darwin's fsync() was many times slower than all the other
OSes, most people would take that as a sign of a badly performing OS,
rather than understanding the benefits.

> > From what little documentation I've found, on Linux it appears to be
> > much less predictable.  It seems that some filesystems, with some
> > kernel versions, and some mount options, on some types of disk, with
> > some drive settings, will commit data to a platter before fsync()
> > returns, and others won't.  And an application calling fsync() has no
> > easy way to find out.  Have I got this wrong?
>
> Nope, I'm afraid that is pretty much true... reiser and (it looks like,
> I just grepped) XFS have the best support for this. Unfortunately I
> don't think the user can actually tell if the OS does the right thing,
> outside of running a blktrace and verifying that it actually sends a
> flush cache down the queue.

Ew.  So what do databases on Linux do?  Or are database commits
unreliable because of this?

> > ps. (An aside question): do you happen to know of a good patch which
> > implements IDE barriers w/ ext3 on 2.4 kernels?  I found a patch by
> > googling, but it seemed that the ext3 parts might not be finished, so
> > I don't trust it.  I've found turning off the IDE write cache makes
> > writes safe, but with a huge performance cost.
>
> The hard part (the IDE code) can be grabbed from the SLES8 latest
> kernels, I developed and tested the code there. That also has the ext3
> bits, IIRC.

[Qemu-devel] Re: AltGr keystrokes

2006-08-01 Thread Edu

Hi,

I also have issues with extended characters in my keyboard layout when
using VNC. Part of them seem to be caused by sign extension in
function read_u32 from vnc.c, which should not be done.

I need to do some more testing, though.

Eduardo Felipe




VNC (or RFB), was Re: [Qemu-devel] Re: AltGr keystrokes

2006-08-01 Thread Johannes Schindelin
Hi,

On Wed, 2 Aug 2006, Edu wrote:

> I also have issues with extended characters in my keyboard layout when
> using VNC. Part of them seem to be caused by sign extension in function
> read_u32 from vnc.c, which should not be done.

I just _cannot_ resist pointing out that I never had issues like these
with the old RFB patch.

AFAICT the only issue the old RFB patch had was that a slow connection
could actually slow down QEmu, since the RFB server would block on input.

But I am sure that this can be fixed in LibVNCServer, bringing benefit not
only to QEmu, but also to other users of the lib. I cannot help but be
sad about the solution in QEmu.

Ciao,
Dscho





RE: [Qemu-devel] How to Simulate hardware that counts scanlines?

2006-08-01 Thread Armistead, Jason
Steve Ellenoff wrote:

> You misunderstand, I have no control over the running program. I didn't
> write it, I don't have source code, and I surely wouldn't have used a
> polling mechanism for determining the vblank as you suggested.
>
> My problem is that I wish to run this program through qemu. I've made a
> bunch of hardware specific additions to qemu to emulate the specific
> hardware this program runs on. I'm just not sure the best way to simulate
> the scanline counting the hardware does.
>
> Seems nobody here has any ideas either, which is kind of hard to believe.
> I don't know if this would work, but one idea I had was to divide up the
> gui timer into 260 slices (that's the # of scanlines the hardware
> expects), and simply update the hardware register that counts the
> scanlines this way.
>
> Does anyone think that's the way to go, or if there's a better way?

As I see it, one of the problems in Steve's scenario is that QEMU does
dynamic translations based on blocks of code, and the interrupts or changes
to emulated hardware state are delivered only at the end of the execution of
an entire basic block.  While this might be adequate for an operating system
which cares for the most part very little about real world timing, it may
not be sufficient for every case where you want to do an emulation of a
processor in an embedded device or a curious non-standard PC like Steve's.

I think that's why the game system emulators MAME and MESS are perhaps more
akin to what he's wanting to do, in that they are able to deliver interrupts
in exactly the same way as the real CPU sees them, i.e. at the end of
execution of the current instruction, and they consider instruction
execution timing to maintain an accurate internal time base, independent
of what the real world outside is doing.

Most modern fast PC CPUs can easily emulate the older arcade game
or computer system hardware with plenty of horsepower to spare, and so it
appears realtime, synchronised via RDTSC or similar.  I guess if you ran
them on an underperforming PC like an old 486 or early Pentium, you might
see things go at less than real speed.

Maybe I'm totally off the mark, but at least that's how I read the QEMU docs
relating to hardware interrupts

http://www.qemu.org/qemu-tech.html#SEC18

and the preceding sections about the way instruction blocks are translated
and executed.

I'm sure Fabrice and others can shoot me down if needs be ... 
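
For what it's worth, Steve's 260-slice idea quoted above might look
roughly like this against QEMU's timer API (a sketch; the register name,
frame rate, and interrupt hook are made-up placeholders for his
board-specific code):

    /* Sketch: emulate a scanline counter by rearming a vm_clock timer
       SCANLINES times per frame.  scanline_reg stands in for whatever
       register the emulated hardware exposes to the guest. */
    #include "vl.h"   /* QEMU-tree header with the timer declarations */

    #define SCANLINES 260
    #define FRAME_HZ  60

    static QEMUTimer *scanline_timer;
    static int scanline_reg;

    static void scanline_tick(void *opaque)
    {
        scanline_reg = (scanline_reg + 1) % SCANLINES;
        /* raise an interrupt here on specific lines (e.g. vblank) if needed */
        qemu_mod_timer(scanline_timer,
                       qemu_get_clock(vm_clock) +
                       ticks_per_sec / (FRAME_HZ * SCANLINES));
    }

    static void scanline_init(void)
    {
        scanline_timer = qemu_new_timer(vm_clock, scanline_tick, NULL);
        qemu_mod_timer(scanline_timer, qemu_get_clock(vm_clock));
    }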

Cheers

Jason




[Qemu-devel] Re: AltGr keystrokes

2006-08-01 Thread Anthony Liguori
On Wed, 02 Aug 2006 02:22:44 +0200, Edu wrote:

> Hi,
>
> I also have issues with extended characters in my keyboard layout when
> using VNC. Part of them seem to be caused by sign extension in function
> read_u32 from vnc.c, which should not be done.

You're right, I'll submit a patch to fix that.  Let me know if it helps.
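
For anyone following along, the bug class looks like this (illustrative,
not the literal vnc.c code): if the buffer is plain char, any byte with
the top bit set is sign-extended to int before the shift, smearing ones
across the high bits of the assembled value.

    #include <stdint.h>

    /* Buggy: with signed char, buf[i] >= 0x80 becomes a negative int,
       and the sign extension corrupts the assembled value. */
    uint32_t read_u32_buggy(const char *buf)
    {
        return (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3];
    }

    /* Fixed: force each byte to unsigned before widening and shifting. */
    uint32_t read_u32_fixed(const char *buf)
    {
        return ((uint32_t)(uint8_t)buf[0] << 24) |
               ((uint32_t)(uint8_t)buf[1] << 16) |
               ((uint32_t)(uint8_t)buf[2] << 8)  |
                (uint32_t)(uint8_t)buf[3];
    }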

Regards,

Anthony Liguori
 
> I need to do some more testing, though.
>
> Eduardo Felipe






Re: [Qemu-devel] Wipe patch

2006-08-01 Thread andrzej zaborowski

On 02/08/06, Brad Campbell [EMAIL PROTECTED] wrote:

> ZIGLIO, Frediano, VF-IT wrote:
> > Hi,
> >   well, this is not a definitive patch but it works. The aim is to be
> > able to wipe the disk without allocating the entire space. When you
> > wipe a disk, the program fills the disk with zero bytes, so the disk
> > image grows to allocate all the space. This patch just detects
> > null-byte writes and does not write all-zero clusters.
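
(The detection described above amounts to a check like this before each
cluster write; a sketch with an illustrative name, not the patch's
actual code:)

    /* Return nonzero if the cluster is entirely zero bytes; a sparse
       image format can then skip allocating it. */
    #include <stddef.h>

    static int cluster_is_zero(const unsigned char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++)
            if (buf[i] != 0)
                return 0;
        return 1;
    }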


> I've been giving this some pretty heavy testing over the last week and
> can say I've not noticed any negative performance impact or any other
> adverse side effects, not to mention the speedup when doing re-packing
> (which I do fairly regularly on both ext3 and ntfs guest filesystems).
>
> While I'm here, does anyone know of a simple program, either dos or
> linux based, for wiping unused space on fat filesystems? The only ones
> I've found so far have been windows based.


I don't know if you mean just zeroing unused parts or reordering the
data and stuff like defragmentation. If you mean the former, there's a
universal method:
 dd if=/dev/zero of=xxx; rm xxx
where xxx is a path to a new file on the filesystem, which must be
mounted. It will create a zero-filled file there, which will fill all
available space, and remove the file afterwards. I used this when I
needed to send filesystem images over the internet so that they
compressed well.
If you add dd=a-big-number-here it might take less time to write the file.



> This patch now conflicts pretty heavily with the new AIO changes, it
> would seem. Further investigation required.
>
> Ta,
> Brad
> --
> Human beings, who are almost unique in having the ability
> to learn from the experience of others, are also remarkable
> for their apparent disinclination to do so. -- Douglas Adams





Regards,
--
balrog 2oo6

Dear Outlook users: Please remove me from your address books
http://www.newsforge.com/article.pl?sid=03/08/21/143258




Re: [Qemu-devel] Wipe patch

2006-08-01 Thread Brad Campbell

andrzej zaborowski wrote:


> I don't know if you mean just zeroing unused parts or reordering the
> data and stuff like defragmentation. If you mean the former, there's a
> universal method:
>  dd if=/dev/zero of=xxx; rm xxx
> where xxx is a path to a new file on the filesystem, which must be
> mounted. It will create a zero-filled file there, which will fill all
> available space, and remove the file afterwards. I used this when I
> needed to send filesystem images over the internet so that they
> compressed well.
> If you add dd=a-big-number-here it might take less time to write the
> file.
>
> Oops, I mean bs= of course.


Yep, been doing similar, but the neato wipe programs generally also do
cluster tails and unused directory entries and allow a really great
compression ratio. Ta for the advice though.


Brad

--
Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so. -- Douglas Adams




[Qemu-devel] Qemu + wireless

2006-08-01 Thread S . P . T . Krishnan

Hi,

This is my first post to this list.

I have been using qemu for the last several versions.  It works great;
in fact I just booted a Vista beta 2 yesterday on a linux host.

My query is whether Qemu, either now or in the near future, will
support a wireless interface?

Example: a Qemu VM could be configured to use wireless (in a different
subnet) while the host is on a wired interface.
regards,
Krishnan




Re: [Qemu-devel] Interesting QEMU + OpenVPN

2006-08-01 Thread Dirk Behme

Jonathan Kalbfeld wrote:
> I have an instance of NetBSD 3.0.1 that runs inside of QEMU emulating
> an i386.
>
> On the parent system, whether it is Windows, Linux, Solaris, or *BSD,
> you can run an OpenVPN instance and set up a tunnel.
>
> On the guest system, you can then run OpenVPN and connect to the other
> end of the tunnel.
>
> Voila!  Now, from the parent system, you can connect directly to your
> QEMU instance by using the tunnel.


Maybe you would like to add some details (something like a small
howto) and a short description of this to the QEMU Wiki [1]?


Dirk

[1] http://kidsquid.com/cgi-bin/moin.cgi

