Daniel P. Berrange wrote:
2/ two instances of kvm can be passed the same -hda. There is no locking
whatsoever. This messes things up seriously.
That depends entirely on what you are doing with the disk in the guest OS.
The disk could be hosting a cluster filesystem. The guest OS could
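The kind of cooperative protection under discussion can be sketched with advisory locking. This is a hedged illustration, not QEMU code: `flock()` is advisory, so a deployment that deliberately shares the image (e.g. a cluster filesystem across guests) would simply skip taking the lock.

```c
/* Sketch only: refuse to open a disk image that another QEMU instance
 * already holds.  flock() locks are per open-file-description, so a
 * second opener with LOCK_NB fails immediately instead of corrupting
 * the image. */
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

static int open_image_locked(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
        /* Someone else already holds the image exclusively. */
        close(fd);
        return -1;
    }
    return fd;
}
```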
Glauber Costa wrote:
This patch introduces QEMUAccel, a placeholder for function pointers
that aims at helping qemu to abstract accelerators such as kqemu and
kvm (actually, the 'accelerator' name was proposed by avi kivity, since
he loves referring to kvm that way).
Just a little thought...
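The "placeholder for function pointers" idea can be sketched as an ops table; the member names below are hypothetical and need not match Glauber's actual patch.

```c
/* Illustrative sketch of an accelerator ops table: each accelerator
 * (kqemu, kvm, ...) fills in its own callbacks, and qemu dispatches
 * through whichever table was registered. */
#include <stddef.h>

typedef struct CPUState CPUState;   /* opaque for this sketch */

typedef struct QEMUAccel {
    const char *name;
    int  (*init)(void);
    void (*cpu_interrupt)(CPUState *env);
} QEMUAccel;

static const QEMUAccel *current_accel;

static void register_accel(const QEMUAccel *accel)
{
    /* e.g. register_accel(&kvm_accel) when kvm is available */
    current_accel = accel;
}
```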
Anthony Liguori wrote:
Perhaps. This raises another point about AIO vs. threads:
If I submit sequential O_DIRECT reads with aio_read(), will they enter
the device read queue in the same order, and reach the disk in that
order (allowing for reordering when worthwhile by the elevator)?
Avi Kivity wrote:
Anthony Liguori wrote:
If I submit sequential O_DIRECT reads with aio_read(), will they enter
the device read queue in the same order, and reach the disk in that
order (allowing for reordering when worthwhile by the elevator)?
There's no guarantee that any sort of order
Avi Kivity wrote:
Perhaps. This raises another point about AIO vs. threads:
If I submit sequential O_DIRECT reads with aio_read(), will they enter
the device read queue in the same order, and reach the disk in that
order (allowing for reordering when worthwhile by the elevator)?
Yes,
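For reference, the submission pattern being debated looks roughly like this. The sketch below is not from the thread: it submits two sequential reads with POSIX `aio_read()` and waits for both. POSIX itself guarantees nothing about the order in which they reach the device (glibc's thread-based implementation may issue them concurrently), and an `O_DIRECT` variant would additionally need aligned buffers and offsets.

```c
/* Submit two back-to-back reads asynchronously, then wait for both.
 * Completion order is unspecified; only the data is checked. */
#include <aio.h>
#include <errno.h>
#include <unistd.h>

static int read_two_blocks(int fd, char *a, char *b, size_t blk)
{
    struct aiocb cb1 = {0}, cb2 = {0};
    cb1.aio_fildes = fd; cb1.aio_buf = a; cb1.aio_nbytes = blk; cb1.aio_offset = 0;
    cb2.aio_fildes = fd; cb2.aio_buf = b; cb2.aio_nbytes = blk; cb2.aio_offset = (off_t)blk;
    if (aio_read(&cb1) < 0 || aio_read(&cb2) < 0)
        return -1;
    const struct aiocb *list[2] = { &cb1, &cb2 };
    /* Wait until both requests complete, in whatever order they finish. */
    while (aio_error(&cb1) == EINPROGRESS || aio_error(&cb2) == EINPROGRESS)
        aio_suspend(list, 2, NULL);
    return (aio_return(&cb1) == (ssize_t)blk &&
            aio_return(&cb2) == (ssize_t)blk) ? 0 : -1;
}
```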
Avi Kivity wrote:
And video streaming on some embedded devices with no MMU! (Due to the
page cache heuristics working poorly with no MMU, sustained reliable
streaming is managed with O_DIRECT and the app managing cache itself
(like a database), and that needs AIO to keep the request queue
Avi Kivity wrote:
At such a tiny difference, I'm wondering why Linux-AIO exists at all,
as it complicates the kernel rather a lot. I can see the theoretical
appeal, but if performance is so marginal, I'm surprised it's in
there.
Linux aio exists, but that's all that can be said for it. It
Avi Kivity wrote:
For the majority of deployments posix aio should be sufficient. The few
that need something else can use Linux aio.
Does that mean: for the majority of deployments, the slow version is
sufficient, and the few that care about performance can use Linux AIO?
I'm under the
Daniel P. Berrange wrote:
Those cases aren't always discoverable. Linux-aio just falls back to
using synchronous IO. It's pretty terrible. We need a new AIO
interface for Linux (and yes, we're working on this). Once we have
something better, we'll change that to be the default and
Anthony Liguori wrote:
I'm of the view that '-aio auto' would be a really good option - and
when it's proven itself, it should be the default. It could work on
all QEMU hosts: it would pick synchronous IO when there is nothing else.
Right now, not specifying the -aio option is equivalent to
Marcelo Tosatti wrote:
It's necessary to guarantee that pending AIO writes have reached stable
storage when the flush request returns.
Also change fsync() to fdatasync(), since the modification time is not
critical data.
+    if (aio_fsync(O_DSYNC, &acb->aiocb) < 0) {
BDRVRawState *s =
Marcelo Tosatti wrote:
On Fri, Mar 28, 2008 at 03:07:03PM +, Jamie Lokier wrote:
Marcelo Tosatti wrote:
It's necessary to guarantee that pending AIO writes have reached stable
storage when the flush request returns.
Also change fsync() to fdatasync(), since the modification time
Marcelo Tosatti wrote:
I don't think the first qemu_aio_flush() is necessary because the fsync
request will be enqueued after pending ones:
aio_fsync() function does a sync on all outstanding
asynchronous I/O operations associated with
aiocbp->aio_fildes.
More
Marcelo Tosatti wrote:
static void raw_flush(BlockDriverState *bs)
{
    BDRVRawState *s = bs->opaque;
-    fsync(s->fd);
+    raw_aio_flush(bs);
+
+    /* We rely on the fact that no other AIO will be submitted
+     * in parallel, but this should be fixed by per-device
+     * AIO
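The flush semantics in that hunk can be sketched self-contained; the names below are stand-ins, not QEMU's actual code. Drain the pending AIO first, then `fdatasync()`, which, unlike `fsync()`, is not required to force out non-critical metadata such as the modification time.

```c
/* Stand-in sketch of "drain, then sync data only". */
#include <unistd.h>

typedef struct BDRVRawState { int fd; } BDRVRawState;

static void qemu_aio_flush_stub(void)
{
    /* In QEMU this loops until every outstanding AIO request has
     * completed; nothing is pending in this self-contained sketch. */
}

static int raw_flush_sketch(BDRVRawState *s)
{
    qemu_aio_flush_stub();
    return fdatasync(s->fd);   /* data reaches stable storage */
}
```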
Paul Brook wrote:
That'll depend on what kind of device is emulated. Does the SCSI
emulation handle multiple in-flight commands with any guarantee on
order?
SCSI definitely allows (and we emulate) multiple in flight commands.
I can't find any requirement that writes must complete before
Gilad Ben-Yossef wrote:
Glauber Costa wrote:
This patch introduces a thread_id variable to CPUState.
Its duty will be to hold the process, or more generally, thread
id of the currently executing cpu
env->nb_watchpoints = 0;
+#ifdef __WIN32
+env->thread_id = GetCurrentProcessId();
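A portable helper along those lines could look as follows (hypothetical name, not the patch's code). Note the quoted hunk calls `GetCurrentProcessId()`; the per-thread Windows analogue is `GetCurrentThreadId()`, and on Linux `gettid()` long lacked a glibc wrapper, so it is reached via `syscall()`.

```c
/* Sketch: current thread id on Windows and Linux. */
#ifdef _WIN32
#include <windows.h>
static unsigned long qemu_current_tid(void)
{
    return (unsigned long)GetCurrentThreadId();
}
#else
#include <sys/syscall.h>
#include <unistd.h>
static unsigned long qemu_current_tid(void)
{
    /* No glibc wrapper for gettid() on older systems; use syscall(). */
    return (unsigned long)syscall(SYS_gettid);
}
#endif
```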
M. Warner Losh wrote:
In message: [EMAIL PROTECTED]
Jamie Lokier [EMAIL PROTECTED] writes:
: Btw, unfortunately pthread_self() is not safe to call from signal
: handlers.
And also often times meaningless, as signal handlers can run in
arbitrary threads...
That's usually
Paul Brook wrote:
What you really want to do is ask your virtualization module what
features it supports.
Yes, that needs to be an additional filter.
I'd have thought that would be the *only* interesting set for autodetection.
If that means the same as the features which are
Avi Kivity wrote:
Well, the guest will invoke its own workaround logic to disable buggy
features, so I see no issue here.
The guest can only do this if it has exactly the correct id
information for the host processor (e.g. This is an Intel Pentium Pro
model XXX), not just the list of safe to
Avi Kivity wrote:
Let's start with '-cpu host' as 'cpu host-cpuid' and implement '-cpu
host-os' on the first bug report? I have a feeling we won't ever see it.
I have a feeling you won't ever see it either, but not because it's a
missing feature.
Instead, I think a very small number of users
Avi Kivity wrote:
I agree. If the host OS has disabled a feature, it's a fair bet it's done
that for a reason.
The reason may not be relevant to the guest.
For most guests the relevant features are those which work correctly
and efficiently on the virtual CPU.
If the host OS has disabled a
Anthony Liguori wrote:
I like this idea but I have some suggestions about the general approach.
I think instead of defining another machine type, it would be better to
just have a command line option like -cpuid that took a comma-separated
string of features, with 'all' meaning all features that
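Anthony's suggestion could be sketched as below; the feature names, flag values, and parser are invented for illustration and are not QEMU's option plumbing.

```c
/* Hypothetical parser for a "-cpuid sse3,nx" style option, where
 * "all" enables every feature bit. */
#include <stdint.h>
#include <string.h>

enum {
    FEAT_SSE3 = 1u << 0,
    FEAT_NX   = 1u << 1,
    FEAT_ALL  = ~0u,
};

static uint32_t parse_cpuid_features(char *arg)
{
    uint32_t mask = 0;
    /* strtok modifies its argument, so arg must be writable. */
    for (char *tok = strtok(arg, ","); tok; tok = strtok(NULL, ",")) {
        if (!strcmp(tok, "all"))
            mask |= FEAT_ALL;
        else if (!strcmp(tok, "sse3"))
            mask |= FEAT_SSE3;
        else if (!strcmp(tok, "nx"))
            mask |= FEAT_NX;
        /* unknown tokens silently ignored in this sketch */
    }
    return mask;
}
```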
Avi Kivity wrote:
In this case the dyn-tick minimum res will be 1msec. I believe it should
work ok since this is the case without any dyn-tick.
Actually minimum resolution depends on host HZ setting, but - yes -
essentially you have the same behaviour of the unix timer, plus the