On Thu, Sep 21, 2023 at 02:41:54PM +0530, Kishon Vijay Abraham I wrote:
> > PCI Endpoint function driver is implemented using the PCIe Endpoint
> > framework, but it requires physical boards for testing, and it is difficult
> > to test sufficiently. In order to find bugs and hardware-dependent
> >
On Wed, Jul 12, 2023 at 10:28:00AM +0200, Stefano Garzarella wrote:
> The problem is that the SCSI stack does not send this command, so we
> should do it in the driver. In fact we do it for
> VIRTIO_SCSI_EVT_RESET_RESCAN (hotplug), but not for
> VIRTIO_SCSI_EVT_RESET_REMOVED (hotunplug).
No, you s
Hi all,
qemu 7.2.0 fails to boot my usual test setup using -kernel (see
the actual script below). I've bisected this down to:
commit ffe2d2382e5f1aae1abc4081af407905ef380311
Author: Jason A. Donenfeld
Date: Wed Sep 21 11:31:34 2022 +0200
x86: re-enable rng seeding via SetupData
with thi
Please don't do this. OCP is acting as a counter-standard to the
proper NVMe standard here and should in absolutely no way be supported
by open source projects that need to stick to the actual standards.
Please work with the NVMe technical working group to add this (very
useful) functionality to
On Thu, Sep 29, 2022 at 10:37:22AM -0600, Keith Busch wrote:
> I don't think so. Memory alignment and length granularity are two completely
> different concepts. If anything, the kernel's ABI had been that the length
> requirement was also required for the memory alignment, not the other way
> arou
On Mon, Jun 27, 2022 at 01:47:28PM +0200, Niklas Cassel wrote:
> CRMS.CRWMS bit shall be set to 1 on controllers compliant with versions
> later than NVMe 1.4.
>
> The first version later than NVMe 1.4 is NVMe 2.0
>
> Let's claim compliance with NVMe 2.0 such that a follow up patch can
> set
VM launch, it is not spec compliant and is of
> little use since the UUID cannot be used reliably anyway and the
> behavior prior to this patch must be considered buggy.
>
> Reviewed-by: Keith Busch
> Signed-off-by: Klaus Jensen
Looks good:
Reviewed-by: Christoph Hellwig
; `eui64=UINT64`.
Looks good:
Reviewed-by: Christoph Hellwig
On Wed, Apr 20, 2022 at 07:51:32AM +0200, Klaus Jensen wrote:
> > So unlike the EUI, UUIDs are designed to be autogenerated even if the
> > current algorithm is completely broken. We'd just need to persist them.
> > Note that NVMe at least in theory requires providing at least one of
> > the unique
Looks good:
Reviewed-by: Christoph Hellwig
On Tue, Apr 19, 2022 at 02:10:38PM +0200, Klaus Jensen wrote:
> From: Klaus Jensen
>
> Do not default to generating a UUID for namespaces if one is not
> explicitly specified.
>
> This is technically a breaking change in behavior. However, since the
> UUID changes on every VM launch, it is not s
On Tue, Apr 19, 2022 at 02:10:36PM +0200, Klaus Jensen wrote:
> From: Klaus Jensen
>
> Unconditionally set an EUI64 for namespaces. The nvme-ns device defaults
> to auto-generating a persistent EUI64 if not specified, but for single
> namespace setups (-device nvme,drive=...), this does not happe
Signed-off-by: Klaus Jensen
Looks good:
Reviewed-by: Christoph Hellwig
On Tue, Nov 16, 2021 at 10:58:30AM +, Stefan Hajnoczi wrote:
> Question for Jens and Christoph:
>
> Is there a way for userspace to detect whether a Linux block device
> supports SECDISCARD?
I don't know of one.
> If not, then maybe a new sysfs attribute can be added:
This looks correct, bu
On Mon, Jul 12, 2021 at 12:03:27PM +0100, Stefan Hajnoczi wrote:
> Why did you decide to implement -device nvme-mi as a device on
> TYPE_NVME_BUS? If the NVMe spec somehow requires this then I'm surprised
> that there's no NVMe bus interface (callbacks). It seems like this could
> just as easily be
Looks good,
Reviewed-by: Christoph Hellwig
On Tue, May 04, 2021 at 02:59:07PM +0200, Greg Kroah-Hartman wrote:
> > Hi Christoph,
> >
> > FYI, these uapi changes break build of QEMU.
>
> What uapi changes?
>
> What exactly breaks?
>
> Why does QEMU require kernel driver stuff?
Looks like it pulls in the uapi struct definitions unconditio
On Wed, Apr 22, 2020 at 01:14:44PM -0400, Jon Derrick wrote:
> The two patches (Linux & QEMU) add support for passthrough VMD devices
> in QEMU/KVM. VMD device 28C0 already supports passthrough natively by
> providing the Host Physical Address in a shadow register to the guest
> for correct bridge
s/KABI/UAPI/ in the subject and anywhere else in the series.
Please avoid __packed__ structures and just properly pad them, they
have a major performance impact on some platforms and will cause
compiler warnings when taking addresses of members.
On Fri, Dec 13, 2019 at 02:46:26PM +, Stefan Hajnoczi wrote:
> The Linux virtio_blk.ko guest driver is removing legacy SCSI passthrough
> support. Deprecate this feature in QEMU too.
>
> Signed-off-by: Stefan Hajnoczi
Fine with me as the original author:
Reviewed-by: Christoph Hellwig
On Fri, Nov 01, 2019 at 04:25:10PM +0100, Max Reitz wrote:
> The XFS kernel driver has a bug that may cause data corruption for qcow2
> images as of qemu commit c8bb23cbdbe32f. We can work around it by
> treating post-EOF fallocates as serializing up until infinity (INT64_MAX
> in practice).
This
On Thu, Apr 18, 2019 at 09:05:05AM -0700, Dan Williams wrote:
> > > I'd either add a comment about avoiding retpoline overhead here or just
> > > make ->flush == NULL mean generic_nvdimm_flush(). Just so that people
> > > don't
> > > get confused by the code.
> >
> > Isn't this premature optimizat
On Mon, Mar 11, 2019 at 09:11:53AM -0600, Keith Busch wrote:
> The implementation used blocks units rather than the expected bytes.
Thanks,
looks good:
Reviewed-by: Christoph Hellwig
And sorry for causing this mess.
On Tue, Oct 16, 2018 at 11:42:35PM +0530, Kirti Wankhede wrote:
> - Added vfio_device_migration_info structure to use interact with vendor
> driver.
There is no such thing as a 'vendor driver' in Linux - all drivers are
treated equally. And I don't see any single driver supporting this yet,
so yo
On Mon, Feb 05, 2018 at 09:19:46AM +1300, Michael Clark wrote:
> BTW I've created branches in my own personal trees for Privileged ISA
> v1.9.1. These trees are what I use for v1.9.1 backward compatibility
> testing in QEMU:
>
> - https://github.com/michaeljclark/riscv-linux/tree/riscv-linux-4.6.2
On Fri, Jan 12, 2018 at 07:24:54AM +1300, Michael Clark wrote:
> I'm going to be restoring branches for bbl and riscv-linux that work again
> priv 1.9.1. There are still other emulators and RTL that support priv1.9.1.
> Folk will have silicon against different versions of spec going forward.
> Like
#ifdef CONFIG_USER_ONLY
int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
{
    return 0;
}

bool riscv_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
{
    return false;
}

int riscv_cpu_handle_mmu_fault(CPUState *cs, vaddr address,
                               int access_type, int mmu_idx)
{
    cs->e
On Wed, Jan 10, 2018 at 03:46:19PM -0800, Michael Clark wrote:
> - RISC-V Instruction Set Manual Volume I: User-Level ISA Version 2.2
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.9.1
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.10
Same question as
On Wed, Jan 03, 2018 at 01:44:15PM +1300, Michael Clark wrote:
> HTIF (Host Target Interface) provides console emulation for QEMU. HTIF
> allows identical copies of BBL (Berkeley Boot Loader) and linux to run
> on both Spike and QEMU. BBL provides HTIF console access via the
> SBI (Supervisor Binar
> +    if (env->priv_ver >= PRIV_VERSION_1_10_0) {
> +        if (get_field(env->satp, SATP_MODE) == VM_1_09_MBARE) {
> +            mode = PRV_M;
> +        }
> +    } else {
> +        if (get_field(env->mstatus, MSTATUS_VM) == VM_1_10_MBARE) {
> +            mode = PRV_M;
> +        }
> +    }
> The RISC-V QEMU port implements the following specifications:
> - RISC-V Instruction Set Manual Volume I: User-Level ISA Version 2.2
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.9.1
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.10
What is the reas
On Thu, Nov 23, 2017 at 03:02:05PM +0100, Marc-André Lureau wrote:
> The following patch is going to use the symbol from the fw_cfg module,
> to call the function and write the note location details in the
> vmcoreinfo entry, so qemu can produce dumps with the vmcoreinfo note.
Sounds like fw_cfg s
On Fri, Oct 20, 2017 at 08:05:09AM -0700, Dan Williams wrote:
> Right, that's the same recommendation I gave.
>
> https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg08404.html
>
> ...so maybe I'm misunderstanding your concern? It sounds like we're on
> the same page.
Yes, the above is
On Thu, Oct 19, 2017 at 11:21:26AM -0700, Dan Williams wrote:
> The difference is that nvdimm_flush() is not mandatory, and that the
> platform will automatically perform the same flush at power-fail.
> Applications should be able to assume that if they are using MAP_SYNC
> that no other coordinati
On Wed, Oct 18, 2017 at 08:51:37AM -0700, Dan Williams wrote:
> This use case is not "Persistent Memory". Persistent Memory is
> something you can map and make persistent with CPU instructions.
> Anything that requires a driver call is device driver managed "Shared
> Memory".
How is this any diffe
On Tue, Oct 17, 2017 at 03:40:56AM -0400, Pankaj Gupta wrote:
> Are you saying do it as existing i.e ACPI pmem like interface?
> The reason we have created this new driver is exiting pmem driver
> does not define proper semantics for guest flushing requests.
At this point I'm caring about the Linu
I think this driver is at entirely the wrong level.
If you want to expose pmem to a guest with flushing assist do it
as pmem, and not a block driver.
This didn't seem to make it into mainline, does it need a ping?
Can you send a patch with just the PSDT flag check? The rest should
only be in an eventual patch to add SGL support.
On Tue, Jun 06, 2017 at 03:38:05PM +0800, Qu Wenruo wrote:
> Update nvme header to catch up with kernel.
> Most of the newly added members are from 1.2 and 1.3 spec, while the
> status code is only kept the same with kernel (around 1.1 spec).
>
> The major update is to add Scatter Gather List rela
On Fri, May 05, 2017 at 12:03:40PM +0200, Paolo Bonzini wrote:
> While that's allowed and it makes sense indeed on SSDs, for QEMU's
> typical usage it can lead to fragmentation and worse performance. On
> extent-based file systems, write zeroes without deallocate can be
> implemented very efficien
Signed-off-by: Keith Busch
[hch: ported over from qemu-nvme.git to mainline]
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 26 ++
hw/block/nvme.h | 1 +
2 files changed, 27 insertions(+)
Changes since v1:
- add BDRV_REQ_MAY_UNMAP flag
diff --git a/hw/block
On Fri, May 05, 2017 at 11:30:11AM +0200, Paolo Bonzini wrote:
> could you pass BDRV_REQ_MAY_UNMAP for the flags here if the deallocate
> bit (dword 12 bit 25) is set?
In fact we should do that unconditionally. The deallocate bit is new
in 1.3 (which we don't claim to support) and forces deallocat
Signed-off-by: Keith Busch
[hch: ported over from qemu-nvme.git to mainline]
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 26 ++
hw/block/nvme.h | 1 +
2 files changed, 27 insertions(+)
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index ae303d44e5
On Tue, Mar 28, 2017 at 04:39:25PM +0800, Changpeng Liu wrote:
> Currently virtio-blk driver does not provide discard feature flag, so the
> filesystems which built on top of the block device will not send discard
> command. This is okay for HDD backend, but it will impact the performance
> for SSD
infrastructure properly to not block
the main thread on discard requests, and cleaned up a little bit.
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 87 +
hw/block/nvme.h | 1 +
2 files changed, 88 insertions(+)
diff --git a/hw/block/nvme.c b
Hi all,
this series implements two more NVMe commands: DSM and Write Zeroes.
Both trace their lineage to Keith's qemu-nvme.git repository, and
while the Write Zeroes one is taken from there almost literally
the DSM one has seen a major rewrite to not block the main thread
as well as various other
From: Keith Busch
Signed-off-by: Keith Busch
[hch: ported over from qemu-nvme.git to mainline]
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 27 ++-
hw/block/nvme.h | 1 +
2 files changed, 27 insertions(+), 1 deletion(-)
diff --git a/hw/block/nvme.c b/hw
On Sun, Jul 31, 2016 at 04:52:16PM -0700, Ashish Mittal wrote:
> This patch adds support for a new block device type called "vxhs".
> Source code for the library that this code loads can be downloaded from:
> https://github.com/MittalAshish/libqnio.git
Do you also have a pointer to the server impl
Third resend of this series after it didn't get picked up the
previous times. The Qemu NVMe implementation mistakes the cns
field in the Identify command for a boolean. This was never
true, and is actively harmful since NVMe 1.1 (which the Qemu
device claims to support) supports more than two Ide
NVMe 1.1 requires devices to implement a Namespace List subcommand of
the identify command. Qemu not only does not implement this feature, but
also misinterprets it as an Identify Controller request. Due to this
any OS trying to use the Namespace List will fail the probe.
Signed-off-by: Christoph
bug fix.
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index a0655a3..cef3bb4 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -954,7 +954,7 @@ static void nvme_class_init
Thanks Ming,
from a first quick view this looks great. I'll look over it in a bit
more detail once I get a bit more time.
On Thu, Nov 19, 2015 at 04:21:03PM -0800, Ming Lin wrote:
> #define NVMET_SUBSYS_NAME_LEN   256
>         char    subsys_name[NVMET_SUBSYS_NAME_LEN];
> +
> +       void    *opaque;
> +       void    (*start)(void *);
> };
Why can't vhost use
Meh, this was still missing the uncommitted changes for the nsid
off-by-one vs the array index:
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 360be71..4f768d5 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -499,10 +499,10 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n,
NvmeIdent
NVMe 1.1 requires devices to implement a Namespace List subcommand of
the identify command. Qemu not only does not implement this feature, but
also misinterprets it as an Identify Controller request. Due to this
any OS trying to use the Namespace List will fail the probe.
Signed-off-by: Christoph
bug fix.
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 4a6443f..360be71 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -946,7 +946,7 @@ static void nvme_class_init
First one fixes Identify to behave as mandated by the spec, and the
second bumps the PCI revision so that guest drivers can detect
the fixed version of the device, and only the old version has
to be blacklisted.
On Tue, Nov 17, 2015 at 05:41:04PM +, Keith Busch wrote:
> On Tue, Nov 17, 2015 at 09:33:11AM -0800, Busch, Keith wrote:
> > I accidently deleted my comment. Here's what it said:
> >
> > +    list = g_malloc(data_len);
> > +    for (i = 0; i < n->num_namespaces; i++) {
> > +        if (i <= mi
From: Christoph Hellwig
Subject: a nasty nvme fix
In-Reply-To:
Hi all,
below is a fix for a bug in the qemu NVMe identify implementation that's
causing us some trouble with an updated Linux driver. We'll have to
blacklist the existing Qemu device ID for it, so I wonder how we can
a
NVMe 1.1 requires devices to implement a Namespace List subcommand of
the identify command. Qemu not only does not implement this feature, but
also misinterprets it as an Identify Controller request. Due to this
any OS trying to use the Namespace List will fail the probe.
Signed-off-by: Christoph
On Wed, Jun 17, 2015 at 02:24:06PM +0200, Kevin Wolf wrote:
> On 17.06.2015 at 13:59, Christoph Hellwig wrote:
> > Thanks, Eric.
> >
> > Kevin,
> >
> > do you want me to resend the series with these cover letter/cc fixes or
> > is it okay this time?
Thanks, Eric.
Kevin,
do you want me to resend the series with these cover letter/cc fixes or
is it okay this time?
here.
Signed-off-by: Christoph Hellwig
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 1e07166..50d76f1 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -479,6 +479,9 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeCmd *cmd,
NvmeRequest *req)
req->cqe.res
On Thu, Nov 20, 2014 at 02:00:59PM -0500, Mike Snitzer wrote:
> virtio_blk incorrectly established -1U as the default for these
> queue_limits. Set these limits to sane default values to avoid crashing
> the kernel. But the virtio-blk protocol should probably be extended to
> allow proper stackin
On Tue, May 06, 2014 at 09:00:54PM +0200, Max Reitz wrote:
> The current version of raw-posix always uses ioctl(FS_IOC_FIEMAP) if
> FIEMAP is available; lseek with SEEK_HOLE/SEEK_DATA are not even
> compiled in in this case. However, there may be implementations which
> support the latter but not t
On Tue, Jul 16, 2013 at 06:29:26PM +0200, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
This isn't really XFS specific, at least ext4 and ocfs2 can report the same.
On Wed, Jul 31, 2013 at 08:19:51AM +0200, Paolo Bonzini wrote:
> Most of the block layer is under the BSD license, thus it is reasonable
> to license block/raw.c the same way. CCed people should ACK by replying
> with a Signed-off-by line.
The code was intended to be GPLv2.
On Wed, Oct 31, 2012 at 01:46:22PM +0100, Paolo Bonzini wrote:
> qemu-nbd does support AIO in the latest versions. There's also
> --cache=MODE and --aio=MODE command-line options.
Oh true, it's just hidden behind coroutines. With --aio=native and
--nocache I actually get fairly reasonable perfor
On Mon, Oct 01, 2012 at 04:52:23PM +0200, Paolo Bonzini wrote:
> Adding an NBD server inside QEMU is trivial, since all the logic is
> in nbd.c and can be shared easily between qemu-nbd and QEMU itself.
> The main difference is that qemu-nbd serves a single unnamed export,
> while QEMU serves named
On Mon, Oct 15, 2012 at 12:07:37PM -0600, Eric Blake wrote:
> On 10/15/2012 11:29 AM, Alex Bligh wrote:
> > This patch allows qemu-img rebase to rebase an image to
> > have no backing file, as opposed to merely allowing it to
> > rebase to an existing backing file.
>
> You can already do that by r
On Tue, Oct 09, 2012 at 05:42:01PM +0800, Chen HanXiao wrote:
> When we use SCSI generic device as disk image, function lseek
> could not get the size of this kind of device.
> So try to use SCSI command Read Capacity(10) when lseek failed to get
> the size of SCSI generic device.
Eww, this is ugl
On Fri, Jul 13, 2012 at 12:42:41PM +0200, Kevin Wolf wrote:
> It still feels a bit more like qemu-io-style operations. Not sure what
> your use case looks like exactly, but adding a qemu-io command that
> reads data from a file and writes it at a given offset into the images
> (or vice versa) shoul
On Fri, Jul 13, 2012 at 10:13:15AM +0100, Stefan Hajnoczi wrote:
> How is that different from all the qemu-io commands?
qemu-io has no modes to just dump the output without additional
information / statistics, or for the write case to just take user input
instead of a pattern. I actually tried to add
Only buffers that map to unallocated blocks need to be zeroed.
Signed-off-by: Christoph Hellwig
---
block/sheepdog.c | 37 ++---
1 file changed, 18 insertions(+), 19 deletions(-)
Index: qemu/block/sheepdog.c
On Mon, Jul 09, 2012 at 04:54:08PM +0800, Wenchao Xia wrote:
> Hi, Paolo and folks,
> qemu have good capabilities to access different virtual disks, I want
> to expose its block layer API to let 3rd party program linked in, such
> as management stack or block tools, to access images data directly
Only buffers that map to unallocated blocks need to be zeroed.
Signed-off-by: Christoph Hellwig
---
block/sheepdog.c | 37 ++---
1 file changed, 18 insertions(+), 19 deletions(-)
Index: qemu/block/sheepdog.c
Only buffers that map to unallocated blocks need to be zeroed.
Signed-off-by: Christoph Hellwig
---
block/sheepdog.c | 28 ++--
1 file changed, 18 insertions(+), 10 deletions(-)
Index: qemu/block/sheepdog.c
On Thu, Jun 28, 2012 at 04:06:24PM +0900, MORITA Kazutaka wrote:
>
> 'offset' is the offset of the sheepdog object. I think it should be
> 'done' since we need to pass the number of skip bytes.
Indeed. Odd that my tests didn't catch this.
>
> > goto done;
> > }
Only buffers that map to unallocated blocks need to be zeroed.
Signed-off-by: Christoph Hellwig
Index: qemu/block/sheepdog.c
===
--- qemu.orig/block/sheepdog.c 2012-06-27 18:02:41.849867899 +0200
+++ qemu/block/sheepdog.c
On Fri, Jun 22, 2012 at 10:48:56AM -0700, Chris Wedgwood wrote:
> > FITRIM is a mounted filesystem feature to discard (or "trim") blocks which
> > are not in use by the filesystem. This is useful for solid-state drives
> > (SSDs) and thinly-provisioned storage. Provide access to the feature
> > fr
On Wed, May 02, 2012 at 12:54:21AM +0200, Andreas Färber wrote:
> > +fds = fopen("/proc/sys/crypto/fips_enabled", "r");
>
> How standardized is this? Should we limit this to __linux__ or something?
It's completely non-standard and doesn't even exist in mainline Linux.
All the FIPS bullshit
On Mon, Apr 30, 2012 at 12:59:53PM +0100, Stefan Hajnoczi wrote:
> It's not ideal but if we had a kickstart file or another way of
> building the guest with a single command, then at least regular QEMU
> SCSI contributors and maintainers can use the test suite - I think a
> Fedora guest would be fi
On Fri, Apr 27, 2012 at 04:15:43PM +0100, Stefan Hajnoczi wrote:
> Christoph Hellwig has announced a new testsuite for the Linux
> in-kernel SCSI target:
>
> http://risingtidesystems.com/git/?p=scsi-testsuite.git;a=tree
>
> We will need something similar for virtio-s
On Thu, Apr 26, 2012 at 03:49:25PM +0200, Christian Borntraeger wrote:
> From: Einar Lueck
>
> This patch provides a new function to guess physical and logical block
> sizes and exploits them in the context of s390 virtio bus. On s390
> there may be block sizes different than 512. Therefore, we w
On Wed, Apr 25, 2012 at 12:21:53PM +0100, Stefano Stabellini wrote:
> That is true, in fact I couldn't figure out what I had to implement just
> reading the comment. So I went through the blkback code and tried to
> understand what I had to do, but I got it wrong.
>
> Reading the code again it see
On Wed, Apr 25, 2012 at 10:02:45AM +0100, Ian Campbell wrote:
> The blkif spec was recently much improved, you can find it at
> http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
>
> TBH I'm not sure it actually answers your questions wrt
> BLKIF_OP_FLUSH_DISKCACHE, if n
On Tue, Apr 24, 2012 at 12:28:35PM +0100, Stefano Stabellini wrote:
> xen_disk: use bdrv_aio_flush instead of bdrv_flush
This one seems completely broken, as it just queues up the flushes and
writes without any ordering between them. Linux filesystems absolutely
rely on a REQ_FUA request wh
> -    case BLKIF_OP_WRITE_BARRIER:
> +    case BLKIF_OP_FLUSH_DISKCACHE:
>          if (!ioreq->req.nr_segments) {
>              ioreq->presync = 1;
>              return 0;
>          }
> -        ioreq->presync = ioreq->postsync = 1;
> +        ioreq->postsync = 1;
>          /* fall through */
On Tue, Apr 24, 2012 at 01:26:43AM +0900, MORITA Kazutaka wrote:
> SD_FLAG_CMD_CACHE is ignored in the older version of Sheepdog, so,
> even if we specify cache=writeback or cache=none, the data is written
> with O_DSYNC always and cannot be cached in the server's page cache or
> volatile disk cach
On Fri, Apr 20, 2012 at 12:15:36PM -0700, MORITA Kazutaka wrote:
> His patch sets the SD_FLAG_CMD_CACHE flag for writes only when the
> user selects cache=writeback or cache=none. If SD_FLAG_CMD_CACHE is
> not set in the request, Sheepdog servers are forced to flush the cache
> like FUA commands.
On Tue, Apr 03, 2012 at 01:35:50AM +0800, Liu Yuan wrote:
> From: Liu Yuan
>
> Flush operation is supposed to flush the write-back cache of
> sheepdog cluster.
>
> By issuing flush operation, we can assure the Guest of data
> reaching the sheepdog cluster storage.
How does qemu know that the ca
On Tue, Mar 27, 2012 at 04:48:19PM +0100, Stefano Stabellini wrote:
> Anthony,
> please pull this small patch series that allows xen_disk to be used
> correctly with NATIVE_AIO and O_DIRECT.
>
> This series should be backported to the stable branch too.
Any plans to add BLKIF_OP_FLUSH_DISKCACHE s
On Mon, Mar 26, 2012 at 02:40:47PM -0500, Richard Laager wrote:
> On Sat, 2012-03-24 at 16:27 +0100, Christoph Hellwig wrote:
> > > has_discard = !fallocate(s->fd, FALLOC_FL_PUNCH_HOLE |
> > > FALLOC_FL_KEEP_SIZE,
> >
> > There is no point in usi
On Mon, Mar 26, 2012 at 10:44:07AM +0100, Daniel P. Berrange wrote:
> This suggests that there be a new command line param to '-drive' to turn
> discard support on/off, since QEMU can't reliably know if the raw file
> it is given is intended to be fully pre-allocated by the mgmt app.
Yes.
On Sun, Mar 11, 2012 at 04:03:01PM -0500, Leonardo E. Reiter wrote:
> indeed mmap() is used in the code. This is unfortunate that it cannot be
> used. It's a really high performance way to achieve what we want here, and
> very safe for the use-case. Of course the only medium we support in the
>
On Thu, Mar 08, 2012 at 06:15:17PM +0100, Paolo Bonzini wrote:
> SEEK_DATA and SEEK_HOLE can be used to implement the is_allocated
> callback for raw files. These currently work on btrfs, with an XFS
> implementation also coming soon.
Btw - if you're interested in a bit more kernel hacking it wou
On Fri, Mar 09, 2012 at 02:36:50PM -0600, Richard Laager wrote:
> I'm not sure if fallocate() and/or BLKDISCARD always guarantee that the
> discard has made it to stable storage. If they don't, does O_DIRECT or
> O_DSYNC on open() cause them to make any such guarantee? If not, should
> you be calli
On Thu, Mar 08, 2012 at 06:15:13PM +0100, Paolo Bonzini wrote:
> Allow discard to fail, and fall back to the write operation. This
> is needed because there's no simple way to probe for availability
> of FALLOC_FL_PUNCH_HOLE.
So you switch on advertising TRIM support in the patch before, and then
On Wed, Mar 14, 2012 at 01:49:48PM +0100, Paolo Bonzini wrote:
> It does make the distinction. "I don't care" is UNMAP (or WRITE
> SAME(16) with the UNMAP bit set); "I want to have zeroes" is WRITE
> SAME(10) or WRITE SAME(16) with an all-zero payload.
But once the target sets the unmap zeroes dat