On 02/11/12 18:05, Konrad Rzeszutek Wilk wrote:
On Fri, Nov 02, 2012 at 04:43:04PM +0100, Roger Pau Monne wrote:
This patch contains fixes for persistent grants implementation v2:
* handle == 0 is a valid handle, so initialize grants in blkback
setting the handle to BLKBACK_INVALID_HANDLE
On 19/10/12 03:34, James Harper wrote:
This patch implements persistent grants for the xen-blk{front,back}
mechanism. The effect of this change is to reduce the number of unmap
operations performed, since they cause a (costly) TLB shootdown. This allows
the I/O performance to scale better
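The mechanism quoted above can be sketched with a toy model (illustrative names only, not the real xen-blkback API): a persistent grant is mapped once and left mapped across requests, so the per-request unmap, and the TLB shootdown it triggers, goes away.

```c
#include <stdlib.h>

/* Hypothetical sketch of the persistent-grant idea: instead of mapping
 * and unmapping a grant for every request (each unmap forces a costly
 * TLB shootdown), keep the mapping cached and reuse it. */

#define CACHE_SIZE 64

struct grant_cache {
	int mapped[CACHE_SIZE];   /* 1 if this grant ref is already mapped */
	unsigned long maps;       /* number of map operations performed */
	unsigned long unmaps;     /* number of (costly) unmap operations */
};

static void cache_init(struct grant_cache *c)
{
	for (int i = 0; i < CACHE_SIZE; i++)
		c->mapped[i] = 0;
	c->maps = c->unmaps = 0;
}

/* Process one segment referencing grant ref 'ref'. */
static void process_segment(struct grant_cache *c, int ref, int persistent)
{
	if (!c->mapped[ref]) {
		c->maps++;              /* stand-in for a gnttab map op */
		c->mapped[ref] = 1;
	}
	if (!persistent) {
		c->unmaps++;            /* gnttab unmap -> TLB shootdown */
		c->mapped[ref] = 0;
	}
	/* else: leave the grant mapped for reuse by later requests */
}
```

With 1000 requests cycling over 16 grant refs, the non-persistent path performs 1000 unmaps while the persistent path performs none, which is the scaling effect the patch description claims.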
On 22/10/12 15:47, Konrad Rzeszutek Wilk wrote:
On Thu, Oct 18, 2012 at 01:22:01PM +0200, Roger Pau Monne wrote:
This patch implements persistent grants for the xen-blk{front,back}
mechanism. The effect of this change is to reduce the number of unmap
operations performed, since they cause a
On 23/10/12 19:20, Konrad Rzeszutek Wilk wrote:
diff --git a/drivers/block/xen-blkback/blkback.c
b/drivers/block/xen-blkback/blkback.c
index c6decb9..2b982b2 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -78,6 +78,7 @@ struct pending_req {
On 23/10/12 20:50, Konrad Rzeszutek Wilk wrote:
On Tue, Oct 23, 2012 at 08:09:27PM +0200, Roger Pau Monné wrote:
On 23/10/12 19:20, Konrad Rzeszutek Wilk wrote:
diff --git a/drivers/block/xen-blkback/blkback.c
b/drivers/block/xen-blkback/blkback.c
index c6decb9..2b982b2 100644
--- a/drivers
On 24/10/12 14:40, liuxiaolei1124 wrote:
Dear Roger:
I have applied the patch "Persistent grant maps for xen blk drivers"
(https://lkml.org/lkml/2012/10/18/191) to my Dom0, which is 2.6.32.36.
When I start a VM, blkback sometimes hits the BUG_ON.
I'm working on top of the next
On 24/09/12 13:36, Jan Beulich wrote:
On 21.09.12 at 17:52, Oliver Chick oliver.ch...@citrix.com wrote:
Changes since v1:
* Maximum number of persistent grants per device now 64, rather than
256, as this is the actual maximum request in a (1 page) ring.
As said previously, I don't see
global, now each blkback instance has
its own list of free pages that can be used to map grants. Also, a
run-time parameter (max_buffer_pages) has been added in order to tune
the maximum number of free pages each blkback instance will keep in
its buffer.
Signed-off-by: Roger Pau Monné roger
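A minimal sketch of the per-instance free-page buffer described above (hypothetical names; the real code uses alloc_page and a lock-protected list): pages returned to the pool are kept for reuse up to a max_buffer_pages-style cap, and anything beyond that is released.

```c
#include <stdlib.h>

#define MAX_BUFFER_PAGES 4   /* tunable, like the max_buffer_pages knob */

/* Illustrative per-instance pool: get_free_page() prefers a cached
 * page over a fresh allocation; put_free_page() caches at most
 * MAX_BUFFER_PAGES pages and frees the rest. */

struct page_pool {
	void *free[MAX_BUFFER_PAGES];
	int nfree;       /* pages currently cached */
	int released;    /* pages handed back to the allocator */
};

static void pool_init(struct page_pool *p)
{
	p->nfree = 0;
	p->released = 0;
}

static void *get_free_page(struct page_pool *p)
{
	if (p->nfree > 0)
		return p->free[--p->nfree];     /* reuse a cached page */
	return malloc(4096);                    /* stand-in for alloc_page() */
}

static void put_free_page(struct page_pool *p, void *page)
{
	if (p->nfree < MAX_BUFFER_PAGES) {
		p->free[p->nfree++] = page;     /* keep it for later */
	} else {
		free(page);                     /* pool full: release it */
		p->released++;
	}
}
```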
Perhaps you could update the comment from saying 'map this grant' (which
implies doing it NOW as opposed to have done it already), and say:
/*
.. continue using the grant non-persistently. Note that
we mapped it in the earlier loop and the earlier if conditional
sets
global, now each blkback instance has
its own list of free pages that can be used to map grants. Also, a
run-time parameter (max_buffer_pages) has been added in order to tune
the maximum number of free pages each blkback instance will keep in
its buffer.
Signed-off-by: Roger Pau Monné roger
-xen-backend
b/Documentation/ABI/stable/sysfs-bus-xen-backend
index e04afe0..7595b38 100644
--- a/Documentation/ABI/stable/sysfs-bus-xen-backend
+++ b/Documentation/ABI/stable/sysfs-bus-xen-backend
@@ -81,3 +81,10 @@ Contact: Roger Pau Monné roger@citrix.com
Description
On 09/04/13 18:13, Konrad Rzeszutek Wilk wrote:
On Wed, Mar 27, 2013 at 12:10:41PM +0100, Roger Pau Monne wrote:
Remove the last dependency from blkbk by moving the list of free
requests to blkif. This change reduces the contention on the list of
available requests.
Signed-off-by: Roger Pau
On 17/04/13 16:25, Konrad Rzeszutek Wilk wrote:
Perhaps the xen-blkfront part of the patch should be just split out to make
this easier?
Perhaps what we really should have is just the 'max' value of megabytes
we want to handle on the ring.
As right now 32 ring requests * 32 segments = 4MB.
the
maximum number of segments allowed in a request can change depending
on the backend, so we have to requeue all the requests in the ring and
in the queue and split the bios in them if they are bigger than the
new maximum number of segments.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc
On 18/04/13 14:43, Jens Axboe wrote:
On Wed, Apr 17 2013, Konrad Rzeszutek Wilk wrote:
On Wed, Apr 17, 2013 at 07:04:51PM +0200, Roger Pau Monné wrote:
On 17/04/13 16:25, Konrad Rzeszutek Wilk wrote:
Perhaps the xen-blkfront part of the patch should be just split out to
make
this easier
On 18/04/13 16:26, Jens Axboe wrote:
I've just set that to something that brings a performance benefit
without having to map an insane number of persistent grants in blkback.
Yes, the values are correct, but the device request queue (rq) is only
able to provide read requests with 64 segments
On 05/03/13 22:49, Konrad Rzeszutek Wilk wrote:
This could be written a bit differently to also run outside the
xen_blkif_schedule
(so a new thread). This would require using the lock mechanism and
converting
this big loop to two smaller loops:
1) - one quick that holds the lock - to take
On 28/02/13 11:28, Roger Pau Monne wrote:
Indirect descriptors introduce a new block operation
(BLKIF_OP_INDIRECT) that passes grant references instead of segments
in the request. These grant references are filled with arrays of
blkif_request_segment_aligned; this way we can send more segments
which makes the loop unbound.
Since we always manipulate the list while holding the io_lock, there's
no need for additional locking (llist used previously is safe to use
concurrently without additional locking).
Should be backported to 3.8 stable.
Signed-off-by: Roger Pau Monné
which as the fourth argument expects the offset.
We hadn't used the physical address as part of this at all.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: xen-de...@lists.xen.org
[v1: s/buf/offset
On 04/03/13 21:22, Konrad Rzeszutek Wilk wrote:
[...]
@@ -535,13 +604,17 @@ purge_gnt_list:
msecs_to_jiffies(xen_blkif_lru_interval);
}
+ remove_free_pages(blkif, xen_blkif_max_buffer_pages);
+
if (log_stats
On 04/03/13 21:22, Konrad Rzeszutek Wilk wrote:
@@ -194,14 +260,15 @@ static void add_persistent_gnt(struct rb_root *root,
else if (persistent_gnt->gnt > this->gnt)
new = &((*new)->rb_right);
else {
- pr_alert(DRV_PFX trying to
On 05/03/13 22:46, Konrad Rzeszutek Wilk wrote:
On Tue, Mar 05, 2013 at 06:07:57PM +0100, Roger Pau Monné wrote:
On 04/03/13 21:41, Konrad Rzeszutek Wilk wrote:
On Thu, Feb 28, 2013 at 11:28:55AM +0100, Roger Pau Monne wrote:
Indirect descriptors introduce a new block operation
On 28/02/13 11:58, Jan Beulich wrote:
On 28.02.13 at 11:28, Roger Pau Monne roger@citrix.com wrote:
dev_bus_addr returned in the grant ref map operation is the mfn of the
passed page, there's no need to store it in the persistent grant
entry, since we can always get it provided that we
if we are requesting a lot of
pages since it is using the emergency memory pools.
Allocating all the pages at init prevents us from having to call
alloc_page, thus preventing possible failures.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
On 05/03/13 15:18, Konrad Rzeszutek Wilk wrote:
On Tue, Mar 05, 2013 at 12:04:41PM +0100, Roger Pau Monné wrote:
On 04/03/13 20:39, Konrad Rzeszutek Wilk wrote:
On Thu, Feb 28, 2013 at 11:28:47AM +0100, Roger Pau Monne wrote:
This prevents us from having to call alloc_page while we
On 05/03/13 15:16, Konrad Rzeszutek Wilk wrote:
On Tue, Mar 05, 2013 at 08:11:19AM +, Jan Beulich wrote:
On 04.03.13 at 21:44, Konrad Rzeszutek Wilk konrad.w...@oracle.com
wrote:
nods 'op' sounds good. With a comment saying it can do all of the
BLKIF_OPS_..
except the BLKIF_OP_INDIRECT
On 05/03/13 09:06, Jan Beulich wrote:
On 04.03.13 at 18:19, Roger Pau Monné roger@citrix.com wrote:
On 28/02/13 11:58, Jan Beulich wrote:
On 28.02.13 at 11:28, Roger Pau Monne roger@citrix.com wrote:
And then the biolist[] array really can be folded into a union
with the remaining
On 04/03/13 21:41, Konrad Rzeszutek Wilk wrote:
On Thu, Feb 28, 2013 at 11:28:55AM +0100, Roger Pau Monne wrote:
Indirect descriptors introduce a new block operation
(BLKIF_OP_INDIRECT) that passes grant references instead of segments
in the request. These grant references are filled with
that will be
persistently mapped.
* lru_interval: minimum interval (in ms) at which the LRU should be
run
* lru_num_clean: number of persistent grants to remove when executing
the LRU.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: xen-de
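The two knobs described in this snippet (lru_interval, lru_num_clean) can be modeled with a toy LRU pass (simplified flat array, not the real red-black tree code; names are illustrative): each pass evicts up to lru_num_clean persistent grants that were not touched since the previous pass, then clears the usage flags.

```c
#define NUM_GRANTS 8

/* Simplified model of a persistently mapped grant. */
struct gnt {
	int valid;    /* still persistently mapped */
	int in_use;   /* touched since the last LRU pass */
};

/* One LRU pass: evict up to 'num_clean' grants that were not used
 * since the previous pass, and reset the usage flags for the next
 * interval. Returns how many grants were evicted. */
static int lru_purge(struct gnt g[NUM_GRANTS], int num_clean)
{
	int evicted = 0;

	for (int i = 0; i < NUM_GRANTS; i++) {
		if (g[i].valid && !g[i].in_use && evicted < num_clean) {
			g[i].valid = 0;     /* unmap this persistent grant */
			evicted++;
		}
		g[i].in_use = 0;            /* start a fresh interval */
	}
	return evicted;
}
```

In the real driver this pass would run from the blkback kthread at most once every lru_interval milliseconds.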
On 05/03/13 22:53, Konrad Rzeszutek Wilk wrote:
/* No more gnttab callback work. */
gnttab_cancel_free_callback(&info->callback);
@@ -1088,6 +1120,12 @@ again:
goto destroy_blkring;
}
+/* Allocate memory for grants */
+err =
On 27/02/13 05:52, Chen Gang wrote:
If the call to xen_vbd_translate fails, preq.dev will not be initialized,
so use blkif->vbd.pdevice instead (still better to print the relevant info).
preq.dev is initialized a couple of lines prior to calling
xen_vbd_translate:
preq.dev =
On 28/02/13 11:49, Jan Beulich wrote:
On 28.02.13 at 11:28, Roger Pau Monne roger@citrix.com wrote:
This series contains the initial implementation of indirect
descriptors for Linux blkback/blkfront.
Patches 1, 2, 3, 4 and 5 are bug fixes and minor optimizations.
Patch 6 contains a LRU
On 28/02/13 12:35, Jan Beulich wrote:
On 28.02.13 at 12:25, Roger Pau Monné roger@citrix.com wrote:
This is the expanded graph that also contains indirect descriptors
without persistent grants:
http://xenbits.xen.org/people/royger/plot_indirect_nopers.png
Thanks. Interesting - this
On 28/02/13 12:19, Jan Beulich wrote:
On 28.02.13 at 11:28, Roger Pau Monne roger@citrix.com wrote:
@@ -109,6 +111,16 @@ typedef uint64_t blkif_sector_t;
*/
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
+#define BLKIF_MAX_INDIRECT_GREFS_PER_REQUEST 8
+
+struct
On 25/01/13 18:32, Konrad Rzeszutek Wilk wrote:
We want to be able to exit if the difference between the request
produced (what the frontend tells us) and the requests consumed
(what we have so far processed) is greater than the ring size.
If so, we should terminate the loop as the request
to a sane value, use
balloon grant pages instead, as the gntdev device does.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: xen-de...@lists.xen.org
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
I think this patch is missing the following change in Kconfig, but
gntdev doesn't
them AFAIK, we allocated them using alloc_page,
passed them to gnttab_map, used them, and when closing the backend we
only unmapped them, but they were never freed.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: xen-de...@lists.xen.org
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
On 15/02/13 19:56, Konrad Rzeszutek Wilk wrote:
Should be backported to 3.8 stable.
Let's do one thing at a time.
The patch I have in the tree (and that I've asked Jens to pull for 3.9 - so
he might have already in his tree) is the old hybrid where we still use llist
but change the loop
On 25/02/13 17:49, Konrad Rzeszutek Wilk wrote:
On Fri, Feb 15, 2013 at 08:12:52PM +0100, Roger Pau Monné wrote:
On 15/02/13 19:56, Konrad Rzeszutek Wilk wrote:
Should be backported to 3.8 stable.
Let's do one thing at a time.
The patch I have in the tree (and that I've asked Jens to pull
On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
Implement a safe version of llist_for_each_entry, and use it in
blkif_free. Previously grants were freed while iterating the list,
which led to dereferences when trying to fetch
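The "safe" iteration described in this snippet can be sketched on a plain singly linked list (simplified; the kernel's llist API differs): cache the next pointer before the loop body runs, so freeing the current entry cannot poison the traversal.

```c
#include <stdlib.h>

/* Minimal singly linked list node, illustrative only. */
struct node {
	struct node *next;
	int val;
};

/* Safe walk: 'n' snapshots pos->next before the body executes, so the
 * body may free 'pos' without breaking the iteration. */
#define list_for_each_safe(pos, n, head) \
	for ((pos) = (head); (pos) && ((n) = (pos)->next, 1); (pos) = (n))

/* Free every node in the list; returns how many were freed. */
static int free_all(struct node *head)
{
	struct node *pos, *n;
	int count = 0;

	list_for_each_safe(pos, n, head) {
		free(pos);      /* safe: 'n' was saved before this free */
		count++;
	}
	return count;
}
```

An unsafe variant that read pos->next after free(pos) would be exactly the use-after-free the patch fixes.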
On 10/12/12 16:15, Konrad Rzeszutek Wilk wrote:
On Mon, Dec 10, 2012 at 01:15:50PM +0100, Roger Pau Monné wrote:
On 07/12/12 21:20, Konrad Rzeszutek Wilk wrote:
On Tue, Dec 04, 2012 at 03:21:53PM +0100, Roger Pau Monne wrote:
Implement a safe version of llist_for_each_entry, and use
On 12/12/12 01:37, Huang Ying wrote:
On Tue, 2012-12-11 at 12:25 +0100, Roger Pau Monne wrote:
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Huang Ying ying.hu...@intel.com
Cc: Konrad Rzeszutek Wilk kon...@kernel.org
---
Changes since v2:
* Allow to pass a NULL node as the first
On 01/08/13 16:18, Roger Pau Monné wrote:
On 01/08/13 14:30, David Vrabel wrote:
On 01/08/13 13:08, Roger Pau Monne wrote:
Right now the maximum number of grant operations that can be batched
in a single request is BLKIF_MAX_SEGMENTS_PER_REQUEST (11). This was
OK before indirect descriptors
Pau Monné roger@citrix.com
Cc: Stefano Stabellini stefano.stabell...@eu.citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: David Vrabel david.vra...@citrix.com
It looks pretty good overall.
Changes since RFC:
* Move shared code between _single and _batch to helper
in blkfront is the same as in blkback (and is controlled by the
value in blkback).
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
drivers/block/xen-blkfront.c | 33 +
1 files changed, 29 insertions(+), 4
On 10/07/13 15:54, Egger, Christoph wrote:
On 10.07.13 11:19, Roger Pau Monné wrote:
On 08/07/13 21:41, Konrad Rzeszutek Wilk wrote:
On Mon, Jul 08, 2013 at 03:03:27PM +0200, Roger Pau Monne wrote:
Right now blkfront has no way to unmap grant refs, if using persistent
grants once a grant
On 11/07/13 15:20, David Vrabel wrote:
On 08/07/13 14:03, Roger Pau Monne wrote:
Improve the calculation of required grants to process a request by
using nr_phys_segments instead of always assuming a request is going
to use all possible segments.
This isn't obviously correct to me. Why is
On 11/07/13 15:32, David Vrabel wrote:
On 08/07/13 14:03, Roger Pau Monne wrote:
Prevent blkfront from hoarding all grants by adding a minimum number
of grants that must be free at all times. We still need a way to free
unused grants in blkfront, but this patch will mitigate the problem
in
On 11/07/13 15:48, David Vrabel wrote:
On 10/07/13 10:19, Roger Pau Monné wrote:
From 1ede72ba10a7ec13d57ba6d2af54e86a099d7125 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne roger@citrix.com
Date: Wed, 10 Jul 2013 10:22:19 +0200
Subject: [PATCH RFC] xen-blkfront: revoke foreign access
On 11/07/13 17:26, David Vrabel wrote:
On 11/07/13 16:12, Roger Pau Monné wrote:
On 11/07/13 15:48, David Vrabel wrote:
On 10/07/13 10:19, Roger Pau Monné wrote:
From 1ede72ba10a7ec13d57ba6d2af54e86a099d7125 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne roger@citrix.com
Date: Wed, 10
in the list. If we add the same callback twice
we end up with an infinite loop, where callback == callback->next.
Replace this check with a proper one that iterates over the list to
see if the callback has already been added.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek
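The fix described in this snippet can be sketched as follows (illustrative types, not the real gnttab free-callback code): before linking a callback into the singly linked list, walk the list to see whether it is already there, since adding the same entry twice makes it its own successor and turns iteration into an infinite loop.

```c
#include <stddef.h>

/* Minimal callback node, illustrative only. */
struct callback {
	struct callback *next;
	void (*fn)(void *);
};

/* Returns 1 if 'cb' was added, 0 if it was already on the list. */
static int add_callback(struct callback **head, struct callback *cb)
{
	for (struct callback *cur = *head; cur; cur = cur->next)
		if (cur == cb)
			return 0;   /* already queued: adding again would loop */
	cb->next = *head;
	*head = cb;
	return 1;
}
```

The broken check the patch replaces only compared against the list head, which misses a callback sitting further down the list.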
On 21/06/13 13:46, Jan Beulich wrote:
On 21.06.13 at 12:56, Roger Pau Monne roger@citrix.com wrote:
@@ -1236,7 +1236,8 @@ static int dispatch_rw_block_io(struct xen_blkif
*blkif,
seg[i].nsec << 9,
seg[i].offset) == 0)) {
On 08/07/13 15:03, Roger Pau Monne wrote:
The following patches prevent blkfront from hoarding all grants in the
system by allowing blkfront to request blkback to unmap certain grants
so they can be freed by blkfront. This is done periodically by
blkfront, unmapping a certain amount of
On 01/08/13 11:53, David Vrabel wrote:
On 31/07/13 17:07, Roger Pau Monne wrote:
The new GNTTABOP_unmap_and_duplicate operation doesn't zero the
mapping passed in new_addr, allowing us to perform batch unmaps in p2m
code without requiring the use of a multicall.
Thanks. This looks like it
On 01/08/13 14:30, David Vrabel wrote:
On 01/08/13 13:08, Roger Pau Monne wrote:
Right now the maximum number of grant operations that can be batched
in a single request is BLKIF_MAX_SEGMENTS_PER_REQUEST (11). This was
OK before indirect descriptors because the maximum number of segments
in a
contention around it.
Signed-off-by: Roger Pau Monné roger@citrix.com
Reported-by: Matt Wilson m...@amazon.com
Cc: Matt Wilson m...@amazon.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: David Vrabel david.vra...@citrix.com
Cc: Boris Ostrovsky david.vra...@citrix.com
Sorry Boris, I
On 05/11/13 13:36, David Vrabel wrote:
On 05/11/13 11:24, Roger Pau Monne wrote:
IMHO there's no reason to set a m2p override if the mapping is done in
kernel space, so only set the m2p override when kmap_ops is set.
Can you provide a more detailed reasoning about why this is safe?
To tell
On 05/11/13 15:56, Konrad Rzeszutek Wilk wrote:
On Tue, Nov 05, 2013 at 03:47:08PM +0100, Roger Pau Monné wrote:
On 05/11/13 13:36, David Vrabel wrote:
On 05/11/13 11:24, Roger Pau Monne wrote:
IMHO there's no reason to set a m2p override if the mapping is done in
kernel space, so only set
On 05/11/13 16:08, Ian Campbell wrote:
On Tue, 2013-11-05 at 16:01 +0100, Roger Pau Monné wrote:
On 05/11/13 15:56, Konrad Rzeszutek Wilk wrote:
On Tue, Nov 05, 2013 at 03:47:08PM +0100, Roger Pau Monné wrote:
On 05/11/13 13:36, David Vrabel wrote:
On 05/11/13 11:24, Roger Pau Monne wrote
On 09/11/13 16:36, Felipe Pena wrote:
In the blkif_release function the bdget_disk() call might return
a NULL ptr which might be dereferenced when checking bdev->bd_openers
Signed-off-by: Felipe Pena felipe...@gmail.com
---
drivers/block/xen-blkfront.c |4
1 file changed, 4
here
unsigned long pfn;
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: Boris Ostrovsky boris.ostrov...@oracle.com
Cc: David Vrabel david.vra...@citrix.com
Signed-off-by: Tim Gardner tim.gard...@canonical.com
Acked-by: Roger Pau Monné roger@citrix.com
---
drivers/block/xen
On 03/12/13 12:01, David Vrabel wrote:
On 03/12/13 10:57, Roger Pau Monne wrote:
Using __packed__ on the public interface is not correct; these
structures should be compiled using the native ABI, and __packed__
should only be used in the backend counterpart of those structures
(which needs to
unless the
Dom0 also has this patch.
This is acceptable IMHO, the ARM ABI is clearly defined and previous
kernels were simply buggy. The fact that front and backend were
equivalently buggy and so it happened to work is not an excuse.
Signed-off-by: Roger Pau Monné roger@citrix.com
On 03/12/13 12:14, Jan Beulich wrote:
On 03.12.13 at 11:57, Roger Pau Monne roger@citrix.com wrote:
struct blkif_request_rw {
uint8_t nr_segments; /* number of segments */
blkif_vdev_t handle; /* only for read/write requests */
-#ifdef
On 04/12/13 10:28, Ian Campbell wrote:
On Tue, 2013-12-03 at 15:11 -0500, Konrad Rzeszutek Wilk wrote:
If Konrad and Boris agree that breaking the kernel's ABI in this way is
acceptable in this specific case, I'll defer to them.
My opinion as Xen on ARM hypervisor maintainer is that this is
for the first one have the newly
introduced PIRQ_MSI_GROUP flag set. This prevents calling
PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
first pirq in the group.
Signed-off-by: Roger Pau Monné roger@citrix.com
---
Tested with an Intel ICH8 AHCI SATA controller
On 27/02/14 17:33, Boris Ostrovsky wrote:
On 02/27/2014 10:45 AM, Roger Pau Monné wrote:
@@ -291,7 +290,10 @@ static int xen_initdom_setup_msi_irqs(struct
pci_dev *dev, int nvec, int type)
(pci_domain_nr(dev->bus) << 16);
map_irq.devfn = dev->devfn
On 28/02/14 18:20, Boris Ostrovsky wrote:
On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
Add support for MSI message groups for Xen Dom0 using the
MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
In order to keep track of which pirq is the first one in
On 28/02/14 19:00, Boris Ostrovsky wrote:
On 02/28/2014 12:46 PM, Roger Pau Monné wrote:
On 28/02/14 18:20, Boris Ostrovsky wrote:
On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
Add support for MSI message groups for Xen Dom0 using
on xen_blkif_alloc, so that when flush_work is called on
shutdown the struct is initialized even if it hasn't been used.
Signed-off-by: Roger Pau Monné roger@citrix.com
diff --git a/drivers/block/xen-blkback/xenbus.c
b/drivers/block/xen-blkback/xenbus.c
index 84973c6..3df7575
On 11/02/14 18:52, David Vrabel wrote:
On 11/02/14 17:40, Roger Pau Monné wrote:
On 11/02/14 17:07, Sander Eikelenboom wrote:
Tuesday, February 11, 2014, 4:56:50 PM, you wrote:
On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
Hi Konrad,
Today decided to tryout another
On 29/04/13 20:37, Konrad Rzeszutek Wilk wrote:
On Fri, Apr 26, 2013 at 03:47:40PM +0100, David Vrabel wrote:
On 26/04/13 14:45, Roger Pau Monne wrote:
Allocate pending requests in smaller chunks instead of allocating them
all at the same time.
This change also removes the global array of
On 04/05/13 09:34, Sander Eikelenboom wrote:
Hello Sander,
Monday, April 29, 2013, 6:05:20 PM, you wrote:
Monday, April 29, 2013, 5:46:23 PM, you wrote:
On Wed, Apr 24, 2013 at 08:16:37PM +0200, Sander Eikelenboom wrote:
Friday, April 19, 2013, 4:44:01 PM, you wrote:
Hey Jens,
On 24/06/13 15:28, Konrad Rzeszutek Wilk wrote:
On Sat, Jun 22, 2013 at 09:59:17AM +0200, Roger Pau Monne wrote:
With the introduction of indirect segments we can receive requests
with a number of segments bigger than the maximum number of allowed
iovecs in a bios, so make sure that blkback
On 04/11/13 16:49, Ian Campbell wrote:
On Mon, 2013-11-04 at 16:38 +0100, Roger Pau Monne wrote:
The new GNTTABOP_unmap_and_duplicate operation
I don't see this op in mainline Xen anywhere...
Was it part of Stefano's original swiotlb for ARM stuff? If so we've
dropped that approach for
On 08/07/13 21:41, Konrad Rzeszutek Wilk wrote:
On Mon, Jul 08, 2013 at 03:03:27PM +0200, Roger Pau Monne wrote:
Right now blkfront has no way to unmap grant refs, if using persistent
grants once a grant is used blkfront cannot assure if blkback will
have this grant mapped or not. To solve
On 10/06/14 15:19, Vitaly Kuznetsov wrote:
Vitaly Kuznetsov vkuzn...@redhat.com writes:
Jiri Slaby jsl...@suse.cz writes:
On 06/04/2014 07:48 AM, Greg KH wrote:
On Wed, May 14, 2014 at 03:11:22PM -0400, Konrad Rzeszutek Wilk wrote:
Hey Greg
This email is in regards to backporting two
Ping?
On 23/05/14 20:08, Roger Pau Monné wrote:
On 23/05/14 19:51, Konrad Rzeszutek Wilk wrote:
On Thu, May 22, 2014 at 04:40:07PM +0200, Roger Pau Monne wrote:
We are missing a check to see if the backend supports persistent
grants on resume, meaning we will always run with the value fetched
On 20/05/14 11:54, Vitaly Kuznetsov wrote:
Vitaly Kuznetsov vkuzn...@redhat.com writes:
1) ramdisks (/dev/ram*) (persistent grants and indirect descriptors
disabled)
sorry, there was a typo. persistent grants and indirect descriptors are
enabled with ramdisks, otherwise such testing won't
of persistent
grants in blkfront is the same as in blkback (and is controlled by the
value in blkback).
Signed-off-by: Roger Pau Monné roger@citrix.com
Reviewed-by: David Vrabel david.vra...@citrix.com
Acked-by: Matt Wilson m...@amazon.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: David Vrabel
on.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: David Vrabel david.vra...@citrix.com
---
drivers/block/xen-blkfront.c | 23 ---
1 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/drivers/block/xen
On 24/05/14 03:33, Mukesh Rathor wrote:
When running as dom0 in pvh mode, foreign pfns that are accessed must be
added to our p2m which is managed by xen. This is done via
XENMEM_add_to_physmap_range hypercall. This is needed for toolstack
building guests and mapping guest memory, xentrace
On 29/01/14 09:52, Jan Beulich wrote:
On 28.01.14 at 18:43, Roger Pau Monne roger@citrix.com wrote:
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req
*pending_req, int error)
*
On 04/02/14 09:02, Jan Beulich wrote:
On 03.02.14 at 17:58, Roger Pau Monné roger@citrix.com wrote:
On 29/01/14 09:52, Jan Beulich wrote:
On 28.01.14 at 18:43, Roger Pau Monne roger@citrix.com wrote:
+ free_req(blkif, pending_req);
+ /*
+ * Make sure the
On 04/02/14 09:31, Jan Beulich wrote:
On 04.02.14 at 09:16, Roger Pau Monné roger@citrix.com wrote:
On 04/02/14 09:02, Jan Beulich wrote:
On 03.02.14 at 17:58, Roger Pau Monné roger@citrix.com wrote:
On 29/01/14 09:52, Jan Beulich wrote:
On 28.01.14 at 18:43, Roger Pau Monne
.
Good catch.
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: Roger Pau Monné roger@citrix.com
Cc: Ian Campbell ian.campb...@citrix.com
Cc: David Vrabel david.vra...@citrix.com
Cc: linux-kernel@vger.kernel.org
Cc: xen-de...@lists.xen.org
Cc: Anthony Liguori aligu...@amazon.com
Signed
On 09/01/14 16:30, Wei Liu wrote:
On Wed, Jan 08, 2014 at 12:10:10AM +, Zoltan Kiss wrote:
This patch contains the new definitions necessary for grant mapping.
v2:
- move unmapping to separate thread. The NAPI instance has to be scheduled
even from thread context, which can cause huge
to the
free_pages lists or persistent grants to the persistent_gnts
red-black tree.
Also, add some checks in xen_blkif_free to make sure we are cleaning
everything.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: David Vrabel
to the
free_pages lists or persistent grants to the persistent_gnts
red-black tree.
Also, add some checks in xen_blkif_free to make sure we are cleaning
everything.
Signed-off-by: Roger Pau Monné roger@citrix.com
Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Cc: David Vrabel
On 28/01/14 16:37, Konrad Rzeszutek Wilk wrote:
On Tue, Jan 28, 2014 at 01:44:37PM +0100, Roger Pau Monné wrote:
On 27/01/14 22:21, Konrad Rzeszutek Wilk wrote:
On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
@@ -976,17 +983,19 @@ static void __end_block_io_op(struct
On 29/01/14 08:52, Jan Beulich wrote:
On 28.01.14 at 18:43, Roger Pau Monne roger@citrix.com wrote:
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req
*pending_req, int error)
*
On 30/01/14 00:15, Mukesh Rathor wrote:
Konrad,
The CR4 settings were dropped from my earlier patch because you didn't
wanna enable them. But since you do now, we need to set them in the APs
also. If you decide not too again, please apply my prev patch
pvh: disable pse feature for now.
On 11/12/13 17:18, Stefano Stabellini wrote:
On Tue, 3 Dec 2013, Konrad Rzeszutek Wilk wrote:
If Konrad and Boris agree that breaking the kernel's ABI in this way is
acceptable in this specific case, I'll defer to them.
My opinion as Xen on ARM hypervisor maintainer is that this is the right
padding if Linux is compiled for this architecture.
Konrad asked for confirmation that this didn't change x86.
I've also tested this using various combinations of kernels, and it
seems to be perfectly fine, so:
Acked-by: Roger Pau Monné roger@citrix.com
--
To unsubscribe from this list
in blkfront is the same as in blkback (and is controlled by the
value in blkback).
Signed-off-by: Roger Pau Monné roger@citrix.com
Reviewed-by: David Vrabel david.vra...@citrix.com
Roger,
Could you repost patch #2 and #3 (as #1 is in v3.11-rc4) with the
comments and the Ack from Matt
On 04/04/14 17:01, David Vrabel wrote:
On 04/04/14 15:41, Roger Pau Monne wrote:
Blkback cannot work properly on auto-translated guests if Xen doesn't
update the IOMMU when performing grant maps/unmaps, so only attach if
the newly introduced XENFEAT_hvm_gntmap_supports_iommu is found.
Can
On 08/04/14 19:25, kon...@kernel.org wrote:
From: Konrad Rzeszutek Wilk konrad.w...@oracle.com
When we migrate an HVM guest, by default our shared_info can
only hold up to 32 CPUs. As such the hypercall
VCPUOP_register_vcpu_info was introduced which allowed us to
setup per-page areas for
On 08/04/14 20:53, Konrad Rzeszutek Wilk wrote:
On Tue, Apr 08, 2014 at 08:18:48PM +0200, Roger Pau Monné wrote:
On 08/04/14 19:25, kon...@kernel.org wrote:
From: Konrad Rzeszutek Wilk konrad.w...@oracle.com
When we migrate an HVM guest, by default our shared_info can
only hold up to 32 CPUs