This series contains the cleanups and fixes found in my previous
indirect descriptors series, aimed at Linux 3.9.
Available in the git repository at:
git://xenbits.xen.org/people/royger/linux.git blk-for-3.9
Roger Pau Monne (5):
xen-blkback: don't store dev_bus_addr
dev_bus_addr returned in the grant ref map operation is the mfn of the
passed page, there's no need to store it in the persistent grant
entry, since we can always get it provided that we have the page.
This reduces the memory overhead of persistent grants in blkback.
Signed-off-by: Roger Pau Monné
We may use foreach_grant_safe in the future with empty lists, so make
sure we can handle them.
Signed-off-by: Roger Pau Monné
Cc: xen-de...@lists.xen.org
Cc: Konrad Rzeszutek Wilk
---
drivers/block/xen-blkback/blkback.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/d
We already have the frame (pfn of the grant page) stored inside struct
grant, so there's no need to keep an additional list of mapped frames
for a specific request. This reduces memory usage in blkfront.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-de...@lists.xen.org
---
dri
This prevents us from having to call alloc_page while we are preparing
the request. Since blkfront was calling alloc_page with a spinlock
held we used GFP_ATOMIC, which can fail if we are requesting a lot of
pages since it is using the emergency memory pools.
Allocating all the pages at init preve
Replace the use of llist with list.
llist_for_each_entry_safe can trigger a bug in GCC 4.1, so it's best
to remove it and use a doubly linked list, which is used extensively
in the kernel already.
Specifically this bug can be triggered by hot-unplugging a disk,
either by doing xm block-detach or
Replace llist_for_each_entry_safe with a while loop and
llist_del_first.
llist_for_each_entry_safe can trigger a bug in GCC 4.1, so it's best
to remove it and use a while loop and llist_del_first (which is
already in llist.h).
Since xen-blkfront is the only user of the llist_for_each_entry_safe
m
Moving grant ref handles from blkbk to pending_req will allow us to
get rid of the shared blkbk structure.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-de...@lists.xen.org
---
drivers/block/xen-blkback/blkback.c | 16
1 files changed, 4 insertions(+), 12 d
Indirect descriptors introduce a new block operation
(BLKIF_OP_INDIRECT) that passes grant references instead of segments
in the request. These grant references are filled with arrays of
blkif_request_segment_aligned, this way we can send more segments in a
request.
The proposed implementation sets
Preparatory change for implementing indirect descriptors. Change
xen_blkbk_{map/unmap} in order to be able to map/unmap an arbitrary
number of grants (previously it was limited to
BLKIF_MAX_SEGMENTS_PER_REQUEST). Also, remove the usage of pending_req
in the map/unmap functions, so we can map/unmap gran
Remove the last dependency from blkbk by moving the list of free
requests to blkif. This change reduces the contention on the list of
available requests.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-de...@lists.xen.org
---
drivers/block/xen-blkback/blkback.c | 123 +++--
Using balloon pages for all granted pages allows us to simplify the
logic in blkback, especially in the xen_blkbk_map function, since now
we can decide if we want to map a grant persistently or not after we
have actually mapped it. This could not be done before because
persistent grants used balloon
This mechanism allows blkback to change the number of grants
persistently mapped at run time.
The algorithm uses a simple LRU mechanism that removes (if needed) the
persistent grants that have not been used since the last LRU run, or
if all grants have been used it removes the first grants in the
This series contains the initial implementation of indirect
descriptors for Linux blkback/blkfront.
Patches 1, 2, 3, 4 and 5 are bug fixes and minor optimizations.
Patch 6 contains an LRU implementation for blkback that will be needed
when using indirect descriptors (since we are no longer able
Signed-off-by: Roger Pau Monné
Cc: xen-de...@lists.xen.org
Cc: Konrad Rzeszutek Wilk
---
drivers/block/xen-blkback/blkback.c |5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/block/xen-blkback/blkback.c
b/drivers/block/xen-blkback/blkback.c
index c14b736..b5e7
ional cost to the guest of using persistent grants. There is
perhaps a small saving, from the reduced number of hypercalls
performed in granting, and ending foreign access.
Signed-off-by: Oliver Chick
Signed-off-by: Roger Pau Monne
Cc:
Cc:
---
Changes since v1:
* Changed the unmap_seg arr
sure we don't try to
access segments that have not been set.
Signed-off-by: Roger Pau Monne
Cc:
Cc:
---
drivers/block/xen-blkback/blkback.c | 15 +--
drivers/block/xen-blkback/xenbus.c |2 +-
drivers/block/xen-blkfront.c|3 ++-
3 files changed, 12 insertions(+), 8
The new GNTTABOP_unmap_and_duplicate operation doesn't zero the
mapping passed in new_addr, allowing us to perform batch unmaps in p2m
code without requiring the use of a multicall.
Signed-off-by: Roger Pau Monné
Cc: Stefano Stabellini
Cc: Konrad Rzeszutek Wilk
Cc: David Vrabel
---
Changes sin
Right now blkfront has no way to unmap grant refs: when using
persistent grants, once a grant has been used blkfront cannot tell
whether blkback still has it mapped. To solve this problem, a new request
type (BLKIF_OP_UNMAP) that allows requesting blkback to unmap certain
grants is introduced.
With the current implementation, the callback in the tail of the list
can be added twice, because the check done in
gnttab_request_free_callback is bogus, callback->next can be NULL if
it is the last callback in the list. If we add the same callback twice
we end up with an infinite loop, where callb
Improve the calculation of required grants to process a request by
using nr_phys_segments instead of always assuming a request is going
to use all possible segments.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
---
drivers/block/xen-blkfront.c | 11 +++
1 files changed, 7 i
The following patches prevent blkfront from hoarding all grants in the
system by allowing blkfront to request blkback to unmap certain grants
so they can be freed by blkfront. This is done periodically by
blkfront, unmapping a certain amount of unused persistent grants.
This series also include
Prevent blkfront from hoarding all grants by adding a minimum number
of grants that must be free at all times. We still need a way to free
unused grants in blkfront, but this patch will mitigate the problem
in the meantime.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
---
drivers/bl
Change foreach_grant iterator to a safe version, that allows freeing
the element while iterating. Also move the free code in
free_persistent_gnts to prevent freeing the element before the rb_next
call.
Reported-by: Dan Carpenter
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-d
Implement a safe version of llist_for_each_entry, and use it in
blkif_free. Previously grants were freed while iterating the list,
which led to dereferences when trying to fetch the next item.
Reported-by: Dan Carpenter
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-de...@li
here is
perhaps a small saving, from the reduced number of hypercalls
performed in granting, and ending foreign access.
Signed-off-by: Oliver Chick
Signed-off-by: Roger Pau Monne
Cc:
Cc:
---
Benchmarks showing the impact of this patch in blk performance can be
found at:
http://xenbits.xens
Currently blkfront fails to handle cases in blkif_completion like the
following:
1st loop in rq_for_each_segment
* bv_offset: 3584
* bv_len: 512
* offset += bv_len
* i: 0
2nd loop:
* bv_offset: 0
* bv_len: 512
* i: 0
In the second loop i should be 1, since we assume we only wanted to
read
Signed-off-by: Roger Pau Monné
Cc: Huang Ying
Cc: Konrad Rzeszutek Wilk
---
include/linux/llist.h | 27 +++
1 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/include/linux/llist.h b/include/linux/llist.h
index a5199f6..f611cd8 100644
--- a/include/linux/l
Use llist_for_each_entry_safe in blkif_free. Previously grants were
freed while iterating the list, which led to dereferences when trying
to fetch the next item.
Reported-by: Dan Carpenter
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-de...@lists.xen.org
---
drivers/block/
Signed-off-by: Roger Pau Monné
Cc: Huang Ying
Cc: Konrad Rzeszutek Wilk
---
Changes since v2:
* Allow passing a NULL node as the first entry of deleted list
entries.
---
include/linux/llist.h | 27 +++
1 files changed, 27 insertions(+), 0 deletions(-)
diff --git a
Free the page allocated for the persistent grant.
Signed-off-by: Roger Pau Monné
---
drivers/block/xen-blkfront.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index f1de806..96e9b00 100644
--- a/drivers/block
Move the code that frees persistent grants from the red-black tree
to a function. This will make it easier for other consumers to move
this to a common place.
Signed-off-by: Roger Pau Monné
---
drivers/block/xen-blkback/blkback.c | 68 +++
1 files changed, 37 in
Implementation of indirect descriptors v2, addressing Konrad's
comments. A graph on performance can be found at:
http://xenbits.xen.org/people/royger/plot_indirect_read4k.png
Thanks for the review, Roger.
Roger Pau Monne (7):
xen-blkback: print stats about persistent grants
Signed-off-by: Roger Pau Monné
Cc: xen-de...@lists.xen.org
Cc: Konrad Rzeszutek Wilk
---
drivers/block/xen-blkback/blkback.c |6 --
1 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/block/xen-blkback/blkback.c
b/drivers/block/xen-blkback/blkback.c
index dd5b2fe..f75
Remove the last dependency from blkbk by moving the list of free
requests to blkif. This change reduces the contention on the list of
available requests.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: xen-de...@lists.xen.org
---
Changes since RFC:
* Replace kzalloc with kcalloc.
C
The code generated with gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-54)
creates an unbound loop for the second foreach_grant_safe loop in
purge_persistent_gnt.
The workaround is to avoid having this second loop and instead
perform all the work inside the first loop by adding a new variable,
clean_used,
Now that indirect segments are enabled blk_queue_max_hw_sectors must
be set to match the maximum number of sectors we can handle in a
request.
Signed-off-by: Roger Pau Monné
Reported-by: Felipe Franciosi
Cc: Konrad Rzeszutek Wilk
---
drivers/block/xen-blkfront.c |2 +-
1 files changed, 1 i
When using certain storage devices (like RAID) having a bigger number
of segments per request provides better performance.
Signed-off-by: Roger Pau Monné
Reported-by: Steven Haigh
Cc: Konrad Rzeszutek Wilk
---
drivers/block/xen-blkfront.c |4 ++--
1 files changed, 2 insertions(+), 2 deleti
This series contains a small number of bug fixes and improvements for
xen-block indirect descriptors.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
With the introduction of indirect segments we can receive requests
with a number of segments bigger than the maximum number of allowed
iovecs in a bio, so make sure that blkback doesn't try to allocate a
bio with more iovecs than BIO_MAX_PAGES
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek
This series contains a small bugfix for the grant table code (patch 1)
and a couple of improvements to blkfront (patches 2 and 3) to make it
work better if there's a shortage on available free grants.
There's no need to keep the foreign access in a grant if it is not
persistently mapped by the backend. This allows us to free grants that
are not mapped by the backend, thus preventing blkfront from hoarding
all grants.
The main effect of this is that blkfront will only persistently map
the same g
Improve the calculation of required grants to process a request by
using nr_phys_segments instead of always assuming a request is going
to use all possible segments.
nr_phys_segments contains the number of scatter-gather DMA addr+len
pairs, which is basically what we put at every granted page.
for_
The new GNTTABOP_unmap_and_duplicate operation doesn't zero the
mapping passed in new_addr, allowing us to perform batch unmaps in p2m
code without requiring the use of a multicall.
Signed-off-by: Roger Pau Monné
Cc: Stefano Stabellini
Cc: Konrad Rzeszutek Wilk
---
I don't currently have a NFS
Right now the maximum number of grant operations that can be batched
in a single request is BLKIF_MAX_SEGMENTS_PER_REQUEST (11). This was
OK before indirect descriptors because the maximum number of segments
in a request was 11, but with the introduction of indirect
descriptors the maximum number o
This series contains a couple of improvements to blkfront to make it
work better if there's a shortage on available free grants.
Allocate pending requests in smaller chunks instead of allocating them
all at the same time.
This change also removes the global array of pending_reqs, as it is no
longer necessary.
Variables related to the grant mapping have been grouped into a struct
called "grant_page"; this allows allocating the
accurate comparison:
http://xenbits.xen.org/people/royger/plot_indirect_read4k.png
Also, the default number of segments per indirect request has been set
to 32 in order to map them all persistently, but this can be changed
at runtime by the user.
Roger Pau Monne (7):
xen-blkback: print
David Vrabel wrote:
> On 09/07/12 15:45, Konrad Rzeszutek Wilk wrote:
>> On Fri, Jun 22, 2012 at 05:14:41PM +0100, Stefano Stabellini wrote:
>>> We used to rely on a core_initcall to initialize Xen on ARM, however
>>> core_initcalls are actually called after early consoles are initialized.
>>> That
With current persistent grants implementation we are not freeing the
persistent grants after we disconnect the device. Since grant map
operations change the mfn of the allocated page, and we can no longer
pass it to __free_page without setting the mfn to a sane value, use
balloon grant pages instea
Signed-off-by: Roger Pau Monné
Cc: Huang Ying
Cc: Konrad Rzeszutek Wilk
---
Changes since v3:
* Change n to use type *, to keep the same semantics as
list_for_each_entry_safe.
Changes since v2:
* Allow passing a NULL node as the first entry of deleted list
entries.
---
include/linux/lli
On systems with memory maps with ranges that don't end at page boundaries,
like:
[...]
(XEN) 0010 - dfdf9c00 (usable)
(XEN) dfdf9c00 - dfe4bc00 (ACPI NVS)
[...]
xen_add_extra_mem will create a protected range that ends up at 0xdfdf9c00,
but the function used
Request allocation has been moved to connect_ring, which is called every
time blkback connects to the frontend (this can happen multiple times during
a blkback instance life cycle). On the other hand, request freeing has not
been moved, so it's only called when destroying the backend instance. Due
I've done quite a lot of work in blkfront/blkback, and I usually end up
looking at the patches, so add myself as maintainer together with Konrad.
Signed-off-by: Roger Pau Monné
Cc: Konrad Rzeszutek Wilk
Cc: Boris Ostrovsky
Cc: David Vrabel
Cc: xen-de...@lists.xenproject.org
---
MAINTAINERS |
Current cleanup in the error path of xen_bind_pirq_msi_to_irq is
wrong. First of all there's an off-by-one in the cleanup loop, which
can lead to unbinding wrong IRQs.
Secondly IRQs not bound won't be freed, thus leaking IRQ numbers.
Note that there's no need to differentiate between bound and un
MEMORY_DEVICE_DEVDAX is
renamed to MEMORY_DEVICE_GENERIC, as using DEVDAX in the Xen code to
allocate unpopulated memory felt wrong.
Thanks, Roger.
Roger Pau Monne (2):
memremap: rename MEMORY_DEVICE_DEVDAX to MEMORY_DEVICE_GENERIC
xen: add helpers to allocate unpopulated memory
drivers/dax/device.c
This is in preparation for the logic behind MEMORY_DEVICE_DEVDAX also
being used by non DAX devices.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
Cc: Dan Williams
Cc: Vishal Verma
Cc: Dave Jiang
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Ira Weiny
Cc: "Aneesh Kumar K.V"
on some platforms.
Signed-off-by: Roger Pau Monné
---
Cc: Oleksandr Andrushchenko
Cc: David Airlie
Cc: Daniel Vetter
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: Dan Carpenter
Cc: Roger Pau Monne
Cc: Wei Liu
Cc: Yan Yankovskyi
Cc: dri-de...@lists.freedesktop.org
Cc
Allow issuing an IOCTL_PRIVCMD_MMAP_RESOURCE ioctl with num = 0 and
addr = 0 in order to fetch the size of a specific resource.
Add a shortcut to the default map resource path, since fetching the
size requires no address to be passed in, and thus no VMA to setup.
Fixes: 3ad0876554caf ('xen/privcm
Allow issuing an IOCTL_PRIVCMD_MMAP_RESOURCE ioctl with num = 0 and
addr = 0 in order to fetch the size of a specific resource.
Add a shortcut to the default map resource path, since fetching the
size requires no address to be passed in, and thus no VMA to setup.
This is missing from the initial
Don't require the discard-alignment xenstore node to be present in
order to correctly setup the feature. This can happen with versions of
QEMU that only write the discard-granularity but not the
discard-alignment node.
Assume discard-alignment is 0 if not present. While there also fix the
logic to
This is in line with the specification described in blkif.h:
* discard-granularity: should be set to the physical block size if
node is not present.
* discard-alignment, discard-secure: should be set to 0 if node not
present.
This was detected as QEMU would only create the discard-granular
When parsing the capability list make sure the offset is between the
MMIO region mapped in 'regs', or else the kernel hits a page fault.
This fault has been seen when running as a Xen PVH dom0, which doesn't
have the MMIO regions mapped into the domain physical memory map,
despite having the devic
when XEN_BALLOON_MEMORY_HOTPLUG is disabled
with XEN_UNPOPULATED_ALLOC.
Thanks, Roger.
Roger Pau Monne (2):
xen/x86: make XEN_BALLOON_MEMORY_HOTPLUG_LIMIT depend on
MEMORY_HOTPLUG
Revert "xen: fix p2m size in dom0 for disabled memory hotplug case"
arch/x86/include/asm/xen/page.h | 12 ---
This partially reverts commit 882213990d32fd224340a4533f6318dd152be4b2.
There's no need to special case XEN_UNPOPULATED_ALLOC anymore in order
to correctly size the p2m. The generic memory hotplug option has
already been tied together with the Xen hotplug limit, so enabling
memory hotplug should a
The Xen memory hotplug limit should depend on the memory hotplug
generic option, rather than the Xen balloon configuration. It's
possible to have a kernel with generic memory hotplug enabled, but
without Xen balloon enabled, at which point memory hotplug won't work
correctly due to the size limitat
Hello,
The following series adds some consistency checks to the values returned
by some of the MMIO registers of the Intel pinctrl device.
That done to avoid a crash when running as a PVH dom0. See patch #1 for
more details.
Thanks, Roger.
Roger Pau Monne (2):
intel/pinctrl: check REVID
Use the value read from the REVID register in order to check for the
presence of the device. A read of all ones is treated as if the device
is not present, and hence probing is ended.
This fixes an issue when running as a Xen PVH dom0, where the ACPI
DSDT table is provided unmodified to dom0 and h
When parsing the capability list make sure the offset is between the
MMIO region mapped in 'regs', or else the kernel hits a page fault.
Adding the check is harmless, and prevents buggy or broken systems
from crashing the kernel if the capability linked list is somehow
broken.
Fixes: 91d898e51e60