[ovmf test] 169416: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169416 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169416/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 168254
 build-amd64-xsm               6 xen-build                fail REGR. vs. 168254
 build-i386-xsm                6 xen-build                fail REGR. vs. 168254
 build-i386                    6 xen-build                fail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 9bf7291d636ebd816b8f81edcf366dac926f9f44
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since        168258  2022-03-01 01:55:31 Z   45 days  404 attempts
Testing same since   169414  2022-04-15 04:11:47 Z    0 days    2 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bo Chang Ke 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ke, Bo-ChangX 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5135 lines long.)



Re: [PATCH 26/27] block: decouple REQ_OP_SECURE_ERASE from REQ_OP_DISCARD

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Secure erase is a very different operation from discard in that it is
> a data integrity operation vs hint.  Fully split the limits and helper
> infrastructure to make the separation more clear.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> Acked-by: Christoph Böhmwalder  [drbd]
> Acked-by: Ryusuke Konishi  [nifs2]
> Acked-by: Jaegeuk Kim  [f2fs]
> Acked-by: Coly Li  [bcache]
> Acked-by: David Sterba  [btrfs]
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 24/27] block: remove QUEUE_FLAG_DISCARD

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Just use a non-zero max_discard_sectors as an indicator for discard
> support, similar to what is done for write zeroes.
> 
> The only places where needs special attention is the RAID5 driver,
> which must clear discard support for security reasons by default,
> even if the default stacking rules would allow for it.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> Acked-by: Christoph Böhmwalder  [drbd]
> Acked-by: Jan Höppner  [s390]
> Acked-by: Coly Li  [bcache]
> Acked-by: David Sterba  [btrfs]
> ---


Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck




Re: [PATCH 23/27] block: add a bdev_max_discard_sectors helper

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Add a helper to query the number of sectors support per each discard bio
> based on the block device and use this helper to stop various places from
> poking into the request_queue to see if discard is supported and if so how
> much.  This mirrors what is done e.g. for write zeroes as well.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> Acked-by: Christoph Böhmwalder  [drbd]
> Acked-by: Coly Li  [bcache]
> Acked-by: David Sterba  [btrfs]
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck




Re: [PATCH 22/27] block: refactor discard bio size limiting

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Move all the logic to limit the discard bio size into a common helper
> so that it is better documented.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> Acked-by: Coly Li 
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 16/27] block: use bdev_alignment_offset in part_alignment_offset_show

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Replace the open coded offset calculation with the proper helper.
> This is an ABI change in that the -1 for a misaligned partition is
> properly propagated, which can be considered a bug fix and matches
> what is done on the whole device.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> ---

Neat!

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 15/27] block: add a bdev_max_zone_append_sectors helper

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Add a helper to check the max supported sectors for zone append based on
> the block_device instead of having to poke into the block layer internal
> request_queue.
> 
> Signed-off-by: Christoph Hellwig 
> Acked-by: Damien Le Moal 
> Reviewed-by: Martin K. Petersen 
> Reviewed-by: Johannes Thumshirn 
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck




Re: [PATCH 14/27] block: add a bdev_stable_writes helper

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Add a helper to check the stable writes flag based on the block_device
> instead of having to poke into the block layer internal request_queue.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 13/27] block: add a bdev_fua helper

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Add a helper to check the FUA flag based on the block_device instead of
> having to poke into the block layer internal request_queue.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 12/27] block: add a bdev_write_cache helper

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Add a helper to check the write cache flag based on the block_device
> instead of having to poke into the block layer internal request_queue.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> Acked-by: David Sterba  [btrfs]
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 11/27] block: add a bdev_nonrot helper

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Add a helper to check the nonrot flag based on the block_device instead
> of having to poke into the block layer internal request_queue.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> Acked-by: David Sterba  [btrfs]
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck





Re: [PATCH 10/27] mm: use bdev_is_zoned in claim_swapfile

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Use the bdev based helper instead of poking into the queue.
> 
> Signed-off-by: Christoph Hellwig 
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck




Re: [PATCH 08/27] btrfs: use bdev_max_active_zones instead of open coding it

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Johannes Thumshirn 
> Acked-by: David Sterba 
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 03/27] target: fix discard alignment on partitions

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> Use the proper bdev_discard_alignment helper that accounts for partition
> offsets.
> 
> Fixes: c66ac9db8d4a ("[SCSI] target: Add LIO target core v4.0.0-rc6")
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 
> ---

The helper does handle the partition case.

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 02/27] target: pass a block_device to target_configure_unmap_from_queue

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> The SCSI target drivers is a consumer of the block layer and shoul
> d generally work on struct block_device.
> 
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 

Except for the split word "should" in the log, looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck



Re: [PATCH 01/27] target: remove an incorrect unmap zeroes data deduction

2022-04-14 Thread Chaitanya Kulkarni
On 4/14/22 21:52, Christoph Hellwig wrote:
> For block devices, the SCSI target drivers implements UNMAP as calls to
> blkdev_issue_discard, which does not guarantee zeroing just because
> Write Zeroes is supported.
> 
> Note that this does not affect the file backed path which uses
> fallocate to punch holes.
> 
> Fixes: 2237498f0b5c ("target/iblock: Convert WRITE_SAME to 
> blkdev_issue_zeroout")
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Martin K. Petersen 

Not a good assumption to have for sure.

Looks good.

Reviewed-by: Chaitanya Kulkarni 

-ck




Re: [PATCH 10/27] mm: use bdev_is_zoned in claim_swapfile

2022-04-14 Thread Damien Le Moal
On 4/15/22 13:52, Christoph Hellwig wrote:
> Use the bdev based helper instead of poking into the queue.
> 
> Signed-off-by: Christoph Hellwig 
> ---
>  mm/swapfile.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 63c61f8b26118..4c7537162af5e 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -2761,7 +2761,7 @@ static int claim_swapfile(struct swap_info_struct *p, 
> struct inode *inode)
>* write only restriction.  Hence zoned block devices are not
>* suitable for swapping.  Disallow them here.
>*/
> - if (blk_queue_is_zoned(p->bdev->bd_disk->queue))
> + if (bdev_is_zoned(p->bdev))
>   return -EINVAL;
>   p->flags |= SWP_BLKDEV;
>   } else if (S_ISREG(inode->i_mode)) {

Looks good.

Reviewed-by: Damien Le Moal 

-- 
Damien Le Moal
Western Digital Research



Re: [PATCH 27/27] direct-io: remove random prefetches

2022-04-14 Thread Damien Le Moal
On 4/15/22 13:52, Christoph Hellwig wrote:
> Randomly poking into block device internals for manual prefetches isn't
> exactly a very maintainable thing to do.  And none of the performance
> criticil direct I/O implementations still use this library function

s/criticil/critical

> anyway, so just drop it.
> 
> Signed-off-by: Christoph Hellwig 

Looks good to me.

Reviewed-by: Damien Le Moal 


-- 
Damien Le Moal
Western Digital Research



[PATCH 23/27] block: add a bdev_max_discard_sectors helper

2022-04-14 Thread Christoph Hellwig
Add a helper to query the number of sectors support per each discard bio
based on the block device and use this helper to stop various places from
poking into the request_queue to see if discard is supported and if so how
much.  This mirrors what is done e.g. for write zeroes as well.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: Christoph Böhmwalder  [drbd]
Acked-by: Coly Li  [bcache]
Acked-by: David Sterba  [btrfs]
---
 drivers/block/drbd/drbd_nl.c| 8 +---
 drivers/block/drbd/drbd_receiver.c  | 2 +-
 drivers/block/rnbd/rnbd-srv-dev.h   | 3 +--
 drivers/md/dm-io.c  | 2 +-
 drivers/target/target_core_device.c | 7 +++
 fs/f2fs/segment.c   | 6 ++
 include/linux/blkdev.h  | 5 +
 7 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 4d00986d6f588..a0a06e238e917 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1439,7 +1439,8 @@ static bool write_ordering_changed(struct disk_conf *a, 
struct disk_conf *b)
 static void sanitize_disk_conf(struct drbd_device *device, struct disk_conf 
*disk_conf,
   struct drbd_backing_dev *nbc)
 {
-   struct request_queue * const q = nbc->backing_bdev->bd_disk->queue;
+   struct block_device *bdev = nbc->backing_bdev;
+   struct request_queue *q = bdev->bd_disk->queue;
 
if (disk_conf->al_extents < DRBD_AL_EXTENTS_MIN)
disk_conf->al_extents = DRBD_AL_EXTENTS_MIN;
@@ -1455,6 +1456,7 @@ static void sanitize_disk_conf(struct drbd_device 
*device, struct disk_conf *dis
 
if (disk_conf->rs_discard_granularity) {
int orig_value = disk_conf->rs_discard_granularity;
+   sector_t discard_size = bdev_max_discard_sectors(bdev) << 9;
int remainder;
 
if (q->limits.discard_granularity > 
disk_conf->rs_discard_granularity)
@@ -1463,8 +1465,8 @@ static void sanitize_disk_conf(struct drbd_device 
*device, struct disk_conf *dis
remainder = disk_conf->rs_discard_granularity % 
q->limits.discard_granularity;
disk_conf->rs_discard_granularity += remainder;
 
-   if (disk_conf->rs_discard_granularity > 
q->limits.max_discard_sectors << 9)
-   disk_conf->rs_discard_granularity = 
q->limits.max_discard_sectors << 9;
+   if (disk_conf->rs_discard_granularity > discard_size)
+   disk_conf->rs_discard_granularity = discard_size;
 
if (disk_conf->rs_discard_granularity != orig_value)
drbd_info(device, "rs_discard_granularity changed to 
%d\n",
diff --git a/drivers/block/drbd/drbd_receiver.c 
b/drivers/block/drbd/drbd_receiver.c
index 08da922f81d1d..0b4c7de463989 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1524,7 +1524,7 @@ int drbd_issue_discard_or_zero_out(struct drbd_device 
*device, sector_t start, u
granularity = max(q->limits.discard_granularity >> 9, 1U);
alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
 
-   max_discard_sectors = min(q->limits.max_discard_sectors, (1U << 22));
+   max_discard_sectors = min(bdev_max_discard_sectors(bdev), (1U << 22));
max_discard_sectors -= max_discard_sectors % granularity;
if (unlikely(!max_discard_sectors))
goto zero_out;
diff --git a/drivers/block/rnbd/rnbd-srv-dev.h 
b/drivers/block/rnbd/rnbd-srv-dev.h
index 2c3df02b5e8ec..f82fbb4bbda8e 100644
--- a/drivers/block/rnbd/rnbd-srv-dev.h
+++ b/drivers/block/rnbd/rnbd-srv-dev.h
@@ -52,8 +52,7 @@ static inline int rnbd_dev_get_max_discard_sects(const struct 
rnbd_dev *dev)
if (!blk_queue_discard(bdev_get_queue(dev->bdev)))
return 0;
 
-   return blk_queue_get_max_sectors(bdev_get_queue(dev->bdev),
-REQ_OP_DISCARD);
+   return bdev_max_discard_sectors(dev->bdev);
 }
 
 static inline int rnbd_dev_get_discard_granularity(const struct rnbd_dev *dev)
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 5762366333a27..e4b95eaeec8c7 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -311,7 +311,7 @@ static void do_region(int op, int op_flags, unsigned region,
 * Reject unsupported discard and write same requests.
 */
if (op == REQ_OP_DISCARD)
-   special_cmd_max_sectors = q->limits.max_discard_sectors;
+   special_cmd_max_sectors = bdev_max_discard_sectors(where->bdev);
else if (op == REQ_OP_WRITE_ZEROES)
special_cmd_max_sectors = q->limits.max_write_zeroes_sectors;
if ((op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) &&
diff --git a/drivers/target/target_core_device.c 
b/drivers/target/target_core_device.c
index 16e775bcf4a7c..c3e25bac90d59 100644
--- 

[PATCH 13/27] block: add a bdev_fua helper

2022-04-14 Thread Christoph Hellwig
Add a helper to check the FUA flag based on the block_device instead of
having to poke into the block layer internal request_queue.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 drivers/block/rnbd/rnbd-srv.c   | 3 +--
 drivers/target/target_core_iblock.c | 3 +--
 fs/iomap/direct-io.c| 3 +--
 include/linux/blkdev.h  | 6 +-
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
index f8cc3c5fecb4b..beaef43a67b9d 100644
--- a/drivers/block/rnbd/rnbd-srv.c
+++ b/drivers/block/rnbd/rnbd-srv.c
@@ -533,7 +533,6 @@ static void rnbd_srv_fill_msg_open_rsp(struct 
rnbd_msg_open_rsp *rsp,
struct rnbd_srv_sess_dev *sess_dev)
 {
struct rnbd_dev *rnbd_dev = sess_dev->rnbd_dev;
-   struct request_queue *q = bdev_get_queue(rnbd_dev->bdev);
 
rsp->hdr.type = cpu_to_le16(RNBD_MSG_OPEN_RSP);
rsp->device_id =
@@ -560,7 +559,7 @@ static void rnbd_srv_fill_msg_open_rsp(struct 
rnbd_msg_open_rsp *rsp,
rsp->cache_policy = 0;
if (bdev_write_cache(rnbd_dev->bdev))
rsp->cache_policy |= RNBD_WRITEBACK;
-   if (blk_queue_fua(q))
+   if (bdev_fua(rnbd_dev->bdev))
rsp->cache_policy |= RNBD_FUA;
 }
 
diff --git a/drivers/target/target_core_iblock.c 
b/drivers/target/target_core_iblock.c
index 03013e85ffc03..c4a903b8a47fc 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -727,14 +727,13 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist 
*sgl, u32 sgl_nents,
 
if (data_direction == DMA_TO_DEVICE) {
struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
-   struct request_queue *q = bdev_get_queue(ib_dev->ibd_bd);
/*
 * Force writethrough using REQ_FUA if a volatile write cache
 * is not enabled, or if initiator set the Force Unit Access 
bit.
 */
opf = REQ_OP_WRITE;
miter_dir = SG_MITER_TO_SG;
-   if (test_bit(QUEUE_FLAG_FUA, &q->queue_flags)) {
+   if (bdev_fua(ib_dev->ibd_bd)) {
if (cmd->se_cmd_flags & SCF_FUA)
opf |= REQ_FUA;
else if (!bdev_write_cache(ib_dev->ibd_bd))
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index b08f5dc31780d..62da020d02a11 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -265,8 +265,7 @@ static loff_t iomap_dio_bio_iter(const struct iomap_iter 
*iter,
 * cache flushes on IO completion.
 */
if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) &&
-   (dio->flags & IOMAP_DIO_WRITE_FUA) &&
-   blk_queue_fua(bdev_get_queue(iomap->bdev)))
+   (dio->flags & IOMAP_DIO_WRITE_FUA) && bdev_fua(iomap->bdev))
use_fua = true;
}
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 807a49aa5a27a..075b16d4560e7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -602,7 +602,6 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct 
request_queue *q);
 REQ_FAILFAST_DRIVER))
 #define blk_queue_quiesced(q)  test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
 #define blk_queue_pm_only(q)   atomic_read(&(q)->pm_only)
-#define blk_queue_fua(q)   test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 #define blk_queue_registered(q)test_bit(QUEUE_FLAG_REGISTERED, 
&(q)->queue_flags)
 #define blk_queue_nowait(q)test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
 
@@ -1336,6 +1335,11 @@ static inline bool bdev_write_cache(struct block_device 
*bdev)
	return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
 }
 
+static inline bool bdev_fua(struct block_device *bdev)
+{
+   return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
+}
+
 static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
 {
struct request_queue *q = bdev_get_queue(bdev);
-- 
2.30.2




[PATCH 26/27] block: decouple REQ_OP_SECURE_ERASE from REQ_OP_DISCARD

2022-04-14 Thread Christoph Hellwig
Secure erase is a very different operation from discard in that it is
a data integrity operation vs hint.  Fully split the limits and helper
infrastructure to make the separation more clear.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: Christoph Böhmwalder  [drbd]
Acked-by: Ryusuke Konishi  [nifs2]
Acked-by: Jaegeuk Kim  [f2fs]
Acked-by: Coly Li  [bcache]
Acked-by: David Sterba  [btrfs]
---
 block/blk-core.c|  2 +-
 block/blk-lib.c | 64 -
 block/blk-mq-debugfs.c  |  1 -
 block/blk-settings.c| 16 +++-
 block/fops.c|  2 +-
 block/ioctl.c   | 43 +++
 drivers/block/drbd/drbd_receiver.c  |  5 ++-
 drivers/block/rnbd/rnbd-clt.c   |  4 +-
 drivers/block/rnbd/rnbd-srv-dev.h   |  2 +-
 drivers/block/xen-blkback/blkback.c | 15 +++
 drivers/block/xen-blkback/xenbus.c  |  5 +--
 drivers/block/xen-blkfront.c|  5 ++-
 drivers/md/bcache/alloc.c   |  2 +-
 drivers/md/dm-table.c   |  8 ++--
 drivers/md/dm-thin.c|  4 +-
 drivers/md/md.c |  2 +-
 drivers/md/raid5-cache.c|  6 +--
 drivers/mmc/core/queue.c|  2 +-
 drivers/nvme/target/io-cmd-bdev.c   |  2 +-
 drivers/target/target_core_file.c   |  2 +-
 drivers/target/target_core_iblock.c |  2 +-
 fs/btrfs/extent-tree.c  |  4 +-
 fs/ext4/mballoc.c   |  2 +-
 fs/f2fs/file.c  | 16 
 fs/f2fs/segment.c   |  2 +-
 fs/jbd2/journal.c   |  2 +-
 fs/nilfs2/sufile.c  |  4 +-
 fs/nilfs2/the_nilfs.c   |  4 +-
 fs/ntfs3/super.c|  2 +-
 fs/xfs/xfs_discard.c|  2 +-
 fs/xfs/xfs_log_cil.c|  2 +-
 include/linux/blkdev.h  | 27 +++-
 mm/swapfile.c   |  6 +--
 33 files changed, 168 insertions(+), 99 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index b5c3a8049134c..ee18b6a699bdf 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -824,7 +824,7 @@ void submit_bio_noacct(struct bio *bio)
goto not_supported;
break;
case REQ_OP_SECURE_ERASE:
-   if (!blk_queue_secure_erase(q))
+   if (!bdev_max_secure_erase_sectors(bdev))
goto not_supported;
break;
case REQ_OP_ZONE_APPEND:
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 43aa4d7fe859f..09b7e1200c0f4 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -36,26 +36,15 @@ static sector_t bio_discard_limit(struct block_device 
*bdev, sector_t sector)
 }
 
 int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
-   sector_t nr_sects, gfp_t gfp_mask, int flags,
-   struct bio **biop)
+   sector_t nr_sects, gfp_t gfp_mask, struct bio **biop)
 {
-   struct request_queue *q = bdev_get_queue(bdev);
struct bio *bio = *biop;
-   unsigned int op;
sector_t bs_mask;
 
if (bdev_read_only(bdev))
return -EPERM;
-
-   if (flags & BLKDEV_DISCARD_SECURE) {
-   if (!blk_queue_secure_erase(q))
-   return -EOPNOTSUPP;
-   op = REQ_OP_SECURE_ERASE;
-   } else {
-   if (!bdev_max_discard_sectors(bdev))
-   return -EOPNOTSUPP;
-   op = REQ_OP_DISCARD;
-   }
+   if (!bdev_max_discard_sectors(bdev))
+   return -EOPNOTSUPP;
 
/* In case the discard granularity isn't set by buggy device driver */
if (WARN_ON_ONCE(!bdev_discard_granularity(bdev))) {
@@ -77,7 +66,7 @@ int __blkdev_issue_discard(struct block_device *bdev, 
sector_t sector,
sector_t req_sects =
min(nr_sects, bio_discard_limit(bdev, sector));
 
-   bio = blk_next_bio(bio, bdev, 0, op, gfp_mask);
+   bio = blk_next_bio(bio, bdev, 0, REQ_OP_DISCARD, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio->bi_iter.bi_size = req_sects << 9;
sector += req_sects;
@@ -103,21 +92,19 @@ EXPORT_SYMBOL(__blkdev_issue_discard);
  * @sector:start sector
  * @nr_sects:  number of sectors to discard
  * @gfp_mask:  memory allocation flags (for bio_alloc)
- * @flags: BLKDEV_DISCARD_* flags to control behaviour
  *
  * Description:
  *Issue a discard request for the sectors in question.
  */
 int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
-   sector_t nr_sects, gfp_t gfp_mask, unsigned long flags)
+   sector_t nr_sects, gfp_t gfp_mask)
 {
struct bio *bio = NULL;
struct blk_plug plug;
int ret;
 
	blk_start_plug(&plug);
-   ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, flags,

[PATCH 25/27] block: add a bdev_discard_granularity helper

2022-04-14 Thread Christoph Hellwig
Abstract away implementation details from file systems by providing a
block_device based helper to retrieve the discard granularity.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: Christoph Böhmwalder  [drbd]
Acked-by: Ryusuke Konishi 
Acked-by: David Sterba  [btrfs]
---
 block/blk-lib.c |  5 ++---
 drivers/block/drbd/drbd_nl.c|  9 +
 drivers/block/drbd/drbd_receiver.c  |  3 +--
 drivers/block/loop.c|  2 +-
 drivers/target/target_core_device.c |  3 +--
 fs/btrfs/ioctl.c| 12 
 fs/exfat/file.c |  3 +--
 fs/ext4/mballoc.c   |  6 +++---
 fs/f2fs/file.c  |  3 +--
 fs/fat/file.c   |  3 +--
 fs/gfs2/rgrp.c  |  7 +++
 fs/jfs/ioctl.c  |  3 +--
 fs/nilfs2/ioctl.c   |  4 ++--
 fs/ntfs3/file.c |  4 ++--
 fs/ntfs3/super.c|  6 ++
 fs/ocfs2/ioctl.c|  3 +--
 fs/xfs/xfs_discard.c|  4 ++--
 include/linux/blkdev.h  |  5 +
 18 files changed, 38 insertions(+), 47 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 8b4b66d3a9bfc..43aa4d7fe859f 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -12,8 +12,7 @@
 
 static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
 {
-   unsigned int discard_granularity =
-   bdev_get_queue(bdev)->limits.discard_granularity;
+   unsigned int discard_granularity = bdev_discard_granularity(bdev);
sector_t granularity_aligned_sector;
 
if (bdev_is_partition(bdev))
@@ -59,7 +58,7 @@ int __blkdev_issue_discard(struct block_device *bdev, 
sector_t sector,
}
 
/* In case the discard granularity isn't set by buggy device driver */
-   if (WARN_ON_ONCE(!q->limits.discard_granularity)) {
+   if (WARN_ON_ONCE(!bdev_discard_granularity(bdev))) {
char dev_name[BDEVNAME_SIZE];
 
bdevname(bdev, dev_name);
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 0678ceb505799..a6280dcb37679 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1425,7 +1425,6 @@ static void sanitize_disk_conf(struct drbd_device 
*device, struct disk_conf *dis
   struct drbd_backing_dev *nbc)
 {
struct block_device *bdev = nbc->backing_bdev;
-   struct request_queue *q = bdev->bd_disk->queue;
 
if (disk_conf->al_extents < DRBD_AL_EXTENTS_MIN)
disk_conf->al_extents = DRBD_AL_EXTENTS_MIN;
@@ -1442,12 +1441,14 @@ static void sanitize_disk_conf(struct drbd_device 
*device, struct disk_conf *dis
if (disk_conf->rs_discard_granularity) {
int orig_value = disk_conf->rs_discard_granularity;
sector_t discard_size = bdev_max_discard_sectors(bdev) << 9;
+   unsigned int discard_granularity = 
bdev_discard_granularity(bdev);
int remainder;
 
-   if (q->limits.discard_granularity > 
disk_conf->rs_discard_granularity)
-   disk_conf->rs_discard_granularity = 
q->limits.discard_granularity;
+   if (discard_granularity > disk_conf->rs_discard_granularity)
+   disk_conf->rs_discard_granularity = discard_granularity;
 
-   remainder = disk_conf->rs_discard_granularity % 
q->limits.discard_granularity;
+   remainder = disk_conf->rs_discard_granularity %
+   discard_granularity;
disk_conf->rs_discard_granularity += remainder;
 
if (disk_conf->rs_discard_granularity > discard_size)
diff --git a/drivers/block/drbd/drbd_receiver.c 
b/drivers/block/drbd/drbd_receiver.c
index 8a4a47da56fe9..275c53c7b629e 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1511,7 +1511,6 @@ void drbd_bump_write_ordering(struct drbd_resource 
*resource, struct drbd_backin
 int drbd_issue_discard_or_zero_out(struct drbd_device *device, sector_t start, 
unsigned int nr_sectors, int flags)
 {
struct block_device *bdev = device->ldev->backing_bdev;
-   struct request_queue *q = bdev_get_queue(bdev);
sector_t tmp, nr;
unsigned int max_discard_sectors, granularity;
int alignment;
@@ -1521,7 +1520,7 @@ int drbd_issue_discard_or_zero_out(struct drbd_device 
*device, sector_t start, u
goto zero_out;
 
/* Zero-sector (unknown) and one-sector granularities are the same.  */
-   granularity = max(q->limits.discard_granularity >> 9, 1U);
+   granularity = max(bdev_discard_granularity(bdev) >> 9, 1U);
alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
 
max_discard_sectors = min(bdev_max_discard_sectors(bdev), (1U << 22));
diff --git a/drivers/block/loop.c 

[PATCH 21/27] block: move {bdev,queue_limit}_discard_alignment out of line

2022-04-14 Thread Christoph Hellwig
No need to inline these fairly larger helpers.  Also fix the return value
to be unsigned, just like the field in struct queue_limits.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 block/blk-settings.c   | 35 +++
 include/linux/blkdev.h | 34 +-
 2 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 94410a13c0dee..fd83d674afd0a 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -478,6 +478,30 @@ static int queue_limit_alignment_offset(struct 
queue_limits *lim,
return (granularity + lim->alignment_offset - alignment) % granularity;
 }
 
+static unsigned int queue_limit_discard_alignment(struct queue_limits *lim,
+   sector_t sector)
+{
+   unsigned int alignment, granularity, offset;
+
+   if (!lim->max_discard_sectors)
+   return 0;
+
+   /* Why are these in bytes, not sectors? */
+   alignment = lim->discard_alignment >> SECTOR_SHIFT;
+   granularity = lim->discard_granularity >> SECTOR_SHIFT;
+   if (!granularity)
+   return 0;
+
+   /* Offset of the partition start in 'granularity' sectors */
+   offset = sector_div(sector, granularity);
+
+   /* And why do we do this modulus *again* in blkdev_issue_discard()? */
+   offset = (granularity + alignment - offset) % granularity;
+
+   /* Turn it back into bytes, gaah */
+   return offset << SECTOR_SHIFT;
+}
+
 static unsigned int blk_round_down_sectors(unsigned int sectors, unsigned int 
lbs)
 {
sectors = round_down(sectors, lbs >> SECTOR_SHIFT);
@@ -924,3 +948,14 @@ int bdev_alignment_offset(struct block_device *bdev)
return q->limits.alignment_offset;
 }
 EXPORT_SYMBOL_GPL(bdev_alignment_offset);
+
+unsigned int bdev_discard_alignment(struct block_device *bdev)
+{
+   struct request_queue *q = bdev_get_queue(bdev);
+
+   if (bdev_is_partition(bdev))
+   return queue_limit_discard_alignment(&q->limits,
+   bdev->bd_start_sect);
+   return q->limits.discard_alignment;
+}
+EXPORT_SYMBOL_GPL(bdev_discard_alignment);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5a9b7aeda010b..34b1cfd067421 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1252,39 +1252,7 @@ bdev_zone_write_granularity(struct block_device *bdev)
 }
 
 int bdev_alignment_offset(struct block_device *bdev);
-
-static inline int queue_limit_discard_alignment(struct queue_limits *lim, 
sector_t sector)
-{
-   unsigned int alignment, granularity, offset;
-
-   if (!lim->max_discard_sectors)
-   return 0;
-
-   /* Why are these in bytes, not sectors? */
-   alignment = lim->discard_alignment >> SECTOR_SHIFT;
-   granularity = lim->discard_granularity >> SECTOR_SHIFT;
-   if (!granularity)
-   return 0;
-
-   /* Offset of the partition start in 'granularity' sectors */
-   offset = sector_div(sector, granularity);
-
-   /* And why do we do this modulus *again* in blkdev_issue_discard()? */
-   offset = (granularity + alignment - offset) % granularity;
-
-   /* Turn it back into bytes, gaah */
-   return offset << SECTOR_SHIFT;
-}
-
-static inline int bdev_discard_alignment(struct block_device *bdev)
-{
-   struct request_queue *q = bdev_get_queue(bdev);
-
-   if (bdev_is_partition(bdev))
-   return queue_limit_discard_alignment(&q->limits,
-   bdev->bd_start_sect);
-   return q->limits.discard_alignment;
-}
+unsigned int bdev_discard_alignment(struct block_device *bdev);
 
 static inline unsigned int bdev_write_zeroes_sectors(struct block_device *bdev)
 {
-- 
2.30.2




[PATCH 12/27] block: add a bdev_write_cache helper

2022-04-14 Thread Christoph Hellwig
Add a helper to check the write cache flag based on the block_device
instead of having to poke into the block layer internal request_queue.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: David Sterba  [btrfs]
---
 drivers/block/rnbd/rnbd-srv.c   | 2 +-
 drivers/block/xen-blkback/xenbus.c  | 2 +-
 drivers/target/target_core_iblock.c | 8 ++--
 fs/btrfs/disk-io.c  | 3 +--
 include/linux/blkdev.h  | 5 +
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
index f04df6294650b..f8cc3c5fecb4b 100644
--- a/drivers/block/rnbd/rnbd-srv.c
+++ b/drivers/block/rnbd/rnbd-srv.c
@@ -558,7 +558,7 @@ static void rnbd_srv_fill_msg_open_rsp(struct 
rnbd_msg_open_rsp *rsp,
rsp->secure_discard =
cpu_to_le16(rnbd_dev_get_secure_discard(rnbd_dev));
rsp->cache_policy = 0;
-   if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+   if (bdev_write_cache(rnbd_dev->bdev))
rsp->cache_policy |= RNBD_WRITEBACK;
if (blk_queue_fua(q))
rsp->cache_policy |= RNBD_FUA;
diff --git a/drivers/block/xen-blkback/xenbus.c 
b/drivers/block/xen-blkback/xenbus.c
index f09040435e2e5..8b691fe50475f 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -517,7 +517,7 @@ static int xen_vbd_create(struct xen_blkif *blkif, 
blkif_vdev_t handle,
vbd->type |= VDISK_REMOVABLE;
 
q = bdev_get_queue(bdev);
-   if (q && test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+   if (bdev_write_cache(bdev))
vbd->flush_support = true;
 
if (q && blk_queue_secure_erase(q))
diff --git a/drivers/target/target_core_iblock.c 
b/drivers/target/target_core_iblock.c
index b41ee5c3b5b82..03013e85ffc03 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -737,7 +737,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist 
*sgl, u32 sgl_nents,
 if (test_bit(QUEUE_FLAG_FUA, &q->queue_flags)) {
if (cmd->se_cmd_flags & SCF_FUA)
opf |= REQ_FUA;
-   else if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+   else if (!bdev_write_cache(ib_dev->ibd_bd))
opf |= REQ_FUA;
}
} else {
@@ -886,11 +886,7 @@ iblock_parse_cdb(struct se_cmd *cmd)
 
 static bool iblock_get_write_cache(struct se_device *dev)
 {
-   struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
-   struct block_device *bd = ib_dev->ibd_bd;
-   struct request_queue *q = bdev_get_queue(bd);
-
-   return test_bit(QUEUE_FLAG_WC, &q->queue_flags);
+   return bdev_write_cache(IBLOCK_DEV(dev)->ibd_bd);
 }
 
 static const struct target_backend_ops iblock_ops = {
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index b30309f187cf0..092e986b8e8ed 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4247,8 +4247,7 @@ static void write_dev_flush(struct btrfs_device *device)
 * of simplicity, since this is a debug tool and not meant for use in
 * non-debug builds.
 */
-   struct request_queue *q = bdev_get_queue(device->bdev);
-   if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+   if (!bdev_write_cache(device->bdev))
return;
 #endif
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3a9578e14a6b0..807a49aa5a27a 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1331,6 +1331,11 @@ static inline bool bdev_nonrot(struct block_device *bdev)
return blk_queue_nonrot(bdev_get_queue(bdev));
 }
 
+static inline bool bdev_write_cache(struct block_device *bdev)
+{
+   return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
+}
+
 static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
 {
struct request_queue *q = bdev_get_queue(bdev);
-- 
2.30.2




[PATCH 11/27] block: add a bdev_nonrot helper

2022-04-14 Thread Christoph Hellwig
Add a helper to check the nonrot flag based on the block_device instead
of having to poke into the block layer internal request_queue.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: David Sterba  [btrfs]
---
 block/ioctl.c   | 2 +-
 drivers/block/loop.c| 2 +-
 drivers/md/dm-table.c   | 4 +---
 drivers/md/md.c | 3 +--
 drivers/md/raid1.c  | 2 +-
 drivers/md/raid10.c | 2 +-
 drivers/md/raid5.c  | 2 +-
 drivers/target/target_core_file.c   | 3 +--
 drivers/target/target_core_iblock.c | 2 +-
 fs/btrfs/volumes.c  | 4 ++--
 fs/ext4/mballoc.c   | 2 +-
 include/linux/blkdev.h  | 5 +
 mm/swapfile.c   | 4 ++--
 13 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 4a86340133e46..ad3771b268b81 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -489,7 +489,7 @@ static int blkdev_common_ioctl(struct block_device *bdev, 
fmode_t mode,
queue_max_sectors(bdev_get_queue(bdev)));
return put_ushort(argp, max_sectors);
case BLKROTATIONAL:
-   return put_ushort(argp, 
!blk_queue_nonrot(bdev_get_queue(bdev)));
+   return put_ushort(argp, !bdev_nonrot(bdev));
case BLKRASET:
case BLKFRASET:
if(!capable(CAP_SYS_ADMIN))
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index a58595f5ee2c8..8d800d46e4985 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -903,7 +903,7 @@ static void loop_update_rotational(struct loop_device *lo)
 
/* not all filesystems (e.g. tmpfs) have a sb->s_bdev */
if (file_bdev)
-   nonrot = blk_queue_nonrot(bdev_get_queue(file_bdev));
+   nonrot = bdev_nonrot(file_bdev);
 
if (nonrot)
blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 03541cfc2317c..5e38d0dd009d5 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1820,9 +1820,7 @@ static int device_dax_write_cache_enabled(struct 
dm_target *ti,
 static int device_is_rotational(struct dm_target *ti, struct dm_dev *dev,
sector_t start, sector_t len, void *data)
 {
-   struct request_queue *q = bdev_get_queue(dev->bdev);
-
-   return !blk_queue_nonrot(q);
+   return !bdev_nonrot(dev->bdev);
 }
 
 static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 309b3af906ad3..19636c2f2cda4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5991,8 +5991,7 @@ int md_run(struct mddev *mddev)
bool nonrot = true;
 
rdev_for_each(rdev, mddev) {
-   if (rdev->raid_disk >= 0 &&
-   !blk_queue_nonrot(bdev_get_queue(rdev->bdev))) {
+   if (rdev->raid_disk >= 0 && !bdev_nonrot(rdev->bdev)) {
nonrot = false;
break;
}
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 99d5464a51f81..d81b896855f9f 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -704,7 +704,7 @@ static int read_balance(struct r1conf *conf, struct r1bio 
*r1_bio, int *max_sect
/* At least two disks to choose from so failfast is OK 
*/
 set_bit(R1BIO_FailFast, &r1_bio->state);
 
-   nonrot = blk_queue_nonrot(bdev_get_queue(rdev->bdev));
+   nonrot = bdev_nonrot(rdev->bdev);
has_nonrot_disk |= nonrot;
 pending = atomic_read(&rdev->nr_pending);
dist = abs(this_sector - conf->mirrors[disk].head_position);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index dfe7d62d3fbdd..7816c8b2e8087 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -796,7 +796,7 @@ static struct md_rdev *read_balance(struct r10conf *conf,
if (!do_balance)
break;
 
-   nonrot = blk_queue_nonrot(bdev_get_queue(rdev->bdev));
+   nonrot = bdev_nonrot(rdev->bdev);
has_nonrot_disk |= nonrot;
 pending = atomic_read(&rdev->nr_pending);
if (min_pending > pending && nonrot) {
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 351d341a1ffa4..0bbae0e638666 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7242,7 +7242,7 @@ static struct r5conf *setup_conf(struct mddev *mddev)
rdev_for_each(rdev, mddev) {
 if (test_bit(Journal, &rdev->flags))
continue;
-   if (blk_queue_nonrot(bdev_get_queue(rdev->bdev))) {
+   if (bdev_nonrot(rdev->bdev)) {
conf->batch_bio_dispatch = false;
  

[PATCH 19/27] block: remove queue_discard_alignment

2022-04-14 Thread Christoph Hellwig
Just use bdev_alignment_offset in disk_discard_alignment_show instead.
That helper is the same except for an always-false branch that doesn't
matter in this slow path.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 block/genhd.c  | 2 +-
 include/linux/blkdev.h | 8 
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 712031ce19070..36532b9318419 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1019,7 +1019,7 @@ static ssize_t disk_discard_alignment_show(struct device 
*dev,
 {
struct gendisk *disk = dev_to_disk(dev);
 
-   return sprintf(buf, "%d\n", queue_discard_alignment(disk->queue));
+   return sprintf(buf, "%d\n", bdev_alignment_offset(disk->part0));
 }
 
 static ssize_t diskseq_show(struct device *dev,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0a1795ac26275..5a9b7aeda010b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1253,14 +1253,6 @@ bdev_zone_write_granularity(struct block_device *bdev)
 
 int bdev_alignment_offset(struct block_device *bdev);
 
-static inline int queue_discard_alignment(const struct request_queue *q)
-{
-   if (q->limits.discard_misaligned)
-   return -1;
-
-   return q->limits.discard_alignment;
-}
-
 static inline int queue_limit_discard_alignment(struct queue_limits *lim, 
sector_t sector)
 {
unsigned int alignment, granularity, offset;
-- 
2.30.2




[PATCH 15/27] block: add a bdev_max_zone_append_sectors helper

2022-04-14 Thread Christoph Hellwig
Add a helper to check the max supported sectors for zone append based on
the block_device instead of having to poke into the block layer internal
request_queue.

Signed-off-by: Christoph Hellwig 
Acked-by: Damien Le Moal 
Reviewed-by: Martin K. Petersen 
Reviewed-by: Johannes Thumshirn 
---
 drivers/nvme/target/zns.c | 3 +--
 fs/zonefs/super.c | 3 +--
 include/linux/blkdev.h| 6 ++
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index e34718b095504..82b61acf7a72b 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -34,8 +34,7 @@ static int validate_conv_zones_cb(struct blk_zone *z,
 
 bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
 {
-   struct request_queue *q = ns->bdev->bd_disk->queue;
-   u8 zasl = nvmet_zasl(queue_max_zone_append_sectors(q));
+   u8 zasl = nvmet_zasl(bdev_max_zone_append_sectors(ns->bdev));
struct gendisk *bd_disk = ns->bdev->bd_disk;
int ret;
 
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 3614c7834007d..7a63807b736c4 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -678,13 +678,12 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, 
struct iov_iter *from)
struct inode *inode = file_inode(iocb->ki_filp);
struct zonefs_inode_info *zi = ZONEFS_I(inode);
struct block_device *bdev = inode->i_sb->s_bdev;
-   unsigned int max;
+   unsigned int max = bdev_max_zone_append_sectors(bdev);
struct bio *bio;
ssize_t size;
int nr_pages;
ssize_t ret;
 
-   max = queue_max_zone_append_sectors(bdev_get_queue(bdev));
max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize);
iov_iter_truncate(from, max);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a433798c3343e..f8c50b77543eb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1188,6 +1188,12 @@ static inline unsigned int 
queue_max_zone_append_sectors(const struct request_qu
return min(l->max_zone_append_sectors, l->max_sectors);
 }
 
+static inline unsigned int
+bdev_max_zone_append_sectors(struct block_device *bdev)
+{
+   return queue_max_zone_append_sectors(bdev_get_queue(bdev));
+}
+
 static inline unsigned queue_logical_block_size(const struct request_queue *q)
 {
int retval = 512;
-- 
2.30.2




[PATCH 24/27] block: remove QUEUE_FLAG_DISCARD

2022-04-14 Thread Christoph Hellwig
Just use a non-zero max_discard_sectors as an indicator for discard
support, similar to what is done for write zeroes.

The only place that needs special attention is the RAID5 driver,
which must clear discard support for security reasons by default,
even if the default stacking rules would allow for it.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: Christoph Böhmwalder  [drbd]
Acked-by: Jan Höppner  [s390]
Acked-by: Coly Li  [bcache]
Acked-by: David Sterba  [btrfs]
---
 arch/um/drivers/ubd_kern.c  |  2 --
 block/blk-core.c|  2 +-
 block/blk-lib.c |  2 +-
 block/blk-mq-debugfs.c  |  1 -
 block/ioctl.c   |  3 +--
 drivers/block/drbd/drbd_main.c  |  2 +-
 drivers/block/drbd/drbd_nl.c| 19 ++-
 drivers/block/drbd/drbd_receiver.c  |  3 +--
 drivers/block/loop.c| 11 +++
 drivers/block/nbd.c |  5 +
 drivers/block/null_blk/main.c   |  1 -
 drivers/block/rbd.c |  1 -
 drivers/block/rnbd/rnbd-clt.c   |  2 --
 drivers/block/rnbd/rnbd-srv-dev.h   |  3 ---
 drivers/block/virtio_blk.c  |  2 --
 drivers/block/xen-blkback/xenbus.c  |  2 +-
 drivers/block/xen-blkfront.c|  3 +--
 drivers/block/zram/zram_drv.c   |  1 -
 drivers/md/bcache/request.c |  4 ++--
 drivers/md/bcache/super.c   |  3 +--
 drivers/md/bcache/sysfs.c   |  2 +-
 drivers/md/dm-cache-target.c|  9 +
 drivers/md/dm-clone-target.c|  9 +
 drivers/md/dm-log-writes.c  |  3 +--
 drivers/md/dm-raid.c|  9 ++---
 drivers/md/dm-table.c   |  9 ++---
 drivers/md/dm-thin.c| 11 +--
 drivers/md/dm.c |  3 +--
 drivers/md/md-linear.c  | 11 +--
 drivers/md/raid0.c  |  7 ---
 drivers/md/raid1.c  | 16 +---
 drivers/md/raid10.c | 18 ++
 drivers/md/raid5-cache.c|  2 +-
 drivers/md/raid5.c  | 12 
 drivers/mmc/core/queue.c|  1 -
 drivers/mtd/mtd_blkdevs.c   |  1 -
 drivers/nvme/host/core.c|  4 ++--
 drivers/s390/block/dasd_fba.c   |  1 -
 drivers/scsi/sd.c   |  2 --
 drivers/target/target_core_device.c |  2 +-
 fs/btrfs/extent-tree.c  |  4 ++--
 fs/btrfs/ioctl.c|  2 +-
 fs/exfat/file.c |  2 +-
 fs/exfat/super.c| 10 +++---
 fs/ext4/ioctl.c | 10 +++---
 fs/ext4/super.c | 10 +++---
 fs/f2fs/f2fs.h  |  3 +--
 fs/fat/file.c   |  2 +-
 fs/fat/inode.c  | 10 +++---
 fs/gfs2/rgrp.c  |  2 +-
 fs/jbd2/journal.c   |  7 ++-
 fs/jfs/ioctl.c  |  2 +-
 fs/jfs/super.c  |  8 ++--
 fs/nilfs2/ioctl.c   |  2 +-
 fs/ntfs3/file.c |  2 +-
 fs/ntfs3/super.c|  2 +-
 fs/ocfs2/ioctl.c|  2 +-
 fs/xfs/xfs_discard.c|  2 +-
 fs/xfs/xfs_super.c  | 12 
 include/linux/blkdev.h  |  2 --
 mm/swapfile.c   | 17 ++---
 61 files changed, 73 insertions(+), 244 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index b03269faef714..085ffdf98e57e 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -483,7 +483,6 @@ static void ubd_handler(void)
if ((io_req->error == BLK_STS_NOTSUPP) && 
(req_op(io_req->req) == REQ_OP_DISCARD)) {
blk_queue_max_discard_sectors(io_req->req->q, 
0);

blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
-   blk_queue_flag_clear(QUEUE_FLAG_DISCARD, 
io_req->req->q);
}
blk_mq_end_request(io_req->req, io_req->error);
kfree(io_req);
@@ -803,7 +802,6 @@ static int ubd_open_dev(struct ubd *ubd_dev)
ubd_dev->queue->limits.discard_alignment = SECTOR_SIZE;
blk_queue_max_discard_sectors(ubd_dev->queue, UBD_MAX_REQUEST);
blk_queue_max_write_zeroes_sectors(ubd_dev->queue, 
UBD_MAX_REQUEST);
-   blk_queue_flag_set(QUEUE_FLAG_DISCARD, ubd_dev->queue);
}
blk_queue_flag_set(QUEUE_FLAG_NONROT, ubd_dev->queue);
return 0;
diff --git a/block/blk-core.c b/block/blk-core.c
index 937bb6b863317..b5c3a8049134c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -820,7 +820,7 @@ void submit_bio_noacct(struct bio *bio)
 
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
-   if 

[PATCH 10/27] mm: use bdev_is_zoned in claim_swapfile

2022-04-14 Thread Christoph Hellwig
Use the bdev based helper instead of poking into the queue.

Signed-off-by: Christoph Hellwig 
---
 mm/swapfile.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 63c61f8b26118..4c7537162af5e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2761,7 +2761,7 @@ static int claim_swapfile(struct swap_info_struct *p, 
struct inode *inode)
 * write only restriction.  Hence zoned block devices are not
 * suitable for swapping.  Disallow them here.
 */
-   if (blk_queue_is_zoned(p->bdev->bd_disk->queue))
+   if (bdev_is_zoned(p->bdev))
return -EINVAL;
p->flags |= SWP_BLKDEV;
} else if (S_ISREG(inode->i_mode)) {
-- 
2.30.2




[PATCH 27/27] direct-io: remove random prefetches

2022-04-14 Thread Christoph Hellwig
Randomly poking into block device internals for manual prefetches isn't
exactly a very maintainable thing to do.  And none of the performance
critical direct I/O implementations still use this library function
anyway, so just drop it.

Signed-off-by: Christoph Hellwig 
---
 fs/direct-io.c | 32 
 1 file changed, 4 insertions(+), 28 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index aef06e607b405..840752006f601 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -1115,11 +1115,10 @@ static inline int drop_refcount(struct dio *dio)
  * individual fields and will generate much worse code. This is important
  * for the whole file.
  */
-static inline ssize_t
-do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
- struct block_device *bdev, struct iov_iter *iter,
- get_block_t get_block, dio_iodone_t end_io,
- dio_submit_t submit_io, int flags)
+ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
+   struct block_device *bdev, struct iov_iter *iter,
+   get_block_t get_block, dio_iodone_t end_io,
+   dio_submit_t submit_io, int flags)
 {
unsigned i_blkbits = READ_ONCE(inode->i_blkbits);
unsigned blkbits = i_blkbits;
@@ -1334,29 +1333,6 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode 
*inode,
kmem_cache_free(dio_cache, dio);
return retval;
 }
-
-ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
-struct block_device *bdev, struct iov_iter *iter,
-get_block_t get_block,
-dio_iodone_t end_io, dio_submit_t submit_io,
-int flags)
-{
-   /*
-* The block device state is needed in the end to finally
-* submit everything.  Since it's likely to be cache cold
-* prefetch it here as first thing to hide some of the
-* latency.
-*
-* Attempt to prefetch the pieces we likely need later.
-*/
-   prefetch(&bdev->bd_disk->part_tbl);
-   prefetch(bdev->bd_disk->queue);
-   prefetch((char *)bdev->bd_disk->queue + SMP_CACHE_BYTES);
-
-   return do_blockdev_direct_IO(iocb, inode, bdev, iter, get_block,
-end_io, submit_io, flags);
-}
-
 EXPORT_SYMBOL(__blockdev_direct_IO);
 
 static __init int dio_init(void)
-- 
2.30.2




[PATCH 14/27] block: add a bdev_stable_writes helper

2022-04-14 Thread Christoph Hellwig
Add a helper to check the stable writes flag based on the block_device
instead of having to poke into the block layer internal request_queue.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 drivers/md/dm-table.c  | 4 +---
 fs/super.c | 2 +-
 include/linux/blkdev.h | 6 ++
 mm/swapfile.c  | 2 +-
 4 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 5e38d0dd009d5..d46839faa0ca5 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1950,9 +1950,7 @@ static int device_requires_stable_pages(struct dm_target 
*ti,
struct dm_dev *dev, sector_t start,
sector_t len, void *data)
 {
-   struct request_queue *q = bdev_get_queue(dev->bdev);
-
-   return blk_queue_stable_writes(q);
+   return bdev_stable_writes(dev->bdev);
 }
 
 int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
diff --git a/fs/super.c b/fs/super.c
index f1d4a193602d6..60f57c7bc0a69 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -1204,7 +1204,7 @@ static int set_bdev_super(struct super_block *s, void 
*data)
s->s_dev = s->s_bdev->bd_dev;
s->s_bdi = bdi_get(s->s_bdev->bd_disk->bdi);
 
-   if (blk_queue_stable_writes(s->s_bdev->bd_disk->queue))
+   if (bdev_stable_writes(s->s_bdev))
s->s_iflags |= SB_I_STABLE_WRITES;
return 0;
 }
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 075b16d4560e7..a433798c3343e 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1330,6 +1330,12 @@ static inline bool bdev_nonrot(struct block_device *bdev)
return blk_queue_nonrot(bdev_get_queue(bdev));
 }
 
+static inline bool bdev_stable_writes(struct block_device *bdev)
+{
+   return test_bit(QUEUE_FLAG_STABLE_WRITES,
+   &bdev_get_queue(bdev)->queue_flags);
+}
+
 static inline bool bdev_write_cache(struct block_device *bdev)
 {
return test_bit(QUEUE_FLAG_WC, _get_queue(bdev)->queue_flags);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index d5ab7ec4d92ca..4069f17a82c8e 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3065,7 +3065,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, 
int, swap_flags)
goto bad_swap_unlock_inode;
}
 
-   if (p->bdev && blk_queue_stable_writes(p->bdev->bd_disk->queue))
+   if (p->bdev && bdev_stable_writes(p->bdev))
p->flags |= SWP_STABLE_WRITES;
 
if (p->bdev && p->bdev->bd_disk->fops->rw_page)
-- 
2.30.2




[PATCH 22/27] block: refactor discard bio size limiting

2022-04-14 Thread Christoph Hellwig
Move all the logic to limit the discard bio size into a common helper
so that it is better documented.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
Acked-by: Coly Li 
---
 block/blk-lib.c | 59 -
 block/blk.h | 14 
 2 files changed, 29 insertions(+), 44 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 237d60d8b5857..2ae32a722851c 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -10,6 +10,32 @@
 
 #include "blk.h"
 
+static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
+{
+   unsigned int discard_granularity =
+   bdev_get_queue(bdev)->limits.discard_granularity;
+   sector_t granularity_aligned_sector;
+
+   if (bdev_is_partition(bdev))
+   sector += bdev->bd_start_sect;
+
+   granularity_aligned_sector =
+   round_up(sector, discard_granularity >> SECTOR_SHIFT);
+
+   /*
+* Make sure subsequent bios start aligned to the discard granularity if
+* it needs to be split.
+*/
+   if (granularity_aligned_sector != sector)
+   return granularity_aligned_sector - sector;
+
+   /*
+* Align the bio size to the discard granularity to make splitting the 
bio
+* at discard granularity boundaries easier in the driver if needed.
+*/
+   return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT;
+}
+
 int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, int flags,
struct bio **biop)
@@ -17,7 +43,7 @@ int __blkdev_issue_discard(struct block_device *bdev, 
sector_t sector,
struct request_queue *q = bdev_get_queue(bdev);
struct bio *bio = *biop;
unsigned int op;
-   sector_t bs_mask, part_offset = 0;
+   sector_t bs_mask;
 
if (bdev_read_only(bdev))
return -EPERM;
@@ -48,36 +74,9 @@ int __blkdev_issue_discard(struct block_device *bdev, 
sector_t sector,
if (!nr_sects)
return -EINVAL;
 
-   /* In case the discard request is in a partition */
-   if (bdev_is_partition(bdev))
-   part_offset = bdev->bd_start_sect;
-
while (nr_sects) {
-   sector_t granularity_aligned_lba, req_sects;
-   sector_t sector_mapped = sector + part_offset;
-
-   granularity_aligned_lba = round_up(sector_mapped,
-   q->limits.discard_granularity >> SECTOR_SHIFT);
-
-   /*
-* Check whether the discard bio starts at a discard_granularity
-* aligned LBA,
-* - If no: set (granularity_aligned_lba - sector_mapped) to
-*   bi_size of the first split bio, then the second bio will
-*   start at a discard_granularity aligned LBA on the device.
-* - If yes: use bio_aligned_discard_max_sectors() as the max
-*   possible bi_size of the first split bio. Then when this bio
-*   is split in device drive, the split ones are very probably
-*   to be aligned to discard_granularity of the device's queue.
-*/
-   if (granularity_aligned_lba == sector_mapped)
-   req_sects = min_t(sector_t, nr_sects,
- bio_aligned_discard_max_sectors(q));
-   else
-   req_sects = min_t(sector_t, nr_sects,
- granularity_aligned_lba - 
sector_mapped);
-
-   WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
+   sector_t req_sects =
+   min(nr_sects, bio_discard_limit(bdev, sector));
 
bio = blk_next_bio(bio, bdev, 0, op, gfp_mask);
bio->bi_iter.bi_sector = sector;
diff --git a/block/blk.h b/block/blk.h
index 4ea5167dc3392..434017701403f 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -346,20 +346,6 @@ static inline unsigned int bio_allowed_max_sectors(struct 
request_queue *q)
return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
 }
 
-/*
- * The max bio size which is aligned to q->limits.discard_granularity. This
- * is a hint to split large discard bio in generic block layer, then if device
- * driver needs to split the discard bio into smaller ones, their bi_size can
- * be very probably and easily aligned to discard_granularity of the device's
- * queue.
- */
-static inline unsigned int bio_aligned_discard_max_sectors(
-   struct request_queue *q)
-{
-   return round_down(UINT_MAX, q->limits.discard_granularity) >>
-   SECTOR_SHIFT;
-}
-
 /*
  * Internal io_context interface
  */
-- 
2.30.2




[PATCH 20/27] block: use bdev_discard_alignment in part_discard_alignment_show

2022-04-14 Thread Christoph Hellwig
Use the bdev based alignment helper instead of open coding it.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 block/partitions/core.c | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/block/partitions/core.c b/block/partitions/core.c
index 240b3fff521e4..70dec1c78521d 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -206,11 +206,7 @@ static ssize_t part_alignment_offset_show(struct device 
*dev,
 static ssize_t part_discard_alignment_show(struct device *dev,
   struct device_attribute *attr, char 
*buf)
 {
-   struct block_device *bdev = dev_to_bdev(dev);
-
-   return sprintf(buf, "%u\n",
-   queue_limit_discard_alignment(&bdev_get_queue(bdev)->limits,
-   bdev->bd_start_sect));
+   return sprintf(buf, "%u\n", bdev_discard_alignment(dev_to_bdev(dev)));
 }
 
 static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
-- 
2.30.2




[PATCH 16/27] block: use bdev_alignment_offset in part_alignment_offset_show

2022-04-14 Thread Christoph Hellwig
Replace the open coded offset calculation with the proper helper.
This is an ABI change in that the -1 for a misaligned partition is
properly propagated, which can be considered a bug fix and matches
what is done on the whole device.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 block/partitions/core.c | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/block/partitions/core.c b/block/partitions/core.c
index 2ef8dfa1e5c85..240b3fff521e4 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -200,11 +200,7 @@ static ssize_t part_ro_show(struct device *dev,
 static ssize_t part_alignment_offset_show(struct device *dev,
  struct device_attribute *attr, char 
*buf)
 {
-   struct block_device *bdev = dev_to_bdev(dev);
-
-   return sprintf(buf, "%u\n",
-   queue_limit_alignment_offset(&bdev_get_queue(bdev)->limits,
-   bdev->bd_start_sect));
+   return sprintf(buf, "%u\n", bdev_alignment_offset(dev_to_bdev(dev)));
 }
 
 static ssize_t part_discard_alignment_show(struct device *dev,
-- 
2.30.2




[PATCH 18/27] block: move bdev_alignment_offset and queue_limit_alignment_offset out of line

2022-04-14 Thread Christoph Hellwig
No need to inline these fairly large helpers.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 block/blk-settings.c   | 23 +++
 include/linux/blkdev.h | 21 +
 2 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index b83df3d2eebca..94410a13c0dee 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -468,6 +468,16 @@ void blk_queue_io_opt(struct request_queue *q, unsigned 
int opt)
 }
 EXPORT_SYMBOL(blk_queue_io_opt);
 
+static int queue_limit_alignment_offset(struct queue_limits *lim,
+   sector_t sector)
+{
+   unsigned int granularity = max(lim->physical_block_size, lim->io_min);
+   unsigned int alignment = sector_div(sector, granularity >> SECTOR_SHIFT)
+   << SECTOR_SHIFT;
+
+   return (granularity + lim->alignment_offset - alignment) % granularity;
+}
+
 static unsigned int blk_round_down_sectors(unsigned int sectors, unsigned int 
lbs)
 {
sectors = round_down(sectors, lbs >> SECTOR_SHIFT);
@@ -901,3 +911,16 @@ void blk_queue_set_zoned(struct gendisk *disk, enum 
blk_zoned_model model)
}
 }
 EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
+
+int bdev_alignment_offset(struct block_device *bdev)
+{
+   struct request_queue *q = bdev_get_queue(bdev);
+
+   if (q->limits.misaligned)
+   return -1;
+   if (bdev_is_partition(bdev))
+   return queue_limit_alignment_offset(&q->limits,
+   bdev->bd_start_sect);
+   return q->limits.alignment_offset;
+}
+EXPORT_SYMBOL_GPL(bdev_alignment_offset);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d5346e72e3645..0a1795ac26275 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1251,26 +1251,7 @@ bdev_zone_write_granularity(struct block_device *bdev)
return queue_zone_write_granularity(bdev_get_queue(bdev));
 }
 
-static inline int queue_limit_alignment_offset(struct queue_limits *lim, 
sector_t sector)
-{
-   unsigned int granularity = max(lim->physical_block_size, lim->io_min);
-   unsigned int alignment = sector_div(sector, granularity >> SECTOR_SHIFT)
-   << SECTOR_SHIFT;
-
-   return (granularity + lim->alignment_offset - alignment) % granularity;
-}
-
-static inline int bdev_alignment_offset(struct block_device *bdev)
-{
-   struct request_queue *q = bdev_get_queue(bdev);
-
-   if (q->limits.misaligned)
-   return -1;
-   if (bdev_is_partition(bdev))
-   return queue_limit_alignment_offset(&q->limits,
-   bdev->bd_start_sect);
-   return q->limits.alignment_offset;
-}
+int bdev_alignment_offset(struct block_device *bdev);
 
 static inline int queue_discard_alignment(const struct request_queue *q)
 {
-- 
2.30.2




[PATCH 17/27] block: use bdev_alignment_offset in disk_alignment_offset_show

2022-04-14 Thread Christoph Hellwig
This does the same as the open coded variant except for an extra branch,
and allows removing queue_alignment_offset entirely.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 block/genhd.c  | 2 +-
 include/linux/blkdev.h | 8 
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index b8b6759d670f0..712031ce19070 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1010,7 +1010,7 @@ static ssize_t disk_alignment_offset_show(struct device 
*dev,
 {
struct gendisk *disk = dev_to_disk(dev);
 
-   return sprintf(buf, "%d\n", queue_alignment_offset(disk->queue));
+   return sprintf(buf, "%d\n", bdev_alignment_offset(disk->part0));
 }
 
 static ssize_t disk_discard_alignment_show(struct device *dev,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f8c50b77543eb..d5346e72e3645 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1251,14 +1251,6 @@ bdev_zone_write_granularity(struct block_device *bdev)
return queue_zone_write_granularity(bdev_get_queue(bdev));
 }
 
-static inline int queue_alignment_offset(const struct request_queue *q)
-{
-   if (q->limits.misaligned)
-   return -1;
-
-   return q->limits.alignment_offset;
-}
-
 static inline int queue_limit_alignment_offset(struct queue_limits *lim, 
sector_t sector)
 {
unsigned int granularity = max(lim->physical_block_size, lim->io_min);
-- 
2.30.2




[PATCH 09/27] ntfs3: use bdev_logical_block_size instead of open coding it

2022-04-14 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
---
 fs/ntfs3/super.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
index 278dcf5024102..cd30e81abbce0 100644
--- a/fs/ntfs3/super.c
+++ b/fs/ntfs3/super.c
@@ -920,7 +920,7 @@ static int ntfs_fill_super(struct super_block *sb, struct 
fs_context *fc)
}
 
/* Parse boot. */
-   err = ntfs_init_from_boot(sb, rq ? queue_logical_block_size(rq) : 512,
+   err = ntfs_init_from_boot(sb, bdev_logical_block_size(bdev),
  bdev_nr_bytes(bdev));
if (err)
goto out;
-- 
2.30.2




[PATCH 07/27] drbd: cleanup decide_on_discard_support

2022-04-14 Thread Christoph Hellwig
Sanitize the calling conventions and use a goto label to clean up the
code flow.
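The shape of the refactor can be sketched outside drbd like this (hypothetical structs and values; only the control flow mirrors the patch): each feature check bails out to a single label instead of threading a can_do flag through the function.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the queue discard limits. */
struct limits {
	unsigned int discard_granularity;
	unsigned int max_discard_sectors;
};

/*
 * Goto-label cleanup pattern: the success path runs straight through
 * and returns; every unsupported case jumps to one label that clears
 * the limits exactly once.
 */
static void decide_on_discard(struct limits *l, bool backing_supports,
			      bool peer_supports)
{
	if (!backing_supports)
		goto not_supported;
	if (!peer_supports)
		goto not_supported;

	l->discard_granularity = 512;
	l->max_discard_sectors = 1U << 20;
	return;

not_supported:
	l->discard_granularity = 0;
	l->max_discard_sectors = 0;
}
```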

Signed-off-by: Christoph Hellwig 
Acked-by: Christoph Böhmwalder 
---
 drivers/block/drbd/drbd_nl.c | 68 +++-
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index b7216c186ba4d..4d00986d6f588 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1204,38 +1204,42 @@ static unsigned int drbd_max_discard_sectors(struct 
drbd_connection *connection)
 }
 
 static void decide_on_discard_support(struct drbd_device *device,
-   struct request_queue *q,
-   struct request_queue *b,
-   bool discard_zeroes_if_aligned)
+   struct drbd_backing_dev *bdev)
 {
-   /* q = drbd device queue (device->rq_queue)
-* b = backing device queue 
(device->ldev->backing_bdev->bd_disk->queue),
-* or NULL if diskless
-*/
-   struct drbd_connection *connection = 
first_peer_device(device)->connection;
-   bool can_do = b ? blk_queue_discard(b) : true;
-
-   if (can_do && connection->cstate >= C_CONNECTED && 
!(connection->agreed_features & DRBD_FF_TRIM)) {
-   can_do = false;
-   drbd_info(connection, "peer DRBD too old, does not support 
TRIM: disabling discards\n");
-   }
-   if (can_do) {
-   /* We don't care for the granularity, really.
-* Stacking limits below should fix it for the local
-* device.  Whether or not it is a suitable granularity
-* on the remote device is not our problem, really. If
-* you care, you need to use devices with similar
-* topology on all peers. */
-   blk_queue_discard_granularity(q, 512);
-   q->limits.max_discard_sectors = 
drbd_max_discard_sectors(connection);
-   blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
-   q->limits.max_write_zeroes_sectors = 
drbd_max_discard_sectors(connection);
-   } else {
-   blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
-   blk_queue_discard_granularity(q, 0);
-   q->limits.max_discard_sectors = 0;
-   q->limits.max_write_zeroes_sectors = 0;
+   struct drbd_connection *connection =
+   first_peer_device(device)->connection;
+   struct request_queue *q = device->rq_queue;
+
+   if (bdev && !blk_queue_discard(bdev->backing_bdev->bd_disk->queue))
+   goto not_supported;
+
+   if (connection->cstate >= C_CONNECTED &&
+   !(connection->agreed_features & DRBD_FF_TRIM)) {
+   drbd_info(connection,
+   "peer DRBD too old, does not support TRIM: disabling 
discards\n");
+   goto not_supported;
}
+
+   /*
+* We don't care for the granularity, really.
+*
+* Stacking limits below should fix it for the local device.  Whether or
+* not it is a suitable granularity on the remote device is not our
+* problem, really. If you care, you need to use devices with similar
+* topology on all peers.
+*/
+   blk_queue_discard_granularity(q, 512);
+   q->limits.max_discard_sectors = drbd_max_discard_sectors(connection);
+   blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
+   q->limits.max_write_zeroes_sectors =
+   drbd_max_discard_sectors(connection);
+   return;
+
+not_supported:
+   blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+   blk_queue_discard_granularity(q, 0);
+   q->limits.max_discard_sectors = 0;
+   q->limits.max_write_zeroes_sectors = 0;
 }
 
 static void fixup_discard_if_not_supported(struct request_queue *q)
@@ -1273,7 +1277,6 @@ static void drbd_setup_queue_param(struct drbd_device 
*device, struct drbd_backi
unsigned int max_segments = 0;
struct request_queue *b = NULL;
struct disk_conf *dc;
-   bool discard_zeroes_if_aligned = true;
 
if (bdev) {
b = bdev->backing_bdev->bd_disk->queue;
@@ -1282,7 +1285,6 @@ static void drbd_setup_queue_param(struct drbd_device 
*device, struct drbd_backi
rcu_read_lock();
dc = rcu_dereference(device->ldev->disk_conf);
max_segments = dc->max_bio_bvecs;
-   discard_zeroes_if_aligned = dc->discard_zeroes_if_aligned;
rcu_read_unlock();
 
blk_set_stacking_limits(>limits);
@@ -1292,7 +1294,7 @@ static void drbd_setup_queue_param(struct drbd_device 
*device, struct drbd_backi
/* This is the workaround for "bio would need to, but cannot, be split" 
*/
blk_queue_max_segments(q, max_segments ? max_segments : 
BLK_MAX_SEGMENTS);
blk_queue_segment_boundary(q, PAGE_SIZE-1);
-   decide_on_discard_support(device, q, b, discard_zeroes_if_aligned);
+   decide_on_discard_support(device, bdev);
-- 
2.30.2


[PATCH 06/27] drbd: use bdev_alignment_offset instead of queue_alignment_offset

2022-04-14 Thread Christoph Hellwig
The bdev version does the right thing for partitions, so use that.

Fixes: 9104d31a759f ("drbd: introduce WRITE_SAME support")
Signed-off-by: Christoph Hellwig 
Acked-by: Christoph Böhmwalder 
---
 drivers/block/drbd/drbd_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index c39b04bda261f..7b501c8d59928 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -939,7 +939,7 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, 
int trigger_reply, enu
p->qlim->logical_block_size =
cpu_to_be32(bdev_logical_block_size(bdev));
p->qlim->alignment_offset =
-   cpu_to_be32(queue_alignment_offset(q));
+   cpu_to_be32(bdev_alignment_offset(bdev));
p->qlim->io_min = cpu_to_be32(bdev_io_min(bdev));
p->qlim->io_opt = cpu_to_be32(bdev_io_opt(bdev));
p->qlim->discard_enabled = blk_queue_discard(q);
-- 
2.30.2




[ovmf test] 169414: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169414 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169414/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 168254
 build-amd64-xsm               6 xen-build                fail REGR. vs. 168254
 build-i386-xsm                6 xen-build                fail REGR. vs. 168254
 build-i386                    6 xen-build                fail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)           blocked  n/a
 build-i386-libvirt            1 build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64   1 build-check(1)   blocked  n/a

version targeted for testing:
 ovmf 9bf7291d636ebd816b8f81edcf366dac926f9f44
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since        168258  2022-03-01 01:55:31 Z   45 days  403 attempts
Testing same since   169414  2022-04-15 04:11:47 Z    0 days    1 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bo Chang Ke 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ke, Bo-ChangX 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5135 lines long.)



[PATCH 02/27] target: pass a block_device to target_configure_unmap_from_queue

2022-04-14 Thread Christoph Hellwig
The SCSI target driver is a consumer of the block layer and should
generally work on struct block_device.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 drivers/target/target_core_device.c  | 5 +++--
 drivers/target/target_core_file.c| 7 ---
 drivers/target/target_core_iblock.c  | 2 +-
 include/target/target_core_backend.h | 4 ++--
 4 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/target/target_core_device.c 
b/drivers/target/target_core_device.c
index fa866acef5bb2..3a1ec705cd80b 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -834,9 +834,10 @@ struct se_device *target_alloc_device(struct se_hba *hba, 
const char *name)
  * in ATA and we need to set TPE=1
  */
 bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
-  struct request_queue *q)
+  struct block_device *bdev)
 {
-   int block_size = queue_logical_block_size(q);
+   struct request_queue *q = bdev_get_queue(bdev);
+   int block_size = bdev_logical_block_size(bdev);
 
if (!blk_queue_discard(q))
return false;
diff --git a/drivers/target/target_core_file.c 
b/drivers/target/target_core_file.c
index 8190b840065f3..8d191fdc33217 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -134,10 +134,11 @@ static int fd_configure_device(struct se_device *dev)
 */
inode = file->f_mapping->host;
if (S_ISBLK(inode->i_mode)) {
-   struct request_queue *q = bdev_get_queue(I_BDEV(inode));
+   struct block_device *bdev = I_BDEV(inode);
+   struct request_queue *q = bdev_get_queue(bdev);
unsigned long long dev_size;
 
-   fd_dev->fd_block_size = bdev_logical_block_size(I_BDEV(inode));
+   fd_dev->fd_block_size = bdev_logical_block_size(bdev);
/*
 * Determine the number of bytes from i_size_read() minus
 * one (1) logical sector from underlying struct block_device
@@ -150,7 +151,7 @@ static int fd_configure_device(struct se_device *dev)
dev_size, div_u64(dev_size, fd_dev->fd_block_size),
fd_dev->fd_block_size);
 
-   if (target_configure_unmap_from_queue(>dev_attrib, q))
+   if (target_configure_unmap_from_queue(>dev_attrib, bdev))
pr_debug("IFILE: BLOCK Discard support available,"
 " disabled by default\n");
/*
diff --git a/drivers/target/target_core_iblock.c 
b/drivers/target/target_core_iblock.c
index 87ede165ddba4..b886ce1770bfd 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -119,7 +119,7 @@ static int iblock_configure_device(struct se_device *dev)
dev->dev_attrib.hw_max_sectors = queue_max_hw_sectors(q);
dev->dev_attrib.hw_queue_depth = q->nr_requests;
 
-   if (target_configure_unmap_from_queue(>dev_attrib, q))
+   if (target_configure_unmap_from_queue(>dev_attrib, bd))
pr_debug("IBLOCK: BLOCK Discard support available,"
 " disabled by default\n");
 
diff --git a/include/target/target_core_backend.h 
b/include/target/target_core_backend.h
index 675f3a1fe6139..773963a1e0b53 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -14,7 +14,7 @@
 #define TRANSPORT_FLAG_PASSTHROUGH_ALUA0x2
 #define TRANSPORT_FLAG_PASSTHROUGH_PGR  0x4
 
-struct request_queue;
+struct block_device;
 struct scatterlist;
 
 struct target_backend_ops {
@@ -117,7 +117,7 @@ sense_reason_t passthrough_parse_cdb(struct se_cmd *cmd,
 bool target_sense_desc_format(struct se_device *dev);
 sector_t target_to_linux_sector(struct se_device *dev, sector_t lb);
 bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
-  struct request_queue *q);
+  struct block_device *bdev);
 
 static inline bool target_dev_configured(struct se_device *se_dev)
 {
-- 
2.30.2




use block_device based APIs in block layer consumers v3

2022-04-14 Thread Christoph Hellwig
Hi Jens,

this series cleans up the block layer API so that the APIs consumed
by file systems are (almost) exclusively struct block_device based, so that
file systems don't have to poke into block layer internals like the
request_queue.

I also found a bunch of existing bugs related to partition offsets
and discard, so these are fixed along the way.

Changes since v2:
 - fix an inverted check in btrfs
 - set max_discard_sectors to 0 in all places where the flag was
   previously cleared
 - fix a few slightly incorrect collected Acks

Changes since v1:
 - fix a bisection hazard
 - minor spelling fixes
 - reorder hunks between two patches to make the changes more obvious
 - reorder a patch to be earlier in the series to ease backporting
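The direction of the series can be illustrated with a toy model (simplified structs, not the real kernel headers): consumers stop dereferencing the request_queue themselves and call a bdev-based wrapper instead, which also lets callers drop NULL-queue fallbacks, since an opened block device always has a queue.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel structures. */
struct request_queue {
	unsigned int logical_block_size;
};

struct block_device {
	struct request_queue *queue;
};

/* Old style: the file system open codes the queue access and a fallback. */
static unsigned int open_coded(struct request_queue *q)
{
	return q ? q->logical_block_size : 512;
}

/* New style: one bdev-based helper; the caller never sees the queue. */
static unsigned int bdev_logical_block_size(struct block_device *bdev)
{
	return bdev->queue->logical_block_size;
}
```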


Diffstat:
 arch/um/drivers/ubd_kern.c   |2 
 block/blk-core.c |4 -
 block/blk-lib.c  |  124 ---
 block/blk-mq-debugfs.c   |2 
 block/blk-settings.c |   74 
 block/blk.h  |   14 ---
 block/fops.c |2 
 block/genhd.c|4 -
 block/ioctl.c|   48 ++---
 block/partitions/core.c  |   12 ---
 drivers/block/drbd/drbd_main.c   |   51 ++
 drivers/block/drbd/drbd_nl.c |   94 +++---
 drivers/block/drbd/drbd_receiver.c   |   13 +--
 drivers/block/loop.c |   15 +---
 drivers/block/nbd.c  |5 -
 drivers/block/null_blk/main.c|1 
 drivers/block/rbd.c  |1 
 drivers/block/rnbd/rnbd-clt.c|6 -
 drivers/block/rnbd/rnbd-srv-dev.h|8 --
 drivers/block/rnbd/rnbd-srv.c|5 -
 drivers/block/virtio_blk.c   |2 
 drivers/block/xen-blkback/blkback.c  |   15 ++--
 drivers/block/xen-blkback/xenbus.c   |9 --
 drivers/block/xen-blkfront.c |8 +-
 drivers/block/zram/zram_drv.c|1 
 drivers/md/bcache/alloc.c|2 
 drivers/md/bcache/request.c  |4 -
 drivers/md/bcache/super.c|3 
 drivers/md/bcache/sysfs.c|2 
 drivers/md/dm-cache-target.c |9 --
 drivers/md/dm-clone-target.c |9 --
 drivers/md/dm-io.c   |2 
 drivers/md/dm-log-writes.c   |3 
 drivers/md/dm-raid.c |9 --
 drivers/md/dm-table.c|   25 +--
 drivers/md/dm-thin.c |   15 
 drivers/md/dm.c  |3 
 drivers/md/md-linear.c   |   11 ---
 drivers/md/md.c  |5 -
 drivers/md/raid0.c   |7 -
 drivers/md/raid1.c   |   18 -
 drivers/md/raid10.c  |   20 -
 drivers/md/raid5-cache.c |8 +-
 drivers/md/raid5.c   |   14 +--
 drivers/mmc/core/queue.c |3 
 drivers/mtd/mtd_blkdevs.c|1 
 drivers/nvme/host/core.c |4 -
 drivers/nvme/target/io-cmd-bdev.c|2 
 drivers/nvme/target/zns.c|3 
 drivers/s390/block/dasd_fba.c|1 
 drivers/scsi/sd.c|2 
 drivers/target/target_core_device.c  |   20 ++---
 drivers/target/target_core_file.c|   10 +-
 drivers/target/target_core_iblock.c  |   17 +---
 fs/btrfs/disk-io.c   |3 
 fs/btrfs/extent-tree.c   |8 +-
 fs/btrfs/ioctl.c |   12 +--
 fs/btrfs/volumes.c   |4 -
 fs/btrfs/zoned.c |3 
 fs/direct-io.c   |   32 +
 fs/exfat/file.c  |5 -
 fs/exfat/super.c |   10 --
 fs/ext4/ioctl.c  |   10 --
 fs/ext4/mballoc.c|   10 +-
 fs/ext4/super.c  |   10 --
 fs/f2fs/f2fs.h   |3 
 fs/f2fs/file.c   |   19 ++---
 fs/f2fs/segment.c|8 --
 fs/fat/file.c|5 -
 fs/fat/inode.c   |   10 --
 fs/gfs2/rgrp.c   |7 -
 fs/iomap/direct-io.c |3 
 fs/jbd2/journal.c|9 --
 fs/jfs/ioctl.c   |5 -
 fs/jfs/super.c   |8 --
 fs/nilfs2/ioctl.c|6 -
 fs/nilfs2/sufile.c   |4 -
 fs/nilfs2/the_nilfs.c|4 -
 fs/ntfs3/file.c  |6 -
 fs/ntfs3/super.c |   10 +-
 fs/ocfs2/ioctl.c |5 -
 fs/super.c   |2 
 fs/xfs/xfs_discard.c |8 +-
 fs/xfs/xfs_log_cil.c |2 
 fs/xfs/xfs_super.c   |   12 +--
 fs/zonefs/super.c|3 
 include/linux/blkdev.h   |  112 

[PATCH 01/27] target: remove an incorrect unmap zeroes data deduction

2022-04-14 Thread Christoph Hellwig
For block devices, the SCSI target driver implements UNMAP as calls to
blkdev_issue_discard, which does not guarantee zeroing just because
Write Zeroes is supported.

Note that this does not affect the file backed path which uses
fallocate to punch holes.
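The removed deduction can be modeled in isolation (hypothetical names; only the logic is taken from the patch): support for Write Zeroes says nothing about what discard does to the data, so deriving unmap_zeroes_data from max_write_zeroes_sectors over-promises when UNMAP is backed by discard.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical capability summary of a block device. */
struct dev_caps {
	unsigned int max_write_zeroes_sectors;	/* Write Zeroes supported */
	bool discard_zeroes_data;		/* what discard really guarantees */
};

/* The deduction this patch removes: wrong for discard-backed UNMAP. */
static bool unmap_zeroes_data_old(const struct dev_caps *c)
{
	return c->max_write_zeroes_sectors != 0;
}

/* The safe answer once UNMAP is implemented via blkdev_issue_discard(). */
static bool unmap_zeroes_data_new(const struct dev_caps *c)
{
	(void)c;	/* no capability implies zeroing here */
	return false;
}
```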

Fixes: 2237498f0b5c ("target/iblock: Convert WRITE_SAME to 
blkdev_issue_zeroout")
Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 drivers/target/target_core_device.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/target/target_core_device.c 
b/drivers/target/target_core_device.c
index 44bb380e7390c..fa866acef5bb2 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -850,7 +850,6 @@ bool target_configure_unmap_from_queue(struct se_dev_attrib 
*attrib,
attrib->unmap_granularity = q->limits.discard_granularity / block_size;
attrib->unmap_granularity_alignment = q->limits.discard_alignment /
block_size;
-   attrib->unmap_zeroes_data = !!(q->limits.max_write_zeroes_sectors);
return true;
 }
 EXPORT_SYMBOL(target_configure_unmap_from_queue);
-- 
2.30.2




[PATCH 03/27] target: fix discard alignment on partitions

2022-04-14 Thread Christoph Hellwig
Use the proper bdev_discard_alignment helper that accounts for partition
offsets.

Fixes: c66ac9db8d4a ("[SCSI] target: Add LIO target core v4.0.0-rc6")
Signed-off-by: Christoph Hellwig 
Reviewed-by: Martin K. Petersen 
---
 drivers/target/target_core_device.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/target/target_core_device.c 
b/drivers/target/target_core_device.c
index 3a1ec705cd80b..16e775bcf4a7c 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -849,8 +849,8 @@ bool target_configure_unmap_from_queue(struct se_dev_attrib 
*attrib,
 */
attrib->max_unmap_block_desc_count = 1;
attrib->unmap_granularity = q->limits.discard_granularity / block_size;
-   attrib->unmap_granularity_alignment = q->limits.discard_alignment /
-   block_size;
+   attrib->unmap_granularity_alignment =
+   bdev_discard_alignment(bdev) / block_size;
return true;
 }
 EXPORT_SYMBOL(target_configure_unmap_from_queue);
-- 
2.30.2




[PATCH 05/27] drbd: use bdev based limit helpers in drbd_send_sizes

2022-04-14 Thread Christoph Hellwig
Use the bdev based limits helpers where they exist.

Signed-off-by: Christoph Hellwig 
Acked-by: Christoph Böhmwalder 
---
 drivers/block/drbd/drbd_main.c | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 367715205c860..c39b04bda261f 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -924,7 +924,9 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, 
int trigger_reply, enu
 
memset(p, 0, packet_size);
if (get_ldev_if_state(device, D_NEGOTIATING)) {
-   struct request_queue *q = 
bdev_get_queue(device->ldev->backing_bdev);
+   struct block_device *bdev = device->ldev->backing_bdev;
+   struct request_queue *q = bdev_get_queue(bdev);
+
d_size = drbd_get_max_capacity(device->ldev);
rcu_read_lock();
u_size = rcu_dereference(device->ldev->disk_conf)->disk_size;
@@ -933,13 +935,13 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, 
int trigger_reply, enu
max_bio_size = queue_max_hw_sectors(q) << 9;
max_bio_size = min(max_bio_size, DRBD_MAX_BIO_SIZE);
p->qlim->physical_block_size =
-   cpu_to_be32(queue_physical_block_size(q));
+   cpu_to_be32(bdev_physical_block_size(bdev));
p->qlim->logical_block_size =
-   cpu_to_be32(queue_logical_block_size(q));
+   cpu_to_be32(bdev_logical_block_size(bdev));
p->qlim->alignment_offset =
cpu_to_be32(queue_alignment_offset(q));
-   p->qlim->io_min = cpu_to_be32(queue_io_min(q));
-   p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
+   p->qlim->io_min = cpu_to_be32(bdev_io_min(bdev));
+   p->qlim->io_opt = cpu_to_be32(bdev_io_opt(bdev));
p->qlim->discard_enabled = blk_queue_discard(q);
put_ldev(device);
} else {
-- 
2.30.2




[PATCH 08/27] btrfs: use bdev_max_active_zones instead of open coding it

2022-04-14 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
Reviewed-by: Johannes Thumshirn 
Acked-by: David Sterba 
---
 fs/btrfs/zoned.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 1b1b310c3c510..f72cad7391a11 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -350,7 +350,6 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, 
bool populate_cache)
struct btrfs_fs_info *fs_info = device->fs_info;
struct btrfs_zoned_device_info *zone_info = NULL;
struct block_device *bdev = device->bdev;
-   struct request_queue *queue = bdev_get_queue(bdev);
unsigned int max_active_zones;
unsigned int nactive;
sector_t nr_sectors;
@@ -410,7 +409,7 @@ int btrfs_get_dev_zone_info(struct btrfs_device *device, 
bool populate_cache)
if (!IS_ALIGNED(nr_sectors, zone_sectors))
zone_info->nr_zones++;
 
-   max_active_zones = queue_max_active_zones(queue);
+   max_active_zones = bdev_max_active_zones(bdev);
if (max_active_zones && max_active_zones < BTRFS_MIN_ACTIVE_ZONES) {
btrfs_err_in_rcu(fs_info,
 "zoned: %s: max active zones %u is too small, need at least %u active zones",
-- 
2.30.2




[PATCH 04/27] drbd: remove assign_p_sizes_qlim

2022-04-14 Thread Christoph Hellwig
Fold each branch into its only caller.

Signed-off-by: Christoph Hellwig 
Acked-by: Christoph Böhmwalder 
---
 drivers/block/drbd/drbd_main.c | 47 +++---
 1 file changed, 20 insertions(+), 27 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 4b0b25cc916ee..367715205c860 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -903,31 +903,6 @@ void drbd_gen_and_send_sync_uuid(struct drbd_peer_device 
*peer_device)
}
 }
 
-/* communicated if (agreed_features & DRBD_FF_WSAME) */
-static void
-assign_p_sizes_qlim(struct drbd_device *device, struct p_sizes *p,
-   struct request_queue *q)
-{
-   if (q) {
-   p->qlim->physical_block_size = 
cpu_to_be32(queue_physical_block_size(q));
-   p->qlim->logical_block_size = 
cpu_to_be32(queue_logical_block_size(q));
-   p->qlim->alignment_offset = 
cpu_to_be32(queue_alignment_offset(q));
-   p->qlim->io_min = cpu_to_be32(queue_io_min(q));
-   p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
-   p->qlim->discard_enabled = blk_queue_discard(q);
-   p->qlim->write_same_capable = 0;
-   } else {
-   q = device->rq_queue;
-   p->qlim->physical_block_size = 
cpu_to_be32(queue_physical_block_size(q));
-   p->qlim->logical_block_size = 
cpu_to_be32(queue_logical_block_size(q));
-   p->qlim->alignment_offset = 0;
-   p->qlim->io_min = cpu_to_be32(queue_io_min(q));
-   p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
-   p->qlim->discard_enabled = 0;
-   p->qlim->write_same_capable = 0;
-   }
-}
-
 int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, 
enum dds_flags flags)
 {
struct drbd_device *device = peer_device->device;
@@ -957,14 +932,32 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, 
int trigger_reply, enu
q_order_type = drbd_queue_order_type(device);
max_bio_size = queue_max_hw_sectors(q) << 9;
max_bio_size = min(max_bio_size, DRBD_MAX_BIO_SIZE);
-   assign_p_sizes_qlim(device, p, q);
+   p->qlim->physical_block_size =
+   cpu_to_be32(queue_physical_block_size(q));
+   p->qlim->logical_block_size =
+   cpu_to_be32(queue_logical_block_size(q));
+   p->qlim->alignment_offset =
+   cpu_to_be32(queue_alignment_offset(q));
+   p->qlim->io_min = cpu_to_be32(queue_io_min(q));
+   p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
+   p->qlim->discard_enabled = blk_queue_discard(q);
put_ldev(device);
} else {
+   struct request_queue *q = device->rq_queue;
+
+   p->qlim->physical_block_size =
+   cpu_to_be32(queue_physical_block_size(q));
+   p->qlim->logical_block_size =
+   cpu_to_be32(queue_logical_block_size(q));
+   p->qlim->alignment_offset = 0;
+   p->qlim->io_min = cpu_to_be32(queue_io_min(q));
+   p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
+   p->qlim->discard_enabled = 0;
+
d_size = 0;
u_size = 0;
q_order_type = QUEUE_ORDERED_NONE;
max_bio_size = DRBD_MAX_BIO_SIZE; /* ... multiple BIOs per 
peer_request */
-   assign_p_sizes_qlim(device, p, NULL);
}
 
if (peer_device->connection->agreed_pro_version <= 94)
-- 
2.30.2




[ovmf test] 169413: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169413 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169413/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 168254
 build-amd64-xsm               6 xen-build                fail REGR. vs. 168254
 build-i386-xsm                6 xen-build                fail REGR. vs. 168254
 build-i386                    6 xen-build                fail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)           blocked  n/a
 build-i386-libvirt            1 build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64   1 build-check(1)   blocked  n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since        168258  2022-03-01 01:55:31 Z   45 days  402 attempts
Testing same since   169405  2022-04-14 20:10:31 Z    0 days    7 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



[ovmf test] 169411: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169411 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169411/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 168254
 build-amd64-xsm               6 xen-build                fail REGR. vs. 168254
 build-i386-xsm                6 xen-build                fail REGR. vs. 168254
 build-i386                    6 xen-build                fail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)           blocked  n/a
 build-i386-libvirt            1 build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64   1 build-check(1)   blocked  n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since        168258  2022-03-01 01:55:31 Z   45 days  401 attempts
Testing same since   169405  2022-04-14 20:10:31 Z    0 days    6 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



[ovmf test] 169410: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169410 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169410/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 168254
 build-amd64-xsm               6 xen-build                fail REGR. vs. 168254
 build-i386-xsm                6 xen-build                fail REGR. vs. 168254
 build-i386                    6 xen-build                fail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)           blocked  n/a
 build-i386-libvirt            1 build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64   1 build-check(1)   blocked  n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since        168258  2022-03-01 01:55:31 Z   44 days  400 attempts
Testing same since   169405  2022-04-14 20:10:31 Z    0 days    5 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



Re: Proposal for Porting Xen to Armv8-R64 - DraftB

2022-04-14 Thread Stefano Stabellini
On Fri, 25 Mar 2022, Wei Chen wrote:
> # Proposal for Porting Xen to Armv8-R64
> 
> This proposal will introduce the PoC work of porting Xen to Armv8-R64,
> which includes:
> - The changes of current Xen capability, like Xen build system, memory
>   management, domain management, vCPU context switch.
> - The expanded Xen capability, like static-allocation and direct-map.
> 
> ***Notes:***
> 1. ***This proposal only covers the work of porting Xen to Armv8-R64***
>***single CPU. Xen SMP support on Armv8-R64 relates to Armv8-R***
>***Trusted-Firmware (TF-R). This is an external dependency,***
>***so we think the discussion of Xen SMP support on Armv8-R64***
>***should be started when single-CPU support is complete.***
> 2. ***This proposal will not touch xen-tools. At the current stage,***
>***Xen on Armv8-R64 only supports dom0less; all guests should***
>***be booted from device tree.***
> 
> ## Changelogs
> Draft-A -> Draft-B:
> 1. Update Kconfig options usage.
> 2. Update the section for XEN_START_ADDRESS.
> 3. Add description of MPU initialization before parsing device tree.
> 4. Remove CONFIG_ARM_MPU_EL1_PROTECTION_REGIONS.
> 5. Update the description of ioremap_nocache/cache.
> 6. Update about the free_init_memory on Armv8-R.
> 7. Describe why we need to switch the MPU configuration later.
> 8. Add alternative proposal in TODO.
> 9. Add use tool to generate Xen Armv8-R device tree in TODO.
> 10. Add Xen PIC/PIE discussion in TODO.
> 11. Add Xen event channel support in TODO.
> 
> ## Contributors:
> Wei Chen 
> Penny Zheng 
> 
> ## 1. Essential Background
> 
> ### 1.1. Armv8-R64 Profile
> The Armv-R architecture profile was designed to support use cases that
> have a high sensitivity to deterministic execution (e.g. fuel injection,
> brake control, drive trains, motor control, etc.)
> 
> Arm announced Armv8-R in 2013; it is the latest generation Arm architecture
> targeted at the real-time profile. It introduces virtualization at the highest
> security level while retaining the Protected Memory System Architecture (PMSA)
> based on a Memory Protection Unit (MPU). In 2020, Arm announced Cortex-R82,
> which is the first Arm 64-bit Cortex-R processor based on Armv8-R64.
> 
> - The latest Armv8-R64 document can be found here:
>   [Arm Architecture Reference Manual Supplement - Armv8, for Armv8-R AArch64 
> architecture 
> profile](https://developer.arm.com/documentation/ddi0600/latest/).
> 
> - Armv-R Architecture progression:
>   Armv7-R -> Armv8-R AArch32 -> Armv8-R AArch64
>   The following figure is a simple comparison of "R" processors based on
>   different Armv-R Architectures.
>   
> ![image](https://drive.google.com/uc?export=view=1nE5RAXaX8zY2KPZ8imBpbvIr2eqBguEB)
> 
> - The Armv8-R architecture adds the following features on top of Armv7-R:
> - An exception model that is compatible with the Armv8-A model
> - Virtualization with support for guest operating systems
> - PMSA virtualization using MPUs in EL2.
> - The new features of Armv8-R64 architecture
> - Adds support for the 64-bit A64 instruction set, previously Armv8-R
>   only supported A32.
> - Supports up to 48-bit physical addressing, previously up to 32-bit
>   addressing was supported.
> - Optional Arm Neon technology and Advanced SIMD
> - Supports three Exception Levels (ELs)
> - Secure EL2 - The Highest Privilege, MPU only, for firmware, 
> hypervisor
> - Secure EL1 - RichOS (MMU) or RTOS (MPU)
> - Secure EL0 - Application Workloads
> - Optionally supports Virtual Memory System Architecture at S-EL1/S-EL0.
>   This means it's possible to run rich OS kernels - like Linux - either
>   bare-metal or as a guest.
> - Differences with the Armv8-A AArch64 architecture
> - Supports only a single Security state - Secure. There is no Non-Secure
>   execution state.
> - EL3 is not supported, EL2 is mandatory. This means secure EL2 is the
>   highest EL.
> - Supports the A64 ISA
> - With a small set of well-defined differences
> - Provides a PMSA (Protected Memory System Architecture) based
>   virtualization model.
> - As opposed to Armv8-A AArch64's VMSA based Virtualization
> - Can support address bits up to 52 if FEAT_LPA is enabled,
>   otherwise 48 bits.
> - Determines the access permissions and memory attributes of
>   the target PA.
> - Can implement PMSAv8-64 at EL1 and EL2
> - Address translation flat-maps the VA to the PA for EL2 Stage 1.
> - Address translation flat-maps the VA to the PA for EL1 Stage 1.
> - Address translation flat-maps the IPA to the PA for EL1 Stage 2.
> - PMSA in EL1 & EL2 is configurable, VMSA in EL1 is configurable.
> 
> ### 1.2. Xen Challenges with PMSA Virtualization
> Xen is a PMSA-unaware Type-1 hypervisor; it will need modifications to run
> with an MPU and host multiple 

[ovmf test] 169408: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169408 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169408/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  399 attempts
Testing same since   169405  2022-04-14 20:10:31 Z0 days4 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



[linux-linus test] 169403: tolerable FAIL - PUSHED

2022-04-14 Thread osstest service owner
flight 169403 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169403/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stopfail like 169346
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 169346
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 169346
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 169346
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stopfail like 169346
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 169346
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 169346
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 169346
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-checkfail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass

version targeted for testing:
 linux   b9b4c79e58305ac64352286ee5030d193fc8aa22
baseline version:
 linux   a19944809fe9942e6a96292490717904d0690c21

Last test of basis   169346  2022-04-13 01:40:44 Z1 days
Testing same since   169403  2022-04-14 19:11:53 Z0 days1 attempts


People who touched revisions under test:
  Borislav Petkov 
  Dave Wysochanski 
  David Howells 
  David Sterba 
  Dennis Zhou 
  Fabio M. De Francesco 
  Haowen Bai 
  Jeffle Xu 
  Jia-Ju Bai 
  Johannes Thumshirn 
  Kai Vehmanen 
  Linus Torvalds 
  Lucas De Marchi 
  Naohiro Aota 
  Nathan Chancellor 
  Nikolay Borisov 
  Pierre-Louis Bossart 
  Randy Dunlap 
  Takashi Iwai 
  Tao Jin 
  Tim Crawford 
  Yue Hu 

jobs:
 

[ovmf test] 169407: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169407 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169407/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  398 attempts
Testing same since   169405  2022-04-14 20:10:31 Z0 days3 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



[ovmf test] 169406: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169406 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169406/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  397 attempts
Testing same since   169405  2022-04-14 20:10:31 Z0 days2 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



[ovmf test] 169405: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169405 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169405/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 4cfb28f12a8d24cab32d3223275a772227062a39
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  396 attempts
Testing same since   169405  2022-04-14 20:10:31 Z0 days1 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Dun Tan 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5119 lines long.)



Re: xen-swiotlb issue when NVMe driver is enabled in Dom0 on ARM

2022-04-14 Thread Stefano Stabellini
+ Christoph

Hi Christoph,

Rahul is seeing a swiotlb-xen failure on Ampere Altra triggered by the
NVMe driver doing DMA. There is a stack trace below.

I asked Rahul to check the code path taken with and without Xen and it
looks like everything checks out. See below.


On Thu, 14 Apr 2022, Rahul Singh wrote:
> Hi Stefano,
> 
> > On 13 Apr 2022, at 10:24 pm, Stefano Stabellini  
> > wrote:
> >
> > On Wed, 13 Apr 2022, Rahul Singh wrote:
> >> Hello All,
> >>
> >> We are trying to boot the Xen 4.15.1 and dom0 Linux Kernel 
> >> (5.10.27-ampere-lts-standard from [1] ) on Ampere Altra / AVA Developer
> Platform
> >> [2] with ACPI.
> >>
> >> NVMe storage is connected to PCIe. The native Linux kernel boots fine, and 
> >> I am able to detect and access the NVMe storage.
> >> However, during Xen boot, when the NVMe driver requests the DMA buffer, we 
> >> observe an Oops under Xen.
> >
> > Hi Rahul,
> >
> > Thanks for the bug report. More comments below.
> >
> >
> >
> >> Please find the attached detail logs for Xen and dom0 booting.
> >>
> >> Snip from logs:
> >> (XEN) d0v0: vGICR: SGI: unhandled word write 0x00 to ICACTIVER0
> >> [  0.00] Booting Linux on physical CPU 0x00 [0x413fd0c1]
> >> [  0.00] Linux version 5.10.27-ampere-lts-standard (oe-user@oe-host) 
> >> (aarch64-poky-linux-gcc (GCC) 11.2.0, GNU ld (GNU Binutils)
> >> 2.37.20210721) #1 SMP PREEMPT Sat Sep 18 06:01:59 UTC 2021
> >> [  0.00] Xen XEN_VERSION.XEN_SUBVERSION support found
> >> [  0.00] efi: EFI v2.50 by Xen
> >> [  0.00] efi: ACPI 2.0=0x807f66cece8
> >> [  0.00] ACPI: Early table checksum verification disabled
> >> [  0.00] ACPI: RSDP 0x0807F66CECE8 24 (v02 Ampere)
> >> [  0.00] ACPI: XSDT 0x0807F66CEC38 AC (v01 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: FACP 0x0807F66CE000 000114 (v06 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: DSDT 0x0807F8DB0018 02C19E (v02 Ampere Jade  
> >> 0001 INTL 20201217)
> >> [  0.00] ACPI: BERT 0x0807FA0DFF98 30 (v01 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: DBG2 0x0807FA0DFA98 5C (v00 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: GTDT 0x0807FA0DE998 000110 (v03 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: SPCR 0x0807FA0DFE18 50 (v02 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: EINJ 0x0807FA0DF598 000150 (v01 Ampere Altra  
> >> 0001 INTL 20201217)
> >> [  0.00] ACPI: HEST 0x0807FA0DEB18 0001F4 (v01 Ampere Altra  
> >> 0001 INTL 20201217)
> >> [  0.00] ACPI: SSDT 0x0807FA0DFA18 2D (v02 Ampere Altra  
> >> 0001 INTL 20201217)
> >> [  0.00] ACPI: TPM2 0x0807FA0DFD18 4C (v04 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: MCFG 0x0807FA0DF718 7C (v01 Ampere Altra  
> >> 0001 AMP. 0113)
> >> [  0.00] ACPI: IORT 0x0807FA0DEF18 0003DC (v00 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: APIC 0x0807F66CE118 000AF4 (v05 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: PPTT 0x0807FA0D8618 004520 (v02 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: SLIT 0x0807FA0DFD98 2D (v01 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: SRAT 0x0807FA0DCE18 000370 (v03 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: PCCT 0x0807FA0DE318 000576 (v02 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: STAO 0x0807F66CEC10 25 (v01 Ampere Altra  
> >> 0002 AMP. 0113)
> >> [  0.00] ACPI: SPCR: console: pl011,mmio32,0x1260,115200
> >> [  0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x8830-0x883f]
> >> [  0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x9000-0x]
> >> [  0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x800-0x8007fff]
> >> [  0.00] ACPI: SRAT: Node 0 PXM 0 [mem 0x801-0x807]
> >> [  0.00] NUMA: NODE_DATA [mem 0x8079fbf5e00-0x8079fbf7fff]
> >> [  0.00] Zone ranges:
> >> [  0.00]  DMA  [mem 0x9800-0x]
> >> [  0.00]  DMA32  empty
> >> [  0.00]  Normal  [mem 0x0001-0x0807fa0d]
> >> [  0.00] Movable zone start for each node
> >> [  0.00] Early memory node ranges
> >> ….
> >>
> >> [  0.00] Dentry cache hash table entries: 262144 (order: 9, 2097152 
> >> bytes, linear)
> >> [  0.00] Inode-cache hash table entries: 131072 (order: 8, 1048576 
> >> bytes, linear)
> >> [  0.00] mem auto-init: stack:off, heap alloc:off, heap free:off
> >> [  0.00] software IO TLB: mapped [mem 
> >> 0xf400-0xf800] (64MB)
> >> [  0.00] Memory: 1929152K/2097412K available (13568K kernel code, 
> >> 1996K rwdata, 3476K rodata, 4160K init, 822K bss, 168260K
> reserved,
> >> 0K cma-reserved)
> >> [  0.00] SLUB: 

Re: [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos

2022-04-14 Thread Oleksandr Tyshchenko
Hello all.

[Sorry for the possible format issues]

I have an update regarding a (valid) concern that was also raised in
this thread: the virtio backend's ability (when using Xen
foreign mappings) to map any guest page without the guest's "agreement".
There is a PoC (with virtio-mmio on Arm) which is based on Juergen Gross’
work to reuse secure Xen grant mapping for the virtio communications.
All details are at:
https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekst...@gmail.com/
https://lore.kernel.org/xen-devel/1649964960-24864-1-git-send-email-olekst...@gmail.com/

-- 
Regards,

Oleksandr Tyshchenko


[ovmf test] 169404: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169404 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169404/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  395 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days7 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5103 lines long.)



Re: [RFC PATCH 2/6] virtio: add option to restrict memory access under Xen

2022-04-14 Thread H. Peter Anvin
On April 14, 2022 12:19:29 PM PDT, Oleksandr Tyshchenko  
wrote:
>From: Juergen Gross 
>
>In order to support virtio in Xen guests add a config option enabling
>the user to specify whether in all Xen guests virtio should be able to
>access memory via Xen grant mappings only on the host side.
>
>This applies to fully virtualized guests only, as for paravirtualized
>guests this is mandatory.
>
>This requires to switch arch_has_restricted_virtio_memory_access()
>from a pure stub to a real function on x86 systems (Arm systems are
>not covered by now).
>
>Add the needed functionality by providing a special set of DMA ops
>handling the needed grant operations for the I/O pages.
>
>Signed-off-by: Juergen Gross 
>---
> arch/x86/mm/init.c|  15 
> arch/x86/mm/mem_encrypt.c |   5 --
> arch/x86/xen/Kconfig  |   9 +++
> drivers/xen/Kconfig   |  20 ++
> drivers/xen/Makefile  |   1 +
> drivers/xen/xen-virtio.c  | 177 ++
> include/xen/xen-ops.h |   8 +++
> 7 files changed, 230 insertions(+), 5 deletions(-)
> create mode 100644 drivers/xen/xen-virtio.c
>
>diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>index d8cfce2..526a3b2 100644
>--- a/arch/x86/mm/init.c
>+++ b/arch/x86/mm/init.c
>@@ -8,6 +8,8 @@
> #include 
> #include 
> 
>+#include 
>+
> #include 
> #include 
> #include 
>@@ -1065,3 +1067,16 @@ unsigned long max_swapfile_size(void)
>   return pages;
> }
> #endif
>+
>+#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
>+int arch_has_restricted_virtio_memory_access(void)
>+{
>+  if (IS_ENABLED(CONFIG_XEN_PV_VIRTIO) && xen_pv_domain())
>+  return 1;
>+  if (IS_ENABLED(CONFIG_XEN_HVM_VIRTIO_GRANT) && xen_hvm_domain())
>+  return 1;
>+
>+  return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
>+}
>+EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
>+#endif
>diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>index 50d2099..dda020f 100644
>--- a/arch/x86/mm/mem_encrypt.c
>+++ b/arch/x86/mm/mem_encrypt.c
>@@ -77,8 +77,3 @@ void __init mem_encrypt_init(void)
>   print_mem_encrypt_feature_info();
> }
> 
>-int arch_has_restricted_virtio_memory_access(void)
>-{
>-  return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
>-}
>-EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
>diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
>index 85246dd..dffdffd 100644
>--- a/arch/x86/xen/Kconfig
>+++ b/arch/x86/xen/Kconfig
>@@ -92,3 +92,12 @@ config XEN_DOM0
>   select X86_X2APIC if XEN_PVH && X86_64
>   help
> Support running as a Xen Dom0 guest.
>+
>+config XEN_PV_VIRTIO
>+  bool "Xen virtio support for PV guests"
>+  depends on XEN_VIRTIO && XEN_PV
>+  default y
>+  help
>+Support virtio for running as a paravirtualized guest. This will
>+need support on the backend side (qemu or kernel, depending on the
>+virtio device types used).
>diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
>index 120d32f..fc61f7a 100644
>--- a/drivers/xen/Kconfig
>+++ b/drivers/xen/Kconfig
>@@ -335,4 +335,24 @@ config XEN_UNPOPULATED_ALLOC
> having to balloon out RAM regions in order to obtain physical memory
> space to create such mappings.
> 
>+config XEN_VIRTIO
>+  bool "Xen virtio support"
>+  default n
>+  depends on VIRTIO && DMA_OPS
>+  select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
>+  help
>+Enable virtio support for running as Xen guest. Depending on the
>+guest type this will require special support on the backend side
>+(qemu or kernel, depending on the virtio device types used).
>+
>+config XEN_HVM_VIRTIO_GRANT
>+  bool "Require virtio for fully virtualized guests to use grant mappings"
>+  depends on XEN_VIRTIO && X86_64
>+  default y
>+  help
>+Require virtio for fully virtualized guests to use grant mappings.
>+This will avoid the need to give the backend the right to map all
>+of the guest memory. This will need support on the backend side
>+(qemu or kernel, depending on the virtio device types used).
>+
> endmenu
>diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
>index 5aae66e..767009c 100644
>--- a/drivers/xen/Makefile
>+++ b/drivers/xen/Makefile
>@@ -39,3 +39,4 @@ xen-gntalloc-y   := gntalloc.o
> xen-privcmd-y := privcmd.o privcmd-buf.o
> obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)   += xen-front-pgdir-shbuf.o
> obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)   += unpopulated-alloc.o
>+obj-$(CONFIG_XEN_VIRTIO)  += xen-virtio.o
>diff --git a/drivers/xen/xen-virtio.c b/drivers/xen/xen-virtio.c
>new file mode 100644
>index ..cfd5eda
>--- /dev/null
>+++ b/drivers/xen/xen-virtio.c
>@@ -0,0 +1,177 @@
>+// SPDX-License-Identifier: GPL-2.0-only
>+/**
>+ * Xen virtio driver - 

[RFC PATCH] libxl/arm: Insert "xen,dev-domid" property to virtio-mmio device node

2022-04-14 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

This is needed for the grant table based DMA ops layer (CONFIG_XEN_VIRTIO)
on the guest side to retrieve the ID of the Xen domain where the corresponding
backend resides (it is used as an argument to the grant table APIs).

This is part of the restricted memory access under Xen feature.

Signed-off-by: Oleksandr Tyshchenko 
---
!!! This patch is based on the not-yet-upstreamed "Virtio support for toolstack
on Arm" series, which is under review now:
https://lore.kernel.org/xen-devel/1649442065-8332-1-git-send-email-olekst...@gmail.com/

All details are at:
https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekst...@gmail.com/
---
 tools/libs/light/libxl_arm.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 8132a47..d9b26fc 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -875,7 +875,8 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
 
 
 static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
- uint64_t base, uint32_t irq)
+ uint64_t base, uint32_t irq,
+ uint32_t backend_domid)
 {
 int res;
 gic_interrupt intr;
@@ -900,6 +901,14 @@ static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
 res = fdt_property(fdt, "dma-coherent", NULL, 0);
 if (res) return res;
 
+if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
+uint32_t domid[1];
+
+domid[0] = cpu_to_fdt32(backend_domid);
+res = fdt_property(fdt, "xen,dev-domid", domid, sizeof(domid));
+if (res) return res;
+}
+
 res = fdt_end_node(fdt);
 if (res) return res;
 
@@ -1218,7 +1227,8 @@ next_resize:
        libxl_device_disk *disk = &d_config->disks[i];
 
 if (disk->protocol == LIBXL_DISK_PROTOCOL_VIRTIO_MMIO)
-FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
+   disk->backend_domid) );
 }
 
 if (pfdt)
-- 
2.7.4




Re: [PATCH 3/3] x86/build: Clean up boot/Makefile

2022-04-14 Thread Andrew Cooper
On 14/04/2022 18:45, Anthony PERARD wrote:
> On Thu, Apr 14, 2022 at 12:47:08PM +0100, Andrew Cooper wrote:
>> There are no .S intermediate files, so rework in terms of head-bin-objs.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper 
> The patch looks fine.
>
> Reviewed-by: Anthony PERARD 
>
>> ---
>> I'm slightly -1 on this, because
>>
>>   head-bin-objs := $(addprefix $(obj)/,$(head-bin-objs))
>>
>> is substantial obfuscation which I'd prefer to bin.
> It might be possible to do something like what Kbuild does, which would be
> to teach the build system to look for "$(head-objs)" or maybe
> "$(head-bin-objs)" when it wants to build "head.o". That's something that's
> done in Kbuild, I think, to build a module from several source files.
>
>> Anthony: Why does dropping the targets += line interfere with incremental
>> builds?  With it gone, *.bin are regenerated unconditionally, but I can't see
>> what would cause that, nor why the normal dependencies on head.o don't work.
> Try building with "make V=2"; make will display why a target is being
> rebuilt (when the target is built with $(if_changed, )).
>
> $(targets) is used by Rules.mk to find out which dependency files (the
> .cmd files) to load, and they are only loaded if the target exists. Then
> the $(if_changed, ) macro reruns the command if a prereq is newer than the
> target or if the command has changed. Without the .cmd file loaded, the
> macro would compare the new command to an empty value and so rebuild the
> target.
>
> Now, the *.bin files are regenerated because cmdline.o is being rebuilt,
> mostly because make didn't load the record of the previously run command.

I'm not certain if this case is a match with Linux's module logic.  The
module logic is "compile each .c file, then link all the .o's together
into one .ko".

In this case, we're saying "to assemble head.S to head.o, you first need
to build {cmdline,reloc}.bin so the incbin doesn't explode".  I guess it
depends how generic the "$X depends on arbitrary $Y's" expression can be
made.

Between this patch and the previous one, I've clearly got mixed up over
what exactly the targets+= and regular dependencies each do.

The comment specifically refers to the fact that the old #include
"cmdline.S" used to show up as a dep in .head.o.cmd, whereas .incbin
doesn't.  (Not surprising, because -M and friends are from the
preprocessor, not assembler, but it would be helpful if this limitation
didn't exist.)  As a consequence, the dependency needs adding back in
somehow.

From your description above, I assume that simply being listed as a dep
isn't good enough to trigger a recursive load of the .bin's .cmd file
(not that there is one), which is why they need adding specially to targets?

As I have simplified this to (almost) normal build runes, should we be
expressing it differently now to fit in with the new way of doing things?

~Andrew


[RFC PATCH 5/6] arm/xen: Introduce xen_setup_dma_ops()

2022-04-14 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

This patch introduces a new helper and places it in a new header.
The helper's purpose is to assign any Xen-specific DMA ops in
a single place. For now, we deal with xen-swiotlb DMA ops only.
A subsequent patch will add the xen-virtio DMA ops case.

Also re-use the xen_swiotlb_detect() check on Arm32.

Signed-off-by: Oleksandr Tyshchenko 
---
 arch/arm/include/asm/xen/xen-ops.h   |  1 +
 arch/arm/mm/dma-mapping.c|  5 ++---
 arch/arm64/include/asm/xen/xen-ops.h |  1 +
 arch/arm64/mm/dma-mapping.c  |  5 ++---
 include/xen/arm/xen-ops.h| 13 +
 5 files changed, 19 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/xen-ops.h
 create mode 100644 arch/arm64/include/asm/xen/xen-ops.h
 create mode 100644 include/xen/arm/xen-ops.h

diff --git a/arch/arm/include/asm/xen/xen-ops.h 
b/arch/arm/include/asm/xen/xen-ops.h
new file mode 100644
index ..8d2fa24
--- /dev/null
+++ b/arch/arm/include/asm/xen/xen-ops.h
@@ -0,0 +1 @@
+#include 
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 82ffac6..a1bf9dd 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -33,7 +33,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 
 #include "dma.h"
 #include "mm.h"
@@ -2288,8 +2288,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, 
u64 size,
set_dma_ops(dev, dma_ops);
 
 #ifdef CONFIG_XEN
-   if (xen_initial_domain())
-   dev->dma_ops = &xen_swiotlb_dma_ops;
+   xen_setup_dma_ops(dev);
 #endif
dev->archdata.dma_ops_setup = true;
 }
diff --git a/arch/arm64/include/asm/xen/xen-ops.h 
b/arch/arm64/include/asm/xen/xen-ops.h
new file mode 100644
index ..8d2fa24
--- /dev/null
+++ b/arch/arm64/include/asm/xen/xen-ops.h
@@ -0,0 +1 @@
+#include 
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 6719f9e..831e673 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -9,9 +9,9 @@
 #include 
 #include 
 #include 
-#include 
 
 #include 
+#include 
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
@@ -53,7 +53,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 
size,
iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);
 
 #ifdef CONFIG_XEN
-   if (xen_swiotlb_detect())
-   dev->dma_ops = &xen_swiotlb_dma_ops;
+   xen_setup_dma_ops(dev);
 #endif
 }
diff --git a/include/xen/arm/xen-ops.h b/include/xen/arm/xen-ops.h
new file mode 100644
index ..621da05
--- /dev/null
+++ b/include/xen/arm/xen-ops.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM_XEN_OPS_H
+#define _ASM_ARM_XEN_OPS_H
+
+#include 
+
+static inline void xen_setup_dma_ops(struct device *dev)
+{
+   if (xen_swiotlb_detect())
+   dev->dma_ops = &xen_swiotlb_dma_ops;
+}
+
+#endif /* _ASM_ARM_XEN_OPS_H */
-- 
2.7.4




[RFC PATCH 0/6] virtio: Solution to restrict memory access under Xen using xen-virtio DMA ops layer

2022-04-14 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

Hello all.

The purpose of this RFC patch series is to add support for restricting memory
access under Xen using a specific grant table based DMA ops layer. The patch
series is based on Juergen Gross' initial work [1], which implies using grant
references instead of raw guest physical addresses (GPAs) for the virtio
communications (a kind of software IOMMU).

The high level idea is to create a new Xen grant table based DMA ops layer for
the guest Linux whose main purpose is to provide a special 64-bit DMA address.
That address is formed from the grant reference (for a page to be shared with
the backend) together with the in-page offset, and with the highest address bit
set (this is how the backend distinguishes a grant ref based DMA address from a
normal GPA). For this to work we need the ability to allocate contiguous
(consecutive) grant references for multi-page allocations. The backend then
needs to offer the VIRTIO_F_ACCESS_PLATFORM and VIRTIO_F_VERSION_1 feature
bits (it must support the virtio-mmio modern transport for 64-bit addresses in
the virtqueue).

Xen's grant mapping mechanism is a secure and safe solution for sharing pages
between domains, one which has proven itself over many years (in the context
of traditional Xen PV drivers, for example). So far, foreign mapping has been
used for the virtio backend to map and access guest memory. With foreign
mapping, the backend is able to map arbitrary pages from the guest memory (or
even from Dom0 memory), and as a result a malicious backend running in a
non-trusted domain can take advantage of this. With grant mapping, by
contrast, the backend is only allowed to map pages which were explicitly
granted by the guest beforehand, and nothing else. According to the
discussions in various mainline threads, this solution would likely be
welcome because it fits perfectly into the security model Xen provides.

What is more, the grant table based solution requires zero changes to the Xen
hypervisor itself, at least with virtio-mmio and DT (in comparison, for
example, with a "foreign mapping + virtio-iommu" solution, which would require
a whole new complex emulator in the hypervisor, in addition to new
functionality/a new hypercall to pass the IOVA from the virtio backend running
elsewhere to the hypervisor and translate it to the GPA before mapping into
the P2M, or to deny the foreign mapping request if no corresponding IOVA-GPA
mapping is present in the IOMMU page table for that particular device). We
only need to update the toolstack to insert a new "xen,dev-domid" property
into the virtio-mmio device node when creating a guest device tree (this is an
indicator for the guest to use grants, plus the ID of the Xen domain where the
corresponding backend resides; it is used as an argument to the grant mapping
APIs). It is worth mentioning that the toolstack patch is based on the
not-yet-upstreamed "Virtio support for toolstack on Arm" series, which is
under review now [2].

Please note the following:
- Patch series only covers Arm and virtio-mmio (device-tree) for now. To enable 
the restricted memory access
  feature on Arm the following options should be set:
  CONFIG_XEN_VIRTIO = y
  CONFIG_XEN_HVM_VIRTIO_GRANT = y
- Some callbacks in the xen-virtio DMA ops layer (map_sg/unmap_sg, etc.) are
  not implemented yet, as they are not needed/used in the first prototype.

The patch series is rebased on the Linux 5.18-rc2 tag and tested on a Renesas
Salvator-X board + H3 ES3.0 SoC (Arm64) with a standalone userspace (non-QEMU)
virtio-mmio based virtio-disk backend running in the driver domain and a Linux
guest running the existing virtio-blk driver (frontend). No issues were
observed. Guest domain 'reboot/destroy' use-cases work properly. I have also
tested other use-cases, such as assigning several virtio block devices or a
mix of virtio and Xen PV block devices to the guest.

1. Xen changes located at (last patch):
https://github.com/otyshchenko1/xen/commits/libxl_virtio_next
2. Linux changes located at:
https://github.com/otyshchenko1/linux/commits/virtio_grant5
3. virtio-disk changes located at:
https://github.com/otyshchenko1/virtio-disk/commits/virtio_grant

Any feedback/help would be highly appreciated.

[1] https://www.youtube.com/watch?v=IrlEdaIUDPk
[2] 
https://lore.kernel.org/xen-devel/1649442065-8332-1-git-send-email-olekst...@gmail.com/

Juergen Gross (2):
  xen/grants: support allocating consecutive grants
  virtio: add option to restrict memory access under Xen

Oleksandr Tyshchenko (4):
  dt-bindings: xen: Add xen,dev-domid property description for
xen-virtio layer
  virtio: Various updates to xen-virtio DMA ops layer
  arm/xen: Introduce xen_setup_dma_ops()
  arm/xen: Assign xen-virtio DMA ops for virtio devices in Xen guests

 .../devicetree/bindings/virtio/xen,dev-domid.yaml  |  39 +++
 arch/arm/include/asm/xen/xen-ops.h |   1 +
 arch/arm/mm/dma-mapping.c  |   5 +-
 arch/arm/xen/enlighten.c   |  11 +
 

[RFC PATCH 4/6] virtio: Various updates to xen-virtio DMA ops layer

2022-04-14 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

In the context of the current patch, do the following:
1. Update the code to support virtio-mmio devices
2. Introduce struct xen_virtio_data and track the virtio devices passed in
   (using a list), as we need to store some per-device data
3. Add multi-page support for the xen_virtio_dma_map(unmap)_page callbacks
4. Harden the code against a malicious backend
5. Change to use alloc_pages_exact() instead of __get_free_pages()
6. Introduce a locking scheme to protect mappings (I am not 100% sure
   whether the per-device lock is really needed)
7. Handle the virtio device's DMA mask
8. Retrieve the ID of the backend domain from DT for virtio-mmio devices
   instead of hardcoding it.

Signed-off-by: Oleksandr Tyshchenko 
---
 arch/arm/xen/enlighten.c |  11 +++
 drivers/xen/Kconfig  |   2 +-
 drivers/xen/xen-virtio.c | 200 ++-
 include/xen/xen-ops.h|   5 ++
 4 files changed, 196 insertions(+), 22 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index ec5b082..870d92f 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -409,6 +409,17 @@ int __init arch_xen_unpopulated_init(struct resource **res)
 }
 #endif
 
+#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+int arch_has_restricted_virtio_memory_access(void)
+{
+   if (IS_ENABLED(CONFIG_XEN_HVM_VIRTIO_GRANT) && xen_hvm_domain())
+   return 1;
+
+   return 0;
+}
+EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
+#endif
+
 static void __init xen_dt_guest_init(void)
 {
struct device_node *xen_node;
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index fc61f7a..56afe6a 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -347,7 +347,7 @@ config XEN_VIRTIO
 
 config XEN_HVM_VIRTIO_GRANT
bool "Require virtio for fully virtualized guests to use grant mappings"
-   depends on XEN_VIRTIO && X86_64
+   depends on XEN_VIRTIO && (X86_64 || ARM || ARM64)
default y
help
  Require virtio for fully virtualized guests to use grant mappings.
diff --git a/drivers/xen/xen-virtio.c b/drivers/xen/xen-virtio.c
index cfd5eda..c5b2ec9 100644
--- a/drivers/xen/xen-virtio.c
+++ b/drivers/xen/xen-virtio.c
@@ -7,12 +7,26 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
 #include 
 #include 
 
+struct xen_virtio_data {
+   /* The ID of backend domain */
+   domid_t dev_domid;
+   struct device *dev;
+   struct list_head list;
+   spinlock_t lock;
+   /* Is device behaving sane? */
+   bool broken;
+};
+
+static LIST_HEAD(xen_virtio_devices);
+static DEFINE_SPINLOCK(xen_virtio_lock);
+
 #define XEN_GRANT_ADDR_OFF 0x8000000000000000ULL
 
 static inline dma_addr_t grant_to_dma(grant_ref_t grant)
@@ -25,6 +39,25 @@ static inline grant_ref_t dma_to_grant(dma_addr_t dma)
return (grant_ref_t)((dma & ~XEN_GRANT_ADDR_OFF) >> PAGE_SHIFT);
 }
 
+static struct xen_virtio_data *find_xen_virtio_data(struct device *dev)
+{
+   struct xen_virtio_data *data = NULL;
+   bool found = false;
+
+   spin_lock(&xen_virtio_lock);
+
+   list_for_each_entry(data, &xen_virtio_devices, list) {
+   if (data->dev == dev) {
+   found = true;
+   break;
+   }
+   }
+
+   spin_unlock(&xen_virtio_lock);
+
+   return found ? data : NULL;
+}
+
 /*
  * DMA ops for Xen virtio frontends.
  *
@@ -43,48 +76,78 @@ static void *xen_virtio_dma_alloc(struct device *dev, 
size_t size,
  dma_addr_t *dma_handle, gfp_t gfp,
  unsigned long attrs)
 {
-   unsigned int n_pages = PFN_UP(size);
-   unsigned int i;
+   struct xen_virtio_data *data;
+   unsigned int i, n_pages = PFN_UP(size);
unsigned long pfn;
grant_ref_t grant;
-   void *ret;
+   void *ret = NULL;
 
-   ret = (void *)__get_free_pages(gfp, get_order(size));
-   if (!ret)
+   data = find_xen_virtio_data(dev);
+   if (!data)
return NULL;
 
+   spin_lock(&data->lock);
+
+   if (unlikely(data->broken))
+   goto out;
+
+   ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
+   if (!ret)
+   goto out;
+
pfn = virt_to_pfn(ret);
 
	if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
-   free_pages((unsigned long)ret, get_order(size));
-   return NULL;
+   free_pages_exact(ret, n_pages * PAGE_SIZE);
+   ret = NULL;
+   goto out;
}
 
for (i = 0; i < n_pages; i++) {
-   gnttab_grant_foreign_access_ref(grant + i, 0,
+   gnttab_grant_foreign_access_ref(grant + i, data->dev_domid,
pfn_to_gfn(pfn + i), 0);
}
 
*dma_handle = grant_to_dma(grant);
 
+out:
+   spin_unlock(&data->lock);
+
return ret;
 }
 
 static void 

[RFC PATCH 3/6] dt-bindings: xen: Add xen,dev-domid property description for xen-virtio layer

2022-04-14 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

Introduce a Xen-specific binding for the virtio-mmio device, to be used
by the Xen virtio support driver in a subsequent commit.

This binding specifies the ID of the Xen domain where the corresponding
device (backend) resides. This is needed for the option to restrict
memory access using Xen grant mappings to work.

Signed-off-by: Oleksandr Tyshchenko 
---
 .../devicetree/bindings/virtio/xen,dev-domid.yaml  | 39 ++
 1 file changed, 39 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/virtio/xen,dev-domid.yaml

diff --git a/Documentation/devicetree/bindings/virtio/xen,dev-domid.yaml 
b/Documentation/devicetree/bindings/virtio/xen,dev-domid.yaml
new file mode 100644
index ..78be993
--- /dev/null
+++ b/Documentation/devicetree/bindings/virtio/xen,dev-domid.yaml
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/virtio/xen,dev-domid.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xen specific binding for the virtio device
+
+maintainers:
+  - Oleksandr Tyshchenko 
+
+select: true
+
+description:
+  This binding specifies the ID of Xen domain where the corresponding device
+  (backend) resides. This is needed for the option to restrict memory access
+  using Xen grant mappings to work.
+
+  Note that current and generic "iommus" bindings are mutually exclusive, since
+  the restricted memory access model on Xen behaves as a kind of software 
IOMMU.
+
+properties:
+  xen,dev-domid:
+$ref: /schemas/types.yaml#/definitions/uint32
+description:
+  Should contain the ID of device's domain.
+
+additionalProperties: true
+
+examples:
+  - |
+virtio_block@3000 {
+compatible = "virtio,mmio";
+reg = <0x3000 0x100>;
+interrupts = <41>;
+
+/* The device is located in Xen domain with ID 1 */
+xen,dev-domid = <1>;
+};
-- 
2.7.4




[RFC PATCH 6/6] arm/xen: Assign xen-virtio DMA ops for virtio devices in Xen guests

2022-04-14 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

Call xen_virtio_setup_dma_ops() only for Xen-aware virtio devices
in Xen guests if restricted access to the guest memory is enabled.

Signed-off-by: Oleksandr Tyshchenko 
---
 include/xen/arm/xen-ops.h | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/include/xen/arm/xen-ops.h b/include/xen/arm/xen-ops.h
index 621da05..28b2ad3 100644
--- a/include/xen/arm/xen-ops.h
+++ b/include/xen/arm/xen-ops.h
@@ -2,12 +2,19 @@
 #ifndef _ASM_ARM_XEN_OPS_H
 #define _ASM_ARM_XEN_OPS_H
 
+#include 
 #include 
+#include 
 
 static inline void xen_setup_dma_ops(struct device *dev)
 {
if (xen_swiotlb_detect())
	dev->dma_ops = &xen_swiotlb_dma_ops;
+
+#ifdef CONFIG_XEN_VIRTIO
+   if (arch_has_restricted_virtio_memory_access() && 
xen_is_virtio_device(dev))
+   xen_virtio_setup_dma_ops(dev);
+#endif
 }
 
 #endif /* _ASM_ARM_XEN_OPS_H */
-- 
2.7.4




[RFC PATCH 2/6] virtio: add option to restrict memory access under Xen

2022-04-14 Thread Oleksandr Tyshchenko
From: Juergen Gross 

In order to support virtio in Xen guests, add a config option enabling
the user to specify whether virtio in Xen guests should only be able to
access memory via Xen grant mappings on the host side.

This applies to fully virtualized guests only, as for paravirtualized
guests this is mandatory anyway.

This requires switching arch_has_restricted_virtio_memory_access()
from a pure stub to a real function on x86 systems (Arm systems are
not covered yet).

Add the needed functionality by providing a special set of DMA ops
handling the needed grant operations for the I/O pages.

Signed-off-by: Juergen Gross 
---
 arch/x86/mm/init.c|  15 
 arch/x86/mm/mem_encrypt.c |   5 --
 arch/x86/xen/Kconfig  |   9 +++
 drivers/xen/Kconfig   |  20 ++
 drivers/xen/Makefile  |   1 +
 drivers/xen/xen-virtio.c  | 177 ++
 include/xen/xen-ops.h |   8 +++
 7 files changed, 230 insertions(+), 5 deletions(-)
 create mode 100644 drivers/xen/xen-virtio.c

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index d8cfce2..526a3b2 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -8,6 +8,8 @@
 #include 
 #include 
 
+#include 
+
 #include 
 #include 
 #include 
@@ -1065,3 +1067,16 @@ unsigned long max_swapfile_size(void)
return pages;
 }
 #endif
+
+#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+int arch_has_restricted_virtio_memory_access(void)
+{
+   if (IS_ENABLED(CONFIG_XEN_PV_VIRTIO) && xen_pv_domain())
+   return 1;
+   if (IS_ENABLED(CONFIG_XEN_HVM_VIRTIO_GRANT) && xen_hvm_domain())
+   return 1;
+
+   return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
+}
+EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
+#endif
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 50d2099..dda020f 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -77,8 +77,3 @@ void __init mem_encrypt_init(void)
print_mem_encrypt_feature_info();
 }
 
-int arch_has_restricted_virtio_memory_access(void)
-{
-   return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
-}
-EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 85246dd..dffdffd 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -92,3 +92,12 @@ config XEN_DOM0
select X86_X2APIC if XEN_PVH && X86_64
help
  Support running as a Xen Dom0 guest.
+
+config XEN_PV_VIRTIO
+   bool "Xen virtio support for PV guests"
+   depends on XEN_VIRTIO && XEN_PV
+   default y
+   help
+ Support virtio for running as a paravirtualized guest. This will
+ need support on the backend side (qemu or kernel, depending on the
+ virtio device types used).
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 120d32f..fc61f7a 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -335,4 +335,24 @@ config XEN_UNPOPULATED_ALLOC
  having to balloon out RAM regions in order to obtain physical memory
  space to create such mappings.
 
+config XEN_VIRTIO
+   bool "Xen virtio support"
+   default n
+   depends on VIRTIO && DMA_OPS
+   select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+   help
+ Enable virtio support for running as Xen guest. Depending on the
+ guest type this will require special support on the backend side
+ (qemu or kernel, depending on the virtio device types used).
+
+config XEN_HVM_VIRTIO_GRANT
+   bool "Require virtio for fully virtualized guests to use grant mappings"
+   depends on XEN_VIRTIO && X86_64
+   default y
+   help
+ Require virtio for fully virtualized guests to use grant mappings.
+ This will avoid the need to give the backend the right to map all
+ of the guest memory. This will need support on the backend side
+ (qemu or kernel, depending on the virtio device types used).
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 5aae66e..767009c 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -39,3 +39,4 @@ xen-gntalloc-y:= gntalloc.o
 xen-privcmd-y  := privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)+= xen-front-pgdir-shbuf.o
 obj-$(CONFIG_XEN_UNPOPULATED_ALLOC)+= unpopulated-alloc.o
+obj-$(CONFIG_XEN_VIRTIO)   += xen-virtio.o
diff --git a/drivers/xen/xen-virtio.c b/drivers/xen/xen-virtio.c
new file mode 100644
index ..cfd5eda
--- /dev/null
+++ b/drivers/xen/xen-virtio.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/**
+ * Xen virtio driver - enables using virtio devices in Xen guests.
+ *
+ * Copyright (c) 2021, Juergen Gross 
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 

[RFC PATCH 1/6] xen/grants: support allocating consecutive grants

2022-04-14 Thread Oleksandr Tyshchenko
From: Juergen Gross 

To support virtio via grant mappings, larger mappings using consecutive
grants are needed in rare cases. Support those by adding a bitmap of
free grants.

As consecutive grants will be needed only in very rare cases (e.g. when
configuring a virtio device with a multi-page ring), optimize for the
normal case of non-consecutive allocations.

Signed-off-by: Juergen Gross 
---
 drivers/xen/grant-table.c | 238 +++---
 include/xen/grant_table.h |   4 +
 2 files changed, 210 insertions(+), 32 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 8ac..1b458c0 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -33,6 +33,7 @@
 
 #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt
 
+#include 
 #include 
 #include 
 #include 
@@ -72,9 +73,32 @@
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
+
+/*
+ * Handling of free grants:
+ *
+ * Free grants are in a simple list anchored in gnttab_free_head. They are
+ * linked by grant ref, the last element contains GNTTAB_LIST_END. The number
+ * of free entries is stored in gnttab_free_count.
+ * Additionally there is a bitmap of free entries anchored in
+ * gnttab_free_bitmap. This is being used for simplifying allocation of
+ * multiple consecutive grants, which is needed e.g. for support of virtio.
+ * gnttab_last_free is used to add free entries of new frames at the end of
+ * the free list.
+ * gnttab_free_tail_ptr specifies the variable which references the start
+ * of consecutive free grants ending with gnttab_last_free. This pointer is
+ * updated in a rather defensive way, in order to avoid performance hits in
+ * hot paths.
+ * All those variables are protected by gnttab_list_lock.
+ */
 static int gnttab_free_count;
-static grant_ref_t gnttab_free_head;
+static unsigned int gnttab_size;
+static grant_ref_t gnttab_free_head = GNTTAB_LIST_END;
+static grant_ref_t gnttab_last_free = GNTTAB_LIST_END;
+static grant_ref_t *gnttab_free_tail_ptr;
+static unsigned long *gnttab_free_bitmap;
 static DEFINE_SPINLOCK(gnttab_list_lock);
+
 struct grant_frames xen_auto_xlat_grant_frames;
 static unsigned int xen_gnttab_version;
 module_param_named(version, xen_gnttab_version, uint, 0);
@@ -170,16 +194,111 @@ static int get_free_entries(unsigned count)
 
ref = head = gnttab_free_head;
gnttab_free_count -= count;
-   while (count-- > 1)
-   head = gnttab_entry(head);
+   while (count--) {
+   bitmap_clear(gnttab_free_bitmap, head, 1);
+   if (gnttab_free_tail_ptr == __gnttab_entry(head))
+   gnttab_free_tail_ptr = &gnttab_free_head;
+   if (count)
+   head = gnttab_entry(head);
+   }
gnttab_free_head = gnttab_entry(head);
gnttab_entry(head) = GNTTAB_LIST_END;
 
+   if (!gnttab_free_count) {
+   gnttab_last_free = GNTTAB_LIST_END;
+   gnttab_free_tail_ptr = NULL;
+   }
+
	spin_unlock_irqrestore(&gnttab_list_lock, flags);
 
return ref;
 }
 
+static int get_seq_entry_count(void)
+{
+   if (gnttab_last_free == GNTTAB_LIST_END || !gnttab_free_tail_ptr ||
+   *gnttab_free_tail_ptr == GNTTAB_LIST_END)
+   return 0;
+
+   return gnttab_last_free - *gnttab_free_tail_ptr + 1;
+}
+
+/* Rebuilds the free grant list and tries to find count consecutive entries. */
+static int get_free_seq(unsigned int count)
+{
+   int ret = -ENOSPC;
+   unsigned int from, to;
+   grant_ref_t *last;
+
+   gnttab_free_tail_ptr = &gnttab_free_head;
+   last = &gnttab_free_head;
+
+   for (from = find_first_bit(gnttab_free_bitmap, gnttab_size);
+from < gnttab_size;
+from = find_next_bit(gnttab_free_bitmap, gnttab_size, to + 1)) {
+   to = find_next_zero_bit(gnttab_free_bitmap, gnttab_size,
+   from + 1);
+   if (ret < 0 && to - from >= count) {
+   ret = from;
+   bitmap_clear(gnttab_free_bitmap, ret, count);
+   from += count;
+   gnttab_free_count -= count;
+   if (from == to)
+   continue;
+   }
+
+   while (from < to) {
+   *last = from;
+   last = __gnttab_entry(from);
+   gnttab_last_free = from;
+   from++;
+   }
+   if (to < gnttab_size)
+   gnttab_free_tail_ptr = __gnttab_entry(to - 1);
+   }
+
+   *last = GNTTAB_LIST_END;
+   if (gnttab_last_free != gnttab_size - 1)
+   gnttab_free_tail_ptr = NULL;
+
+   return ret;
+}
+
+static int get_free_entries_seq(unsigned int count)
+{
+   unsigned long flags;
+   int ret = 0;
+
+   spin_lock_irqsave(&gnttab_list_lock, flags);
+
+  

[ovmf test] 169402: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169402 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169402/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  394 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days6 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5103 lines long.)



[PATCH v2] xen/build: Fix dependency for the MAP rule

2022-04-14 Thread Andrew Cooper
Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Roger Pau Monné 
CC: Wei Liu 
CC: Anthony PERARD 

v2:
 * Use $(TARGET) not $(TARGET)-syms
---
 xen/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index dd05672ff42d..3a4e3bdd0f95 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -599,7 +599,7 @@ cscope:
cscope -k -b -q
 
 .PHONY: _MAP
-_MAP:
+_MAP: $(TARGET)
$(NM) -n $(TARGET)-syms | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] 
\)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' > System.map
 
 %.o %.i %.s: %.c tools_fixdep FORCE
-- 
2.11.0
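
The fix can be reduced to a toy Makefile (all file and target names below are made up, not Xen's). A .PHONY rule that reads a file must name that file, or something known to produce it, as a prerequisite; otherwise make runs the recipe on a clean tree before the file exists:

```shell
dir=$(mktemp -d)
cd "$dir"
# 'target-syms' stands in for $(TARGET); 'map' stands in for _MAP.
printf 'target-syms:\n\techo built > target-syms\n\n.PHONY: map\nmap: target-syms\n\tcp target-syms System.map\n' > Makefile
make map >/dev/null
cat System.map
```

Dropping the "map: target-syms" prerequisite makes "make map" fail in a clean directory because the cp runs before target-syms exists, which is the situation the patch addresses.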




Re: [PATCH v7 00/20] Introduce power-off+restart call chain API

2022-04-14 Thread Michał Mirosław
On Tue, Apr 12, 2022 at 02:38:12AM +0300, Dmitry Osipenko wrote:
> Problem
> ---
> 
> SoC devices require power-off call chaining functionality from kernel.
> We have a widely used restart chaining provided by restart notifier API,
> but nothing for power-off.
> 
> Solution
> 
> 
> Introduce new API that provides both restart and power-off call chains.
[...]

For the series:

Reviewed-by: Michał Mirosław 



Re: [PATCH] xen/build: Fix dependency for the MAP rule

2022-04-14 Thread Andrew Cooper
On 14/04/2022 18:49, Anthony PERARD wrote:
> On Thu, Apr 14, 2022 at 05:23:48PM +0100, Andrew Cooper wrote:
>> diff --git a/xen/Makefile b/xen/Makefile
>> index dd05672ff42d..02a274f56dc0 100644
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -599,7 +599,7 @@ cscope:
>>  cscope -k -b -q
>>  
>>  .PHONY: _MAP
>> -_MAP:
>> +_MAP: $(TARGET)-syms
> That's not going to work well as make isn't going to know how to build
> $(TARGET)-syms.

Huh... It appears to work for me, but it's a parallel build, so who knows.

>  I guess you want to have $(TARGET) as prerequisite or
> add somewhere "$(TARGET)-syms: $(TARGET)".

That becomes cyclic with arch/*/Makefile which has:

$(TARGET): $(TARGET)-syms

The _install rule does make the implication that a dependency on
$(TARGET) builds $(TARGET)-syms so I guess that's good enough for _MAP too.

~Andrew



[ovmf test] 169401: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169401 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169401/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  393 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days5 attempts





Re: [PATCH] xen/build: Fix dependency for the MAP rule

2022-04-14 Thread Anthony PERARD
On Thu, Apr 14, 2022 at 05:23:48PM +0100, Andrew Cooper wrote:
> diff --git a/xen/Makefile b/xen/Makefile
> index dd05672ff42d..02a274f56dc0 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -599,7 +599,7 @@ cscope:
>   cscope -k -b -q
>  
>  .PHONY: _MAP
> -_MAP:
> +_MAP: $(TARGET)-syms

That's not going to work well as make isn't going to know how to build
$(TARGET)-syms. I guess you want to have $(TARGET) as prerequisite or
add somewhere "$(TARGET)-syms: $(TARGET)".

>   $(NM) -n $(TARGET)-syms | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] 
> \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' > System.map
>  
>  %.o %.i %.s: %.c tools_fixdep FORCE

Thanks,

-- 
Anthony PERARD



Re: [PATCH 3/3] x86/build: Clean up boot/Makefile

2022-04-14 Thread Anthony PERARD
On Thu, Apr 14, 2022 at 12:47:08PM +0100, Andrew Cooper wrote:
> There are no .S intermediate files, so rework in terms of head-bin-objs.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper 

The patch looks fine.

Reviewed-by: Anthony PERARD 

> ---
> I'm slightly -1 on this, because
> 
>   head-bin-objs := $(addprefix $(obj)/,$(head-bin-objs))
> 
> is substantial obfuscation which I'd prefer to bin.

It might be possible to do something that Kbuild does, which would be to
teach the build system to look for "$(head-objs)" or maybe
"$(head-bin-objs)" when it wants to build "head.o". That's something done
in Kbuild, I think, to build a module from several source files.

> Anthony: Why does dropping the targets += line interfere with incremental
> builds?  With it gone, *.bin are regenerated unconditionally, but I can't see
> what would cause that, nor why the normal dependencies on head.o don't work.

Try to build with "make V=2"; make will display why a target is being
rebuilt (when that target is built with $(if_changed, )).

$(targets) is used by Rules.mk to find out which dependency files (the
.cmd files) to load, and they are only loaded if the target exists. The
$(if_changed, ) macro then reruns the command if the prerequisites are
newer than the target or if the command has changed. Without the .cmd
file loaded, the macro would compare the new command to an empty value
and so rebuild the target.

Now, the *.bin files are regenerated because cmdline.o is being rebuilt,
mostly because make didn't load the record of the previous command run.

Thanks,

-- 
Anthony PERARD



[ovmf test] 169400: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169400 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169400/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  392 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days4 attempts





Re: [PATCH v1.1 2/3] x86/build: Don't convert boot/{cmdline,head}.bin back to .S

2022-04-14 Thread Anthony PERARD
On Thu, Apr 14, 2022 at 05:27:39PM +0100, Andrew Cooper wrote:
> There's no point wasting time converting binaries back to asm source.  Just
> use .incbin directly.  Explain in head.S what these binaries are.
> 
> Also, align the blobs.  While there's very little static data in the blobs,
> they should have at least 4 byte alignment.  There was previously no guarantee
> that cmdline_parse_early was aligned, and there is no longer an implicit
> 4-byte alignment between cmdline_parse_early and reloc caused by the use of
> .long.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper 
> ---
> diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
> index a5dd094836f6..0670e03b72e0 100644
> --- a/xen/arch/x86/boot/Makefile
> +++ b/xen/arch/x86/boot/Makefile
> @@ -10,7 +10,10 @@ head-srcs := $(addprefix $(obj)/, $(head-srcs))
>  ifdef building_out_of_srctree
>  $(obj)/head.o: CFLAGS-y += -iquote $(obj)

With this patch, we don't need the "-iquote" option above; it was only
useful for the two "#include"s that have been removed.

>  endif
> -$(obj)/head.o: $(head-srcs)
> +# For .incbin - add $(obj) to the include path and add the dependencies
> +# manually as they're not included in .d
> +$(obj)/head.o: AFLAGS-y += -Wa$(comma)-I$(obj)
> +$(obj)/head.o: $(head-srcs:.S=.bin)

The manual dependencies are needed because `make` needs to know which
other targets are needed before building "head.o". The .d files wouldn't
exist on a first build. I don't think a comment about that is really
necessary, but if there is one it should be about telling `make` to build
cmdline.bin and head.bin first.

Otherwise, the patch looks good.

Thanks,

-- 
Anthony PERARD



Re: [PATCH] xen/evtchn: Add design for static event channel signaling for domUs..

2022-04-14 Thread Stefano Stabellini
On Thu, 14 Apr 2022, Bertrand Marquis wrote:
> > On 14 Apr 2022, at 02:14, Stefano Stabellini  wrote:
> > 
> > On Mon, 11 Apr 2022, Bertrand Marquis wrote:
> >> What you mention here is actually combining 2 different solutions inside
> >> Xen to build a custom communication solution.
> >> My assumption here is that the user will actually create the device tree
> >> nodes he wants to do that and we should not create guest node entries
> >> as it would enforce some design.
> >> 
> >> If everything can be statically defined for Xen then the user can also
> >> statically define node entries inside his guest to make use of the events
> >> and the shared memories.
> >> 
> >> For example one might need more than one event to build a communication
> >> system, or more than one shared memory or could build something
> >> communicating with multiple guest thus requiring even more events and
> >> shared memories.
> > 
> > Hi Bertrand, Rahul,
> > 
> > If the guests are allowed some level of dynamic discovery, this feature
> > is not needed. They can discover the shared memory location from the
> > domU device tree, then proceed to allocate evtchns as needed and tell
> > the other end the evtchn numbers over shared memory. I already have an
> > example of it here:
> > 
> > https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/2251030537/Xen+Shared+Memory+and+Interrupts+Between+VMs
> > 
> > What if the guest doesn't support device tree at runtime, like baremetal
> > or Zephyr? The shared memory address can be hardcoded or generated from
> > device tree at build time. That's no problem. Then, the event channels
> > can still be allocated at runtime and passed to the other end over
> > shared memory. That's what the example on the wikipage does.
> > 
> > 
> > When are static event channels actually useful? When the application
> > cannot allocate the event channels at runtime at all. The reason for the
> > restriction could be related to safety (no dynamic allocations at
> > runtime) or convenience (everything else is fully static, why should the
> > event channel numbers be dynamic?)
> 
> An other use case here is dom0less: you cannot have dom0 create them.
> 
> > 
> > Given the above, I can see why there is no need to describe the static
> > event channel info in the domU device tree: static event channels are
> > only useful in fully static configurations, and in those configurations
> > the domU device tree dynamically generated by Xen is not needed. I can
> > see where you are coming from.
> > 
> > 
> > The workflow that we have been trying to enable with the System Device
> > Tree effort (System Device Tree is similar to a normal Device Tree plus
> > the xen,domains nodes) is the following:
> > 
> > S-DT ---[lopper]---> Linux DT
> >L--> Zephyr DT ---[Zephyr build]---> Zephyr .h files
> > 
> > S-DT contains all the needed information for both the regular Linux DT
> > generation and also the Zephyr/RTOS/baremetal header files generation,
> > that happens at build time.
> > 
> > S-DT is not the same as the Xen device tree, but so far it has been
> > conceptually and practically similar. I always imagine that the bindings
> > we have in Xen we'll also have corresponding bindings in System Device
> > Tree.
> > 
> > For this workflow to work S-DT needs all the info so that both Linux DT
> > and Zephyr DT and Zephyr .h files can be generated.
> > 
> > Does this proposal contain enough information so that Zephyr .h files
> > could be statically generated with the event channel numbers and static
> > shared memory regions addresses?
> > 
> > I am not sure. Maybe not?
> 
> Yes it should be possible to have all infos as the integrator will setup the
> system and will decide upfront the address and the event(s) number(s).
> 
> > 
> > 
> > It is possible that the shared memory usage is so application specific
> > that there is no point in even talking about it. But I think that
> > introducing a simple bundle of both event channels and shared memory
> > would help a lot.
> > 
> > Something like the following in the Xen device tree would be enough to
> > specify an arbitrary number of event channels connected with the same
> > domains sharing the memory region.
> > 
> > It looks like that if we did the below, we would carry a lot more useful
> > information compared to the original proposal alone. We could add a
> > similar xen,notification property to the domU reserved-memory region in
> > device tree generated by Xen for consistency, so that everything
> > available to the domU is described fully in device tree.
> > 
> > 
> >domU1 {
> >compatible = "xen,domain";
> > 
> >/* one sub-node per local event channel */
> >ec1: evtchn@1 {
> >compatible = "xen,evtchn-v1";
> >/* local-evtchn link-to-foreign-evtchn */
> >xen,evtchn = <0x1 >
> >};
> >ec2: evtchn@2 {
> >compatible = "xen,evtchn-v1";
> >xen,evtchn = <0x2 >

[ovmf test] 169399: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169399 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169399/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  391 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days3 attempts





Re: [PATCH 1/2] x86: improve .debug_line contents for assembly sources

2022-04-14 Thread Jan Beulich
On 14.04.2022 18:02, Roger Pau Monné wrote:
> On Thu, Apr 14, 2022 at 04:15:22PM +0200, Jan Beulich wrote:
>> On 14.04.2022 15:31, Roger Pau Monné wrote:
>>> On Thu, Apr 14, 2022 at 02:52:47PM +0200, Jan Beulich wrote:
 On 14.04.2022 14:40, Roger Pau Monné wrote:
> On Tue, Apr 12, 2022 at 12:27:34PM +0200, Jan Beulich wrote:
>> While future gas versions will allow line number information to be
>> generated for all instances of .irp and alike [1][2], the same isn't
>> true (nor immediately intended) for .macro [3]. Hence macros, when they
>> do more than just invoke another macro or issue an individual insn, want
>> to have .line directives (in header files also .file ones) in place.
>>
>> Signed-off-by: Jan Beulich 
>>
>> [1] 
>> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=7992631e8c0b0e711fbaba991348ef6f6e583725
>> [2] 
>> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=2ee1792bec225ea19c71095cee5a3a9ae6df7c59
>> [3] 
>> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=6d1ace6861e999361b30d1bc27459ab8094e0d4a
>> ---
>> Using .file has the perhaps undesirable side effect of generating a fair
>> amount of (all identical) STT_FILE entries in the symbol table. We also
>> can't use the supposedly assembler-internal (and hence undocumented)
>> .appfile anymore, as it was removed [4]. Note that .linefile (also
>> internal/undocumented) as well as the "#  " constructs the
>> compiler emits, leading to .linefile insertion by the assembler, aren't
>> of use anyway as these are processed and purged when processing .macro
>> [3].
>>
>> [4] 
>> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=c39e89c3aaa3a6790f85e80f2da5022bc4bce38b
>>
>> --- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
>> +++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
>> @@ -24,6 +24,8 @@
>>  #include 
>>  #include 
>>  
>> +#define FILE_AND_LINE .file __FILE__; .line __LINE__
>
> Seeing as this seems to get added to all macros below, I guess you did
> consider (and discarded) introducing a preprocessor macro do to the
> asm macro definitons:
>
> #define DECLARE_MACRO(n, ...) \
> .macro n __VA_ARGS__ \
> .file __FILE__; .line __LINE__

 No, I didn't even consider that. I view such as too obfuscating - there's
 then e.g. no visual match with the .endm. Furthermore, as outlined in the
 description, I don't think this wants applying uniformly. There are
 macros which better don't have this added. Yet I also would prefer to not
 end up with a mix of .macro and DECLARE_MACRO().
>>>
>>> I think it's a dummy question, but why would we want to add this to
>>> some macros?
>>>
>>> Isn't it better to always have the file and line reference where the
>>> macro gets used?
>>
>> Like said in the description, a macro simply invoking another macro,
>> or a macro simply wrapping a single insn, is likely better to have
>> its generated code associated with the original line number. Complex
>> macros, otoh, are imo often better to have line numbers associated
>> with actual macro contents. IOW to some degree I support the cited
>> workaround in binutils (which has been there for many years).
> 
> Seems a bit ad-hoc policy, but it's you and Andrew that mostly deal
> with this stuff, so if you are fine with it.

What other rule of thumb would you suggest? I'd be happy to take
suggestions rather than force in something which looks to be not
entirely uncontroversial.

> Acked-by: roger Pau Monné 

Thanks. Given the above, I guess I'll apply this only provisionally.

Jan




[PATCH v1.1 2/3] x86/build: Don't convert boot/{cmdline,head}.bin back to .S

2022-04-14 Thread Andrew Cooper
There's no point wasting time converting binaries back to asm source.  Just
use .incbin directly.  Explain in head.S what these binaries are.

Also, align the blobs.  While there's very little static data in the blobs,
they should have at least 4 byte alignment.  There was previously no guarantee
that cmdline_parse_early was aligned, and there is no longer an implicit
4-byte alignment between cmdline_parse_early and reloc caused by the use of
.long.

No functional change.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Roger Pau Monné 
CC: Wei Liu 
CC: Anthony PERARD 

v1.1:
 * Rebase over the out-of-tree build work

Cleanup to $(head-srcs) deferred to the subsequent patch to make the change
legible.
---
 xen/arch/x86/boot/Makefile |  9 -
 xen/arch/x86/boot/head.S   | 10 --
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
index a5dd094836f6..0670e03b72e0 100644
--- a/xen/arch/x86/boot/Makefile
+++ b/xen/arch/x86/boot/Makefile
@@ -10,7 +10,10 @@ head-srcs := $(addprefix $(obj)/, $(head-srcs))
 ifdef building_out_of_srctree
 $(obj)/head.o: CFLAGS-y += -iquote $(obj)
 endif
-$(obj)/head.o: $(head-srcs)
+# For .incbin - add $(obj) to the include path and add the dependencies
+# manually as they're not included in .d
+$(obj)/head.o: AFLAGS-y += -Wa$(comma)-I$(obj)
+$(obj)/head.o: $(head-srcs:.S=.bin)
 
 CFLAGS_x86_32 := $(subst -m64,-m32 -march=i686,$(XEN_TREEWIDE_CFLAGS))
 $(call cc-options-add,CFLAGS_x86_32,CC,$(EMBEDDED_EXTRA_CFLAGS))
@@ -24,10 +27,6 @@ CFLAGS_x86_32 += -I$(srctree)/include
 $(head-srcs:.S=.o): CFLAGS_stack_boundary :=
 $(head-srcs:.S=.o): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
 
-$(head-srcs): %.S: %.bin
-   (od -v -t x $< | tr -s ' ' | awk 'NR > 1 {print s} {s=$$0}' | \
-   sed 's/ /,0x/g' | sed 's/,0x$$//' | sed 's/^[0-9]*,/ .long /') >$@
-
 %.bin: %.lnk
$(OBJCOPY) -j .text -O binary $< $@
 
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 3db47197b841..0fb7dd3029f2 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -777,11 +777,17 @@ trampoline_setup:
 /* Jump into the relocated trampoline. */
 lret
 
+/*
+ * cmdline and reloc are written in C, and linked to be 32bit PIC with
+ * entrypoints at 0 and using the stdcall convention.
+ */
+ALIGN
 cmdline_parse_early:
-#include "cmdline.S"
+.incbin "cmdline.bin"
 
+ALIGN
 reloc:
-#include "reloc.S"
+.incbin "reloc.bin"
 
 ENTRY(trampoline_start)
 #include "trampoline.S"
-- 
2.11.0
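
The od-to-assembly pipeline being deleted can be exercised standalone on a made-up input blob, which shows what .incbin now replaces: od emits the binary as 4-byte hex words, the awk stage drops the trailing offset-only line, and the sed chain rewrites each data line as one .long directive (the byte order within each word follows host endianness).

```shell
cd "$(mktemp -d)"
# Eight arbitrary bytes standing in for cmdline.bin/reloc.bin.
printf '\001\002\003\004\005\006\007\010' > blob.bin
od -v -t x blob.bin | tr -s ' ' | awk 'NR > 1 {print s} {s=$0}' \
    | sed 's/ /,0x/g' | sed 's/,0x$//' | sed 's/^[0-9]*,/ .long /' > blob.S
cat blob.S
```

Assembling the resulting .long list and .incbin on the original blob produce the same bytes; the patch simply drops the round trip through text.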




[PATCH] xen/build: Fix dependency for the MAP rule

2022-04-14 Thread Andrew Cooper
Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Roger Pau Monné 
CC: Wei Liu 
CC: Anthony PERARD 
---
 xen/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index dd05672ff42d..02a274f56dc0 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -599,7 +599,7 @@ cscope:
cscope -k -b -q
 
 .PHONY: _MAP
-_MAP:
+_MAP: $(TARGET)-syms
$(NM) -n $(TARGET)-syms | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] 
\)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' > System.map
 
 %.o %.i %.s: %.c tools_fixdep FORCE
-- 
2.11.0




[ovmf test] 169398: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169398 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169398/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  390 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days2 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5103 lines long.)



Re: [PATCH 1/2] x86: improve .debug_line contents for assembly sources

2022-04-14 Thread Roger Pau Monné
On Thu, Apr 14, 2022 at 04:15:22PM +0200, Jan Beulich wrote:
> On 14.04.2022 15:31, Roger Pau Monné wrote:
> > On Thu, Apr 14, 2022 at 02:52:47PM +0200, Jan Beulich wrote:
> >> On 14.04.2022 14:40, Roger Pau Monné wrote:
> >>> On Tue, Apr 12, 2022 at 12:27:34PM +0200, Jan Beulich wrote:
>  While future gas versions will allow line number information to be
>  generated for all instances of .irp and alike [1][2], the same isn't
>  true (nor immediately intended) for .macro [3]. Hence macros, when they
>  do more than just invoke another macro or issue an individual insn, want
>  to have .line directives (in header files also .file ones) in place.
> 
>  Signed-off-by: Jan Beulich 
> 
>  [1] 
>  https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=7992631e8c0b0e711fbaba991348ef6f6e583725
>  [2] 
>  https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=2ee1792bec225ea19c71095cee5a3a9ae6df7c59
>  [3] 
>  https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=6d1ace6861e999361b30d1bc27459ab8094e0d4a
>  ---
>  Using .file has the perhaps undesirable side effect of generating a fair
>  amount of (all identical) STT_FILE entries in the symbol table. We also
>  can't use the supposedly assembler-internal (and hence undocumented)
>  .appfile anymore, as it was removed [4]. Note that .linefile (also
>  internal/undocumented) as well as the "#  " constructs the
>  compiler emits, leading to .linefile insertion by the assembler, aren't
>  of use anyway as these are processed and purged when processing .macro
>  [3].
> 
>  [4] 
>  https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=c39e89c3aaa3a6790f85e80f2da5022bc4bce38b
> 
>  --- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
>  +++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
>  @@ -24,6 +24,8 @@
>   #include 
>   #include 
>   
>  +#define FILE_AND_LINE .file __FILE__; .line __LINE__
> >>>
> >>> Seeing as this seems to get added to all macros below, I guess you did
> >>> consider (and discard) introducing a preprocessor macro to do the
> >>> asm macro definitions:
> >>>
> >>> #define DECLARE_MACRO(n, ...) \
> >>> .macro n __VA_ARGS__ \
> >>> .file __FILE__; .line __LINE__
> >>
> >> No, I didn't even consider that. I view such as too obfuscating - there's
> >> then e.g. no visual match with the .endm. Furthermore, as outlined in the
> >> description, I don't think this wants applying uniformly. There are
> >> macros which better don't have this added. Yet I also would prefer to not
> >> end up with a mix of .macro and DECLARE_MACRO().
> > 
> > I think it's a dummy question, but why would we want to add this to
> > some macros?
> > 
> > Isn't it better to always have the file and line reference where the
> > macro gets used?
> 
> Like said in the description, a macro simply invoking another macro,
> or a macro simply wrapping a single insn, is likely better to have
> its generated code associated with the original line number. Complex
> macros, otoh, are imo often better to have line numbers associated
> with actual macro contents. IOW to some degree I support the cited
> workaround in binutils (which has been there for many years).

Seems a bit of an ad-hoc policy, but it's you and Andrew who mostly deal
with this stuff, so if you are fine with it, so am I.

Acked-by: Roger Pau Monné 

Thanks, Roger.



Re: Virtio on Xen with Rust

2022-04-14 Thread Doug Goldstein


> On Apr 14, 2022, at 9:10 AM, Wei Liu  wrote:
> 
> On Thu, Apr 14, 2022 at 02:36:12PM +0100, Alex Bennée wrote:
>> 
>> Wei Liu  writes:
>> 
>>> On Thu, Apr 14, 2022 at 12:07:10PM +, Andrew Cooper wrote:
 On 14/04/2022 12:45, Wei Liu wrote:
> Hi Viresh
> 
> This is very cool.
> 
> On Thu, Apr 14, 2022 at 02:53:58PM +0530, Viresh Kumar wrote:
>> +xen-devel
>> 
>>> On 14-04-22, 14:45, Viresh Kumar wrote:
 Hello,
 
 We verified our hypervisor-agnostic Rust-based vhost-user backends
 with a Qemu-based setup earlier, and there was growing concern about
 whether they were truly hypervisor-agnostic.
 
 In order to prove that, we decided to give it a try with Xen, a type-1
 bare-metal hypervisor.
 
 We are happy to announce that we were able to make progress on that 
 front and
 have a working setup where we can test our existing Rust based 
 backends, like
 I2C, GPIO, RNG (though only I2C is tested as of now) over Xen.
 
 Key components:
 --
 
 - Xen: https://github.com/vireshk/xen
 
  Xen requires MMIO and device-specific support in order to populate the
  required devices in the guest. This tree contains four patches on top
  of mainline Xen, two from Oleksandr (mmio/disk) and two from me (I2C).
 
 - libxen-sys: https://github.com/vireshk/libxen-sys
 
  We currently depend on the userspace tools/libraries provided by Xen,
  like xendevicemodel, xenevtchn, xenforeignmemory, etc. This crate
  provides Rust wrappers over those calls, generated automatically with
  the help of the bindgen utility in Rust, allowing us to use the
  installed Xen libraries. Though we plan to replace this with the
  Rust-based "oxerun" (see below) in the longer run.
 
 - oxerun (WIP): 
 https://gitlab.com/mathieupoirier/oxerun/-/tree/xen-ioctls
 
  This is a Rust-based implementation of the ioctls and hypercalls to
  Xen. This is WIP and should eventually replace the "libxen-sys" crate
  entirely (which is a C-based implementation of the same).
 
>> I'm curious to learn why there is a need to replace libxen-sys with the
>> pure Rust implementation. Those libraries (xendevicemodel, xenevtchn,
>> xenforeignmemory) are very stable and battle tested. Their interfaces
>> are stable.
> 
> Very easy.  The library APIs are a mess even if they are technically
> stable, and violate various commonly-agreed rules of being a library, such
> as not messing with stdout/stderr behind the application's back, and
> everything gets simpler when you remove an unnecessary level of C
> indirection.
>>> 
>>> You don't have to use the stdio logger FWIW. I don't disagree things can
>>> be simpler though.
>> 
>> Not directly related to this use case but the Rust API can also be
>> built to make direct HYP calls which will be useful for building Rust
>> based unikernels that need to interact with Xen. For example for a
>> dom0less system running a very minimal heartbeat/healthcheck monitor
>> written in pure rust.
>> 
> 
> I think this is a strong reason for not using existing C libraries. It
> would be nice if the APIs can work with no_std.

This was the goal I had with the way I structured the xen-sys crate.
> 
>> We would also like to explore unikernel virtio backends but I suspect
>> currently the rest of the rust-vmm virtio bits assume a degree of
>> POSIX-like userspace to set things up.
> 
Same area I had an interest in as well. I played with a xenstore
implementation in a unikernel, too. Some of the code was published, but
unfortunately the actual functional bits were not.

—
Doug



PROPOSAL: Delete www-archive.xenproject.org

2022-04-14 Thread George Dunlap
I’m pretty sure www-archive.xenproject.org is at least N-2 for websites; last 
updated nearly 9 years ago.  As far as I can tell there’s nothing terribly 
interesting stored on the site itself.  I’m going to pursue deleting it within 
4 weeks unless someone objects.

 -George






[ovmf test] 169397: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169397 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169397/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf f3da13461cbed699e54b1d7ef3fba5144cc3b3b4
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  389 attempts
Testing same since   169397  2022-04-14 15:10:23 Z0 days1 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5103 lines long.)



[ovmf test] 169396: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169396 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169396/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 0c901fcc200e411b78b9ca42d07d5ea4aaa13b21
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  388 attempts
Testing same since   169385  2022-04-14 03:40:29 Z0 days   11 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5086 lines long.)



Xen 4.14.5 released

2022-04-14 Thread Jan Beulich
All,

we're pleased to announce the out-of-band release of Xen 4.14.5. This is
available immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.14
(tag RELEASE-4.14.5) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-14-series/xen-project-4-14-5/
(where a list of changes can also be found).

We recommend that all users of the 4.14 stable series update to this
point release, which is now hopefully really the last one the Xen
Project team will make from this branch.

Regards, Jan




Xen 4.16.1 released

2022-04-14 Thread Jan Beulich
All,

we're pleased to announce the release of Xen 4.16.1. This is available
immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.16
(tag RELEASE-4.16.1) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-16-series/xen-project-4-16-1/
(where a list of changes can also be found).

We recommend that all users of the 4.16 stable series update to this
first point release.

Regards, Jan




Re: [PATCH 1/2] x86: improve .debug_line contents for assembly sources

2022-04-14 Thread Jan Beulich
On 14.04.2022 15:31, Roger Pau Monné wrote:
> On Thu, Apr 14, 2022 at 02:52:47PM +0200, Jan Beulich wrote:
>> On 14.04.2022 14:40, Roger Pau Monné wrote:
>>> On Tue, Apr 12, 2022 at 12:27:34PM +0200, Jan Beulich wrote:
 While future gas versions will allow line number information to be
 generated for all instances of .irp and alike [1][2], the same isn't
 true (nor immediately intended) for .macro [3]. Hence macros, when they
 do more than just invoke another macro or issue an individual insn, want
 to have .line directives (in header files also .file ones) in place.

 Signed-off-by: Jan Beulich 

 [1] 
 https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=7992631e8c0b0e711fbaba991348ef6f6e583725
 [2] 
 https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=2ee1792bec225ea19c71095cee5a3a9ae6df7c59
 [3] 
 https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=6d1ace6861e999361b30d1bc27459ab8094e0d4a
 ---
 Using .file has the perhaps undesirable side effect of generating a fair
 amount of (all identical) STT_FILE entries in the symbol table. We also
 can't use the supposedly assembler-internal (and hence undocumented)
 .appfile anymore, as it was removed [4]. Note that .linefile (also
 internal/undocumented) as well as the "#  " constructs the
 compiler emits, leading to .linefile insertion by the assembler, aren't
 of use anyway as these are processed and purged when processing .macro
 [3].

 [4] 
 https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=c39e89c3aaa3a6790f85e80f2da5022bc4bce38b

 --- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
 +++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
 @@ -24,6 +24,8 @@
  #include 
  #include 
  
 +#define FILE_AND_LINE .file __FILE__; .line __LINE__
>>>
>>> Seeing as this seems to get added to all macros below, I guess you did
> > > consider (and discard) introducing a preprocessor macro to do the
> > > asm macro definitions:
>>>
>>> #define DECLARE_MACRO(n, ...) \
>>> .macro n __VA_ARGS__ \
>>> .file __FILE__; .line __LINE__
>>
>> No, I didn't even consider that. I view such as too obfuscating - there's
>> then e.g. no visual match with the .endm. Furthermore, as outlined in the
>> description, I don't think this wants applying uniformly. There are
>> macros which better don't have this added. Yet I also would prefer to not
>> end up with a mix of .macro and DECLARE_MACRO().
> 
> I think it's a dummy question, but why would we want to add this to
> some macros?
> 
> Isn't it better to always have the file and line reference where the
> macro gets used?

Like said in the description, a macro simply invoking another macro,
or a macro simply wrapping a single insn, is likely better to have
its generated code associated with the original line number. Complex
macros, otoh, are imo often better to have line numbers associated
with actual macro contents. IOW to some degree I support the cited
workaround in binutils (which has been there for many years).

Jan




Re: Virtio on Xen with Rust

2022-04-14 Thread Wei Liu
On Thu, Apr 14, 2022 at 02:36:12PM +0100, Alex Bennée wrote:
> 
> Wei Liu  writes:
> 
> > On Thu, Apr 14, 2022 at 12:07:10PM +, Andrew Cooper wrote:
> >> On 14/04/2022 12:45, Wei Liu wrote:
> >> > Hi Viresh
> >> >
> >> > This is very cool.
> >> >
> >> > On Thu, Apr 14, 2022 at 02:53:58PM +0530, Viresh Kumar wrote:
> >> >> +xen-devel
> >> >>
> >> >> On 14-04-22, 14:45, Viresh Kumar wrote:
> >> >>> Hello,
> >> >>>
> >> >>> We verified our hypervisor-agnostic Rust based vhost-user backends 
> >> >>> with Qemu
> >> >>> based setup earlier, and there was growing concern if they were truly
> >> >>> hypervisor-agnostic.
> >> >>>
> >> >>> In order to prove that, we decided to give it a try with Xen, a type-1
> >> >>> bare-metal hypervisor.
> >> >>>
> >> >>> We are happy to announce that we were able to make progress on that 
> >> >>> front and
> >> >>> have a working setup where we can test our existing Rust based 
> >> >>> backends, like
> >> >>> I2C, GPIO, RNG (though only I2C is tested as of now) over Xen.
> >> >>>
> >> >>> Key components:
> >> >>> --
> >> >>>
> >> >>> - Xen: https://github.com/vireshk/xen
> >> >>>
> >> >>>   Xen requires MMIO and device specific support in order to populate 
> >> >>> the
> >> >>>   required devices at the guest. This tree contains four patches on 
> >> >>> the top of
> >> >>>   mainline Xen, two from Oleksandr (mmio/disk) and two from me (I2C).
> >> >>>
> >> >>> - libxen-sys: https://github.com/vireshk/libxen-sys
> >> >>>
> >> >>>   We currently depend on the userspace tools/libraries provided by 
> >> >>> Xen, like
> >> >>>   xendevicemodel, xenevtchn, xenforeignmemory, etc. This crate 
> >> >>> provides Rust
> >> >>>   wrappers over those calls, generated automatically with help of 
> >> >>> bindgen
> >> >>>   utility in Rust, that allow us to use the installed Xen libraries. 
> >> >>> Though we
> >> >>>   plan to replace this with Rust based "oxerun" (find below) in longer 
> >> >>> run.
> >> >>>
> >> >>> - oxerun (WIP): 
> >> >>> https://gitlab.com/mathieupoirier/oxerun/-/tree/xen-ioctls
> >> >>>
> >> >>>   This is a Rust-based implementation of the ioctls and hypercalls to Xen.
> >> >>> This is WIP
> >> >>>   and should eventually replace the "libxen-sys" crate entirely (which is a
> >> >>> C-based
> >> >>>   implementation of the same).
> >> >>>
> >> > I'm curious to learn why there is a need to replace libxen-sys with the
> >> > pure Rust implementation. Those libraries (xendevicemodel, xenevtchn,
> >> > xenforeignmemory) are very stable and battle tested. Their interfaces
> >> > are stable.
> >> 
> >> Very easy.  The library APIs are a mess even if they are technically
> >> stable, and violate various commonly-agreed rules of being a library, such
> >> as not messing with stdout/stderr behind the application's back, and
> >> everything gets simpler when you remove an unnecessary level of C
> >> indirection.
> >
> > You don't have to use the stdio logger FWIW. I don't disagree things can
> > be simpler though.
> 
> Not directly related to this use case but the Rust API can also be
> built to make direct HYP calls which will be useful for building Rust
> based unikernels that need to interact with Xen. For example for a
> dom0less system running a very minimal heartbeat/healthcheck monitor
> written in pure rust.
> 

I think this is a strong reason for not using existing C libraries. It
would be nice if the APIs can work with no_std.

> We would also like to explore unikernel virtio backends but I suspect
> currently the rest of the rust-vmm virtio bits assume a degree of
> POSIX-like userspace to set things up.

Indeed.

Thanks,
Wei.

> 
> -- 
> Alex Bennée



Re: Virtio on Xen with Rust

2022-04-14 Thread Alex Bennée


Wei Liu  writes:

> On Thu, Apr 14, 2022 at 12:07:10PM +, Andrew Cooper wrote:
>> On 14/04/2022 12:45, Wei Liu wrote:
>> > Hi Viresh
>> >
>> > This is very cool.
>> >
>> > On Thu, Apr 14, 2022 at 02:53:58PM +0530, Viresh Kumar wrote:
>> >> +xen-devel
>> >>
>> >> On 14-04-22, 14:45, Viresh Kumar wrote:
>> >>> Hello,
>> >>>
>> >>> We verified our hypervisor-agnostic Rust based vhost-user backends with 
>> >>> Qemu
>> >>> based setup earlier, and there was growing concern if they were truly
>> >>> hypervisor-agnostic.
>> >>>
>> >>> In order to prove that, we decided to give it a try with Xen, a type-1
>> >>> bare-metal hypervisor.
>> >>>
>> >>> We are happy to announce that we were able to make progress on that 
>> >>> front and
>> >>> have a working setup where we can test our existing Rust based backends, 
>> >>> like
>> >>> I2C, GPIO, RNG (though only I2C is tested as of now) over Xen.
>> >>>
>> >>> Key components:
>> >>> --
>> >>>
>> >>> - Xen: https://github.com/vireshk/xen
>> >>>
>> >>>   Xen requires MMIO and device specific support in order to populate the
>> >>>   required devices at the guest. This tree contains four patches on the 
>> >>> top of
>> >>>   mainline Xen, two from Oleksandr (mmio/disk) and two from me (I2C).
>> >>>
>> >>> - libxen-sys: https://github.com/vireshk/libxen-sys
>> >>>
>> >>>   We currently depend on the userspace tools/libraries provided by Xen, 
>> >>> like
>> >>>   xendevicemodel, xenevtchn, xenforeignmemory, etc. This crate provides 
>> >>> Rust
>> >>>   wrappers over those calls, generated automatically with help of bindgen
>> >>>   utility in Rust, that allow us to use the installed Xen libraries. 
>> >>> Though we
>> >>>   plan to replace this with Rust based "oxerun" (find below) in longer 
>> >>> run.
>> >>>
>> >>> - oxerun (WIP): 
>> >>> https://gitlab.com/mathieupoirier/oxerun/-/tree/xen-ioctls
>> >>>
>> >>>   This is a Rust-based implementation of the ioctls and hypercalls to Xen.
>> >>> This is WIP
>> >>>   and should eventually replace the "libxen-sys" crate entirely (which is a
>> >>> C-based
>> >>>   implementation of the same).
>> >>>
>> > I'm curious to learn why there is a need to replace libxen-sys with the
>> > pure Rust implementation. Those libraries (xendevicemodel, xenevtchn,
>> > xenforeignmemory) are very stable and battle tested. Their interfaces
>> > are stable.
>> 
>> Very easy.  The library APIs are a mess even if they are technically
>> stable, and violate various commonly-agreed rules of being a library, such
>> as not messing with stdout/stderr behind the application's back, and
>> everything gets simpler when you remove an unnecessary level of C
>> indirection.
>
> You don't have to use the stdio logger FWIW. I don't disagree things can
> be simpler though.

Not directly related to this use case but the Rust API can also be
built to make direct HYP calls which will be useful for building Rust
based unikernels that need to interact with Xen. For example for a
dom0less system running a very minimal heartbeat/healthcheck monitor
written in pure rust.

We would also like to explore unikernel virtio backends but I suspect
currently the rest of the rust-vmm virtio bits assume a degree of
POSIX-like userspace to set things up.

-- 
Alex Bennée



[ovmf test] 169395: regressions - FAIL

2022-04-14 Thread osstest service owner
flight 169395 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/169395/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64   6 xen-buildfail REGR. vs. 168254
 build-amd64-xsm   6 xen-buildfail REGR. vs. 168254
 build-i386-xsm6 xen-buildfail REGR. vs. 168254
 build-i3866 xen-buildfail REGR. vs. 168254

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a

version targeted for testing:
 ovmf 0c901fcc200e411b78b9ca42d07d5ea4aaa13b21
baseline version:
 ovmf b1b89f9009f2390652e0061bd7b24fc40732bc70

Last test of basis   168254  2022-02-28 10:41:46 Z   45 days
Failing since168258  2022-03-01 01:55:31 Z   44 days  387 attempts
Testing same since   169385  2022-04-14 03:40:29 Z0 days   10 attempts


People who touched revisions under test:
  Abdul Lateef Attar 
  Abdul Lateef Attar via groups.io 
  Abner Chang 
  Akihiko Odaki 
  Anthony PERARD 
  Bob Feng 
  Chen Lin Z 
  Chen, Lin Z 
  Dandan Bi 
  Feng, Bob C 
  Gerd Hoffmann 
  Guo Dong 
  Guomin Jiang 
  Hao A Wu 
  Heng Luo 
  Hua Ma 
  Huang, Li-Xia 
  Jagadeesh Ujja 
  Jason 
  Jason Lou 
  Ken Lautner 
  Kenneth Lautner 
  Kuo, Ted 
  Laszlo Ersek 
  Lean Sheng Tan 
  Leif Lindholm 
  Li, Zhihao 
  Liming Gao 
  Liu 
  Liu Yun 
  Liu Yun Y 
  Lixia Huang 
  Lou, Yun 
  Ma, Hua 
  Mara Sophie Grosch 
  Mara Sophie Grosch via groups.io 
  Matt DeVillier 
  Michael D Kinney 
  Michael Kubacki 
  Michael Kubacki 
  Min Xu 
  Oliver Steffen 
  Patrick Rudolph 
  Purna Chandra Rao Bandaru 
  Ray Ni 
  Rebecca Cran 
  Sami Mujawar 
  Sean Rhodes 
  Sean Rhodes sean@starlabs.systems
  Sebastien Boeuf 
  Sunny Wang 
  Ted Kuo 
  Wenyi Xie 
  wenyi,xie via groups.io 
  Xiaolu.Jiang 
  Xie, Yuanhao 
  Yi Li 
  yi1 li 
  Yuanhao Xie 
  Zhihao Li 

jobs:
 build-amd64-xsm  fail
 build-i386-xsm   fail
 build-amd64  fail
 build-i386   fail
 build-amd64-libvirt  blocked 
 build-i386-libvirt   blocked 
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64  blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5086 lines long.)



Re: [PATCH 1/2] x86: improve .debug_line contents for assembly sources

2022-04-14 Thread Roger Pau Monné
On Thu, Apr 14, 2022 at 03:31:26PM +0200, Roger Pau Monné wrote:
> On Thu, Apr 14, 2022 at 02:52:47PM +0200, Jan Beulich wrote:
> > On 14.04.2022 14:40, Roger Pau Monné wrote:
> > > On Tue, Apr 12, 2022 at 12:27:34PM +0200, Jan Beulich wrote:
> > >> While future gas versions will allow line number information to be
> > >> generated for all instances of .irp and alike [1][2], the same isn't
> > >> true (nor immediately intended) for .macro [3]. Hence macros, when they
> > >> do more than just invoke another macro or issue an individual insn, want
> > >> to have .line directives (in header files also .file ones) in place.
> > >>
> > >> Signed-off-by: Jan Beulich 
> > >>
> > >> [1] 
> > >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=7992631e8c0b0e711fbaba991348ef6f6e583725
> > >> [2] 
> > >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=2ee1792bec225ea19c71095cee5a3a9ae6df7c59
> > >> [3] 
> > >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=6d1ace6861e999361b30d1bc27459ab8094e0d4a
> > >> ---
> > >> Using .file has the perhaps undesirable side effect of generating a fair
> > >> amount of (all identical) STT_FILE entries in the symbol table. We also
> > >> can't use the supposedly assembler-internal (and hence undocumented)
> > >> .appfile anymore, as it was removed [4]. Note that .linefile (also
> > >> internal/undocumented) as well as the "#  " constructs the
> > >> compiler emits, leading to .linefile insertion by the assembler, aren't
> > >> of use anyway as these are processed and purged when processing .macro
> > >> [3].
> > >>
> > >> [4] 
> > >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=c39e89c3aaa3a6790f85e80f2da5022bc4bce38b
> > >>
> > >> --- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
> > >> +++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
> > >> @@ -24,6 +24,8 @@
> > >>  #include 
> > >>  #include 
> > >>  
> > >> +#define FILE_AND_LINE .file __FILE__; .line __LINE__
> > > 
> > > Seeing as this seems to get added to all macros below, I guess you did
> > > consider (and discarded) introducing a preprocessor macro to do the
> > > asm macro definitions:
> > > 
> > > #define DECLARE_MACRO(n, ...) \
> > > .macro n __VA_ARGS__ \
> > > .file __FILE__; .line __LINE__
> > 
> > No, I didn't even consider that. I view such as too obfuscating - there's
> > then e.g. no visual match with the .endm. Furthermore, as outlined in the
> > description, I don't think this wants applying uniformly. There are
> > macros which better don't have this added. Yet I also would prefer to not
> > end up with a mix of .macro and DECLARE_MACRO().
> 
> I think it's a dummy question, but why would we want to add this to
                                              ^n't

Sorry.



Re: [PATCH 1/2] x86: improve .debug_line contents for assembly sources

2022-04-14 Thread Roger Pau Monné
On Thu, Apr 14, 2022 at 02:52:47PM +0200, Jan Beulich wrote:
> On 14.04.2022 14:40, Roger Pau Monné wrote:
> > On Tue, Apr 12, 2022 at 12:27:34PM +0200, Jan Beulich wrote:
> >> While future gas versions will allow line number information to be
> >> generated for all instances of .irp and alike [1][2], the same isn't
> >> true (nor immediately intended) for .macro [3]. Hence macros, when they
> >> do more than just invoke another macro or issue an individual insn, want
> >> to have .line directives (in header files also .file ones) in place.
> >>
> >> Signed-off-by: Jan Beulich 
> >>
> >> [1] 
> >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=7992631e8c0b0e711fbaba991348ef6f6e583725
> >> [2] 
> >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=2ee1792bec225ea19c71095cee5a3a9ae6df7c59
> >> [3] 
> >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=6d1ace6861e999361b30d1bc27459ab8094e0d4a
> >> ---
> >> Using .file has the perhaps undesirable side effect of generating a fair
> >> amount of (all identical) STT_FILE entries in the symbol table. We also
> >> can't use the supposedly assembler-internal (and hence undocumented)
> >> .appfile anymore, as it was removed [4]. Note that .linefile (also
> >> internal/undocumented) as well as the "# <line> <file>" constructs the
> >> compiler emits, leading to .linefile insertion by the assembler, aren't
> >> of use anyway as these are processed and purged when processing .macro
> >> [3].
> >>
> >> [4] 
> >> https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=c39e89c3aaa3a6790f85e80f2da5022bc4bce38b
> >>
> >> --- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
> >> +++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
> >> @@ -24,6 +24,8 @@
> >>  #include 
> >>  #include 
> >>  
> >> +#define FILE_AND_LINE .file __FILE__; .line __LINE__
> > 
> > Seeing as this seems to get added to all macros below, I guess you did
> > consider (and discarded) introducing a preprocessor macro to do the
> > asm macro definitions:
> > 
> > #define DECLARE_MACRO(n, ...) \
> > .macro n __VA_ARGS__ \
> > .file __FILE__; .line __LINE__
> 
> No, I didn't even consider that. I view such as too obfuscating - there's
> then e.g. no visual match with the .endm. Furthermore, as outlined in the
> description, I don't think this wants applying uniformly. There are
> macros which better don't have this added. Yet I also would prefer to not
> end up with a mix of .macro and DECLARE_MACRO().

I think it's a dummy question, but why would we want to add this to
some macros?

Isn't it better to always have the file and line reference where the
macro gets used?

Thanks, Roger.
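[Editor's illustration: the directives under discussion can be sketched as follows. The FILE_AND_LINE define is taken from the quoted patch; the macro name and body here are invented, and this is only a sketch of the intended usage pattern, not code from the series.]

```asm
/* In a preprocessed assembly header (.h included from .S files): */
#define FILE_AND_LINE .file __FILE__; .line __LINE__

.macro SAVE_AND_CLEAR reg          /* hypothetical multi-insn macro */
    FILE_AND_LINE                  /* credit the following insns to this
                                      header in .debug_line, rather than
                                      leaving them without line info */
    push %\reg
    xor  %\reg, %\reg
.endm
```

A macro that merely forwards to another macro or emits a single insn would, per the description, not need the annotation.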



Re: [PATCH] xen/evtchn: Add design for static event channel signaling for domUs..

2022-04-14 Thread Bertrand Marquis
Hi Stefano,

> On 14 Apr 2022, at 02:14, Stefano Stabellini  wrote:
> 
> On Mon, 11 Apr 2022, Bertrand Marquis wrote:
>> What you mention here is actually combining 2 different solutions inside
>> Xen to build a custom communication solution.
>> My assumption here is that the user will actually create the device tree
>> nodes he wants to do that and we should not create guest node entries
>> as it would enforce some design.
>> 
>> If everything can be statically defined for Xen then the user can also
>> statically define node entries inside his guest to make use of the events
>> and the shared memories.
>> 
>> For example one might need more than one event to build a communication
>> system, or more than one shared memory or could build something
>> communicating with multiple guest thus requiring even more events and
>> shared memories.
> 
> Hi Bertrand, Rahul,
> 
> If the guests are allowed some level of dynamic discovery, this feature
> is not needed. They can discover the shared memory location from the
> domU device tree, then proceed to allocate evtchns as needed and tell
> the other end the evtchn numbers over shared memory. I already have an
> example of it here:
> 
> https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/2251030537/Xen+Shared+Memory+and+Interrupts+Between+VMs
> 
> What if the guest doesn't support device tree at runtime, like baremetal
> or Zephyr? The shared memory address can be hardcoded or generated from
> device tree at build time. That's no problem. Then, the event channels
> can still be allocated at runtime and passed to the other end over
> shared memory. That's what the example on the wikipage does.
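[Editor's illustration: the runtime handshake described above — each side allocates its event channels dynamically, then advertises the port numbers to the peer over the shared page — can be sketched in C. The page layout and magic value below are pure assumptions agreed by convention between the guests, not any Xen-defined interface, and the real allocation call (e.g. via libxenevtchn) is stubbed out.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout of each guest's half of the shared page;
 * both guests must agree on it by convention. */
struct shm_hello {
    uint32_t magic;     /* set once my_port below is valid */
    uint32_t my_port;   /* evtchn port this side allocated at runtime */
};

#define SHM_HELLO_MAGIC 0x58454e31u /* arbitrary marker value */

/* Stand-in for a real runtime allocation of an unbound event channel. */
static uint32_t alloc_evtchn_stub(void) { return 42; }

/* Publish our dynamically allocated port in our half of the page. */
static void publish_port(volatile struct shm_hello *mine)
{
    mine->my_port = alloc_evtchn_stub();
    __sync_synchronize();           /* order the store before the flag */
    mine->magic = SHM_HELLO_MAGIC;
}

/* Poll the peer's half; returns 1 once its port is valid. */
static int read_peer_port(volatile const struct shm_hello *theirs,
                          uint32_t *port)
{
    if (theirs->magic != SHM_HELLO_MAGIC)
        return 0;                   /* not published yet, retry later */
    __sync_synchronize();
    *port = theirs->my_port;
    return 1;
}
```

Once each side has read the peer's port, it can bind to it and signal normally; no static event channel configuration is required for this scheme.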
> 
> 
> When are static event channels actually useful? When the application
> cannot allocate the event channels at runtime at all. The reason for the
> restriction could be related to safety (no dynamic allocations at
> runtime) or convenience (everything else is fully static, why should the
> event channel numbers be dynamic?)

Another use case here is dom0less: you cannot have dom0 create them.

> 
> Given the above, I can see why there is no need to describe the static
> event channel info in the domU device tree: static event channels are
> only useful in fully static configurations, and in those configurations
> the domU device tree dynamically generated by Xen is not needed. I can
> see where you are coming from.
> 
> 
> The workflow that we have been trying to enable with the System Device
> Tree effort (System Device Tree is similar to a normal Device Tree plus
> the xen,domains nodes) is the following:
> 
> S-DT ---[lopper]---> Linux DT
>L--> Zephyr DT ---[Zephyr build]---> Zephyr .h files
> 
> S-DT contains all the needed information for both the regular Linux DT
> generation and also the Zephyr/RTOS/baremetal header files generation,
> that happens at build time.
> 
> S-DT is not the same as the Xen device tree, but so far it has been
> conceptually and practically similar. I always imagine that the bindings
> we have in Xen we'll also have corresponding bindings in System Device
> Tree.
> 
> For this workflow to work S-DT needs all the info so that both Linux DT
> and Zephyr DT and Zephyr .h files can be generated.
> 
> Does this proposal contain enough information so that Zephyr .h files
> could be statically generated with the event channel numbers and static
> shared memory regions addresses?
> 
> I am not sure. Maybe not?

Yes, it should be possible to have all the info, as the integrator will set up
the system and will decide the address and the event number(s) upfront.

> 
> 
> It is possible that the shared memory usage is so application specific
> that there is no point in even talking about it. But I think that
> introducing a simple bundle of both event channels and shared memory
> would help a lot.
> 
> Something like the following in the Xen device tree would be enough to
> specify an arbitrary number of event channels connected with the same
> domains sharing the memory region.
> 
> It looks like that if we did the below, we would carry a lot more useful
> information compared to the original proposal alone. We could add a
> similar xen,notification property to the domU reserved-memory region in
> device tree generated by Xen for consistency, so that everything
> available to the domU is described fully in device tree.
> 
> 
>domU1 {
>compatible = "xen,domain";
> 
>/* one sub-node per local event channel */
>ec1: evtchn@1 {
>compatible = "xen,evtchn-v1";
>/* local-evtchn link-to-foreign-evtchn */
>xen,evtchn = <0x1 >
>};
>ec2: evtchn@2 {
>compatible = "xen,evtchn-v1";
>xen,evtchn = <0x2 >
>};
>/*
> * shared memory region between DomU1 and DomU2.
> */
>domU1-shared-mem@5000 {
>compatible = "xen,domain-shared-memory-v1";
>xen,shm-id = <0x1>;
>xen,shared-mem 
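[Editor's illustration: the quoted node is truncated by the archive. A completed version might read as follows — the unit address, cell values, cell ordering, and the xen,notification property are assumptions for illustration, not a defined binding.]

```dts
domU1-shared-mem@50000000 {
    compatible = "xen,domain-shared-memory-v1";
    xen,shm-id = <0x1>;
    /* assumed cell order: host address, guest address, size */
    xen,shared-mem = <0x50000000 0x70000000 0x10000000>;
    /* hypothetical property bundling the evtchn nodes with the region */
    xen,notification = <&ec1 &ec2>;
};
```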

Re: [PATCH V5 1/2] xen/arm: Add i.MX lpuart driver

2022-04-14 Thread Bertrand Marquis



> On 14 Apr 2022, at 14:02, Bertrand Marquis  wrote:
> 
> Hi Peng,
> 
>> On 14 Apr 2022, at 08:44, Peng Fan (OSS)  wrote:
>> 
>> From: Peng Fan 
>> 
>> The i.MX LPUART Documentation:
>> https://www.nxp.com/webapp/Download?colCode=IMX8QMIEC
>> Chapter 13.6 Low Power Universal Asynchronous Receiver/
>> Transmitter (LPUART)
>> 
>> Tested-by: Henry Wang 
>> Signed-off-by: Peng Fan 
> Asked-by: Bertrand Marquis 
Acked-by: Bertrand Marquis 

(Auto correct, sorry for that)

Bertrand

> 
> I did not check the code but enough people went through this so I think it 
> can be merged.
> 
> Cheers
> Bertrand
> 
>> ---
>> xen/arch/arm/include/asm/imx-lpuart.h |  64 ++
>> xen/drivers/char/Kconfig  |   7 +
>> xen/drivers/char/Makefile |   1 +
>> xen/drivers/char/imx-lpuart.c | 276 ++
>> 4 files changed, 348 insertions(+)
>> create mode 100644 xen/arch/arm/include/asm/imx-lpuart.h
>> create mode 100644 xen/drivers/char/imx-lpuart.c
>> 
>> diff --git a/xen/arch/arm/include/asm/imx-lpuart.h 
>> b/xen/arch/arm/include/asm/imx-lpuart.h
>> new file mode 100644
>> index 00..fe859045dc
>> --- /dev/null
>> +++ b/xen/arch/arm/include/asm/imx-lpuart.h
>> @@ -0,0 +1,64 @@
>> +/*
>> + * xen/arch/arm/include/asm/imx-lpuart.h
>> + *
>> + * Common constant definition between early printk and the LPUART driver
>> + *
>> + * Peng Fan 
>> + * Copyright 2022 NXP
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + */
>> +
>> +#ifndef __ASM_ARM_IMX_LPUART_H__
>> +#define __ASM_ARM_IMX_LPUART_H__
>> +
>> +/* 32-bit register definition */
>> +#define UARTBAUD  (0x10)
>> +#define UARTSTAT  (0x14)
>> +#define UARTCTRL  (0x18)
>> +#define UARTDATA  (0x1C)
>> +#define UARTMATCH (0x20)
>> +#define UARTMODIR (0x24)
>> +#define UARTFIFO  (0x28)
>> +#define UARTWATER (0x2c)
>> +
>> +#define UARTSTAT_TDRE BIT(23, UL)
>> +#define UARTSTAT_TC   BIT(22, UL)
>> +#define UARTSTAT_RDRF BIT(21, UL)
>> +#define UARTSTAT_OR   BIT(19, UL)
>> +
>> +#define UARTBAUD_OSR_SHIFT (24)
>> +#define UARTBAUD_OSR_MASK (0x1f)
>> +#define UARTBAUD_SBR_MASK (0x1fff)
>> +#define UARTBAUD_BOTHEDGE (0x0002)
>> +#define UARTBAUD_TDMAE (0x0080)
>> +#define UARTBAUD_RDMAE (0x0020)
>> +
>> +#define UARTCTRL_TIE  BIT(23, UL)
>> +#define UARTCTRL_TCIE BIT(22, UL)
>> +#define UARTCTRL_RIE  BIT(21, UL)
>> +#define UARTCTRL_ILIE BIT(20, UL)
>> +#define UARTCTRL_TE   BIT(19, UL)
>> +#define UARTCTRL_RE   BIT(18, UL)
>> +#define UARTCTRL_M    BIT(4, UL)
>> +
>> +#define UARTWATER_RXCNT_OFF 24
>> +
>> +#endif /* __ASM_ARM_IMX_LPUART_H__ */
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
>> index 2ff5b288e2..e5f7b1d8eb 100644
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -13,6 +13,13 @@ config HAS_CADENCE_UART
>>This selects the Xilinx Zynq Cadence UART. If you have a Xilinx Zynq
>>based board, say Y.
>> 
>> +config HAS_IMX_LPUART
>> +bool "i.MX LPUART driver"
>> +default y
>> +depends on ARM_64
>> +help
>> +  This selects the i.MX LPUART. If you have an i.MX8QM based board, say Y.
>> +
>> config HAS_MVEBU
>>  bool "Marvell MVEBU UART driver"
>>  default y
>> diff --git a/xen/drivers/char/Makefile b/xen/drivers/char/Makefile
>> index 7c646d771c..14e67cf072 100644
>> --- a/xen/drivers/char/Makefile
>> +++ b/xen/drivers/char/Makefile
>> @@ -8,6 +8,7 @@ obj-$(CONFIG_HAS_MVEBU) += mvebu-uart.o
>> obj-$(CONFIG_HAS_OMAP) += omap-uart.o
>> obj-$(CONFIG_HAS_SCIF) += scif-uart.o
>> obj-$(CONFIG_HAS_EHCI) += ehci-dbgp.o
>> +obj-$(CONFIG_HAS_IMX_LPUART) += imx-lpuart.o
>> obj-$(CONFIG_ARM) += arm-uart.o
>> obj-y += serial.o
>> obj-$(CONFIG_XEN_GUEST) += xen_pv_console.o
>> diff --git a/xen/drivers/char/imx-lpuart.c b/xen/drivers/char/imx-lpuart.c
>> new file mode 100644
>> index 00..df44f91e5d
>> --- /dev/null
>> +++ b/xen/drivers/char/imx-lpuart.c
>> @@ -0,0 +1,276 @@
>> +/*
>> + * xen/drivers/char/imx-lpuart.c
>> + *
>> + * Driver for i.MX LPUART.
>> + *
>> + * Peng Fan 
>> + * Copyright 2022 NXP
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published 

Re: [PATCH V5 1/2] xen/arm: Add i.MX lpuart driver

2022-04-14 Thread Bertrand Marquis
Hi Peng,

> On 14 Apr 2022, at 08:44, Peng Fan (OSS)  wrote:
> 
> From: Peng Fan 
> 
> The i.MX LPUART Documentation:
> https://www.nxp.com/webapp/Download?colCode=IMX8QMIEC
> Chapter 13.6 Low Power Universal Asynchronous Receiver/
> Transmitter (LPUART)
> 
> Tested-by: Henry Wang 
> Signed-off-by: Peng Fan 
Asked-by: Bertrand Marquis 

I did not check the code but enough people went through this so I think it can 
be merged.

Cheers
Bertrand

> ---
> xen/arch/arm/include/asm/imx-lpuart.h |  64 ++
> xen/drivers/char/Kconfig  |   7 +
> xen/drivers/char/Makefile |   1 +
> xen/drivers/char/imx-lpuart.c | 276 ++
> 4 files changed, 348 insertions(+)
> create mode 100644 xen/arch/arm/include/asm/imx-lpuart.h
> create mode 100644 xen/drivers/char/imx-lpuart.c
> 
> diff --git a/xen/arch/arm/include/asm/imx-lpuart.h 
> b/xen/arch/arm/include/asm/imx-lpuart.h
> new file mode 100644
> index 00..fe859045dc
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/imx-lpuart.h
> @@ -0,0 +1,64 @@
> +/*
> + * xen/arch/arm/include/asm/imx-lpuart.h
> + *
> + * Common constant definition between early printk and the LPUART driver
> + *
> + * Peng Fan 
> + * Copyright 2022 NXP
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef __ASM_ARM_IMX_LPUART_H__
> +#define __ASM_ARM_IMX_LPUART_H__
> +
> +/* 32-bit register definition */
> +#define UARTBAUD  (0x10)
> +#define UARTSTAT  (0x14)
> +#define UARTCTRL  (0x18)
> +#define UARTDATA  (0x1C)
> +#define UARTMATCH (0x20)
> +#define UARTMODIR (0x24)
> +#define UARTFIFO  (0x28)
> +#define UARTWATER (0x2c)
> +
> +#define UARTSTAT_TDRE BIT(23, UL)
> +#define UARTSTAT_TC   BIT(22, UL)
> +#define UARTSTAT_RDRF BIT(21, UL)
> +#define UARTSTAT_OR   BIT(19, UL)
> +
> +#define UARTBAUD_OSR_SHIFT (24)
> +#define UARTBAUD_OSR_MASK (0x1f)
> +#define UARTBAUD_SBR_MASK (0x1fff)
> +#define UARTBAUD_BOTHEDGE (0x0002)
> +#define UARTBAUD_TDMAE (0x0080)
> +#define UARTBAUD_RDMAE (0x0020)
> +
> +#define UARTCTRL_TIE  BIT(23, UL)
> +#define UARTCTRL_TCIE BIT(22, UL)
> +#define UARTCTRL_RIE  BIT(21, UL)
> +#define UARTCTRL_ILIE BIT(20, UL)
> +#define UARTCTRL_TE   BIT(19, UL)
> +#define UARTCTRL_RE   BIT(18, UL)
> +#define UARTCTRL_M    BIT(4, UL)
> +
> +#define UARTWATER_RXCNT_OFF 24
> +
> +#endif /* __ASM_ARM_IMX_LPUART_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index 2ff5b288e2..e5f7b1d8eb 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -13,6 +13,13 @@ config HAS_CADENCE_UART
> This selects the Xilinx Zynq Cadence UART. If you have a Xilinx Zynq
> based board, say Y.
> 
> +config HAS_IMX_LPUART
> + bool "i.MX LPUART driver"
> + default y
> + depends on ARM_64
> + help
> +   This selects the i.MX LPUART. If you have an i.MX8QM based board, say Y.
> +
> config HAS_MVEBU
>   bool "Marvell MVEBU UART driver"
>   default y
> diff --git a/xen/drivers/char/Makefile b/xen/drivers/char/Makefile
> index 7c646d771c..14e67cf072 100644
> --- a/xen/drivers/char/Makefile
> +++ b/xen/drivers/char/Makefile
> @@ -8,6 +8,7 @@ obj-$(CONFIG_HAS_MVEBU) += mvebu-uart.o
> obj-$(CONFIG_HAS_OMAP) += omap-uart.o
> obj-$(CONFIG_HAS_SCIF) += scif-uart.o
> obj-$(CONFIG_HAS_EHCI) += ehci-dbgp.o
> +obj-$(CONFIG_HAS_IMX_LPUART) += imx-lpuart.o
> obj-$(CONFIG_ARM) += arm-uart.o
> obj-y += serial.o
> obj-$(CONFIG_XEN_GUEST) += xen_pv_console.o
> diff --git a/xen/drivers/char/imx-lpuart.c b/xen/drivers/char/imx-lpuart.c
> new file mode 100644
> index 00..df44f91e5d
> --- /dev/null
> +++ b/xen/drivers/char/imx-lpuart.c
> @@ -0,0 +1,276 @@
> +/*
> + * xen/drivers/char/imx-lpuart.c
> + *
> + * Driver for i.MX LPUART.
> + *
> + * Peng Fan 
> + * Copyright 2022 NXP
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * 
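[Editor's illustration: the register layout quoted above is enough to sketch the polled-transmit path such a driver typically uses — wait for UARTSTAT.TDRE, then write the byte to UARTDATA. This is an illustrative model against a fake in-memory register file, not the actual driver code; the read/write helpers stand in for MMIO accessors.]

```c
#include <assert.h>
#include <stdint.h>

/* Byte offsets and bit from the quoted imx-lpuart.h. */
#define UARTSTAT      0x14
#define UARTDATA      0x1C
#define UARTSTAT_TDRE (1u << 23)   /* transmit data register empty */

/* Stand-in for the MMIO window; a real driver maps the device instead. */
static uint32_t regs[0x30 / 4];

static uint32_t lpuart_read(uint32_t off)          { return regs[off / 4]; }
static void lpuart_write(uint32_t off, uint32_t v) { regs[off / 4] = v; }

/* Polled transmit, mirroring what a putc routine would do. */
static void lpuart_putc(char c)
{
    /* Spin until the transmitter can accept another byte. */
    while (!(lpuart_read(UARTSTAT) & UARTSTAT_TDRE))
        ;
    lpuart_write(UARTDATA, (uint8_t)c);
}
```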
