Re: [PATCH 3/9] tests/acceptance: Tag NetBSD tests as 'os:netbsd'

2021-07-04 Thread Niek Linnenbank
For test_arm_orangepi_uboot_netbsd9:

Reviewed-by: Niek Linnenbank 

On Sat, Jul 3, 2021, 10:44 Philippe Mathieu-Daudé wrote:

> On Sat, Jul 3, 2021 at 10:41 AM Philippe Mathieu-Daudé 
> wrote:
> >
> > CC'ing NetBSD maintainers.
> >
> > On 6/23/21 8:00 PM, Philippe Mathieu-Daudé wrote:
> > > Avocado allows us to select a set of tests using tags.
> > > When we want to run all tests that use a NetBSD guest OS,
> > > it is convenient to have them tagged, so add the 'os:netbsd'
> > > tag.
>
> I'll amend a command-line example to run the NetBSD tests:
>
>$ avocado --show=app,console run -t os:netbsd tests/acceptance/
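>
>    A further sketch (assuming Avocado's filter syntax, where comma-separated
>    tags within a single -t option are ANDed) to narrow the run to a single
>    architecture:
>
>    $ avocado --show=app,console run -t os:netbsd,arch:ppc tests/acceptance/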
>
> > > Signed-off-by: Philippe Mathieu-Daudé 
> > > ---
> > >  tests/acceptance/boot_linux_console.py | 1 +
> > >  tests/acceptance/ppc_prep_40p.py   | 2 ++
> > >  2 files changed, 3 insertions(+)
> > >
> > > diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
> > > index cded547d1d4..20d57c1a8c6 100644
> > > --- a/tests/acceptance/boot_linux_console.py
> > > +++ b/tests/acceptance/boot_linux_console.py
> > > @@ -862,6 +862,7 @@ def test_arm_orangepi_uboot_netbsd9(self):
> > >  :avocado: tags=arch:arm
> > >  :avocado: tags=machine:orangepi-pc
> > >  :avocado: tags=device:sd
> > > +:avocado: tags=os:netbsd
> > >  """
> > >  # This test download a 304MB compressed image and expand it to 2GB
> > >  deb_url = ('http://snapshot.debian.org/archive/debian/'
> > > diff --git a/tests/acceptance/ppc_prep_40p.py b/tests/acceptance/ppc_prep_40p.py
> > > index 96ba13b8943..2993ee3b078 100644
> > > --- a/tests/acceptance/ppc_prep_40p.py
> > > +++ b/tests/acceptance/ppc_prep_40p.py
> > > @@ -27,6 +27,7 @@ def test_factory_firmware_and_netbsd(self):
> > >  """
> > >  :avocado: tags=arch:ppc
> > >  :avocado: tags=machine:40p
> > > +:avocado: tags=os:netbsd
> > >  :avocado: tags=slowness:high
> > >  """
> > >  bios_url = ('http://ftpmirror.your.org/pub/misc/'
> > > @@ -64,6 +65,7 @@ def test_openbios_and_netbsd(self):
> > >  """
> > >  :avocado: tags=arch:ppc
> > >  :avocado: tags=machine:40p
> > > +:avocado: tags=os:netbsd
> > >  """
> > >  drive_url = ('https://cdn.netbsd.org/pub/NetBSD/iso/7.1.2/'
> > >   'NetBSD-7.1.2-prep.iso')
> > >
> >
>


Re: [ovirt-users] Re: Any way to terminate stuck export task

2021-07-04 Thread Nir Soffer
On Sun, Jul 4, 2021 at 11:30 AM Strahil Nikolov  wrote:
>
> Isn't it better to strace it before killing qemu-img?

It may be too late, but it may help to understand why this qemu-img
run got stuck.
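
A minimal sketch for attaching strace to the already-running process before
deciding to kill it (the pid is a placeholder):

    strace -p <qemu-img-pid> -f -tt -T -o qemu-img.strace

A few seconds of output is usually enough to tell whether it is blocked in an
NFS read/write or waiting on something else.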

> Best Regards,
> Strahil Nikolov
>
> On Sun, Jul 4, 2021 at 0:15, Nir Soffer
>  wrote:
> On Sat, Jul 3, 2021 at 3:46 PM Gianluca Cecchi
>  wrote:
> >
> > Hello,
> > in oVirt 4.3.10 an export job to an export domain takes too long, probably
> > because the NFS server is slow.
> > How can I stop the task cleanly?
> > I see the exported file always remains at 4.5 GB in size.
> > Running vmstat on the host with the qemu-img process shows no throughput but
> > blocked processes:
> >
> > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> >  1  2      0 170208752 474412 16985752    0    0   719    72 2948 5677  0  0 96  4  0
> >  0  2      0 170207184 474412 16985780    0    0  3580    99 5043 6790  0  0 96  4  0
> >  0  2      0 170208800 474412 16985804    0    0  1379    41 2332 5527  0  0 96  4  0
> >
> > and the generated file refreshes its timestamp but not the size
> >
> > # ll -a  
> > /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> > total 4675651
> > drwxr-xr-x.  2 vdsm kvm  1024 Jul  3 14:10 .
> > drwxr-xr-x. 12 vdsm kvm  1024 Jul  3 14:10 ..
> > -rw-rw.  1 vdsm kvm 4787863552 Jul  3 14:33 
> > bb94ae66-e574-432b-bf68-7497bb3ca9e6
> > -rw-r--r--.  1 vdsm kvm        268 Jul  3 14:10 
> > bb94ae66-e574-432b-bf68-7497bb3ca9e6.meta
> >
> > # du -sh  
> > /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> > 4.5G
> > /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> >
> > The VM has two disks, 35 GB and 300 GB, not full but quite occupied.
> >
> > Can I simply kill the qemu-img processes on the chosen hypervisor (I 
> > suppose the SPM one)?
>
> Killing the qemu-img process is the only way to stop qemu-img. The system
> is designed to clean up properly after qemu-img terminates.
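>
> A minimal sketch (hypothetical commands; run them on the host doing the copy,
> usually the SPM):
>
>     pgrep -af 'qemu-img convert'   # locate the stuck convert process
>     kill <pid>                     # plain SIGTERM; the system cleans up afterwards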
>
> If this capability is important to you, you can file an RFE to allow aborting
> jobs from the engine UI/API. This is already implemented internally, but we did
> not expose the capability.
>
> It would be useful to understand why qemu-img convert does not make progress.
> If you can reproduce this by running qemu-img from the shell, it can be useful
> to run it via strace and ask about it on the qemu-block mailing list.
>
> Example strace usage:
>
> strace -o convert.log -f -tt -T qemu-img convert ...
>
> Also, the output of nfsstat during the copy can help.
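>
> For example (a sketch; standard Linux nfsstat flags):
>
>     nfsstat -c   # client-side RPC/NFS operation counters
>     nfsstat -m   # mounted NFS shares and their mount options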
>
> Nir
>




[PATCH resend] block/replication.c: Properly attach children

2021-07-04 Thread Lukas Straub
The replication driver needs access to the block nodes below its
child so it can issue bdrv_make_empty() to manage the replication.
However, it currently does this by directly copying the BdrvChild
pointers, which is wrong.

Fix this by properly attaching the block-nodes with
bdrv_attach_child().
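
A sketch of the pattern (not the full patch; bdrv_attach_child() signature as
in current QEMU, error handling elided):

    /* Take a proper reference on the grandchild node and attach it,
     * instead of copying its BdrvChild pointer. */
    bdrv_ref(hidden_disk->bs);
    s->hidden_disk = bdrv_attach_child(bs, hidden_disk->bs, "hidden disk",
                                       &child_of_bds, BDRV_CHILD_DATA,
                                       &local_err);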

Also, remove a workaround introduced in commit
6ecbc6c52672db5c13805735ca02784879ce8285
"replication: Avoid blk_make_empty() on read-only child".

Signed-off-by: Lukas Straub 
---

Fix CC: email address so the mailing list doesn't reject it.

 block/replication.c | 94 +
 1 file changed, 61 insertions(+), 33 deletions(-)

diff --git a/block/replication.c b/block/replication.c
index 52163f2d1f..426d2b741a 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -166,7 +166,12 @@ static void replication_child_perm(BlockDriverState *bs, BdrvChild *c,
uint64_t perm, uint64_t shared,
uint64_t *nperm, uint64_t *nshared)
 {
-*nperm = BLK_PERM_CONSISTENT_READ;
+if (c == bs->file) {
+*nperm = BLK_PERM_CONSISTENT_READ;
+} else {
+*nperm = 0;
+}
+
 if ((bs->open_flags & (BDRV_O_INACTIVE | BDRV_O_RDWR)) == BDRV_O_RDWR) {
 *nperm |= BLK_PERM_WRITE;
 }
@@ -340,17 +345,7 @@ static void secondary_do_checkpoint(BDRVReplicationState *s, Error **errp)
 return;
 }
 
-BlockBackend *blk = blk_new(qemu_get_current_aio_context(),
-BLK_PERM_WRITE, BLK_PERM_ALL);
-blk_insert_bs(blk, s->hidden_disk->bs, &local_err);
-if (local_err) {
-error_propagate(errp, local_err);
-blk_unref(blk);
-return;
-}
-
-ret = blk_make_empty(blk, errp);
-blk_unref(blk);
+ret = bdrv_make_empty(s->hidden_disk, errp);
 if (ret < 0) {
 return;
 }
@@ -365,27 +360,35 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
 Error **errp)
 {
 BDRVReplicationState *s = bs->opaque;
+BdrvChild *hidden_disk, *secondary_disk;
 BlockReopenQueue *reopen_queue = NULL;
 
+/*
+ * s->hidden_disk and s->secondary_disk may not be set yet, as they will
+ * only be set after the children are writable.
+ */
+hidden_disk = bs->file->bs->backing;
+secondary_disk = hidden_disk->bs->backing;
+
 if (writable) {
-s->orig_hidden_read_only = bdrv_is_read_only(s->hidden_disk->bs);
-s->orig_secondary_read_only = bdrv_is_read_only(s->secondary_disk->bs);
+s->orig_hidden_read_only = bdrv_is_read_only(hidden_disk->bs);
+s->orig_secondary_read_only = bdrv_is_read_only(secondary_disk->bs);
 }
 
-bdrv_subtree_drained_begin(s->hidden_disk->bs);
-bdrv_subtree_drained_begin(s->secondary_disk->bs);
+bdrv_subtree_drained_begin(hidden_disk->bs);
+bdrv_subtree_drained_begin(secondary_disk->bs);
 
 if (s->orig_hidden_read_only) {
 QDict *opts = qdict_new();
 qdict_put_bool(opts, BDRV_OPT_READ_ONLY, !writable);
-reopen_queue = bdrv_reopen_queue(reopen_queue, s->hidden_disk->bs,
+reopen_queue = bdrv_reopen_queue(reopen_queue, hidden_disk->bs,
  opts, true);
 }
 
 if (s->orig_secondary_read_only) {
 QDict *opts = qdict_new();
 qdict_put_bool(opts, BDRV_OPT_READ_ONLY, !writable);
-reopen_queue = bdrv_reopen_queue(reopen_queue, s->secondary_disk->bs,
+reopen_queue = bdrv_reopen_queue(reopen_queue, secondary_disk->bs,
  opts, true);
 }
 
@@ -393,8 +396,8 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
 bdrv_reopen_multiple(reopen_queue, errp);
 }
 
-bdrv_subtree_drained_end(s->hidden_disk->bs);
-bdrv_subtree_drained_end(s->secondary_disk->bs);
+bdrv_subtree_drained_end(hidden_disk->bs);
+bdrv_subtree_drained_end(secondary_disk->bs);
 }
 
 static void backup_job_cleanup(BlockDriverState *bs)
@@ -451,6 +454,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
 BlockDriverState *bs = rs->opaque;
 BDRVReplicationState *s;
 BlockDriverState *top_bs;
+BdrvChild *active_disk, *hidden_disk, *secondary_disk;
 int64_t active_length, hidden_length, disk_length;
 AioContext *aio_context;
 Error *local_err = NULL;
@@ -488,32 +492,32 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
 case REPLICATION_MODE_PRIMARY:
 break;
 case REPLICATION_MODE_SECONDARY:
-s->active_disk = bs->file;
-if (!s->active_disk || !s->active_disk->bs ||
-!s->active_disk->bs->backing) {
+active_disk = bs->file;
+if (!active_disk || !active_disk->bs ||
+!active_disk->bs->backing) {
 error_setg(errp, "Active disk doesn't have backing 

[PATCH resend] nbd: register yank function earlier

2021-07-04 Thread Lukas Straub
Although unlikely, qemu might hang in nbd_send_request().

Allow recovery in this case by registering the yank function before
calling it.
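
As a sketch of how a stuck connection would then be recovered, the yank can be
triggered over QMP (the node name here is hypothetical):

    { "execute": "yank",
      "arguments": { "instances": [
          { "type": "block-node", "node-name": "nbd0" } ] } }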

Signed-off-by: Lukas Straub 
---

Fix CC: email address so the mailing list doesn't reject it.

 block/nbd.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 601fccc5ba..f6ff1c4fb4 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -369,32 +369,34 @@ int coroutine_fn nbd_co_do_establish_connection(BlockDriverState *bs,
 s->ioc = nbd_co_establish_connection(s->conn, &s->info, true, errp);
 if (!s->ioc) {
 return -ECONNREFUSED;
 }
 
+yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
+   bs);
+
 ret = nbd_handle_updated_info(s->bs, NULL);
 if (ret < 0) {
 /*
  * We have connected, but must fail for other reasons.
  * Send NBD_CMD_DISC as a courtesy to the server.
  */
 NBDRequest request = { .type = NBD_CMD_DISC };
 
 nbd_send_request(s->ioc, &request);
 
+yank_unregister_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name),
+ nbd_yank, bs);
 object_unref(OBJECT(s->ioc));
 s->ioc = NULL;
 
 return ret;
 }
 
 qio_channel_set_blocking(s->ioc, false, NULL);
 qio_channel_attach_aio_context(s->ioc, bdrv_get_aio_context(bs));
 
-yank_register_function(BLOCKDEV_YANK_INSTANCE(s->bs->node_name), nbd_yank,
-   bs);
-
 /* successfully connected */
 s->state = NBD_CLIENT_CONNECTED;
 qemu_co_queue_restart_all(&s->free_sema);
 
 return 0;
-- 
2.32.0




Re: [ovirt-users] Re: Any way to terminate stuck export task

2021-07-04 Thread Strahil Nikolov
Isn't it better to strace it before killing qemu-img?
Best Regards,
Strahil Nikolov
 
 
On Sun, Jul 4, 2021 at 0:15, Nir Soffer wrote:
On Sat, Jul 3, 2021 at 3:46 PM Gianluca Cecchi wrote:
>
> Hello,
> in oVirt 4.3.10 an export job to an export domain takes too long, probably
> because the NFS server is slow.
> How can I stop the task cleanly?
> I see the exported file always remains at 4.5 GB in size.
> Running vmstat on the host with the qemu-img process shows no throughput but
> blocked processes:
>
> procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  1  2      0 170208752 474412 16985752    0    0   719    72 2948 5677  0  0 96  4  0
>  0  2      0 170207184 474412 16985780    0    0  3580    99 5043 6790  0  0 96  4  0
>  0  2      0 170208800 474412 16985804    0    0  1379    41 2332 5527  0  0 96  4  0
>
> and the generated file refreshes its timestamp but not the size
>
> # ll -a  
> /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> total 4675651
> drwxr-xr-x.  2 vdsm kvm      1024 Jul  3 14:10 .
> drwxr-xr-x. 12 vdsm kvm      1024 Jul  3 14:10 ..
> -rw-rw.  1 vdsm kvm 4787863552 Jul  3 14:33 
> bb94ae66-e574-432b-bf68-7497bb3ca9e6
> -rw-r--r--.  1 vdsm kvm        268 Jul  3 14:10 
> bb94ae66-e574-432b-bf68-7497bb3ca9e6.meta
>
> # du -sh  
> /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> 4.5G    
> /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
>
> The VM has two disks, 35 GB and 300 GB, not full but quite occupied.
>
> Can I simply kill the qemu-img processes on the chosen hypervisor (I suppose 
> the SPM one)?

Killing the qemu-img process is the only way to stop qemu-img. The system
is designed to clean up properly after qemu-img terminates.

If this capability is important to you, you can file an RFE to allow aborting
jobs from the engine UI/API. This is already implemented internally, but we did
not expose the capability.

It would be useful to understand why qemu-img convert does not make progress.
If you can reproduce this by running qemu-img from the shell, it can be useful
to run it via strace and ask about it on the qemu-block mailing list.

Example strace usage:

    strace -o convert.log -f -tt -T qemu-img convert ...

Also, the output of nfsstat during the copy can help.

Nir