Re: [PATCH] virtio-net: Add check for mac address while peer is vdpa

2021-02-23 Thread Michael S. Tsirkin
> [PATCH] virtio-net: Add check for mac address while peer is vdpa
please do keep numbering patch versions.


On Wed, Feb 24, 2021 at 03:33:33PM +0800, Cindy Lu wrote:
> Sometime vdpa get an all zero mac address from the hardware, this will cause the
> traffic down, So we add the check for in part.if we get an zero mac address we will
> use the default/cmdline mac address instead

How does this differ from v4 of the same patch?

Lots of typos above. I guess I can just rewrite the whole log ...

> 
> Signed-off-by: Cindy Lu 
> ---
>  hw/net/virtio-net.c | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 9179013ac4..f1648fc47d 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -126,6 +126,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, 
> uint8_t *config)
>  VirtIONet *n = VIRTIO_NET(vdev);
>  struct virtio_net_config netcfg;
>  NetClientState *nc = qemu_get_queue(n->nic);
> +static const MACAddr zero = { .a = { 0, 0, 0, 0, 0, 0 } };
>  
>  int ret = 0;
>  memset(&netcfg, 0 , sizeof(struct virtio_net_config));
> @@ -151,7 +152,11 @@ static void virtio_net_get_config(VirtIODevice *vdev, 
> uint8_t *config)
>  ret = vhost_net_get_config(get_vhost_net(nc->peer), (uint8_t *)&netcfg,
> n->config_size);
>  if (ret != -1) {
> -memcpy(config, &netcfg, n->config_size);

Below needs a huge code comment explaining what is going on.

E.g. if we get a valid mac from the peer, use it.

If we get 0 instead
then we don't know the peer mac, but since
the mac is 0 and that is not a legal value,
try to proceed assuming something else (management, hardware) sets up the mac
correctly and consistently with the qemu configuration.


> +if (memcmp(&netcfg.mac, &zero, sizeof(zero)) != 0) {

!= 0 is not necessary

> +memcpy(config, &netcfg, n->config_size);

> +} else {
> +error_report("Get an all zero mac address from hardware");

So why are you skipping the copying of the whole config space? Why not
just skip the mac? Seems quite risky e.g. looks like it will break
things like reporting the link status, right?
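
E.g. something along these lines (just a sketch, not the posted patch;
assuming n->mac holds the qemu-configured address and ETH_ALEN is in scope):

if (ret != -1) {
    /*
     * Some peers (e.g. vhost-vdpa hardware) cannot report a mac and
     * return all zeros, which is not a legal value.  Keep copying the
     * whole config space so fields like link status stay correct, and
     * only fall back for the mac.
     */
    if (memcmp(&netcfg.mac, &zero, sizeof(zero)) == 0) {
        memcpy(netcfg.mac, n->mac, ETH_ALEN);
    }
    memcpy(config, &netcfg, n->config_size);
}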

> +}
>  }
>  }
>  }
> -- 
> 2.21.3




Re: [PATCH 1/2] gitlab-ci.yml: Allow custom make parallelism

2021-02-23 Thread Paolo Bonzini

On 23/02/21 20:34, Daniele Buono wrote:

This works, but setting this value to 1 for everybody seems a bit too
restrictive. While the gitlab ci runners don't have enough memory for
this, that's not necessarily true for every build platform, and linking
multiple targets in parallel with LTO can result in a big time saving,
so I'd prefer a customizable way.

How about adding a flag `--max-ld-procs` to configure to manually set
backend_max_links?


Another possibility is to invoke "meson configure build 
-Dbackend_max_links=1" after configure.


Paolo


This would also allow setting it to any specific number above 1,
which, looking at the Makefile, seems not to be possible now: because of
how the -j flag is passed from make to ninja, a compilation is either
sequential or parallel based on #cpus.





Re: [PATCH] disas: Fix build with glib2.0 >=2.67.3

2021-02-23 Thread Christian Ehrhardt
On Tue, Feb 23, 2021 at 5:12 PM Daniel P. Berrangé  wrote:
>
> On Tue, Feb 23, 2021 at 03:43:48PM +, Peter Maydell wrote:
> > On Tue, 23 Feb 2021 at 15:03, Christian Ehrhardt
> >  wrote:
> > >
> > > glib2.0 introduced a change in 2.67.3 and later which triggers an
> > > issue [1] for anyone including its headers in an "extern C" context,
> > > which a few files in disas/* do. An example of such an include chain
> > > and error looks like:
> > >
> > > ../../disas/arm-a64.cc
> > > In file included from /usr/include/glib-2.0/glib/gmacros.h:241,
> > >  from 
> > > /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h:9,
> > >  from /usr/include/glib-2.0/glib/gtypes.h:32,
> > >  from /usr/include/glib-2.0/glib/galloca.h:32,
> > >  from /usr/include/glib-2.0/glib.h:30,
> > >  from 
> > > /<>/qemu-5.2+dfsg/include/glib-compat.h:32,
> > >  from 
> > > /<>/qemu-5.2+dfsg/include/qemu/osdep.h:126,
> > >  from ../../disas/arm-a64.cc:21:
> > > /usr/include/c++/10/type_traits:56:3: error: template with C linkage
> > >56 |   template
> > >   |   ^~~~
> > > ../../disas/arm-a64.cc:20:1: note: ‘extern "C"’ linkage started here
> > >20 | extern "C" {
> > >   | ^~
> > >
> > > To fix that move the include of osdep.h out of that section. It was added
> > > already as C++ fixup by e78490c44: "disas/arm-a64.cc: Include osdep.h 
> > > first".
> > >
> > > [1]: https://gitlab.gnome.org/GNOME/glib/-/issues/2331
> >
> > I'm not convinced by this as a fix, though I'm happy to be corrected
> > by somebody with a fuller understanding of C++. glib.h may be supposed
> > to work as a C++ header, but osdep.h as a whole is definitely a C header,
> > so I think it ought to be inside 'extern C'; and it has to be
> > the first header included; and it happens to want to include glib.h.
> >
> > Fixing glib.h seems like it would be nicer, assuming they haven't
> > already shipped this breakage.

They have already shipped this. And in the linked issue and the MR
discussions from there you'll find that everyone acknowledges that the
"error" is in the consuming projects, and that glib upstream is declining
to fix it, citing e.g. backward compatibility.

> Failing that, does it work to do:
>
> This was raised in Fedora and upstream GLib already
>
> https://gitlab.gnome.org/GNOME/glib/-/merge_requests/1935
> https://lists.fedoraproject.org/archives/list/de...@lists.fedoraproject.org/thread/J3P4TRHLWNDIKXF76OLYZNAPTABCZ3U5/#7LXFUDBBBIT23FE44QJYWX3I7U4EHW6M
>
> The key comment there is this one:
>
>   "Note that wrapping the header include in an extern declaration violates
>C++ standard requirements.  ("A translation unit shall include a header
>only outside of any declaration or definition", [using.headers]/3)"
>
> IOW, if we need to make osdep.h safe to use from C++, then we
> need to put the 'extern "C" { ... }'  bit in osdep.h itself,
> not in the things which include it.
>
> >
> > /*
> >  * glib.h expects to be included as a C++ header if we're
> >  * building a C++ file, but osdep.h and thus glib-compat.h are
> >  * C headers and should be inside an "extern C" block.
> >  */
> > #ifdef __cplusplus
> > extern "C++" {
> > #include <glib.h>
> > #if defined(G_OS_UNIX)
> > #include <glib-unix.h>
> > #endif
> > }
> >
> > in include/glib-compat.h ?
>
> That'd be even worse.
>
> We need to make headers that need to be used from C++ code follow
> the pattern:
>
> #include <...>
> #include <...>
> #include <...>
> ...all other includes...
>
> extern "C" {
> ..
> only the declarations, no #includes
> ...
> };

While I can follow the words and, as always, awesome explanations by Daniel,
I must admit that I'm a bit lost at what a v2 of this could look like.

osdep.h as of today unfortunately isn't as trivial as 1. includes,
2. declarations.
There are late includes deep in cascading ifdef's, and we all know that "just
moving includes around for the above fix to work in an easy way" in headers
will likely (maybe even silently) break things.

So I wonder: is this going to become a massive patch, either moving a lot around
or adding many extern "C" declarations all over the place in osdep.h? Or did I
just fail to see that there is an obviously better approach to this?
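
For illustration, a header following the pattern Daniel describes might look
like this (a sketch with made-up names, not a proposed v2 of osdep.h):

#ifndef EXAMPLE_H
#define EXAMPLE_H

/* All #includes stay outside any extern "C" block, per [using.headers]/3. */
#include <glib.h>

#ifdef __cplusplus
extern "C" {
#endif

/* Only declarations, never #includes, inside the block. */
void example_init(void);

#ifdef __cplusplus
}
#endif

#endif /* EXAMPLE_H */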

>
> Regards,
> Daniel
> --
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
>


-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd



[PATCH] virtio-net: Add check for mac address while peer is vdpa

2021-02-23 Thread Cindy Lu
Sometime vdpa get an all zero mac address from the hardware, this will cause the
traffic down, So we add the check for in part.if we get an zero mac address we will
use the default/cmdline mac address instead

Signed-off-by: Cindy Lu 
---
 hw/net/virtio-net.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 9179013ac4..f1648fc47d 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -126,6 +126,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, 
uint8_t *config)
 VirtIONet *n = VIRTIO_NET(vdev);
 struct virtio_net_config netcfg;
 NetClientState *nc = qemu_get_queue(n->nic);
+static const MACAddr zero = { .a = { 0, 0, 0, 0, 0, 0 } };
 
 int ret = 0;
 memset(&netcfg, 0 , sizeof(struct virtio_net_config));
@@ -151,7 +152,11 @@ static void virtio_net_get_config(VirtIODevice *vdev, 
uint8_t *config)
ret = vhost_net_get_config(get_vhost_net(nc->peer), (uint8_t *)&netcfg,
n->config_size);
 if (ret != -1) {
-memcpy(config, &netcfg, n->config_size);
+if (memcmp(&netcfg.mac, &zero, sizeof(zero)) != 0) {
+memcpy(config, &netcfg, n->config_size);
+} else {
+error_report("Get an all zero mac address from hardware");
+}
 }
 }
 }
-- 
2.21.3




Re: [PATCH 3/6] dp8393x: switch to use qemu_receive_packet() for loopback packet

2021-02-23 Thread Stefan Weil

On 24.02.21 at 06:53, Jason Wang wrote:


This patch switches to use qemu_receive_packet() which can detect
reentrancy and return early.

Signed-off-by: Jason Wang 
---
  hw/net/dp8393x.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/dp8393x.c b/hw/net/dp8393x.c
index 205c0decc5..019d4fe435 100644
--- a/hw/net/dp8393x.c
+++ b/hw/net/dp8393x.c
@@ -506,7 +506,7 @@ static void dp8393x_do_transmit_packets(dp8393xState *s)
  s->regs[SONIC_TCR] |= SONIC_TCR_CRSL;
  if (nc->info->can_receive(nc)) {
  s->loopback_packet = 1;
-nc->info->receive(nc, s->tx_buffer, tx_len);
+qemu_receice_packet(nc, s->tx_buffer, tx_Len);



Did you test compilation of that code? It looks like a typo ...

Regards

Stefan W.





Re: What prevents discarding a cluster during rewrite?

2021-02-23 Thread Vladimir Sementsov-Ogievskiy

23.02.2021 19:29, Kevin Wolf wrote:

On 23.02.2021 at 16:23, Vladimir Sementsov-Ogievskiy wrote:



On 23.02.2021 13:35, Kevin Wolf wrote:

On 22.02.2021 at 22:42, Vladimir Sementsov-Ogievskiy wrote:

23.02.2021 00:30, Vladimir Sementsov-Ogievskiy wrote:

Thinking of how to prevent dereferencing to zero (and discarding) of a
host cluster during flush of the compressed cache (which I'm working on
now), I have a question: what prevents it for normal writes?


I have no idea why it didn't fail for years.. Maybe I'm
missing something?


Ouch. I suppose the reason why we never ran into it is that afaik Linux
drains the queue before sending discard requests.

Of course, we still need to protect against this in QEMU. We can't just
unref a cluster that is still in use.


I have an idea for fixing it: increase the refcount of the host cluster before
writing to data_file (it's important to increase the refcount in the same
s->lock critical section where we get host_offset) and dereference it
after the write.. It should help. Any thoughts?


It would cause metadata updates for rewrites. I don't know whether the
intermediate value would ever be written to disk, but at least we'd
rewrite the same refcounts as before. I don't think we want that.


Hmm, if that can provoke extra refcount cache flush that's bad..

Maybe we need something like additional "dynamic" refcounts, so
that total_refcount = normal_refcount + dynamic_refcount.. And for
in-flight clusters dynamic_refcount is > 0. We can store dynamic
refcounts in a GHashTable (cluster -> refcount).


Do you think it's worth the complexity? The qcow2 driver is already
relatively complicated today.


I started to implement it and actually it doesn't seem much more complicated
than additional CoRwLock. So, I think, I will make two patches (refcounts vs
CoRwMutex) and we'll compare.
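
For context, the dynamic-refcount bookkeeping could be as small as something
like this (a standalone sketch with made-up names; the real patch would hang
the table off BDRVQcow2State and only touch it under s->lock):

#include <glib.h>

/* host cluster offset -> number of in-flight data writes */
static GHashTable *inflight_new(void)
{
    return g_hash_table_new_full(g_int64_hash, g_int64_equal, g_free, NULL);
}

static void inflight_ref(GHashTable *h, gint64 cluster)
{
    gint64 *key = g_new(gint64, 1);
    guint n;

    *key = cluster;
    n = GPOINTER_TO_UINT(g_hash_table_lookup(h, key));
    /* replace frees the previous key, if any */
    g_hash_table_replace(h, key, GUINT_TO_POINTER(n + 1));
}

static void inflight_unref(GHashTable *h, gint64 cluster)
{
    guint n = GPOINTER_TO_UINT(g_hash_table_lookup(h, &cluster));

    g_assert(n > 0);
    if (n == 1) {
        g_hash_table_remove(h, &cluster);
    } else {
        gint64 *key = g_new(gint64, 1);
        *key = cluster;
        g_hash_table_replace(h, key, GUINT_TO_POINTER(n - 1));
    }
}

/* discard path: the host cluster may only be dereferenced to zero
 * when no write is in flight */
static gboolean inflight_busy(GHashTable *h, gint64 cluster)
{
    return g_hash_table_contains(h, &cluster);
}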




Discard requests might be rare enough that not considering the cluster
at all could work. We could then take a reader CoRwlock during most
operations, and a writer lock for discard.

Actually, maybe s->lock should be this CoRwlock, and instead of dropping
it temporarily like we do now we would just upgrade and downgrade it as
needed. Maybe this would allow finer grained locking in other places,
too.


In this case many operations will be blocked during data writing, like
allocation of another cluster.. That doesn't seem good.


You're right, that would be too restrictive.


Separate CoRwLock looks more feasible.


Maybe that then.

Kevin




--
Best regards,
Vladimir



Re: [PATCH 0/6] Detect reentrant RX caused by loopback

2021-02-23 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20210224055401.492407-1-jasow...@redhat.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210224055401.492407-1-jasow...@redhat.com
Subject: [PATCH 0/6] Detect reentrant RX caused by loopback

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20210224055401.492407-1-jasow...@redhat.com -> 
patchew/20210224055401.492407-1-jasow...@redhat.com
Switched to a new branch 'test'
6a41d09 tx_pkt: switch to use qemu_receive_packet_iov() for loopback
5480e55 sungem: switch to use qemu_receive_packet() for loopback
f35fab6 msf2-mac: switch to use qemu_receive_packet() for loopback
1dc29cf dp8393x: switch to use qemu_receive_packet() for loopback packet
6c79834 e1000: switch to use qemu_receive_packet() for loopback
04660c0 net: introduce qemu_receive_packet()

=== OUTPUT BEGIN ===
1/6 Checking commit 04660c06d88e (net: introduce qemu_receive_packet())
ERROR: space required after that ',' (ctx:VxV)
#42: FILE: include/net/net.h:154:
+ssize_t qemu_receive_packet(NetClientState *nc, const uint8_t *buf,int size);
   ^

total: 1 errors, 0 warnings, 115 lines checked

Patch 1/6 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

2/6 Checking commit 6c7983468e94 (e1000: switch to use qemu_receive_packet() 
for loopback)
3/6 Checking commit 1dc29cfa3117 (dp8393x: switch to use qemu_receive_packet() 
for loopback packet)
4/6 Checking commit f35fab6cb737 (msf2-mac: switch to use qemu_receive_packet() 
for loopback)
5/6 Checking commit 5480e5557678 (sungem: switch to use qemu_receive_packet() 
for loopback)
6/6 Checking commit 6a41d093a705 (tx_pkt: switch to use 
qemu_receive_packet_iov() for loopback)
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20210224055401.492407-1-jasow...@redhat.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

[PATCH 5/6] sungem: switch to use qemu_receive_packet() for loopback

2021-02-23 Thread Jason Wang
This patch switches to use qemu_receive_packet() which can detect
reentrancy and return early.

Signed-off-by: Jason Wang 
---
 hw/net/sungem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/sungem.c b/hw/net/sungem.c
index 33c3722df6..3684a4d733 100644
--- a/hw/net/sungem.c
+++ b/hw/net/sungem.c
@@ -306,7 +306,7 @@ static void sungem_send_packet(SunGEMState *s, const 
uint8_t *buf,
 NetClientState *nc = qemu_get_queue(s->nic);
 
 if (s->macregs[MAC_XIFCFG >> 2] & MAC_XIFCFG_LBCK) {
-nc->info->receive(nc, buf, size);
+qemu_receive_packet(nc, buf, size);
 } else {
 qemu_send_packet(nc, buf, size);
 }
-- 
2.25.1




[PATCH 4/6] msf2-mac: switch to use qemu_receive_packet() for loopback

2021-02-23 Thread Jason Wang
This patch switches to use qemu_receive_packet() which can detect
reentrancy and return early.

Signed-off-by: Jason Wang 
---
 hw/net/msf2-emac.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/msf2-emac.c b/hw/net/msf2-emac.c
index 32ba9e8412..3e6206044f 100644
--- a/hw/net/msf2-emac.c
+++ b/hw/net/msf2-emac.c
@@ -158,7 +158,7 @@ static void msf2_dma_tx(MSF2EmacState *s)
  * R_CFG1 bit 0 is set.
  */
 if (s->regs[R_CFG1] & R_CFG1_LB_EN_MASK) {
-nc->info->receive(nc, buf, size);
+qemu_receive_packet(nc, buf, size);
 } else {
 qemu_send_packet(nc, buf, size);
 }
-- 
2.25.1




[PATCH 1/6] net: introduce qemu_receive_packet()

2021-02-23 Thread Jason Wang
Some NICs support loopback mode, and this is done by calling
nc->info->receive() directly, which in fact bypasses the
reentrancy check that is done in qemu_net_queue_send().

Unfortunately we can't use qemu_net_queue_send() here since for
loopback there's no sender as peer, so this patch introduces a
qemu_receive_packet() which is used for implementing loopback mode
for a NIC with this check.

NICs that support loopback mode will be converted to this helper.

Signed-off-by: Jason Wang 
---
 include/net/net.h   |  5 +
 include/net/queue.h |  8 
 net/net.c   | 38 +++---
 net/queue.c | 22 ++
 4 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index 919facaad2..65eb8a58c5 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -144,12 +144,17 @@ void *qemu_get_nic_opaque(NetClientState *nc);
 void qemu_del_net_client(NetClientState *nc);
 typedef void (*qemu_nic_foreach)(NICState *nic, void *opaque);
 void qemu_foreach_nic(qemu_nic_foreach func, void *opaque);
+int qemu_can_receive_packet(NetClientState *nc);
 int qemu_can_send_packet(NetClientState *nc);
 ssize_t qemu_sendv_packet(NetClientState *nc, const struct iovec *iov,
   int iovcnt);
 ssize_t qemu_sendv_packet_async(NetClientState *nc, const struct iovec *iov,
 int iovcnt, NetPacketSent *sent_cb);
 ssize_t qemu_send_packet(NetClientState *nc, const uint8_t *buf, int size);
+ssize_t qemu_receive_packet(NetClientState *nc, const uint8_t *buf,int size);
+ssize_t qemu_receive_packet_iov(NetClientState *nc,
+const struct iovec *iov,
+int iovcnt);
 ssize_t qemu_send_packet_raw(NetClientState *nc, const uint8_t *buf, int size);
 ssize_t qemu_send_packet_async(NetClientState *nc, const uint8_t *buf,
int size, NetPacketSent *sent_cb);
diff --git a/include/net/queue.h b/include/net/queue.h
index c0269bb1dc..9f2f289d77 100644
--- a/include/net/queue.h
+++ b/include/net/queue.h
@@ -55,6 +55,14 @@ void qemu_net_queue_append_iov(NetQueue *queue,
 
 void qemu_del_net_queue(NetQueue *queue);
 
+ssize_t qemu_net_queue_receive(NetQueue *queue,
+   const uint8_t *data,
+   size_t size);
+
+ssize_t qemu_net_queue_receive_iov(NetQueue *queue,
+   const struct iovec *iov,
+   int iovcnt);
+
 ssize_t qemu_net_queue_send(NetQueue *queue,
 NetClientState *sender,
 unsigned flags,
diff --git a/net/net.c b/net/net.c
index e1035f21d1..6e470133ad 100644
--- a/net/net.c
+++ b/net/net.c
@@ -528,6 +528,17 @@ int qemu_set_vnet_be(NetClientState *nc, bool is_be)
 #endif
 }
 
+int qemu_can_receive_packet(NetClientState *nc)
+{
+if (nc->receive_disabled) {
+return 0;
+} else if (nc->info->can_receive &&
+   !nc->info->can_receive(nc)) {
+return 0;
+}
+return 1;
+}
+
 int qemu_can_send_packet(NetClientState *sender)
 {
 int vm_running = runstate_is_running();
@@ -540,13 +551,7 @@ int qemu_can_send_packet(NetClientState *sender)
 return 1;
 }
 
-if (sender->peer->receive_disabled) {
-return 0;
-} else if (sender->peer->info->can_receive &&
-   !sender->peer->info->can_receive(sender->peer)) {
-return 0;
-}
-return 1;
+return qemu_can_receive_packet(sender->peer);
 }
 
 static ssize_t filter_receive_iov(NetClientState *nc,
@@ -679,6 +684,25 @@ ssize_t qemu_send_packet(NetClientState *nc, const uint8_t 
*buf, int size)
 return qemu_send_packet_async(nc, buf, size, NULL);
 }
 
+ssize_t qemu_receive_packet(NetClientState *nc, const uint8_t *buf, int size)
+{
+if (!qemu_can_receive_packet(nc)) {
+return 0;
+}
+
+return qemu_net_queue_receive(nc->incoming_queue, buf, size);
+}
+
+ssize_t qemu_receive_packet_iov(NetClientState *nc, const struct iovec *iov,
+int iovcnt)
+{
+if (!qemu_can_receive_packet(nc)) {
+return 0;
+}
+
+return qemu_net_queue_receive_iov(nc->incoming_queue, iov, iovcnt);
+}
+
 ssize_t qemu_send_packet_raw(NetClientState *nc, const uint8_t *buf, int size)
 {
 return qemu_send_packet_async_with_flags(nc, QEMU_NET_PACKET_FLAG_RAW,
diff --git a/net/queue.c b/net/queue.c
index 19e32c80fd..c872d51df8 100644
--- a/net/queue.c
+++ b/net/queue.c
@@ -182,6 +182,28 @@ static ssize_t qemu_net_queue_deliver_iov(NetQueue *queue,
 return ret;
 }
 
+ssize_t qemu_net_queue_receive(NetQueue *queue,
+   const uint8_t *data,
+   size_t size)
+{
+if (queue->delivering) {
+return 0;
+}
+
+return qemu_net_queue_deliver(queue, NULL, 0, data, size);
+}
+
+ssize_t qemu_net_queue_receive_iov(NetQueue *queue,
+   const struct iovec *iov,
+   int iovcnt)
+{
+if (queue->delivering) {
+return 0;
+}
+
+return qemu_net_queue_deliver_iov(queue, NULL, 0, iov, iovcnt);
+}

[PATCH 6/6] tx_pkt: switch to use qemu_receive_packet_iov() for loopback

2021-02-23 Thread Jason Wang
This patch switches to use qemu_receive_packet_iov() which can detect
reentrancy and return early.

Signed-off-by: Jason Wang 
---
 hw/net/net_tx_pkt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
index da262edc3e..1f9aa59eca 100644
--- a/hw/net/net_tx_pkt.c
+++ b/hw/net/net_tx_pkt.c
@@ -553,7 +553,7 @@ static inline void net_tx_pkt_sendv(struct NetTxPkt *pkt,
 NetClientState *nc, const struct iovec *iov, int iov_cnt)
 {
 if (pkt->is_loopback) {
-nc->info->receive_iov(nc, iov, iov_cnt);
+qemu_receive_packet_iov(nc, iov, iov_cnt);
 } else {
 qemu_sendv_packet(nc, iov, iov_cnt);
 }
-- 
2.25.1




[PATCH 3/6] dp8393x: switch to use qemu_receive_packet() for loopback packet

2021-02-23 Thread Jason Wang
This patch switches to use qemu_receive_packet() which can detect
reentrancy and return early.

Signed-off-by: Jason Wang 
---
 hw/net/dp8393x.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/dp8393x.c b/hw/net/dp8393x.c
index 205c0decc5..019d4fe435 100644
--- a/hw/net/dp8393x.c
+++ b/hw/net/dp8393x.c
@@ -506,7 +506,7 @@ static void dp8393x_do_transmit_packets(dp8393xState *s)
 s->regs[SONIC_TCR] |= SONIC_TCR_CRSL;
 if (nc->info->can_receive(nc)) {
 s->loopback_packet = 1;
-nc->info->receive(nc, s->tx_buffer, tx_len);
+qemu_receice_packet(nc, s->tx_buffer, tx_Len);
 }
 } else {
 /* Transmit packet */
-- 
2.25.1




[PATCH 2/6] e1000: switch to use qemu_receive_packet() for loopback

2021-02-23 Thread Jason Wang
This patch switches to use qemu_receive_packet() which can detect
reentrancy and return early.

Signed-off-by: Jason Wang 
---
 hw/net/e1000.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index 4345d863e6..4f75b44cfc 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -546,7 +546,7 @@ e1000_send_packet(E1000State *s, const uint8_t *buf, int 
size)
 
 NetClientState *nc = qemu_get_queue(s->nic);
 if (s->phy_reg[PHY_CTRL] & MII_CR_LOOPBACK) {
-nc->info->receive(nc, buf, size);
+qemu_receive_packet(nc, buf, size);
 } else {
 qemu_send_packet(nc, buf, size);
 }
-- 
2.25.1




[PATCH 0/6] Detect reentrant RX caused by loopback

2021-02-23 Thread Jason Wang
Hi All:

Followed by commit 22dc8663d9 ("net: forbid the reentrant RX"), we
still need to fix the issues caused by loopback mode, where the NIC
usually implements it via calling nc->info->receive() directly.

The fix is to introduce a new network helper and check
queue->delivering.

Thanks

Jason Wang (6):
  net: introduce qemu_receive_packet()
  e1000: switch to use qemu_receive_packet() for loopback
  dp8393x: switch to use qemu_receive_packet() for loopback packet
  msf2-mac: switch to use qemu_receive_packet() for loopback
  sungem: switch to use qemu_receive_packet() for loopback
  tx_pkt: switch to use qemu_receive_packet_iov() for loopback

 hw/net/dp8393x.c|  2 +-
 hw/net/e1000.c  |  2 +-
 hw/net/msf2-emac.c  |  2 +-
 hw/net/net_tx_pkt.c |  2 +-
 hw/net/sungem.c |  2 +-
 include/net/net.h   |  5 +
 include/net/queue.h |  8 
 net/net.c   | 38 +++---
 net/queue.c | 22 ++
 9 files changed, 71 insertions(+), 12 deletions(-)

-- 
2.25.1




[Bug 1914535] Re: PL110 8-bit mode is not emulated correctly

2021-02-23 Thread Vadim Averin
Well... I don't have any ARM Versatile Express board. Also, the
documentation and manuals don't say much about it.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1914535

Title:
  PL110 8-bit mode is not emulated correctly

Status in QEMU:
  New

Bug description:
  When the emulated pl110/pl111 is switched programmatically to 8-bit
  color depth mode, the display is drawn green and blue, but the real
  PL110 displays grayscale in 8-bit mode.

  The bug appears in qemu-system-arm version 3.1.0 (Debian
  1:3.1+dfsg-8+deb10u8) and qemu-system-arm version 5.2.50
  (v5.2.0-1579-g99ae0cd90d).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1914535/+subscriptions



Re: [PATCH qemu v14] spapr: Implement Open Firmware client interface

2021-02-23 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20210224054130.4540-1-...@ozlabs.ru/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210224054130.4540-1-...@ozlabs.ru
Subject: [PATCH qemu v14] spapr: Implement Open Firmware client interface

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20210224054130.4540-1-...@ozlabs.ru -> 
patchew/20210224054130.4540-1-...@ozlabs.ru
Switched to a new branch 'test'
3fc539b spapr: Implement Open Firmware client interface

=== OUTPUT BEGIN ===
WARNING: line over 80 characters
#268: FILE: hw/ppc/spapr.c:4463:
+ClientArchitectureSupportClass *casc = 
CLIENT_ARCHITECTURE_SUPPORT_CLASS(oc);

WARNING: line over 80 characters
#1431: FILE: hw/ppc/vof.h:29:
+INTERFACE_CHECK(ClientArchitectureSupport, (obj), 
TYPE_CLIENT_ARCHITECTURE_SUPPORT)

ERROR: code indent should never use tabs
#1548: FILE: pc-bios/vof/bootmem.c:5:
+^Iuint64_t kern[2];$

ERROR: code indent should never use tabs
#1549: FILE: pc-bios/vof/bootmem.c:6:
+^Iphandle chosen = ci_finddevice("/chosen");$

ERROR: code indent should never use tabs
#1551: FILE: pc-bios/vof/bootmem.c:8:
+^Iif (ci_getprop(chosen, "qemu,boot-kernel", kern, sizeof(kern)) !=$

ERROR: code indent should never use tabs
#1552: FILE: pc-bios/vof/bootmem.c:9:
+^I^I^Isizeof(kern))$

ERROR: code indent should never use tabs
#1553: FILE: pc-bios/vof/bootmem.c:10:
+^I^Ireturn;$

ERROR: code indent should never use tabs
#1555: FILE: pc-bios/vof/bootmem.c:12:
+^Ido_boot(kern[0], initrd, initrdsize);$

ERROR: externs should be avoided in .c files
#1574: FILE: pc-bios/vof/ci.c:12:
+extern uint32_t ci_entry(uint32_t params);

ERROR: externs should be avoided in .c files
#1576: FILE: pc-bios/vof/ci.c:14:
+extern unsigned long hv_rtas(unsigned long params);

ERROR: externs should be avoided in .c files
#1577: FILE: pc-bios/vof/ci.c:15:
+extern unsigned int hv_rtas_size;

ERROR: code indent should never use tabs
#1581: FILE: pc-bios/vof/ci.c:19:
+^Ivoid *rtasbase;$

ERROR: code indent should never use tabs
#1582: FILE: pc-bios/vof/ci.c:20:
+^Iuint32_t rtassize = 0;$

ERROR: code indent should never use tabs
#1583: FILE: pc-bios/vof/ci.c:21:
+^Iphandle rtas;$

ERROR: code indent should never use tabs
#1585: FILE: pc-bios/vof/ci.c:23:
+^Iif (strcmp("call-method", (void *)(unsigned long) pargs->service))$

ERROR: braces {} are necessary for all arms of this statement
#1585: FILE: pc-bios/vof/ci.c:23:
+   if (strcmp("call-method", (void *)(unsigned long) pargs->service))
[...]

ERROR: code indent should never use tabs
#1586: FILE: pc-bios/vof/ci.c:24:
+^I^Ireturn false;$

ERROR: code indent should never use tabs
#1588: FILE: pc-bios/vof/ci.c:26:
+^Iif (strcmp("instantiate-rtas", (void *)(unsigned long) pargs->args[0]))$

ERROR: braces {} are necessary for all arms of this statement
#1588: FILE: pc-bios/vof/ci.c:26:
+   if (strcmp("instantiate-rtas", (void *)(unsigned long) pargs->args[0]))
[...]

ERROR: code indent should never use tabs
#1589: FILE: pc-bios/vof/ci.c:27:
+^I^Ireturn false;$

ERROR: code indent should never use tabs
#1591: FILE: pc-bios/vof/ci.c:29:
+^Irtas = ci_finddevice("/rtas");$

ERROR: code indent should never use tabs
#1592: FILE: pc-bios/vof/ci.c:30:
+^Ici_getprop(rtas, "rtas-size", &rtassize, sizeof(rtassize));$

ERROR: code indent should never use tabs
#1593: FILE: pc-bios/vof/ci.c:31:
+^Iif (rtassize < hv_rtas_size)$

ERROR: braces {} are necessary for all arms of this statement
#1593: FILE: pc-bios/vof/ci.c:31:
+   if (rtassize < hv_rtas_size)
[...]

ERROR: code indent should never use tabs
#1594: FILE: pc-bios/vof/ci.c:32:
+^I^Ireturn false;$

ERROR: code indent should never use tabs
#1596: FILE: pc-bios/vof/ci.c:34:
+^Irtasbase = (void *)(unsigned long) pargs->args[2];$

ERROR: code indent should never use tabs
#1598: FILE: pc-bios/vof/ci.c:36:
+^Imemcpy(rtasbase, hv_rtas, hv_rtas_size);$

ERROR: code indent should never use tabs
#1599: FILE: pc-bios/vof/ci.c:37:
+^Ipargs->args[pargs->nargs] = 0;$

ERROR: code indent should never use tabs
#1600: FILE: pc-bios/vof/ci.c:38:
+^Ipargs->args[pargs->nargs + 1] = pargs->args[2];$

ERROR: code indent should never use tabs
#1602: FILE: pc-bios/vof/ci.c:40:
+^Ireturn true;$

ERROR: code indent should never use tabs
#1607: FILE: pc-bios/vof/ci.c:45:
+^Iif (!prom_handle((void *)(unsigned long) args))$

ERROR: braces {} are necessary for all arms of this statement
#1607: FILE: pc-bios/vof/ci.c:45:
+   if (!prom_handle((void *)(unsigned long) args))
[...]

ERROR: code indent should never use tabs
#1608: FILE: pc-bios/vof/ci.c:46:
+^I^Ici_entry(args);$

ERROR: braces {} are necessary for all arms of this statement

[PATCH] e1000: fail early for evil descriptor

2021-02-23 Thread Jason Wang
During process_tx_desc(), the driver can try to chain a data descriptor with
a legacy descriptor, which will lead to an underflow in the following
calculation in process_tx_desc() for bytes:

if (tp->size + bytes > msh)
bytes = msh - tp->size;

Since these are unsigned values, the subtraction wraps around to a huge
value when tp->size exceeds msh, and this leads to an infinite loop. So
check and fail early if tp->size is greater than or equal to msh.

Reported-by: Alexander Bulekov 
Reported-by: Cheolwoo Myung 
Reported-by: Ruhr-University Bochum 
Cc: Prasad J Pandit 
Signed-off-by: Jason Wang 
---
 hw/net/e1000.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index d8da2f6528..4345d863e6 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -670,6 +670,9 @@ process_tx_desc(E1000State *s, struct e1000_tx_desc *dp)
 msh = tp->tso_props.hdr_len + tp->tso_props.mss;
 do {
 bytes = split_size;
+if (tp->size >= msh) {
+goto eop;
+}
 if (tp->size + bytes > msh)
 bytes = msh - tp->size;
 
@@ -695,6 +698,7 @@ process_tx_desc(E1000State *s, struct e1000_tx_desc *dp)
 tp->size += split_size;
 }
 
+eop:
 if (!(txd_lower & E1000_TXD_CMD_EOP))
 return;
 if (!(tp->cptse && tp->size < tp->tso_props.hdr_len)) {
-- 
2.25.1




[PATCH qemu v14] spapr: Implement Open Firmware client interface

2021-02-23 Thread Alexey Kardashevskiy
The PAPR platform describes an OS environment that's presented by
a combination of a hypervisor and firmware. The features it specifies
require collaboration between the firmware and the hypervisor.

Since the beginning, the runtime component of the firmware (RTAS) has
been implemented as a 20 byte shim which simply forwards it to
a hypercall implemented in qemu. The boot time firmware component is
SLOF - but a build that's specific to qemu, and has always needed to be
updated in sync with it. Even though we've managed to limit the amount
of runtime communication we need between qemu and SLOF, there's some,
and it has become increasingly awkward to handle as we've implemented
new features.

This implements a boot time OF client interface (CI) which is
enabled by a new "x-vof" pseries machine option (stands for "Virtual Open
Firmware"). When enabled, QEMU implements the custom H_OF_CLIENT hcall
which implements Open Firmware Client Interface (OF CI). This allows
using a smaller stateless firmware which does not have to manage
the device tree.

The new "vof.bin" firmware image is included with source code under
pc-bios/. It also includes RTAS blob.

This implements a handful of CI methods just to get -kernel/-initrd
working. In particular, this implements the device tree fetching and
simple memory allocator - "claim" (an OF CI memory allocator) and updates
"/memory@0/available" to report the available memory to the client.

This implements changing some device tree properties which we know how
to deal with, the rest is ignored. To allow changes, this skips
fdt_pack() when x-vof=on as not packing the blob leaves some room for
appending.

In absence of SLOF, this assigns phandles to device tree nodes to make
device tree traversing work.

When x-vof=on, this adds "/chosen" every time QEMU (re)builds a tree.

This adds basic instances support which are managed by a hash map
ihandle -> [phandle].

Before the guest started, the used memory is:
0..4000 - the initial firmware
1..18 - stack

This OF CI does not implement "interpret".

Unlike SLOF, this does not format uninitialized nvram. Instead, this
includes a disk image with pre-formatted nvram.

With this basic support, this can only boot into a kernel directly.
However this is just enough for the petitboot kernel and initramdisk to
boot from any possible source. Note this requires a reasonably recent guest
kernel with:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=df5be5be8735

The immediate benefit is much faster booting time which especially
crucial with fully emulated early CPU bring up environments. Also this
may come handy when/if GRUB-in-the-userspace sees light of the day.

This separates VOF and sPAPR in a hope that VOF bits may be reused by
other POWERPC boards which do not support pSeries.

This is coded on the assumption that later on we might add support for
booting from QEMU backends (blockdev is the first candidate) without
devices/drivers in between, as OF1275 does not require that and
it is quite easy to do so.

Signed-off-by: Alexey Kardashevskiy 
---

The example command line is:

/home/aik/pbuild/qemu-killslof-localhost-ppc64/qemu-system-ppc64 \
-nodefaults \
-chardev stdio,id=STDIO0,signal=off,mux=on \
-device spapr-vty,id=svty0,reg=0x71000110,chardev=STDIO0 \
-mon id=MON0,chardev=STDIO0,mode=readline \
-nographic \
-vga none \
-enable-kvm \
-m 2G \
-machine 
pseries,x-vof=on,cap-cfpc=broken,cap-sbbc=broken,cap-ibs=broken,cap-ccf-assist=off
 \
-kernel pbuild/kernel-le-guest/vmlinux \
-initrd pb/rootfs.cpio.xz \
-drive 
id=DRIVE0,if=none,file=./p/qemu-killslof/pc-bios/vof/nvram.bin,format=raw \
-global spapr-nvram.drive=DRIVE0 \
-snapshot \
-smp 8,threads=8 \
-L /home/aik/t/qemu-ppc64-bios/ \
-trace events=qemu_trace_events \
-d guest_errors \
-chardev socket,id=SOCKET0,server,nowait,path=qemu.mon.tmux26 \
-mon chardev=SOCKET0,mode=control

---
Changes:
v14:
* check for truncates in readstr()
* ditched a separate vof_reset()
* spapr->vof is a pointer now, dropped the "on" field
* removed rtas_base from vof and updated comment why we allow setting it
* added myself to maintainers
* updated commit log about blockdev and other possible platforms
* added a note why new hcall is 0x5
* no in place endianness convertion in spapr_h_vof_client
* converted all cpu_physical_memory_read/write to address_space_rw
* git mv hw/ppc/spapr_vof_client.c hw/ppc/spapr_vof.c

v13:
* rebase on latest ppc-for-6.0
* shuffled code around to touch spapr.c less

v12:
* split VOF and SPAPR

v11:
* added g_autofree
* fixed gcc warnings
* fixed few leaks
* added nvram image to make "nvram --print-config" not crash;
Note that contrary to MIN_NVRAM_SIZE (8 * KiB), the actual minimum size
is 16K, or it just does not work (empty output from "nvram")

v10:
* now rebased to compile with meson

v9:
* remove special handling of /rtas/rtas-size as now we always add it in QEMU
* removed leftovers from scsi/grub/stdout/stdin/...

v8:
* no 

[REBASE] [PATCH v5] qga: Correct loop count in qmp_guest_get_vcpus()

2021-02-23 Thread Lin Ma
The guest-get-vcpus command returns incorrect vcpu info in case we hotunplug
vcpus (not the last one).
e.g.:
A VM has 4 VCPUs: cpu0 + 3 hotunpluggable online vcpus (cpu1, cpu2 and cpu3).
Hotunplug cpu2. Now only cpu0, cpu1 and cpu3 are present & online.

./qmp-shell /tmp/qmp-monitor.sock
(QEMU) query-hotpluggable-cpus
{"return": [
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 3}, "vcpus-count": 1,
 "qom-path": "/machine/peripheral/cpu3", "type": "host-x86_64-cpu"},
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 2}, "vcpus-count": 1,
 "qom-path": "/machine/peripheral/cpu2", "type": "host-x86_64-cpu"},
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 1}, "vcpus-count": 1,
 "qom-path": "/machine/peripheral/cpu1", "type": "host-x86_64-cpu"},
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 0}, "vcpus-count": 1,
 "qom-path": "/machine/unattached/device[0]", "type": "host-x86_64-cpu"}
]}

(QEMU) device_del id=cpu2
{"return": {}}

(QEMU) query-hotpluggable-cpus
{"return": [
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 3}, "vcpus-count": 1,
 "qom-path": "/machine/peripheral/cpu3", "type": "host-x86_64-cpu"},
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 2}, "vcpus-count": 1,
 "type": "host-x86_64-cpu"},
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 1}, "vcpus-count": 1,
 "qom-path": "/machine/peripheral/cpu1", "type": "host-x86_64-cpu"},
{"props": {"core-id": 0, "thread-id": 0, "socket-id": 0}, "vcpus-count": 1,
 "qom-path": "/machine/unattached/device[0]", "type": "host-x86_64-cpu"}
]}

Before:
./qmp-shell -N /tmp/qmp-ga.sock
Welcome to the QMP low-level shell!
Connected
(QEMU) guest-get-vcpus
{"return": [
{"online": true, "can-offline": false, "logical-id": 0},
{"online": true, "can-offline": true, "logical-id": 1}]}

After:
./qmp-shell -N /tmp/qmp-ga.sock
Welcome to the QMP low-level shell!
Connected
(QEMU) guest-get-vcpus
{"return": [
{"online": true, "can-offline": false, "logical-id": 0},
{"online": true, "can-offline": true, "logical-id": 1},
{"online": true, "can-offline": true, "logical-id": 3}]}

Signed-off-by: Lin Ma 
Reviewed-by: Marc-André Lureau 
---
 qga/commands-posix.c | 43 ++-
 1 file changed, 14 insertions(+), 29 deletions(-)

diff --git a/qga/commands-posix.c b/qga/commands-posix.c
index 8dd94a3314..530c98344c 100644
--- a/qga/commands-posix.c
+++ b/qga/commands-posix.c
@@ -2385,24 +2385,6 @@ error:
 return NULL;
 }
 
-#define SYSCONF_EXACT(name, errp) sysconf_exact((name), #name, (errp))
-
-static long sysconf_exact(int name, const char *name_str, Error **errp)
-{
-long ret;
-
-errno = 0;
-ret = sysconf(name);
-if (ret == -1) {
-if (errno == 0) {
-error_setg(errp, "sysconf(%s): value indefinite", name_str);
-} else {
-error_setg_errno(errp, errno, "sysconf(%s)", name_str);
-}
-}
-return ret;
-}
-
 /* Transfer online/offline status between @vcpu and the guest system.
  *
  * On input either @errp or *@errp must be NULL.
@@ -2473,30 +2455,33 @@ static void transfer_vcpu(GuestLogicalProcessor *vcpu, 
bool sys2vcpu,
 
 GuestLogicalProcessorList *qmp_guest_get_vcpus(Error **errp)
 {
-int64_t current;
 GuestLogicalProcessorList *head, **tail;
-long sc_max;
+const char *cpu_dir = "/sys/devices/system/cpu";
+const gchar *line;
+g_autoptr(GDir) cpu_gdir = NULL;
 Error *local_err = NULL;
 
-current = 0;
 head = NULL;
tail = &head;
-sc_max = SYSCONF_EXACT(_SC_NPROCESSORS_CONF, &local_err);
+cpu_gdir = g_dir_open(cpu_dir, 0, NULL);
 
-while (local_err == NULL && current < sc_max) {
-GuestLogicalProcessor *vcpu;
-int64_t id = current++;
-char *path = g_strdup_printf("/sys/devices/system/cpu/cpu%" PRId64 "/",
- id);
+if (cpu_gdir == NULL) {
+error_setg_errno(errp, errno, "failed to list entries: %s", cpu_dir);
+return NULL;
+}
 
-if (g_file_test(path, G_FILE_TEST_EXISTS)) {
+while (local_err == NULL && (line = g_dir_read_name(cpu_gdir)) != NULL) {
+GuestLogicalProcessor *vcpu;
+int64_t id;
+if (sscanf(line, "cpu%ld", &id)) {
+g_autofree char *path = g_strdup_printf("/sys/devices/system/cpu/"
+"cpu%" PRId64 "/", id);
 vcpu = g_malloc0(sizeof *vcpu);
 vcpu->logical_id = id;
 vcpu->has_can_offline = true; /* lolspeak ftw */
transfer_vcpu(vcpu, true, path, &local_err);
 QAPI_LIST_APPEND(tail, vcpu);
 }
-g_free(path);
 }
 
 if (local_err == NULL) {
-- 
2.26.2




Re: [PATCH v2 0/6] testing/next (meson check-tcg, fedora bump, docs)

2021-02-23 Thread Richard Henderson
On 2/22/21 2:14 AM, Alex Bennée wrote:
> Alex Bennée (5):
>   meson.build: expose TCG cross compiler information in summary
>   tests/acceptance: allow a "graceful" failing for virtio-gpu test
>   docs/devel: expand on use of containers to build tests
>   docs/devel: update the container based tests
>   docs/devel: add forward reference to check-tcg
> 
> Philippe Mathieu-Daudé (1):
>   docker: Bump Fedora images to release 33

Reviewed-by: Richard Henderson 

r~



Re: [PATCH v3 10/10] target/mips: Extract MXU code to new mxu_translate.c file

2021-02-23 Thread Richard Henderson
On 2/22/21 2:39 PM, Philippe Mathieu-Daudé wrote:
> Extract 1600+ lines from the big translate.c into a new file.
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/mxu_translate.c | 1625 +++
>  target/mips/translate.c | 1613 --
>  target/mips/meson.build |1 +
>  3 files changed, 1626 insertions(+), 1613 deletions(-)
>  create mode 100644 target/mips/mxu_translate.c


Reviewed-by: Richard Henderson 

r~




Re: [PATCH v3 09/10] target/mips: Simplify 64-bit ifdef'ry of MXU code

2021-02-23 Thread Richard Henderson
On 2/22/21 2:39 PM, Philippe Mathieu-Daudé wrote:
> +#else /* !defined(TARGET_MIPS64) */
> +
> +bool decode_ase_mxu(DisasContext *ctx, uint32_t insn)
> +{
> +return false;

Also seems suspect, but harmless.

> -#if !defined(TARGET_MIPS64)
> -if (ctx->insn_flags & ASE_MXU) {
> +if ((TARGET_LONG_BITS == 32) && (ctx->insn_flags & ASE_MXU)) {
>  decode_opc_mxu(ctx, ctx->opcode);

(1) Unnecessary () around ==.

(2) The call to decode_opc_mxu should be eliminated by the compiler because of
the constant false test.  You can (a) retain the function above and omit the
new test, (b) add the new test and leave the function undefined, a diagnostic
link error, or you can re-declare the function with QEMU_ERROR.
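
For reference, the QEMU_ERROR variant would be a declaration along these
lines (a sketch; QEMU_ERROR wraps __attribute__((error(...))) in
include/qemu/compiler.h, and the exact prototype is whatever the series
ends up using):

/* Any call that survives dead-code elimination now fails the build. */
void QEMU_ERROR("MXU decode is 32-bit only and should have been elided")
decode_opc_mxu(DisasContext *ctx, uint32_t insn);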


> @@ -28081,9 +28085,7 @@ void mips_tcg_init(void)
>  cpu_llval = tcg_global_mem_new(cpu_env, offsetof(CPUMIPSState, llval),
> "llval");
>  
> -#if !defined(TARGET_MIPS64)
>  mxu_translate_init();
> -#endif /* !TARGET_MIPS64 */

This one won't be eliminated, and is an abort for MIPS64 per patch 8, so all
mips64 now aborts on startup.


r~



Re: [PATCH v3 08/10] target/mips: Make mxu_translate_init() / decode_ase_mxu() proto public

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> +#else /* !defined(TARGET_MIPS64) */
> +void mxu_translate_init(void)
> +{
> +g_assert_not_reached();
> +}

This is suspect, see next patch.
The rest of it seems ok.

r~



Re: [PATCH v3 07/10] target/mips: Introduce mxu_translate_init() helper

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> Extract the MXU register initialization code from mips_tcg_init()
> as a new mxu_translate_init() helper
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/translate.c | 26 +++---
>  1 file changed, 15 insertions(+), 11 deletions(-)

Reviewed-by: Richard Henderson 

r~




Re: [PATCH v3 06/10] target/mips: Use OPC_MUL instead of OPC__MXU_MUL

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> We already have a macro and definition to extract / check
> the Special2 MUL opcode. Use it instead of the unnecessary
> OPC__MXU_MUL macro.
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/translate.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Reviewed-by: Richard Henderson 

r~




Re: [PATCH v3 05/10] target/mips: Extract decode_ase_mxu() from decode_opc_mxu()

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> To easily convert MXU code to decodetree, extract decode_ase_mxu()
> from decode_opc_mxu(), making it return a boolean.
> We will keep decode_opc_mxu() in the translate.c unit because it
> calls gen_arith().
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/translate.c | 45 -
>  1 file changed, 26 insertions(+), 19 deletions(-)

I guess the split is fine until you can just call gen_mul() from a trans() from
the mxu decodetree.

Reviewed-by: Richard Henderson 

r~



Re: [PATCH v3 04/10] target/mips: Pass instruction opcode to decode_opc_mxu()

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> In the next commit we'll make decode_opc_mxu() match decodetree
> prototype by returning a boolean. First pass ctx->opcode as an
> argument.
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/translate.c | 14 +++---
>  1 file changed, 7 insertions(+), 7 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: [PATCH v3 03/10] target/mips: Remove unused CPUMIPSState* from MXU functions

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> None of these MXU functions use their CPUMIPSState* env argument,
> remove it.
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/translate.c | 20 ++--
>  1 file changed, 10 insertions(+), 10 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: [PATCH v3 01/10] target/mips: Rewrite complex ifdef'ry

2021-02-23 Thread Richard Henderson
On 2/22/21 2:38 PM, Philippe Mathieu-Daudé wrote:
> No need for this obfuscated ifdef'ry, KISS.
> 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/mips/translate.c | 11 +++
>  1 file changed, 7 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: [PATCH v2] target/riscv: fix TB_FLAGS bits overlapping bug for rvv/rvh

2021-02-23 Thread Frank Chang
On Wed, Feb 24, 2021 at 2:24 AM Richard Henderson
<richard.hender...@linaro.org> wrote:

> On 2/23/21 12:19 AM, frank.ch...@sifive.com wrote:
> > From: Frank Chang 
> >
> > TB_FLAGS mem_idx bits were extended from 2 bits to 3 bits in
> > commit c445593, but other TB_FLAGS bits for rvv and rvh were
> > not shifted as well, so these bits may overlap with each other when
> > rvv is enabled.
> >
> > Signed-off-by: Frank Chang 
>
> Reviewed-by: Richard Henderson 
>
> > -#define TB_FLAGS_MMU_MASK   7
> >  #define TB_FLAGS_PRIV_MMU_MASK3
> >  #define TB_FLAGS_PRIV_HYP_ACCESS_MASK   (1 << 2)
> >  #define TB_FLAGS_MSTATUS_FS MSTATUS_FS
> ...
> > +FIELD(TB_FLAGS, MEM_IDX, 0, 3)
> > +FIELD(TB_FLAGS, VL_EQ_VLMAX, 3, 1)
> > +FIELD(TB_FLAGS, LMUL, 4, 2)
> > +FIELD(TB_FLAGS, SEW, 6, 3)
> > +FIELD(TB_FLAGS, VILL, 9, 1)
> >  /* Is a Hypervisor instruction load/store allowed? */
> > -FIELD(TB_FLAGS, HLSX, 9, 1)
> > +FIELD(TB_FLAGS, HLSX, 10, 1)
>
> The only other thing that I'd add at this point is a comment about
> MSTATUS_FS
> -- a 2-bit field at bit 13 -- for the benefit of the next person that adds
> something to TB_FLAGS.
>
>
In fact, in the RVV patchset, both MSTATUS_FS and MSTATUS_VS are skipped,
and I also added comments to state that.
The bits are also rearranged to fill the empty bit holes in the RVV patchset
on my local branch:

  FIELD(TB_FLAGS, MEM_IDX, 0, 3)
  FIELD(TB_FLAGS, LMUL, 3, 3)
  FIELD(TB_FLAGS, SEW, 6, 3)
  /* Skip MSTATUS_VS (0x600) bits */
  FIELD(TB_FLAGS, VL_EQ_VLMAX, 11, 1)
  FIELD(TB_FLAGS, VILL, 12, 1)
  /* Skip MSTATUS_FS (0x6000) bits */
  /* Is a Hypervisor instruction load/store allowed? */
  FIELD(TB_FLAGS, HLSX, 15, 1)

Thanks for the review.
Frank Chang


>
> r~
>


Re: [PATCH v2 2/2] target/riscv: Call check_access() before tcg_temp_new()

2021-02-23 Thread Richard Henderson
On 2/22/21 10:49 AM, Alex Richardson wrote:
> ---
>  target/riscv/insn_trans/trans_rvh.c.inc | 16 
>  1 file changed, 8 insertions(+), 8 deletions(-)

Any particular reason?

At first I thought this was a macro with a hidden return, but it isn't.  I
would understand a cleanup like

if (check_access(ctx)) {
   TCGv t0 = tcg_temp_new();
   TCGv t1 = tcg_temp_new();
   ...

where we do not generate the rest of the code if we raised an exception.
Alternately, if the nesting seems ugly,

if (!check_access(ctx)) {
return true;
}
...
return true;


r~



Re: who's using the ozlabs patchwork install for QEMU patches ?

2021-02-23 Thread Jeremy Kerr
Hi Alexey,
> Jeremy or Stephen (cc-ing) do definitely know if there is a better
> way.

The standard process is to send me an email :)

You're wanting to add user 'groug' to the qemu project, is that
correct?

Cheers,


Jeremy




Re: [PATCH v2 1/2] target/riscv: Reduce duplicated code in trans_rvh.c.inc

2021-02-23 Thread Richard Henderson
On 2/22/21 10:49 AM, Alex Richardson wrote:
> I recently merged CHERI QEMU up to 5.2, and we have to modify all
> functions that perform memory accesses. Factoring these almost-identical
> functions into two shared helpers makes our changes a lot smaller and
> should also make this code easier to maintain.
> ---
>  target/riscv/insn_trans/trans_rvh.c.inc | 199 
>  1 file changed, 29 insertions(+), 170 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: softmmu vs sysemu [Was: Re: [RFC v1 06/38] target/arm: split off cpu-softmmu.c]

2021-02-23 Thread Richard Henderson
On 2/23/21 3:43 PM, Philippe Mathieu-Daudé wrote:
> On 2/23/21 7:51 PM, Richard Henderson wrote:
>> I just want the file naming done correctly, while you're renaming.  That is
>> something you are actively changing in this patch set, so we should get it 
>> right.
> 
> So what is the new directory structure?
> 
> - user/
> - sysemu/{tcg,kvm,}/
> 
> or
> 
> - tcg/a-user.c
> - tcg/b-sysemu.c
> - kvm/kvm.c

Personally I think this second one makes more sense, focused primarily on the
accelerator and secondarily on the kind of emulation.


r~



Re: softmmu vs sysemu [Was: Re: [RFC v1 06/38] target/arm: split off cpu-softmmu.c]

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/23/21 7:51 PM, Richard Henderson wrote:
> On 2/23/21 10:18 AM, Claudio Fontana wrote:
>> I am all for "just getting it done", but here the i386 and the arm series 
>> become a mixture of things that I am not comfortable with,
>> I'd prefer a dedicated series..
> 
> You're thinking too deeply about the problem that I'm reporting to you about
> this patch set.
> 
> I just want the file naming done correctly, while you're renaming.  That is
> something you are actively changing in this patch set, so we should get it 
> right.

So what is the new directory structure?

- user/
- sysemu/{tcg,kvm,}/

or

- tcg/a-user.c
- tcg/b-sysemu.c
- kvm/kvm.c

> You don't have to worry about ifdef CONFIG_SOFTMMU vs ifndef CONFIG_USER_ONLY
> within the code itself.



Re: [PATCH v4] net/macos: implement vmnet-based netdev

2021-02-23 Thread Roman Bolshakov
On Thu, Feb 18, 2021 at 02:49:47PM +0100, phillip.en...@gmail.com wrote:
> From: Phillip Tennen 
> 
> This patch implements a new netdev device, reachable via -netdev
> vmnet-macos, that’s backed by macOS’s vmnet framework.
> 
> The vmnet framework provides native bridging support, and its usage in
> this patch is intended as a replacement for attempts to use a tap device
> via the tuntaposx kernel extension. Notably, the tap/tuntaposx approach
> never would have worked in the first place, as QEMU interacts with the
> tap device via poll(), and macOS does not support polling device files.
> 
> vmnet requires either a special entitlement, granted via a provisioning
> profile, or root access. Otherwise attempts to create the virtual
> interface will fail with a “generic error” status code. QEMU may not
> currently be signed with an entitlement granted in a provisioning
> profile, as this would necessitate pre-signed binary build distribution,
> rather than source-code distribution. As such, using this netdev
> currently requires that qemu be run with root access. I’ve opened a
> feedback report with Apple to allow the use of the relevant entitlement
> with this use case:
> https://openradar.appspot.com/radar?id=5007417364447232
> 
> vmnet offers three operating modes, all of which are supported by this
> patch via the “mode=host|shared|bridge” option:
> 
> * "Host" mode: Allows the vmnet interface to communicate with other
> * vmnet
> interfaces that are in host mode and also with the native host.
> * "Shared" mode: Allows traffic originating from the vmnet interface to
> reach the Internet through a NAT. The vmnet interface can also
> communicate with the native host.
> * "Bridged" mode: Bridges the vmnet interface with a physical network
> interface.
> 
> Each of these modes also provide some extra configuration that’s
> supported by this patch:
> 
> * "Bridged" mode: The user may specify the physical interface to bridge
> with. Defaults to en0.
> * "Host" mode / "Shared" mode: The user may specify the DHCP range and
> subnet. Allocated by vmnet if not provided.
> 
> vmnet also offers some extra configuration options that are not
> supported by this patch:
> 
> * Enable isolation from other VMs using vmnet
> * Port forwarding rules
> * Enabling TCP segmentation offload
> * Only applicable in "shared" mode: specifying the NAT IPv6 prefix
> * Only available in "host" mode: specifying the IP address for the VM
> within an isolated network
> 
> Note that this patch requires macOS 10.15 as a minimum, as this is when
> bridging support was implemented in vmnet.framework.
> 
> Signed-off-by: Phillip Tennen 
> ---
>  configure |   2 +-
>  net/clients.h |   6 +
>  net/meson.build   |   1 +
>  net/net.c |   3 +
>  net/vmnet-macos.c | 447 ++
>  qapi/net.json | 120 -
>  qemu-options.hx   |   9 +
>  7 files changed, 585 insertions(+), 3 deletions(-)
>  create mode 100644 net/vmnet-macos.c
> 

Hi Phillip,

Thanks for working on this!

Note that the patch doesn't apply to current master and there's a lot of
warnings wrt trailing whitespaces:

git am v4-net-macos-implement-vmnet-based-netdev.patch
Applying: net/macos: implement vmnet-based netdev
.git/rebase-apply/patch:462: trailing whitespace.
 * If QEMU is started with -nographic, no Cocoa event loop will be
.git/rebase-apply/patch:465: trailing whitespace.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH,
.git/rebase-apply/patch:466: trailing whitespace.
 0),
.git/rebase-apply/patch:532: trailing whitespace.
# @host: the guest may communicate with the host
.git/rebase-apply/patch:535: trailing whitespace.
# @shared: the guest may reach the Internet through a NAT,
error: patch failed: configure:778
error: configure: patch does not apply
Patch failed at 0001 net/macos: implement vmnet-based netdev
hint: Use 'git am --show-current-patch' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

Also it would be helpful to provide a changelog under the commit message
delimiter ("---") for each new version of the patch to provide an
overview of what has been changed between the versions.

> diff --git a/configure b/configure
> index 4afd22bdf5..f449198db1 100755
> --- a/configure
> +++ b/configure
> @@ -778,7 +778,7 @@ Darwin)
>fi
>audio_drv_list="coreaudio try-sdl"
>audio_possible_drivers="coreaudio sdl"
> -  QEMU_LDFLAGS="-framework CoreFoundation -framework IOKit $QEMU_LDFLAGS"
> +  QEMU_LDFLAGS="-framework CoreFoundation -framework IOKit -framework vmnet 
> $QEMU_LDFLAGS"

I'm not sure this is the right approach. Instead, we need a new
configuration option for the feature + proper discovery. Something like
this should work:


[PATCH v2 2/2] hw/net/allwinner-sun8i-emac: traverse transmit queue using TX_CUR_DESC register value

2021-02-23 Thread Niek Linnenbank
Currently the emulated EMAC for sun8i always traverses the transmit queue
from the head when transferring packets. It searches for a list of consecutive
descriptors which are flagged as ready for processing and transmits their
payloads accordingly. The controller stops processing once it finds a
descriptor that is not marked ready.

While the above behaviour works in most situations, it is not the same as the
actual EMAC in hardware. Actual hardware uses the TX_CUR_DESC register value
to keep track of the last position in the transmit queue and continues
processing from that position when software triggers the start of DMA
processing. The currently emulated behaviour can lead to packet loss on
transmit when software fills the transmit queue with ready descriptors that
overlap the tail of the circular list.

This commit modifies the emulated EMAC for sun8i such that it processes
the transmit queue using the TX_CUR_DESC register in the same way as hardware.

Signed-off-by: Niek Linnenbank 
Reviewed-by: Philippe Mathieu-Daudé 
---
 hw/net/allwinner-sun8i-emac.c | 58 +++
 1 file changed, 32 insertions(+), 26 deletions(-)

diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index 042768922c..e586c147e5 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -339,35 +339,40 @@ static void 
allwinner_sun8i_emac_update_irq(AwSun8iEmacState *s)
 qemu_set_irq(s->irq, (s->int_sta & s->int_en) != 0);
 }
 
-static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
-   FrameDescriptor *desc,
-   size_t min_size)
+static bool allwinner_sun8i_emac_desc_owned(FrameDescriptor *desc,
+size_t min_buf_size)
 {
-uint32_t paddr = desc->next;
+return (desc->status & DESC_STATUS_CTL) && (min_buf_size == 0 ||
+   (desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_buf_size);
+}
 
-dma_memory_read(&s->dma_as, paddr, desc, sizeof(*desc));
+static void allwinner_sun8i_emac_get_desc(AwSun8iEmacState *s,
+  FrameDescriptor *desc,
+  uint32_t phys_addr)
+{
+dma_memory_read(&s->dma_as, phys_addr, desc, sizeof(*desc));
+}
 
-if ((desc->status & DESC_STATUS_CTL) &&
-(desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
-return paddr;
-} else {
-return 0;
-}
+static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
+   FrameDescriptor *desc)
+{
+const uint32_t nxt = desc->next;
+allwinner_sun8i_emac_get_desc(s, desc, nxt);
+return nxt;
 }
 
-static uint32_t allwinner_sun8i_emac_get_desc(AwSun8iEmacState *s,
-  FrameDescriptor *desc,
-  uint32_t start_addr,
-  size_t min_size)
+static uint32_t allwinner_sun8i_emac_find_desc(AwSun8iEmacState *s,
+   FrameDescriptor *desc,
+   uint32_t start_addr,
+   size_t min_size)
 {
 uint32_t desc_addr = start_addr;
 
 /* Note that the list is a cycle. Last entry points back to the head. */
 while (desc_addr != 0) {
-dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc));
+allwinner_sun8i_emac_get_desc(s, desc, desc_addr);
 
-if ((desc->status & DESC_STATUS_CTL) &&
-(desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
+if (allwinner_sun8i_emac_desc_owned(desc, min_size)) {
 return desc_addr;
 } else if (desc->next == start_addr) {
 break;
@@ -383,14 +388,14 @@ static uint32_t 
allwinner_sun8i_emac_rx_desc(AwSun8iEmacState *s,
  FrameDescriptor *desc,
  size_t min_size)
 {
-return allwinner_sun8i_emac_get_desc(s, desc, s->rx_desc_curr, min_size);
+return allwinner_sun8i_emac_find_desc(s, desc, s->rx_desc_curr, min_size);
 }
 
 static uint32_t allwinner_sun8i_emac_tx_desc(AwSun8iEmacState *s,
- FrameDescriptor *desc,
- size_t min_size)
+ FrameDescriptor *desc)
 {
-return allwinner_sun8i_emac_get_desc(s, desc, s->tx_desc_head, min_size);
+allwinner_sun8i_emac_get_desc(s, desc, s->tx_desc_curr);
+return s->tx_desc_curr;
 }
 
 static void allwinner_sun8i_emac_flush_desc(AwSun8iEmacState *s,
@@ -470,7 +475,8 @@ static ssize_t allwinner_sun8i_emac_receive(NetClientState 
*nc,
 bytes_left -= desc_bytes;
 
 /* Move to the next descriptor */
-s->rx_desc_curr = 

[PATCH v2 0/2] Allwinner H3 fixes for EMAC and acceptance tests

2021-02-23 Thread Niek Linnenbank
The following are maintenance patches for the Allwinner H3. The first patch
is a proposal to relocate the binary artifacts of the acceptance tests away
from the apt.armbian.com domain. In the past we had problems with artifacts 
being
removed, and now the recently added Armbian 20.08.1 image has been removed as 
well:

$ wget 
https://dl.armbian.com/orangepipc/archive/Armbian_20.08.1_Orangepipc_bionic_current_5.8.5.img.xz
Connecting to dl.armbian.com (dl.armbian.com)|2605:7900:20::5|:443... connected.
...
HTTP request sent, awaiting response... 404 Not Found
2021-02-11 22:34:45 ERROR 404: Not Found.

I've now added the artifacts to github as a short term solution:
  https://github.com/nieklinnenbank/QemuArtifacts

The second patch is a fix for the EMAC that is used by the Allwinner H3 / 
Orange Pi PC machine.

ChangeLog:

v2:
 - added Reviewed-By tags
 - changed URL for artifacts to github.com/nieklinnenbank/QemuArtifacts

Kind regards,

Niek

Niek Linnenbank (2):
  tests/acceptance: replace unstable apt.armbian.com URLs for
orangepi-pc, cubieboard
  hw/net/allwinner-sun8i-emac: traverse transmit queue using TX_CUR_DESC
register value

 hw/net/allwinner-sun8i-emac.c  | 58 ++
 tests/acceptance/boot_linux_console.py | 49 +-
 tests/acceptance/replay_kernel.py  |  6 +--
 3 files changed, 53 insertions(+), 60 deletions(-)

-- 
2.25.1




[PATCH v3 0/2] virtiofsd: Enable posix_acl by default

2021-02-23 Thread Vivek Goyal
Hi,

This is V3 of the patches. Changes since v2 are:

- I dropped the patch to give the user an option to enable/disable acls.
  Now acls are enabled by default if xattrs are enabled and the fuse
  client offers the FUSE_POSIX_ACL capability.

Miklos mentioned that ACLs might not have a lot of overhead as these
can be cached. So it might make sense to enable these by default.

If we run into performance issues, we can add another patch to
give an option to enable/disable acls, disabled by default.

Luis Henriques reported that fstest generic/099 fails with virtiofs.
A little debugging showed that we don't enable acl support. This
patch series should fix the issue.

Vivek Goyal (2):
  virtiofsd: Add umask to seccomp allow list
  virtiofsd: Enable posix_acls by default if xattrs are enabled

 tools/virtiofsd/passthrough_ll.c  | 29 +--
 tools/virtiofsd/passthrough_seccomp.c |  1 +
 2 files changed, 24 insertions(+), 6 deletions(-)

-- 
2.25.4




[PATCH v2 1/2] tests/acceptance: replace unstable apt.armbian.com URLs for orangepi-pc, cubieboard

2021-02-23 Thread Niek Linnenbank
Currently the automated acceptance tests for the Orange Pi PC and cubieboard
machines are disabled by default. The tests for both machines require artifacts
that are stored on the apt.armbian.com domain. Unfortunately, some of these 
artifacts
have been removed from apt.armbian.com and it is uncertain whether more will be 
removed.

This commit moves the artifacts previously stored on apt.armbian.com to github
and retrieves them using the path: '//'.

Signed-off-by: Niek Linnenbank 
Reviewed-by: Willian Rampazzo 
Reviewed-by: Cleber Rosa 
Tested-by: Cleber Rosa 
---
 tests/acceptance/boot_linux_console.py | 49 ++
 tests/acceptance/replay_kernel.py  |  6 ++--
 2 files changed, 21 insertions(+), 34 deletions(-)

diff --git a/tests/acceptance/boot_linux_console.py 
b/tests/acceptance/boot_linux_console.py
index eb01286799..0f8a824c61 100644
--- a/tests/acceptance/boot_linux_console.py
+++ b/tests/acceptance/boot_linux_console.py
@@ -507,15 +507,13 @@ def test_arm_exynos4210_initrd(self):
 self.wait_for_console_pattern('Boot successful.')
 # TODO user command, for now the uart is stuck
 
-@skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
-'Test artifacts fetched from unreliable apt.armbian.com')
 def test_arm_cubieboard_initrd(self):
 """
 :avocado: tags=arch:arm
 :avocado: tags=machine:cubieboard
 """
-deb_url = ('https://apt.armbian.com/pool/main/l/'
-   'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
+deb_url = ('https://github.com/nieklinnenbank/QemuArtifacts/raw/'
+   'master/cubieboard/linux-image-dev-sunxi_5.75_armhf.deb')
 deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
 deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
 kernel_path = self.extract_from_deb(deb_path,
@@ -549,15 +547,13 @@ def test_arm_cubieboard_initrd(self):
 'system-control@1c0')
 # cubieboard's reboot is not functioning; omit reboot test.
 
-@skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
-'Test artifacts fetched from unreliable apt.armbian.com')
 def test_arm_cubieboard_sata(self):
 """
 :avocado: tags=arch:arm
 :avocado: tags=machine:cubieboard
 """
-deb_url = ('https://apt.armbian.com/pool/main/l/'
-   'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
+deb_url = ('https://github.com/nieklinnenbank/QemuArtifacts/raw/'
+   'master/cubieboard/linux-image-dev-sunxi_5.75_armhf.deb')
 deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
 deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
 kernel_path = self.extract_from_deb(deb_path,
@@ -678,15 +674,13 @@ def test_arm_quanta_gsj_initrd(self):
 self.wait_for_console_pattern(
 'Give root password for system maintenance')
 
-@skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
-'Test artifacts fetched from unreliable apt.armbian.com')
 def test_arm_orangepi(self):
 """
 :avocado: tags=arch:arm
 :avocado: tags=machine:orangepi-pc
 """
-deb_url = ('https://apt.armbian.com/pool/main/l/'
-   'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
+deb_url = ('https://github.com/nieklinnenbank/QemuArtifacts/raw/'
+   'master/orangepi-pc/linux-image-dev-sunxi_5.75_armhf.deb')
 deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
 deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
 kernel_path = self.extract_from_deb(deb_path,
@@ -705,15 +699,13 @@ def test_arm_orangepi(self):
 console_pattern = 'Kernel command line: %s' % kernel_command_line
 self.wait_for_console_pattern(console_pattern)
 
-@skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
-'Test artifacts fetched from unreliable apt.armbian.com')
 def test_arm_orangepi_initrd(self):
 """
 :avocado: tags=arch:arm
 :avocado: tags=machine:orangepi-pc
 """
-deb_url = ('https://apt.armbian.com/pool/main/l/'
-   'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
+deb_url = ('https://github.com/nieklinnenbank/QemuArtifacts/raw/'
+   'master/orangepi-pc/linux-image-dev-sunxi_5.75_armhf.deb')
 deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
 deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
 kernel_path = self.extract_from_deb(deb_path,
@@ -749,24 +741,23 @@ def test_arm_orangepi_initrd(self):
 # Wait for VM to shut down gracefully
 self.vm.wait()
 
-@skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
-'Test artifacts fetched from unreliable apt.armbian.com')
 def test_arm_orangepi_sd(self):
  

[PATCH v3 2/2] virtiofsd: Enable posix_acls by default if xattrs are enabled

2021-02-23 Thread Vivek Goyal
Fuse protocol wants the file server to opt in for FUSE_POSIX_ACL if posix
acls are to be enabled. Right now virtiofsd does not enable it. This
patch opts in for FUSE_POSIX_ACL if xattr support is enabled.

When the parent directory has a default acl and a file is created in that
directory, the umask is ignored and the final file permissions are
determined using the default acl instead (man 2 umask).

Currently, fuse applies the umask and sends the modified mode in the
create request accordingly. The fuse server can set FUSE_DONT_MASK to
tell the fuse client not to apply the umask; the fuse server will then
take care of it as needed.

With posix acls enabled, the requirement is that the umask should
determine the final file mode if the parent directory does not have a
default acl.

So if posix acls are enabled, opt in for FUSE_DONT_MASK. virtiofsd
will set the umask of the thread doing file creation, and the host kernel
should use that umask if the parent directory does not have default
acls; otherwise the umask does not take effect.

Miklos mentioned that we already call unshare(CLONE_FS) for
every thread. That means the umask has become a per-thread property and
it should be ok to manipulate it in the file creation path.

So this patch also opts in for FUSE_DONT_MASK if posix acls are enabled,
changes the umask to the caller's umask before file creation and restores
the original umask once file creation is done.

This should fix fstest generic/099.

Reported-by: Luis Henriques 
Signed-off-by: Vivek Goyal 
---
 tools/virtiofsd/passthrough_ll.c | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 58d24c0010..0d62c9a889 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -120,6 +120,7 @@ struct lo_inode {
 struct lo_cred {
 uid_t euid;
 gid_t egid;
+mode_t umask;
 };
 
 enum {
@@ -169,6 +170,8 @@ struct lo_data {
 /* An O_PATH file descriptor to /proc/self/fd/ */
 int proc_self_fd;
 int user_killpriv_v2, killpriv_v2;
+/* If set, virtiofsd is responsible for setting umask during creation */
+bool change_umask;
 };
 
 static const struct fuse_opt lo_opts[] = {
@@ -661,6 +664,13 @@ static void lo_init(void *userdata, struct fuse_conn_info 
*conn)
 conn->want &= ~FUSE_CAP_HANDLE_KILLPRIV_V2;
 lo->killpriv_v2 = 0;
 }
+
+/* If xattrs are enabled, enable ACL support by default */
+if (lo->xattr && conn->capable & FUSE_CAP_POSIX_ACL) {
+fuse_log(FUSE_LOG_DEBUG, "lo_init: enabling posix_acls\n");
+conn->want |= FUSE_CAP_POSIX_ACL | FUSE_CAP_DONT_MASK;
+lo->change_umask = true;
+}
 }
 
 static void lo_getattr(fuse_req_t req, fuse_ino_t ino,
@@ -1075,7 +1085,8 @@ static void lo_lookup(fuse_req_t req, fuse_ino_t parent, 
const char *name)
  * ownership of caller.
  * TODO: What about selinux context?
  */
-static int lo_change_cred(fuse_req_t req, struct lo_cred *old)
+static int lo_change_cred(fuse_req_t req, struct lo_cred *old,
+  bool change_umask)
 {
 int res;
 
@@ -1095,11 +1106,14 @@ static int lo_change_cred(fuse_req_t req, struct 
lo_cred *old)
 return errno_save;
 }
 
+if (change_umask) {
+old->umask = umask(req->ctx.umask);
+}
 return 0;
 }
 
 /* Regain Privileges */
-static void lo_restore_cred(struct lo_cred *old)
+static void lo_restore_cred(struct lo_cred *old, bool restore_umask)
 {
 int res;
 
@@ -1114,6 +1128,9 @@ static void lo_restore_cred(struct lo_cred *old)
 fuse_log(FUSE_LOG_ERR, "setegid(%u): %m\n", old->egid);
 exit(1);
 }
+
+if (restore_umask)
+umask(old->umask);
 }
 
 static void lo_mknod_symlink(fuse_req_t req, fuse_ino_t parent,
@@ -1138,7 +1155,7 @@ static void lo_mknod_symlink(fuse_req_t req, fuse_ino_t 
parent,
 return;
 }
 
-saverr = lo_change_cred(req, &old);
+saverr = lo_change_cred(req, &old, lo->change_umask && !S_ISLNK(mode));
 if (saverr) {
 goto out;
 }
@@ -1147,7 +1164,7 @@ static void lo_mknod_symlink(fuse_req_t req, fuse_ino_t 
parent,
 
 saverr = errno;
 
-lo_restore_cred(&old);
+lo_restore_cred(&old, lo->change_umask && !S_ISLNK(mode));
 
 if (res == -1) {
 goto out;
@@ -1828,7 +1845,7 @@ static void lo_create(fuse_req_t req, fuse_ino_t parent, 
const char *name,
 return;
 }
 
-err = lo_change_cred(req, &old);
+err = lo_change_cred(req, &old, lo->change_umask);
 if (err) {
 goto out;
 }
@@ -1839,7 +1856,7 @@ static void lo_create(fuse_req_t req, fuse_ino_t parent, 
const char *name,
 fd = openat(parent_inode->fd, name, fi->flags | O_CREAT | O_EXCL, mode);
 err = fd == -1 ? errno : 0;
 
-lo_restore_cred(&old);
+lo_restore_cred(&old, lo->change_umask);
 
 /* Ignore the error if file exists and O_EXCL was not given */
 if (err && (err != EEXIST || (fi->flags & O_EXCL))) {
-- 
2.25.4




[PATCH v3 1/2] virtiofsd: Add umask to seccomp allow list

2021-02-23 Thread Vivek Goyal
Patches in this series are going to make use of the "umask" syscall,
so allow it.

Signed-off-by: Vivek Goyal 
---
 tools/virtiofsd/passthrough_seccomp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/virtiofsd/passthrough_seccomp.c 
b/tools/virtiofsd/passthrough_seccomp.c
index 62441cfcdb..f49ed94b5e 100644
--- a/tools/virtiofsd/passthrough_seccomp.c
+++ b/tools/virtiofsd/passthrough_seccomp.c
@@ -114,6 +114,7 @@ static const int syscall_allowlist[] = {
 SCMP_SYS(utimensat),
 SCMP_SYS(write),
 SCMP_SYS(writev),
+SCMP_SYS(umask),
 };
 
 /* Syscalls used when --syslog is enabled */
-- 
2.25.4




Re: [PULL 38/46] cpu: move cc->transaction_failed to tcg_ops

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/5/21 11:56 PM, Richard Henderson wrote:
> From: Claudio Fontana 
> 
> Signed-off-by: Claudio Fontana 
> Reviewed-by: Alex Bennée 
> Reviewed-by: Philippe Mathieu-Daudé 
> Reviewed-by: Richard Henderson 
> 
> [claudio: wrap target code around CONFIG_TCG and !CONFIG_USER_ONLY]
> 
> avoiding its use in headers used by common_ss code (should be poisoned).
> 
> Note: need to be careful with the use of CONFIG_USER_ONLY,
> Message-Id: <20210204163931.7358-11-cfont...@suse.de>
> Signed-off-by: Richard Henderson 
> ---
>  include/hw/core/cpu.h | 28 +---
>  hw/mips/jazz.c|  9 +++--
>  target/alpha/cpu.c|  2 +-
>  target/arm/cpu.c  |  4 ++--
>  target/m68k/cpu.c |  2 +-
>  target/microblaze/cpu.c   |  2 +-
>  target/mips/cpu.c |  4 +++-
>  target/riscv/cpu.c|  2 +-
>  target/riscv/cpu_helper.c |  2 +-
>  target/sparc/cpu.c|  2 +-
>  target/xtensa/cpu.c   |  2 +-
>  target/xtensa/helper.c|  4 ++--
>  12 files changed, 34 insertions(+), 29 deletions(-)
> 
> diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> index 60cf20bf05..41ce1daefc 100644
> --- a/include/hw/core/cpu.h
> +++ b/include/hw/core/cpu.h
> @@ -122,6 +122,14 @@ typedef struct TcgCpuOperations {
>  /** @debug_excp_handler: Callback for handling debug exceptions */
>  void (*debug_excp_handler)(CPUState *cpu);
>  
> +/**
> + * @do_transaction_failed: Callback for handling failed memory 
> transactions
> + * (ie bus faults or external aborts; not MMU faults)
> + */
> +void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
> +  unsigned size, MMUAccessType access_type,
> +  int mmu_idx, MemTxAttrs attrs,
> +  MemTxResult response, uintptr_t retaddr);
>  } TcgCpuOperations;
>  
>  /**
> @@ -133,8 +141,6 @@ typedef struct TcgCpuOperations {
>   * @has_work: Callback for checking if there is work to do.
>   * @do_unaligned_access: Callback for unaligned access handling, if
>   * the target defines #TARGET_ALIGNED_ONLY.
> - * @do_transaction_failed: Callback for handling failed memory transactions
> - * (ie bus faults or external aborts; not MMU faults)
>   * @virtio_is_big_endian: Callback to return %true if a CPU which supports
>   * runtime configurable endianness is currently big-endian. Non-configurable
>   * CPUs can use the default implementation of this method. This method should
> @@ -203,10 +209,6 @@ struct CPUClass {
>  void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
>  MMUAccessType access_type,
>  int mmu_idx, uintptr_t retaddr);
> -void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
> -  unsigned size, MMUAccessType access_type,
> -  int mmu_idx, MemTxAttrs attrs,
> -  MemTxResult response, uintptr_t retaddr);
>  bool (*virtio_is_big_endian)(CPUState *cpu);
>  int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
> uint8_t *buf, int len, bool is_write);
> @@ -879,9 +881,6 @@ CPUState *cpu_by_arch_id(int64_t id);
>  
>  void cpu_interrupt(CPUState *cpu, int mask);
>  
> -#ifdef NEED_CPU_H
> -
> -#ifdef CONFIG_SOFTMMU
>  static inline void cpu_unaligned_access(CPUState *cpu, vaddr addr,
>  MMUAccessType access_type,
>  int mmu_idx, uintptr_t retaddr)
> @@ -900,14 +899,13 @@ static inline void cpu_transaction_failed(CPUState 
> *cpu, hwaddr physaddr,
>  {
>  CPUClass *cc = CPU_GET_CLASS(cpu);
>  
> -if (!cpu->ignore_memory_transaction_failures && 
> cc->do_transaction_failed) {
> -cc->do_transaction_failed(cpu, physaddr, addr, size, access_type,
> -  mmu_idx, attrs, response, retaddr);
> +if (!cpu->ignore_memory_transaction_failures &&
> +cc->tcg_ops.do_transaction_failed) {
> +cc->tcg_ops.do_transaction_failed(cpu, physaddr, addr, size,
> +  access_type, mmu_idx, attrs,
> +  response, retaddr);
>  }
>  }
> -#endif
> -
> -#endif /* NEED_CPU_H */
>  
>  /**
>   * cpu_set_pc:
> diff --git a/hw/mips/jazz.c b/hw/mips/jazz.c
> index f9442731dd..46c71a0ac8 100644
> --- a/hw/mips/jazz.c
> +++ b/hw/mips/jazz.c
> @@ -116,6 +116,8 @@ static const MemoryRegionOps dma_dummy_ops = {
>  #define MAGNUM_BIOS_SIZE_MAX 0x7e000
>  #define MAGNUM_BIOS_SIZE 
>   \
>  (BIOS_SIZE < MAGNUM_BIOS_SIZE_MAX ? BIOS_SIZE : MAGNUM_BIOS_SIZE_MAX)
> +
> +#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
>  static void (*real_do_transaction_failed)(CPUState *cpu, hwaddr physaddr,
>  

Re: [PATCH v5 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-02-23 Thread Wainer dos Santos Moschetta

Hi,

On 2/23/21 2:45 PM, Daniel P. Berrangé wrote:

On Tue, Feb 23, 2021 at 11:47:18AM -0500, Cleber Rosa wrote:

On Tue, Feb 23, 2021 at 05:37:04PM +0100, Philippe Mathieu-Daudé wrote:

On 2/23/21 12:25 PM, Thomas Huth wrote:

On 19/02/2021 22.58, Cleber Rosa wrote:

As described in the included documentation, the "custom runner" jobs
extend the GitLab CI jobs already in place.  One of their primary
goals is catching and preventing regressions on a wider range of host
systems than the ones provided by GitLab's shared runners.

This sets the stage in which other community members can add their own
machine configuration documentation/scripts, and accompanying job
definitions.  As a general rule, those newly added contributed jobs
should run as "non-gating", until their reliability is verified (AKA
"allow_failure: true").

Signed-off-by: Cleber Rosa 
---
   .gitlab-ci.d/custom-runners.yml | 14 ++
   .gitlab-ci.yml  |  1 +
   docs/devel/ci.rst   | 28 
   docs/devel/index.rst    |  1 +
   4 files changed, 44 insertions(+)
   create mode 100644 .gitlab-ci.d/custom-runners.yml
   create mode 100644 docs/devel/ci.rst

diff --git a/.gitlab-ci.d/custom-runners.yml
b/.gitlab-ci.d/custom-runners.yml
new file mode 100644
index 00..3004da2bda
--- /dev/null
+++ b/.gitlab-ci.d/custom-runners.yml
@@ -0,0 +1,14 @@
+# The CI jobs defined here require GitLab runners installed and
+# registered on machines that match their operating system names,
+# versions and architectures.  This is in contrast to the other CI
+# jobs that are intended to run on GitLab's "shared" runners.
+
+# Different than the default approach on "shared" runners, based on
+# containers, the custom runners have no such *requirement*, as those
+# jobs should be capable of running on operating systems with no
+# compatible container implementation, or no support from
+# gitlab-runner.  To avoid problems that gitlab-runner can cause while
+# reusing the GIT repository, let's enable the recursive submodule
+# strategy.
+variables:
+  GIT_SUBMODULE_STRATEGY: recursive

Is it really necessary? I thought our configure script would take care
of the submodules?

I've done a lot of testing on bare metal systems, and the problems
that come from reusing the same system and failed cleanups can be very
frustrating.  It's unfortunate that we need this, but it was the
simplest and most reliable solution I found.  :/

Hmmm, this makes it sound like the job is not being run in a
fresh pristine checkout. IMHO we need to guarantee that in
general, at which point submodules should "just work", unless
the runner is blocking network access?


Setting the git strategy may work out:

https://docs.gitlab.com/ee/ci/runners/README.html#git-strategy
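For instance, forcing a fresh clone for these jobs could be a starting
point (just a sketch of the idea; GIT_STRATEGY accepts fetch, clone or
none):

variables:
  GIT_STRATEGY: clone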

- Wainer





Regards,
Daniel





Re: [PATCH v2 00/42] esp: consolidate PDMA transfer buffers and other fixes

2021-02-23 Thread Philippe Mathieu-Daudé
Hi Mark,

On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> This patch series comes from an experimental branch that I've been working on
> to try and boot a MacOS toolbox ROM under the QEMU q800 machine. The effort is
> far from complete, but it seems worth submitting these patches separately 
> since
> they are limited to the ESP device and form a substantial part of the work to
> date.
> 
> As part of Laurent's recent q800 work so-called PDMA (pseudo-DMA) support was
> added to the ESP device. This is whereby the DREQ (DMA request) line is used
> to signal to the host CPU that it can transfer data to/from the device over
> the SCSI bus.
> 
> The existing PDMA tracks 4 separate transfer data sources as indicated by the
> ESP pdma_origin variable: PDMA, TI, CMD and ASYNC with an independent variable
> pdma_len to store the transfer length. This works well with Linux which uses a
> single PDMA request to transfer a number of sectors in a single request.
> 
> Unfortunately the MacOS toolbox ROM has other ideas here: it sends data to the
> ESP as a mixture of FIFO and PDMA transfers and then uses a mixture of the 
> FIFO
> and DMA counters to confirm that the correct number of bytes have been
> transferred. For this to work correctly the PDMA buffers and separate pdma_len
> transfer counter must be consolidated into the FIFO to allow mixing of both
> types of transfer within a single request.
> 
> The patchset is split into several sections:
> 
> - Patches 1-7 are minor patches which make esp.c checkpatch friendly, QOMify 
> ESPState,
>   and also fix up some trace events ready for later patches in the series
> 
> - Patches 8-13 unify the DMA transfer count. In particular there are 2 
> synthetic
>   variables dma_counter and dma_left within ESPState which do not need to 
> exist. 
>   DMA transfer lengths are programmed into the TC (transfer count) register 
> which is 
>   decremented for each byte transferred, generating an interrupt when it 
> reaches zero.
>   These patches add helper functions to read the TC and STC registers 
> directly and
>   remove these synthetic variables so that the DMA transfer length is now 
> tracked in
>   a single place.
> 
> - Now that the TC register represents the authoritative DMA transfer length, 
> patches
>   14-25 work to eliminate the separate PDMA variables pdma_start, pdma_cur, 
> pdma_len
>   and separate PDMA buffers PDMA and CMD. The PDMA position variables can be 
> replaced
>   by the existing ESP cmdlen and ti_wptr/ti_rptr, whilst the FIFO (TI) buffer 
> is used
>   for incoming data with commands being accumulated in cmdbuf as per standard 
> DMA
>   requests.

I tried to help reviewing up to this point.

The next parts are too specific for me.

> - Patches 26 and 27 fix the detection of missing SCSI targets by the MacOS 
> toolbox ROM
>   on startup at which point it will attempt to start reading information from 
> a CDROM
>   attached to the q800 machine.
> 
> - Patch 28 is the main rework of the PDMA buffer transfers: instead of 
> tracking the
>   SCSI transfers using a separate ASYNC pdma_origin, the contents of the 
> ESPState
>   async_buf are copied to the FIFO buffer in 16-byte chunks with the transfer 
> status
>   and IRQs being set accordingly.
> 
> - Patch 29 removes the last separate PDMA variable pdma_origin, including the 
> separate
>   PDMA migration subsection which is no longer required (see note below about 
> migration
>   compatibility).
>   
> - Patch 30 enables 4 byte PDMA reads/writes over the SCSI bus which are used 
> by MacOS
>   when reading the next stage bootloader from CDROM (this is an increase from
>   2 bytes currently implemented and used by Linux).
> 
> - Patches 31-34 fix an issue whereby the MacOS toolbox ROM tries to read 
> incoming data
>   from the target within a few instructions of receiving the command complete 
> interrupt.
>   Since IO is asynchronous in QEMU, it is necessary to delay the command 
> complete
>   interrupt for incoming data to avoid underflow.
> 
> - Patches 35-37 fix a problem with the SATN and stop command not changing the 
> SCSI bus
>   to message out phase. This actually first manifested itself after the Fifo8 
> conversion
>   with guests that mix DMA/non-DMA commands but it is moved forward to aid 
> bisection.
> 
> - Patches 38-39 convert ti_buf and cmdbuf from simple arrays to QEMU's Fifo8 
> type which
>   helped locate a handful of bugs around handling the buffer pointers which 
> are
>   incorpated within earlier patches within the series.
>   
> - Finally patches 40-42 add support for the FIFO count registers, non-DMA 
> transfers and
>   unaligned accesses which are required for the MacOS toolbox ROM to 
> successful read
>   files from disk.



Re: [PATCH v2 23/42] esp: use ti_wptr/ti_rptr to manage the current FIFO position for PDMA

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> This eliminates the last user of the PDMA-specific pdma_cur variable which can
> now be removed.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 23 ---
>  include/hw/scsi/esp.h |  1 -
>  2 files changed, 8 insertions(+), 16 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 18/42] esp: accumulate SCSI commands for PDMA transfers in cmdbuf instead of pdma_buf

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> ESP SCSI commands are already accumulated in cmdbuf and so there is no need to
> keep a separate pdma_buf buffer. Accumulate SCSI commands for PDMA transfers 
> in
> cmdbuf instead of pdma_buf so update cmdlen accordingly and change pdma_origin
> for PDMA transfers to CMD which allows the PDMA origin to be removed.
> 
> This commit also removes a stray memcpy() from get_cmd() which is a no-op 
> because
> cmdlen is always zero at the start of a command.
> 
> Notionally the removal of pdma_buf from vmstate_esp_pdma also breaks migration
> compatibility for the PDMA subsection until its complete removal by the end of
> the series.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 56 +++
>  include/hw/scsi/esp.h |  2 --
>  2 files changed, 25 insertions(+), 33 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 17/42] esp: move pdma_len and TC logic into esp_pdma_read()/esp_pdma_write()

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 50 --
>  1 file changed, 32 insertions(+), 18 deletions(-)
> 
> diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
> index cfeba2feb0..7134c0aff4 100644
> --- a/hw/scsi/esp.c
> +++ b/hw/scsi/esp.c
> @@ -153,22 +153,45 @@ static uint8_t *get_pdma_buf(ESPState *s)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 13/42] esp: remove dma_left from ESPState

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> The ESP device already keeps track of the remaining bytes left to transfer via
> its TC (transfer counter) register which is decremented for each byte that
> is transferred across the SCSI bus.
> 
> Switch the transfer logic to use the value of TC instead of dma_left and then
> remove dma_left completely, adding logic to the vmstate_esp post_load() 
> function
> to transfer the old dma_left value to the TC register during migration from
> older versions.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 47 ---
>  include/hw/scsi/esp.h |  5 +++--
>  2 files changed, 34 insertions(+), 18 deletions(-)

I dare to add:
Reviewed-by: Philippe Mathieu-Daudé 



[PATCH v2] tests/docker: Use --arch-only when building Debian cross image

2021-02-23 Thread Philippe Mathieu-Daudé
When building a Docker image based on debian10.docker on
a non-x86 host, we get:

 [2/4] RUN apt update && DEBIAN_FRONTEND=noninteractive eatmydata apt 
build-dep -yy qemu
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 Some packages could not be installed. This may mean that you have
 requested an impossible situation or if you are using the unstable
 distribution that some required packages have not yet been created
 or been moved out of Incoming.
 The following information may help to resolve the situation:

 The following packages have unmet dependencies:
  builddeps:qemu : Depends: gcc-s390x-linux-gnu but it is not installable
   Depends: gcc-alpha-linux-gnu but it is not installable
 E: Unable to correct problems, you have held broken packages.

Fix by using the --arch-only option suggested here:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1866032/comments/1

Suggested-by: Christian Ehrhardt 
Reviewed-by: Alex Bennée 
Signed-off-by: Philippe Mathieu-Daudé 
---
 tests/docker/dockerfiles/debian10.docker | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/docker/dockerfiles/debian10.docker 
b/tests/docker/dockerfiles/debian10.docker
index 9d42b5a4b81..d034acbd256 100644
--- a/tests/docker/dockerfiles/debian10.docker
+++ b/tests/docker/dockerfiles/debian10.docker
@@ -32,6 +32,6 @@ RUN apt update && \
 psmisc \
 python3 \
 python3-sphinx \
-$(apt-get -s build-dep qemu | egrep ^Inst | fgrep '[all]' | cut -d\  -f2)
+$(apt-get -s build-dep --arch-only qemu | egrep ^Inst | fgrep '[all]' | cut -d\  -f2)
 
 ENV FEATURES docs
-- 
2.26.2




Re: Plugin Address Translations Inconsistent/Incorrect?

2021-02-23 Thread Aaron Lindsay via
On Feb 22 15:48, Aaron Lindsay wrote:
> On Feb 22 19:30, Alex Bennée wrote:
> > Aaron Lindsay  writes:
> > That said I think we could add an additional helper to translate a
> > hwaddr to a global address space address. I'm open to suggestions of the
> > best way to structure this.
> 
> Haven't put a ton of thought into it, but what about something like this
> (untested):
> 
> uint64_t qemu_plugin_hwaddr_phys_addr(const struct qemu_plugin_hwaddr *haddr)
> {
> #ifdef CONFIG_SOFTMMU
> if (haddr) {
> if (!haddr->is_io) {
> RAMBlock *block;
> ram_addr_t offset;
> 
> block = qemu_ram_block_from_host((void *) haddr->v.ram.hostaddr, 
> false, &offset);
> if (!block) {
> error_report("Bad ram pointer %"PRIx64"", 
> haddr->v.ram.hostaddr);
> abort();
> }
> 
> return block->offset + offset + block->mr->addr;
> } else {
> MemoryRegionSection *mrs = haddr->v.io.section;
> return haddr->v.io.offset + mrs->mr->addr;
> }
> }
> #endif
> return 0;
> }

This appears to successfully return correct physical addresses for RAM
at least, though I've not tested it thoroughly for MMIO yet.

If it ends up being desirable based on the discussion elsewhere on this
thread I am willing to perform more complete testing, turn this into a
patch, and submit it.
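For reference, a rough sketch of how a plugin callback might consume the
proposed helper (callback signature as in qemu-plugin.h; plugin setup and
callback registration omitted):

static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                     uint64_t vaddr, void *udata)
{
    /* resolve this access to a RAM/MMIO address in the global space */
    struct qemu_plugin_hwaddr *hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
    if (hwaddr) {
        uint64_t paddr = qemu_plugin_hwaddr_phys_addr(hwaddr);
        g_autofree char *out = g_strdup_printf("0x%" PRIx64 " -> 0x%" PRIx64 "\n",
                                               vaddr, paddr);
        qemu_plugin_outs(out);
    }
}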

-Aaron



[Bug 1916655] [NEW] Compilation fails due to zstd qcow2 compression

2021-02-23 Thread Dantali0n
Public bug reported:

Compilation of QEMU fails when using recent versions of zstd.

I use the following commands to compile QEMU:
$ mkdir build
$ cd build
$ ../configure --enable-debug --target-list=x86_64-softmmu
$ make -j $(nproc)

Here is a paste from the ../configure output:
https://paste.ubuntu.com/p/dHsWzGV7TH/

And one from the make output:
https://paste.ubuntu.com/p/89qKk4NrFz/

In short the error boils down to:
../block/qcow2-threads.c: In function ‘qcow2_zstd_compress’:
../block/qcow2-threads.c:225:16: error: implicit declaration of function 
‘ZSTD_compressStream2’; did you mean ‘ZSTD_compressStream’? 
[-Werror=implicit-function-declaration]
  225 | zstd_ret = ZSTD_compressStream2(cctx, &output, &input, ZSTD_e_end);
  |^~~~
  |ZSTD_compressStream
../block/qcow2-threads.c:225:16: error: nested extern declaration of 
‘ZSTD_compressStream2’ [-Werror=nested-externs]
../block/qcow2-threads.c:225:60: error: ‘ZSTD_e_end’ undeclared (first use in 
this function)
  225 | zstd_ret = ZSTD_compressStream2(cctx, &output, &input, ZSTD_e_end);
  |

System info:
QEMU commit: 7ef8134565dccf9186d5eabd7dbb4ecae6dead87 (from Github)
Kernel: 5.10.15
zstd: 1.4.8

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1916655




RE: [PATCH] target/hexagon/opcodes: Add missing varargs cleanup

2021-02-23 Thread Taylor Simpson


> -Original Message-
> From: Philippe Mathieu-Daudé  On
> Behalf Of Philippe Mathieu-Daudé
> Sent: Tuesday, February 23, 2021 5:13 AM
> To: qemu-devel@nongnu.org
> Cc: Richard Henderson ; Taylor Simpson
> ; Philippe Mathieu-Daudé 
> Subject: [PATCH] target/hexagon/opcodes: Add missing varargs cleanup
>
> Fix a trivial incorrect usage of variable argument macros detected
> by Coverity (missing_va_end: va_end was not called for ap).
>
> Fixes: Coverity CID 1446720 (VARARGS)
> Fixes: e3c00c2ed75 ("Hexagon (target/hexagon) opcode data structures")
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/hexagon/opcodes.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/target/hexagon/opcodes.c b/target/hexagon/opcodes.c
> index 4eef5fc40f6..35d790cdd5b 100644
> --- a/target/hexagon/opcodes.c
> +++ b/target/hexagon/opcodes.c
> @@ -82,6 +82,7 @@ static void init_attribs(int tag, ...)
>  while ((attr = va_arg(ap, int)) != 0) {
>  set_bit(attr, opcode_attribs[tag]);
>  }
> +va_end(ap);
>  }
>

Reviewed-by: Taylor Simpson 
Tested-by: Taylor Simpson 



Re: [PATCH v5 1/5] slirp: Advance libslirp submodule to add ipv6 host-forward support

2021-02-23 Thread Doug Evans
On Fri, Feb 19, 2021 at 4:20 PM Philippe Mathieu-Daudé 
wrote:

> Hi Doug,
>
> On 2/20/21 1:13 AM, Doug Evans via wrote:
>
> When updating submodules, the commit description is a good
> good place to include the output of:
>
>   $ git shortlog 8f43a99..26ae658
>
> See for example QEMU commit f350d78f102 ("Update SLOF").
>
> Anyhow up to the maintainer merging your patch.
>


Thanks, that helps a little.
The issue here is that qemu is using branch 4.2 of libslirp, whereas the
patch is currently just on libslirp's master branch.
Part of the problem is the commit description, which you've helped with.
But what about the functional part of the patch?
I can only get git format-patch to include a commit id, and the only commit
id available is on the libslirp master branch.

Is there an additional step I need to do like submit libslirp changes in
three parts?
Step 1: Submit patch to libslirp's repo (not qemu's copy, but the
libslirp master repo)
Step 2: Submit a patch to libslirp's repo to add the patch to its 4.2 branch
Step 2b: Wait for qemu's copy of libslirp's 4.2 branch to appear in
qemu's libslirp repo
Step 3: Submit patch to advance qemu's libslirp submodule

I've done steps 1 and 3; it's just effecting the equivalent of step 2 that I'm
fuzzy on.


> > Signed-off-by: Doug Evans 
> > ---
> >
> > Changes from v4:
> > NOTE TO REVIEWERS: I need some hand-holding to know what The Right
> > way to submit this particular patch is.
> >
> > - no change
> >
> > Changes from v3:
> > - pick up latest libslirp patch to reject ipv6 addr-any for guest address
> >   - libslirp currently only provides a stateless DHCPv6 server, which
> means
> > it can't know in advance what the guest's IP address is, and thus
> > cannot do the "addr-any -> guest ip address" translation that is done
> > for ipv4
> >
> > Changes from v2:
> > - this patch is new in v3, split out from v2
> >
> >  slirp | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/slirp b/slirp
> > index 8f43a99191..26ae658a83 16
> > --- a/slirp
> > +++ b/slirp
> > @@ -1 +1 @@
> > -Subproject commit 8f43a99191afb47ca3f3c6972f6306209f367ece
> > +Subproject commit 26ae658a83eeca16780cf5615c8247cbb151c3fa
> >
>
>


Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 06:23:25PM +, Alex Bennée wrote:
> 
> Cleber Rosa  writes:
> 
> > On Tue, Feb 23, 2021 at 03:01:50PM +, Alex Bennée wrote:
> >> 
> >> Alex Bennée  writes:
> >> 
> >> > Cleber Rosa  writes:
> >> >
> >> >> To run basic jobs on custom runners, the environment needs to be
> >> >> properly set up.  The most common requirement is having the right
> >> >> packages installed.
> >> >>
> >> 
> >> >
> >> > So I got somewhat there with a direct command line invocation:
> >> >
> >> >   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 
> >> > scripts/ci/setup/build-environment.yml -e 
> >> > 'ansible_python_interpreter=/usr/bin/python3'
> >> >
> >> > although for some reason a single host -i fails...
> >> >
> >> >> diff --git a/scripts/ci/setup/build-environment.yml 
> >> >> b/scripts/ci/setup/build-environment.yml
> >> >> new file mode 100644
> >> >> index 00..0197e0a48b
> >> >> --- /dev/null
> >> >> +++ b/scripts/ci/setup/build-environment.yml
> >> >> @@ -0,0 +1,76 @@
> >> >> +---
> >> >> +- name: Installation of basic packages to build QEMU
> >> >> +  hosts: all
> >> >> +  tasks:
> >> >> +- name: Update apt cache
> >> >> +  apt:
> >> >> +update_cache: yes
> >> >> +  when:
> >> >> +- ansible_facts['distribution'] == 'Ubuntu'
> >> >
> >> > So are we limiting to Ubuntu here rather than say a Debian base?
> >> 
> >> Also I'm getting:
> >> 
> >>   TASK [Update apt cache] 
> >> *
> >>   fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 
> >> 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error 
> >> while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 
> >> 'dict object' has no attribute 'distribution'\n\nThe error appears to have 
> >> been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': 
> >> line 5, column 7, but may\nbe elsewhere in the file depending on the exact 
> >> syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n- 
> >> name: Update apt cache\n  ^ here\n"}
> >> 
> >> which is odd given that machine is definitely an Ubuntu one.
> >>
> >
> > It's defintely odd.  This is what I get on a fresh machine:
> >
> >TASK [Update apt cache] 
> > *
> >[WARNING]: Updating cache and auto-installing missing dependency: 
> > python3-apt
> >ok: [localhost]
> >
> > Could you please let me know the output of:
> >
> >$ ansible -m setup -u $YOUR_USERNAME -i $HOSTNAME, all | grep
> >ansible_distribution
> 
> The key doesn't exist:
> 
> hackbox-ubuntu-2004 | SUCCESS => {
> "ansible_facts": {
> "ansible_all_ipv4_addresses": [
> "192.168.122.170"
> ],
> "ansible_all_ipv6_addresses": [
> "fe80::5054:ff:fe54:7cfe"
> ],
> "ansible_apparmor": {
> "status": "enabled"
> },
> "ansible_architecture": "x86_64",
> "ansible_bios_date": "04/01/2014",
> "ansible_bios_version": "1.10.2-1ubuntu1",
> "ansible_cmdline": {
> "BOOT_IMAGE": "/vmlinuz-5.4.0-65-generic",
> "maybe-ubiquity": true,
> "ro": true,
> "root": "/dev/mapper/ubuntu--vg-ubuntu--lv"
> },
> "ansible_date_time": {
> "date": "2021-02-23",
> "day": "23",
> "epoch": "1614104601",
> "hour": "18",
> "iso8601": "2021-02-23T18:23:21Z",
> "iso8601_basic": "20210223T182321857461",
> "iso8601_basic_short": "20210223T182321",
> "iso8601_micro": "2021-02-23T18:23:21.857529Z",
> "minute": "23",
> "month": "02",
> "second": "21",
> "time": "18:23:21",
> "tz": "UTC",
> "tz_offset": "+",
> "weekday": "Tuesday",
> "weekday_number": "2",
> "weeknumber": "08",
> "year": "2021"
> },
> "ansible_default_ipv4": {
> "address": "192.168.122.170",
> "alias": "enp1s0",
> "broadcast": "192.168.122.255",
> "gateway": "192.168.122.1",
> "interface": "enp1s0",
> "macaddress": "52:54:00:54:7c:fe",
> "mtu": 1500,
> "netmask": "255.255.255.0",
> "network": "192.168.122.0",
> "type": "ether"
> },
> "ansible_default_ipv6": {},
> "ansible_device_links": {
> "ids": {
> "dm-0": [
> "dm-name-ubuntu--vg-ubuntu--lv",
> 
> "dm-uuid-LVM-filR1BfuX6Mpp9J7CP9cbVsTT2ICh7Apc9qZsFohnsqycocacS0Sm6HAhjTBEAkq"
> ],
> "sda": 

Re: [PATCH 1/2] gitlab-ci.yml: Allow custom make parallelism

2021-02-23 Thread Daniele Buono

This works, but setting this value to 1 for everybody seems a bit too
restrictive. While the gitlab ci runners don't have enough memory for
this, that's not necessarily true for every build platform, and linking
multiple targets in parallel with LTO can result in a big saving in time,
so I'd prefer a customizable way.

How about adding a flag `--max-ld-procs` to configure to manually set
backend_max_links?
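A minimal sketch of what that could look like (untested; the option name
and the $meson_options variable are only placeholders):

# in configure's option-parsing loop:
  --max-ld-procs=*) max_ld_procs="$optarg"
  ;;

# and when invoking meson:
if test -n "$max_ld_procs"; then
  meson_options="$meson_options -Dbackend_max_links=$max_ld_procs"
fi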

This would also allow setting it to any specific number above 1,
which, looking at the Makefile, seems not to be possible now: because of
how the -j flag is passed from make to ninja, a compilation is either
sequential or parallel based on #cpus.

On 2/23/2021 3:12 AM, Paolo Bonzini wrote:

On 23/02/21 00:01, Daniele Buono wrote:
Currently, make parallelism at build time is defined as #cpus+1. Some 
build jobs may need (or benefit from) a different number. An example 
is builds with LTO where, because of the huge demand for memory at link 
time, gitlab runners fail if two linkers are run concurrently. This 
patch retains the default value of #cpus+1 but allows setting the 
"JOBS" variable to a different number when applying the template.


As I just found out, you can add -Dbackend_max_links=1 to the meson 
command line instead if LTO is enabled.


Paolo





Re: [PATCH] tests/docker: Use --arch-only when building Debian cross images

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/23/21 7:28 PM, Alex Bennée wrote:
> Philippe Mathieu-Daudé  writes:
> 
>> When building a Docker image based on debian10.docker on
>> a non-x86 host, we get:
>>
>>  [2/4] RUN apt update && DEBIAN_FRONTEND=noninteractive eatmydata 
>> apt build-dep -yy qemu
>>  Reading package lists... Done
>>  Building dependency tree
>>  Reading state information... Done
>>  Some packages could not be installed. This may mean that you have
>>  requested an impossible situation or if you are using the unstable
>>  distribution that some required packages have not yet been created
>>  or been moved out of Incoming.
>>  The following information may help to resolve the situation:
>>
>>  The following packages have unmet dependencies:
>>   builddeps:qemu : Depends: gcc-s390x-linux-gnu but it is not installable
>>Depends: gcc-alpha-linux-gnu but it is not installable
>>  E: Unable to correct problems, you have held broken packages.
>>
>> Fix by using the --arch-only option suggested here:
>> https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1866032/comments/1
>>
>> Suggested-by: Christian Ehrhardt 
>> Signed-off-by: Philippe Mathieu-Daudé 
>> ---
>>  tests/docker/dockerfiles/debian-all-test-cross.docker | 2 +-
>>  tests/docker/dockerfiles/debian10.docker  | 2 +-
>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/tests/docker/dockerfiles/debian-all-test-cross.docker 
>> b/tests/docker/dockerfiles/debian-all-test-cross.docker
>> index dedcea58b46..593b7ef1023 100644
>> --- a/tests/docker/dockerfiles/debian-all-test-cross.docker
>> +++ b/tests/docker/dockerfiles/debian-all-test-cross.docker
>> @@ -11,7 +11,7 @@ FROM qemu/debian10
>>  # What we need to build QEMU itself
>>  RUN apt update && \
>>  DEBIAN_FRONTEND=noninteractive eatmydata \
>> -apt build-dep -yy qemu
>> +apt build-dep --arch-only -yy qemu
> 
> This is just going to fail later on when you discover the cross
> compilers are only packaged for amd64.

Yes, I wonder if this is a Debian bug (we can still install cross
gcc for the host target -- which is pointless) and the cross libc.

> Perhaps we need to mark this one as amd64 only somehow?

OK.

>>  
>>  # Add the foreign architecture we want and install dependencies
>>  RUN DEBIAN_FRONTEND=noninteractive eatmydata \
>> diff --git a/tests/docker/dockerfiles/debian10.docker 
>> b/tests/docker/dockerfiles/debian10.docker
>> index 9d42b5a4b81..d034acbd256 100644
>> --- a/tests/docker/dockerfiles/debian10.docker
>> +++ b/tests/docker/dockerfiles/debian10.docker
>> @@ -32,6 +32,6 @@ RUN apt update && \
>>  psmisc \
>>  python3 \
>>  python3-sphinx \
>> -$(apt-get -s build-dep qemu | egrep ^Inst | fgrep '[all]' | cut -d\  -f2)
>> +$(apt-get -s build-dep --arch-only qemu | egrep ^Inst | fgrep '[all]' | cut -d\  -f2)
> 
> This bit is fine, without the all-test-cross change:
> 
> Reviewed-by: Alex Bennée 
> 



Re: [PATCH v2 30/42] esp: add 4 byte PDMA read and write transfers

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/23/21 9:24 AM, Mark Cave-Ayland wrote:
> On 16/02/2021 07:30, Philippe Mathieu-Daudé wrote:
> 
>>> Are you planning to review any more of this series? I'm keen to put out
>>> a (hopefully final) v3 soon, but I'll hold off for little while if you
>>> want more time to look over the remaining patches.
>>
>> I talked about this series with Laurent on Sunday, asking him for
>> review help ;) I don't remember if there is any big comment to
>> address in patches 1-14. If not I can review the missing ones
>> there today and you could send directly a pull request for this
>> first set, then send the rest as v3. Does that help?
>> For the rest I doubt having time to focus before Friday.
> 
> Hi Phil/Laurent,
> 
> I know you're both really busy, but gentle ping to ask if anyone is
> still planning to review the second half of this patchset? :)

At this point I reviewed more than half of the series :)

Laurent, can you have a look at the upper half?



Re: softmmu vs sysemu [Was: Re: [RFC v1 06/38] target/arm: split off cpu-softmmu.c]

2021-02-23 Thread Richard Henderson
On 2/23/21 10:18 AM, Claudio Fontana wrote:
> I am all for "just getting it done", but here the i386 and the arm series 
> become a mixture of things that I am not comfortable with,
> I'd prefer a dedicated series..

You're thinking too deeply about the problem that I'm reporting to you about
this patch set.

I just want the file naming done correctly, while you're renaming.  That is
something you are actively changing in this patch set, so we should get it 
right.

You don't have to worry about ifdef CONFIG_SOFTMMU vs ifndef CONFIG_USER_ONLY
within the code itself.


r~



Re: [PATCH v2 40/42] esp: add trivial implementation of the ESP_RFLAGS register

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:30 PM, Mark Cave-Ayland wrote:
> The bottom 5 bits contain the number of bytes remaining in the FIFO which is
> trivial to implement with Fifo8 (the remaining bits are unimplemented and left
> as 0 for now).
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 4 
>  1 file changed, 4 insertions(+)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 38/42] esp: convert ti_buf from array to Fifo8

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:30 PM, Mark Cave-Ayland wrote:
> Rename TI_BUFSZ to ESP_FIFO_SZ since this constant is really describing the 
> size
> of the FIFO and is not directly related to the TI size.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 117 ++
>  include/hw/scsi/esp.h |   8 +--
>  2 files changed, 79 insertions(+), 46 deletions(-)

> @@ -806,11 +818,9 @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t 
> val)
>  } else {
>  trace_esp_error_fifo_overrun();
>  }
> -} else if (s->ti_wptr == TI_BUFSZ - 1) {
> -trace_esp_error_fifo_overrun();
>  } else {
>  s->ti_size++;
> -s->ti_buf[s->ti_wptr++] = val & 0xff;
> +esp_fifo_push(s, val & 0xff);

Personally I'd drop the '& 0xff' part.

>  }
>  
>  /* Non-DMA transfers raise an interrupt after every byte */
> @@ -839,8 +849,7 @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t 
> val)
>  case CMD_FLUSH:
>  trace_esp_mem_writeb_cmd_flush(val);
>  /*s->ti_size = 0;*/

Is this comment still meaningful?

> -s->ti_wptr = 0;
> -s->ti_rptr = 0;
+fifo8_reset(&s->fifo);
>  break;
>  case CMD_RESET:
>  trace_esp_mem_writeb_cmd_reset(val);
> @@ -958,11 +967,18 @@ static int esp_pre_save(void *opaque)
>  static int esp_post_load(void *opaque, int version_id)
>  {
>  ESPState *s = ESP(opaque);
> +int len, i;
>  
>  version_id = MIN(version_id, s->mig_version_id);
>  
>  if (version_id < 5) {
>  esp_set_tc(s, s->mig_dma_left);
> +
> +/* Migrate ti_buf to fifo */
> +len = s->mig_ti_wptr - s->mig_ti_rptr;
> +for (i = 0; i < len; i++) {
+fifo8_push(&s->fifo, s->mig_ti_buf[i]);

Again I dare to add:
Reviewed-by: Philippe Mathieu-Daudé 

> +}
>  }



Re: [PATCH] target/hexagon/opcodes: Add missing varargs cleanup

2021-02-23 Thread Richard Henderson
On 2/23/21 3:12 AM, Philippe Mathieu-Daudé wrote:
> Fix a trivial incorrect usage of variable argument macros detected
> by Coverity (missing_va_end: va_end was not called for ap).
> 
> Fixes: Coverity CID 1446720 (VARARGS)
> Fixes: e3c00c2ed75 ("Hexagon (target/hexagon) opcode data structures")
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  target/hexagon/opcodes.c | 1 +
>  1 file changed, 1 insertion(+)

Reviewed-by: Richard Henderson 

r~



Re: [PATCH] target/riscv: fix vs() to return proper error code

2021-02-23 Thread Richard Henderson
On 2/22/21 10:59 PM, frank.ch...@sifive.com wrote:
> From: Frank Chang 
> 
vs() should return -RISCV_EXCP_ILLEGAL_INST instead of -1 if the rvv feature
is not enabled.

If -1 is returned, an exception will be raised and cs->exception_index will
be set to the negative return value. The exception will then be treated
as an instruction access fault instead of an illegal instruction fault.

It does seem an unfortunate interface; -1 seems so tempting, but does not by
itself mean anything.

I wonder if we should dispense with the whole "negative number" thing and
simply return an exception value.  Then for bonus points put all of the
RISCV_EXCP_* values in an enumeration, and return that type from these
functions so that it's perfectly clear what the interface really is.
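Roughly something like this (a sketch only; the type and sentinel names
are made up for illustration, the cause value is the architectural one):

typedef enum RISCVException {
    RISCV_EXCP_NONE = -1,         /* sentinel, not an architectural cause */
    RISCV_EXCP_ILLEGAL_INST = 0x2,
} RISCVException;

static RISCVException vs(CPURISCVState *env, int csrno)
{
    if (env->misa & RVV) {
        return RISCV_EXCP_NONE;
    }
    return RISCV_EXCP_ILLEGAL_INST;
}

That way a checker's result can never be confused with a generic negative
errno.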

That said,

> @@ -54,7 +54,7 @@ static int vs(CPURISCVState *env, int csrno)
>  if (env->misa & RVV) {
>  return 0;
>  }
> -return -1;
> +return -RISCV_EXCP_ILLEGAL_INST;

this fixes the immediate bug, so
Reviewed-by: Richard Henderson 


r~



Re: [PATCH v2 36/42] esp: add maxlen parameter to get_cmd()

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:30 PM, Mark Cave-Ayland wrote:
> Some guests use a mixture of DMA and non-DMA transfers in combination with the
> SATN and stop command to transfer message out phase and command phase bytes to
> the target. Prepare for the next commit by adding a maxlen parameter to
> get_cmd() to allow partial transfers.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 20 +++-
>  1 file changed, 11 insertions(+), 9 deletions(-)

I dare to add:
Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 25/42] esp: remove CMD pdma_origin

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:30 PM, Mark Cave-Ayland wrote:
> The cmdbuf is really just a copy of FIFO data (including extra message phase
> bytes) so its pdma_origin is effectively TI. Fortunately we already know when
> we are receiving a SCSI command since do_cmd == 1 which enables us to
> distinguish between the two cases in esp_pdma_read()/esp_pdma_write().
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 22 --
>  include/hw/scsi/esp.h |  1 -
>  2 files changed, 12 insertions(+), 11 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 24/42] esp: use in-built TC to determine PDMA transfer length

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:30 PM, Mark Cave-Ayland wrote:
> Real hardware simply counts down using the in-built TC to determine when the
> the PDMA request is complete. Use the TC to determine the PDMA transfer length
> which then enables us to remove the redundant pdma_len variable.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 28 +---
>  include/hw/scsi/esp.h |  1 -
>  2 files changed, 13 insertions(+), 16 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2] i386: Add the support for AMD EPYC 3rd generation processors

2021-02-23 Thread Babu Moger
Hi Pankaj,

On 2/23/21 10:32 AM, Pankaj Gupta wrote:
> Hello Babu,
> 
> I have a question below about the exposed CPU flags between the EPYC-Rome
> and EPYC-Milan families.
> Please see below.
> 
>> Adds the support for AMD 3rd generation processors. The model
>> display for the new processor will be EPYC-Milan.
>>
>> Adds the following new feature bits on top of the feature bits from
>> the first and second generation EPYC models.
>>
>> pcid  : Process context identifiers support
>> ibrs  : Indirect Branch Restricted Speculation
>> ssbd  : Speculative Store Bypass Disable
>> erms  : Enhanced REP MOVSB/STOSB support
>> fsrm  : Fast Short REP MOVSB support
>> invpcid   : Invalidate processor context ID
>> pku   : Protection keys support
>> svme-addr-chk : SVM instructions address check for #GP handling
>>
>> Depends on the following kernel commits:
>> 14c2bf81fcd2 ("KVM: SVM: Fix #GP handling for doubly-nested virtualization")
>> 3b9c723ed7cf ("KVM: SVM: Add support for SVM instruction address check 
>> change")
>> 4aa2691dcbd3 ("KVM: x86: Factor out x86 instruction emulation with
>> decoding")
>> 4407a797e941 ("KVM: SVM: Enable INVPCID feature on AMD")
>> 9715092f8d7e ("KVM: X86: Move handling of INVPCID types to x86")
>> 3f3393b3ce38 ("KVM: X86: Rename and move the function 
>> vmx_handle_memory_failure to x86.c")
>> 830bd71f2c06 ("KVM: SVM: Remove set_cr_intercept, clr_cr_intercept and 
>> is_cr_intercept")
>> 4c44e8d6c193 ("KVM: SVM: Add new intercept word in vmcb_control_area")
>> c62e2e94b9d4 ("KVM: SVM: Modify 64 bit intercept field to two 32 bit 
>> vectors")
>> 9780d51dc2af ("KVM: SVM: Modify intercept_exceptions to generic intercepts")
>> 30abaa88382c ("KVM: SVM: Change intercept_dr to generic intercepts")
>> 03bfeeb988a9 ("KVM: SVM: Change intercept_cr to generic intercepts")
>> c45ad7229d13 ("KVM: SVM: Introduce 
>> vmcb_(set_intercept/clr_intercept/_is_intercept)")
>> a90c1ed9f11d ("(pcid) KVM: nSVM: Remove unused field")
>> fa44b82eb831 ("KVM: x86: Move MPK feature detection to common code")
>> 38f3e775e9c2 ("x86/Kconfig: Update config and kernel doc for MPK feature on 
>> AMD")
>> 37486135d3a7 ("KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it 
>> to x86.c")
>>
>> Signed-off-by: Babu Moger 
>> ---
>> v2: Added svme-addr-chk. Also added all the dependent kernel commits in the 
>> log.
>>
>> v1: 
>> https://lore.kernel.org/qemu-devel/16118780.27536.17735339269843944966.stgit@bmoger-ubuntu/
>>
>>  target/i386/cpu.c |  107 
>> +
>>  target/i386/cpu.h |4 ++
>>  2 files changed, 110 insertions(+), 1 deletion(-)
>>
>> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
>> index 9c3d2d60b7..24db7ed892 100644
>> --- a/target/i386/cpu.c
>> +++ b/target/i386/cpu.c
>> @@ -1033,7 +1033,7 @@ static FeatureWordInfo 
>> feature_word_info[FEATURE_WORDS] = {
>>  "clzero", NULL, "xsaveerptr", NULL,
>>  NULL, NULL, NULL, NULL,
>>  NULL, "wbnoinvd", NULL, NULL,
>> -"ibpb", NULL, NULL, "amd-stibp",
>> +"ibpb", NULL, "ibrs", "amd-stibp",
>>  NULL, NULL, NULL, NULL,
>>  NULL, NULL, NULL, NULL,
>>  "amd-ssbd", "virt-ssbd", "amd-no-ssb", NULL,
>> @@ -1798,6 +1798,56 @@ static CPUCaches epyc_rome_cache_info = {
>>  },
>>  };
>>
>> +static CPUCaches epyc_milan_cache_info = {
>> +.l1d_cache = &(CPUCacheInfo) {
>> +.type = DATA_CACHE,
>> +.level = 1,
>> +.size = 32 * KiB,
>> +.line_size = 64,
>> +.associativity = 8,
>> +.partitions = 1,
>> +.sets = 64,
>> +.lines_per_tag = 1,
>> +.self_init = 1,
>> +.no_invd_sharing = true,
>> +},
>> +.l1i_cache = &(CPUCacheInfo) {
>> +.type = INSTRUCTION_CACHE,
>> +.level = 1,
>> +.size = 32 * KiB,
>> +.line_size = 64,
>> +.associativity = 8,
>> +.partitions = 1,
>> +.sets = 64,
>> +.lines_per_tag = 1,
>> +.self_init = 1,
>> +.no_invd_sharing = true,
>> +},
>> +.l2_cache = &(CPUCacheInfo) {
>> +.type = UNIFIED_CACHE,
>> +.level = 2,
>> +.size = 512 * KiB,
>> +.line_size = 64,
>> +.associativity = 8,
>> +.partitions = 1,
>> +.sets = 1024,
>> +.lines_per_tag = 1,
>> +},
>> +.l3_cache = &(CPUCacheInfo) {
>> +.type = UNIFIED_CACHE,
>> +.level = 3,
>> +.size = 32 * MiB,
>> +.line_size = 64,
>> +.associativity = 16,
>> 

Re: [PATCH] tests/docker: Use --arch-only when building Debian cross images

2021-02-23 Thread Alex Bennée


Philippe Mathieu-Daudé  writes:

> When building a Docker image based on debian10.docker on
> a non-x86 host, we get:
>
>  [2/4] RUN apt update && DEBIAN_FRONTEND=noninteractive eatmydata apt 
> build-dep -yy qemu
>  Reading package lists... Done
>  Building dependency tree
>  Reading state information... Done
>  Some packages could not be installed. This may mean that you have
>  requested an impossible situation or if you are using the unstable
>  distribution that some required packages have not yet been created
>  or been moved out of Incoming.
>  The following information may help to resolve the situation:
>
>  The following packages have unmet dependencies:
>   builddeps:qemu : Depends: gcc-s390x-linux-gnu but it is not installable
>Depends: gcc-alpha-linux-gnu but it is not installable
>  E: Unable to correct problems, you have held broken packages.
>
> Fix by using the --arch-only option suggested here:
> https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1866032/comments/1
>
> Suggested-by: Christian Ehrhardt 
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  tests/docker/dockerfiles/debian-all-test-cross.docker | 2 +-
>  tests/docker/dockerfiles/debian10.docker  | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tests/docker/dockerfiles/debian-all-test-cross.docker 
> b/tests/docker/dockerfiles/debian-all-test-cross.docker
> index dedcea58b46..593b7ef1023 100644
> --- a/tests/docker/dockerfiles/debian-all-test-cross.docker
> +++ b/tests/docker/dockerfiles/debian-all-test-cross.docker
> @@ -11,7 +11,7 @@ FROM qemu/debian10
>  # What we need to build QEMU itself
>  RUN apt update && \
>  DEBIAN_FRONTEND=noninteractive eatmydata \
> -apt build-dep -yy qemu
> +apt build-dep --arch-only -yy qemu

This is just going to fail later on when you discover the cross
compilers are only packaged for amd64. Perhaps we need to mark this one
as amd64 only somehow?

>  
>  # Add the foreign architecture we want and install dependencies
>  RUN DEBIAN_FRONTEND=noninteractive eatmydata \
> diff --git a/tests/docker/dockerfiles/debian10.docker 
> b/tests/docker/dockerfiles/debian10.docker
> index 9d42b5a4b81..d034acbd256 100644
> --- a/tests/docker/dockerfiles/debian10.docker
> +++ b/tests/docker/dockerfiles/debian10.docker
> @@ -32,6 +32,6 @@ RUN apt update && \
>  psmisc \
>  python3 \
>  python3-sphinx \
> -$(apt-get -s build-dep qemu | egrep ^Inst | fgrep '[all]' | cut -d\  
> -f2)
> +$(apt-get -s build-dep --arch-only qemu | egrep ^Inst | fgrep
>  '[all]' | cut -d\  -f2)

This bit is fine, without the all-test-cross change:

Reviewed-by: Alex Bennée 

-- 
Alex Bennée



Re: [PATCH v5 4/4] Jobs based on custom runners: add job definitions for QEMU's machines

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 03:56:19PM +, Daniel P. Berrangé wrote:
> 
> Urgh, well that's a big problem. We certainly don't want *anything* being
> placed on the custom runners without explicit opt-in, otherwise jobs run
> in the main repo have a different environment from when users run on their
> personal forks.
> 
> IOW, we need anti-affinity against our custom runners really.
>

I'm assuming Phil missed that documentation, because that's a
non-issue, really.

Just unchecking the "Run untagged jobs" check box on the runner
configuration makes sure that the custom runners won't pick up any jobs
not *specifically* tagged for them.

Regards,
- Cleber.




Re: [PATCH v2 19/42] esp: remove buf parameter from do_cmd()

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> Now that all SCSI commands are accumulated in cmdbuf, remove the buf parameter
> from do_cmd() since this always points to cmdbuf.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 13 +++--
>  1 file changed, 7 insertions(+), 6 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2] target/riscv: fix TB_FLAGS bits overlapping bug for rvv/rvh

2021-02-23 Thread Richard Henderson
On 2/23/21 12:19 AM, frank.ch...@sifive.com wrote:
> From: Frank Chang 
> 
> TB_FLAGS mem_idx bits were extended from 2 bits to 3 bits in
> commit c445593, but other TB_FLAGS bits for rvv and rvh were
> not shifted as well, so these bits may overlap with each other when
> rvv is enabled.
> 
> Signed-off-by: Frank Chang 

Reviewed-by: Richard Henderson 

> -#define TB_FLAGS_MMU_MASK   7
>  #define TB_FLAGS_PRIV_MMU_MASK   3
>  #define TB_FLAGS_PRIV_HYP_ACCESS_MASK   (1 << 2)
>  #define TB_FLAGS_MSTATUS_FS MSTATUS_FS
...
> +FIELD(TB_FLAGS, MEM_IDX, 0, 3)
> +FIELD(TB_FLAGS, VL_EQ_VLMAX, 3, 1)
> +FIELD(TB_FLAGS, LMUL, 4, 2)
> +FIELD(TB_FLAGS, SEW, 6, 3)
> +FIELD(TB_FLAGS, VILL, 9, 1)
>  /* Is a Hypervisor instruction load/store allowed? */
> -FIELD(TB_FLAGS, HLSX, 9, 1)
> +FIELD(TB_FLAGS, HLSX, 10, 1)

The only other thing that I'd add at this point is a comment about MSTATUS_FS
-- a 2-bit field at bit 13 -- for the benefit of the next person that adds
something to TB_FLAGS.
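
Something along these lines, for instance (a sketch, not the actual
patch):

    /*
     * Note: bits [14:13] are effectively reserved: they alias MSTATUS_FS,
     * which is copied into TB_FLAGS via TB_FLAGS_MSTATUS_FS.  New fields
     * must not overlap that range.
     */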


r~



Re: [PATCH v2 16/42] esp: use pdma_origin directly in esp_pdma_read()/esp_pdma_write()

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> This is the first step in removing get_pdma_buf() from esp.c.
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 34 --
>  1 file changed, 28 insertions(+), 6 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v2 14/42] esp: remove minlen restriction in handle_ti

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/9/21 8:29 PM, Mark Cave-Ayland wrote:
> The limiting of DMA transfers to the maximum size of the available data is 
> already
> handled by esp_do_dma() and do_dma_pdma_cb().
> 
> Signed-off-by: Mark Cave-Ayland 
> ---
>  hw/scsi/esp.c | 12 ++--
>  1 file changed, 2 insertions(+), 10 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Alex Bennée


Cleber Rosa  writes:

> On Tue, Feb 23, 2021 at 03:01:50PM +, Alex Bennée wrote:
>> 
>> Alex Bennée  writes:
>> 
>> > Cleber Rosa  writes:
>> >
>> >> To run basic jobs on custom runners, the environment needs to be
>> >> properly set up.  The most common requirement is having the right
>> >> packages installed.
>> >>
>> 
>> >
>> > So I got somewhat there with a direct command line invocation:
>> >
>> >   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 
>> > scripts/ci/setup/build-environment.yml -e 
>> > 'ansible_python_interpreter=/usr/bin/python3'
>> >
>> > although for some reason a single host -i fails...
>> >
>> >> diff --git a/scripts/ci/setup/build-environment.yml 
>> >> b/scripts/ci/setup/build-environment.yml
>> >> new file mode 100644
>> >> index 00..0197e0a48b
>> >> --- /dev/null
>> >> +++ b/scripts/ci/setup/build-environment.yml
>> >> @@ -0,0 +1,76 @@
>> >> +---
>> >> +- name: Installation of basic packages to build QEMU
>> >> +  hosts: all
>> >> +  tasks:
>> >> +- name: Update apt cache
>> >> +  apt:
>> >> +update_cache: yes
>> >> +  when:
>> >> +- ansible_facts['distribution'] == 'Ubuntu'
>> >
>> > So are we limiting to Ubuntu here rather than say a Debian base?
>> 
>> Also I'm getting:
>> 
>>   TASK [Update apt cache] 
>> *
>>   fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 
>> 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error 
>> while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 
>> 'dict object' has no attribute 'distribution'\n\nThe error appears to have 
>> been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': 
>> line 5, column 7, but may\nbe elsewhere in the file depending on the exact 
>> syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n- 
>> name: Update apt cache\n  ^ here\n"}
>> 
>> which is odd given that machine is definitely an Ubuntu one.
>>
>
> It's definitely odd.  This is what I get on a fresh machine:
>
>TASK [Update apt cache] 
> *
>[WARNING]: Updating cache and auto-installing missing dependency: 
> python3-apt
>ok: [localhost]
>
> Could you please let me know the output of:
>
>$ ansible -m setup -u $YOUR_USERNAME -i $HOSTNAME, all | grep
>ansible_distribution

The key doesn't exist:

hackbox-ubuntu-2004 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.168.122.170"
],
"ansible_all_ipv6_addresses": [
"fe80::5054:ff:fe54:7cfe"
],
"ansible_apparmor": {
"status": "enabled"
},
"ansible_architecture": "x86_64",
"ansible_bios_date": "04/01/2014",
"ansible_bios_version": "1.10.2-1ubuntu1",
"ansible_cmdline": {
"BOOT_IMAGE": "/vmlinuz-5.4.0-65-generic",
"maybe-ubiquity": true,
"ro": true,
"root": "/dev/mapper/ubuntu--vg-ubuntu--lv"
},
"ansible_date_time": {
"date": "2021-02-23",
"day": "23",
"epoch": "1614104601",
"hour": "18",
"iso8601": "2021-02-23T18:23:21Z",
"iso8601_basic": "20210223T182321857461",
"iso8601_basic_short": "20210223T182321",
"iso8601_micro": "2021-02-23T18:23:21.857529Z",
"minute": "23",
"month": "02",
"second": "21",
"time": "18:23:21",
"tz": "UTC",
"tz_offset": "+",
"weekday": "Tuesday",
"weekday_number": "2",
"weeknumber": "08",
"year": "2021"
},
"ansible_default_ipv4": {
"address": "192.168.122.170",
"alias": "enp1s0",
"broadcast": "192.168.122.255",
"gateway": "192.168.122.1",
"interface": "enp1s0",
"macaddress": "52:54:00:54:7c:fe",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "192.168.122.0",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_device_links": {
"ids": {
"dm-0": [
"dm-name-ubuntu--vg-ubuntu--lv",

"dm-uuid-LVM-filR1BfuX6Mpp9J7CP9cbVsTT2ICh7Apc9qZsFohnsqycocacS0Sm6HAhjTBEAkq"
],
"sda": [
"scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1"
],
"sda1": [
"scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1"
],
"sda2": [
"scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part2"
],

Re: [PATCH v4 2/4] util/qemu-sockets.c: Split host:port parsing out of inet_parse

2021-02-23 Thread Doug Evans
On Mon, Feb 22, 2021 at 1:39 AM Daniel P. Berrangé 
wrote:

> On Fri, Feb 19, 2021 at 02:17:42PM -0800, Doug Evans wrote:
> > On Fri, Feb 19, 2021 at 2:00 AM Daniel P. Berrangé 
> > wrote:
> >
> > > On Thu, Feb 18, 2021 at 12:15:36PM -0800, Doug Evans wrote:
> > > > The parsing is moved into new function inet_parse_host_and_port.
> > > > This is done in preparation for using the function in net/slirp.c.
> > > >
> > > > Signed-off-by: Doug Evans 
> > > > ---
> > > >
> > > > Changes from v3:
> > > > - this patch is new in v4
> > > >   - provides new utility: inet_parse_host_and_port, updates
> inet_parse
> > > > to use it
> > > >
> > > >  include/qemu/sockets.h |  3 ++
> > > >  util/qemu-sockets.c| 94
> +++---
> > > >  2 files changed, 72 insertions(+), 25 deletions(-)
> > > >
> > > > diff --git a/include/qemu/sockets.h b/include/qemu/sockets.h
> > > > index 7d1f813576..f720378a6b 100644
> > > > --- a/include/qemu/sockets.h
> > > > +++ b/include/qemu/sockets.h
> > > > @@ -31,6 +31,9 @@ int socket_set_fast_reuse(int fd);
> > > >
> > > >  int inet_ai_family_from_address(InetSocketAddress *addr,
> > > >  Error **errp);
> > > > +const char* inet_parse_host_and_port(const char* str, int
> terminator,
> > > > + char **addr, char **port, bool
> > > *is_v6,
> > > > + Error **errp);
> > > >  int inet_parse(InetSocketAddress *addr, const char *str, Error
> **errp);
> > > >  int inet_connect(const char *str, Error **errp);
> > > >  int inet_connect_saddr(InetSocketAddress *saddr, Error **errp);
> > > > diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
> > > > index 8af0278f15..9fca7d9212 100644
> > > > --- a/util/qemu-sockets.c
> > > > +++ b/util/qemu-sockets.c
> > > > @@ -615,44 +615,88 @@ static int inet_parse_flag(const char
> *flagname,
> > > const char *optstr, bool *val,
> > > >  return 0;
> > > >  }
> > > >
> > > > -int inet_parse(InetSocketAddress *addr, const char *str, Error
> **errp)
> > > > +/*
> > > > + * Parse an inet host and port as "host:port".
> > > > + * Terminator may be '\0'.
> > > > + * The syntax for ipv4 addresses is: address:port.
> > > > + * The syntax for ipv6 addresses is: [address]:port.
> > >
> > > It also supports
> > >
> > >"The syntax for hostnames is hostname:port
> > >
> > > > + * On success, returns a pointer to the terminator. Space for the
> > > address and
> > > > + * port is malloced and stored in *host, *port, the caller must
> free.
> > > > + * *is_v6 indicates whether the address is ipv4 or ipv6. If ipv6
> then
> > > the
> > > > + * surrounding [] brackets are removed.
> > >
> > > When is_v6 is true, it indicates that a numeric ipv6 address was given.
> > > When false either a numberic ipv4 address or hostname was given.
> > >
> > > > + * On failure NULL is returned with the error stored in *errp.
> > > > + */
> > > > +const char* inet_parse_host_and_port(const char* str, int
> terminator,
> > > > + char **hostp, char **portp,
> bool
> > > *is_v6,
> > > > + Error **errp)
> > > >  {
> > > > -const char *optstr, *h;
> > > > +const char *terminator_ptr = strchr(str, terminator);
> > > > +g_autofree char *buf = NULL;
> > > >  char host[65];
> > > >  char port[33];
> > > > -int to;
> > > > -int pos;
> > > > -char *begin;
> > > >
> > > > -memset(addr, 0, sizeof(*addr));
> > > > +if (terminator_ptr == NULL) {
> > > > +/* If the terminator isn't found then use the entire
> string. */
> > > > +terminator_ptr = str + strlen(str);
> > > > +}
> > > > +buf = g_strndup(str, terminator_ptr - str);
> > > >
> > > > -/* parse address */
> > > > -if (str[0] == ':') {
> > > > -/* no host given */
> > > > -host[0] = '\0';
> > > > -if (sscanf(str, ":%32[^,]%n", port, ) != 1) {
> > > > -error_setg(errp, "error parsing port in address '%s'",
> str);
> > > > -return -1;
> > > > -}
> > >
> > >
> > > > -} else if (str[0] == '[') {
> > > > +if (buf[0] == '[') {
> > > >  /* IPv6 addr */
> > > > -if (sscanf(str, "[%64[^]]]:%32[^,]%n", host, port, ) !=
> 2) {
> > > > -error_setg(errp, "error parsing IPv6 address '%s'",
> str);
> > > > -return -1;
> > > > +if (buf[1] == ']') {
> > > > +/* sscanf %[ doesn't recognize empty contents. */
> > > > +host[0] = '\0';
> > > > +if (sscanf(buf, "[]:%32s", port) != 1) {
> > > > +error_setg(errp, "error parsing IPv6 host:port
> '%s'",
> > > buf);
> > > > +return NULL;
> > > > +}
> > >
> > > This is introducing new functionality to the parser. Current callers
> > > let empty string ":port" be used for both ipv4 and ipv6, based
> > > on whether the flags ",ipv4[=on|off],ipv6[=on|off]" 

Re: [PATCH v5 4/4] Jobs based on custom runners: add job definitions for QEMU's machines

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 04:17:23PM +0100, Philippe Mathieu-Daudé wrote:
> On 2/19/21 10:58 PM, Cleber Rosa wrote:
> > The QEMU project has two machines (aarch64 and s390x) that can be used
> > for jobs that do build and run tests.  This introduces those jobs,
> > which are a mapping of custom scripts used for the same purpose.
> > 
> > Signed-off-by: Cleber Rosa 
> > Reviewed-by: Daniel P. Berrangé 
> > ---
> >  .gitlab-ci.d/custom-runners.yml | 204 
> >  1 file changed, 204 insertions(+)
> > 
> > diff --git a/.gitlab-ci.d/custom-runners.yml 
> > b/.gitlab-ci.d/custom-runners.yml
> > index 3004da2bda..a9166c82a2 100644
> > --- a/.gitlab-ci.d/custom-runners.yml
> > +++ b/.gitlab-ci.d/custom-runners.yml
> > @@ -12,3 +12,207 @@
> >  # strategy.
> >  variables:
> >GIT_SUBMODULE_STRATEGY: recursive
> > +
> > +# All ubuntu-18.04 jobs should run successfully in an environment
> > +# setup by the scripts/ci/setup/build-environment.yml task
> > +# "Install basic packages to build QEMU on Ubuntu 18.04/20.04"
> > +ubuntu-18.04-s390x-all-linux-static:
> > + allow_failure: true
> > + needs: []
> > + stage: build
> > + tags:
> > + - ubuntu_18.04
> > + - s390x
> 
> Where is this tag list filled upon registration?
>

The documentation on this series (previous patch) describes how one
should go about setting the tags.  Pasting it here for easier context:

---

Following the registration, it's necessary to configure the runner tags,
and optionally other configurations on the GitLab UI.  Navigate to:

 * Settings (the gears like icon), then
 * CI/CD, then
 * Runners, and click on the "Expand" button, then
 * "Runners activated for this project", then
 * Click on the "Edit" icon (next to the "Lock" Icon)

Under tags, add values matching the jobs a runner should run.  For a
Ubuntu 20.04 aarch64 system, the tags should be set as::

  ubuntu_20.04,aarch64

Because the job definition at ``.gitlab-ci.d/custom-runners.yml``
would contain::

  ubuntu-20.04-aarch64-all:
   tags:
   - ubuntu_20.04
   - aarch64

---

Does that answer your question?

> > + rules:
> > + - if: '$CI_COMMIT_BRANCH =~ /^staging/'
> > + script:
> > + # --disable-libssh is needed because of 
> > https://bugs.launchpad.net/qemu/+bug/1838763
> > + # --disable-glusterfs is needed because there's no static version of 
> > those libs in distro supplied packages
> > + - mkdir build
> > + - cd build
> > + - ../configure --enable-debug --static --disable-system 
> > --disable-glusterfs --disable-libssh
> > + - make --output-sync -j`nproc`
> > + - make --output-sync -j`nproc` check V=1
> > + - make --output-sync -j`nproc` check-tcg V=1
> 
> Also this breaks the rest of the tests...
> 
> The first containers job (amd64-alpine-container) got
> added to the custom runner and failed (because docker-dind
> isn't there?):
>

The documentation explains that, saying that it's recommended to uncheck
the "Run untagged jobs" check box.

> $ export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:latest"
> $ export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/$NAME:latest"
> $ apk add python3
> bash: line 110: apk: command not found
> Running after_script 00:01
> Running after script...
> $ docker logout
> Removing login credentials for https://index.docker.io/v1/
> ERROR: Job failed: exit status 1
> 
> Do we need to restrict the other jobs to the Gitlab public
> (x86) runners? Maybe as:
>

You just need to take care of the runners you add.  All the other jobs
are assumed to be running on the shared runners.

Regards,
- Cleber.




Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Alex Bennée


Cleber Rosa  writes:

> On Tue, Feb 23, 2021 at 03:17:24PM +, Alex Bennée wrote:
>> 
>> Erik Skultety  writes:
>> 
>> > On Tue, Feb 23, 2021 at 02:01:53PM +, Alex Bennée wrote:
>> >> 
>> >> Cleber Rosa  writes:
>> >> 
>> >> > To run basic jobs on custom runners, the environment needs to be
>> >> > properly set up.  The most common requirement is having the right
>> >> > packages installed.
>> >> >
>> >> > The playbook introduced here covers the QEMU's project s390x and
>> >> > aarch64 machines.  At the time this is being proposed, those machines
>> >> > have already had this playbook applied to them.
>> >> >
>> >> > Signed-off-by: Cleber Rosa 
>> >> > ---
>> >> >  docs/devel/ci.rst  | 30 ++
>> >> >  scripts/ci/setup/build-environment.yml | 76 ++
>> >> >  scripts/ci/setup/inventory |  1 +
>> >> >  3 files changed, 107 insertions(+)
>> >> >  create mode 100644 scripts/ci/setup/build-environment.yml
>> >> >  create mode 100644 scripts/ci/setup/inventory
>> >> >
>> >> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
>> >> > index 585b7bf4b8..a556558435 100644
>> >> > --- a/docs/devel/ci.rst
>> >> > +++ b/docs/devel/ci.rst
>> >> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner".
>> >> >  The GitLab CI jobs definition for the custom runners are located 
>> >> > under::
>> >> >  
>> >> >.gitlab-ci.d/custom-runners.yml
>> >> > +
>> >> > +Machine Setup Howto
>> >> > +---
>> >> > +
>> >> > +For all Linux based systems, the setup can be mostly automated by the
>> >> > +execution of two Ansible playbooks.  Start by adding your machines to
>> >> > +the ``inventory`` file under ``scripts/ci/setup``, such as this::
>> >> > +
>> >> > +  fully.qualified.domain
>> >> > +  other.machine.hostname
>> >> 
>> >> Is this really needed? Can't the host list be passed in the command
>> line? I find it odd to imagine users wanting to configure whole fleets
>> >> of runners.
>> >
>> > Why not support both, since the playbook execution is not wrapped by 
>> > anything,
>> > giving the option of using either and inventory or direct cmdline 
>> > invocation
>> > seems like the proper way to do it.
>> 
>> Sure - and I dare say people used to managing fleets of servers will
>> want to do it properly but in the first instance let's provide the simple
>> command line option so a user can get up and running without also
>> ensuring files are in the correct format.
>>
>
> Like I said before, I'm strongly in favor of more straightforward
> documentation, instead of documenting multiple ways to perform the
> same task.  I clearly believe that writing the inventory file (which
> will later be used for the second gitlab-runner playbook) is the best
> choice here.
>
> Do you think the command line approach is clearer?  Should we switch?

I think the command line is $LESS_STEPS for a user to follow but I'm
happy to defer to the inventory approach if it's more idiomatic. I'd
rather avoid users having their pristine source trees polluted with
local customisations they have to keep stashing or losing.

>
> Regards,
> Cleber.


-- 
Alex Bennée



softmmu vs sysemu [Was: Re: [RFC v1 06/38] target/arm: split off cpu-softmmu.c]

2021-02-23 Thread Claudio Fontana
On 2/23/21 10:35 AM, Claudio Fontana wrote:
> On 2/23/21 10:16 AM, Philippe Mathieu-Daudé wrote:
>> On 2/23/21 9:55 AM, Claudio Fontana wrote:
>>> On 2/22/21 6:29 PM, Alex Bennée wrote:

 Claudio Fontana  writes:

> From: Claudio Fontana 
>
> Signed-off-by: Claudio Fontana 
> ---
>  target/arm/internals.h   |   9 ++-
>  target/arm/cpu-softmmu.c | 134 +++
>  target/arm/cpu.c |  95 ---
>  target/arm/meson.build   |   1 +
>  4 files changed, 143 insertions(+), 96 deletions(-)
>  create mode 100644 target/arm/cpu-softmmu.c
>
> diff --git a/target/arm/internals.h b/target/arm/internals.h
> index 6384461177..6589b63ebc 100644
> --- a/target/arm/internals.h
> +++ b/target/arm/internals.h
> @@ -1196,4 +1196,11 @@ static inline uint64_t 
> useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr)
>  return ptr;
>  }
>  
> -#endif
> +#ifndef CONFIG_USER_ONLY
> +void arm_cpu_set_irq(void *opaque, int irq, int level);
> +void arm_cpu_kvm_set_irq(void *opaque, int irq, int level);
> +bool arm_cpu_virtio_is_big_endian(CPUState *cs);
> +uint64_t a15_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri);
> +#endif /* !CONFIG_USER_ONLY */
> +
> +#endif /* TARGET_ARM_INTERNALS_H */
> diff --git a/target/arm/cpu-softmmu.c b/target/arm/cpu-softmmu.c
> new file mode 100644
> index 00..263d1fc588
> --- /dev/null
> +++ b/target/arm/cpu-softmmu.c
> @@ -0,0 +1,134 @@
> +/*
> + * QEMU ARM CPU

 I guess apropos the discussion earlier it's really cpu-sysemu.c and we
 could expand the header comment.

   QEMU ARM CPU - Helpers for system emulation and KVM only

 

 Otherwise:

 Reviewed-by: Alex Bennée 

>>>
>>> Should I rename all *softmmu in the series to "sysemu"?
>>>
>>> I wonder if we should split the sysemu/system/softmmu naming topic into a
>>> separate series.
>>> Currently basically starting from the build system already, "softmmu", 
>>> sysemu and system are treated as a single thing, and the convention from 
>>> build system and directories seems to be "softmmu",
>>> while from the header files we get "sysemu/".
>>>
>>> I agree that this is not a sufficient model to account for the new feature 
>>> that Richard wants to develop,
>>> I just suggest we could also consider tackling this separately, with a pass 
>>> through the whole code, gathering more input in the context of a dedicated 
>>> series.
>>>
>>> What do you think?
>>
>> This is a valid reasoning. However I have my doubts "doing
>> that later" will ever be done/finished (not related to you
>> Claudio in particular, but with dealing with all subsystems).

I went through this again, and looked at the todo.

Already at the meson level, in the previous i386 series, changes to
split real softmmu (CONFIG_SOFTMMU?) from sysemu would be required.

Otherwise I don't feel comfortable adding a half-baked discrepancy from the
current convention, which is at least consistent (even if "wrong", or
insufficiently detailed as a model to implement the new features).

If we don't believe that the work of semantically splitting real softmmu
from sysemu will ever be completed, why start it in the middle of this
series? I think the risk is ending up in a worse situation than the current
one, with an even less homogeneous code base.

Could this work not be done independently of this series (and I could
rebase on that if necessary), and pushed by someone who really understands
the problem? Probably starting from meson, configure, ...
I could help with review there, but I am probably not the best person to
push it.

Why are we not confident that this can be done?


>>
>> Personally I'd rather see this sorted out with the arm target
>> then once done propose it as an example to the other ones.
>> You already considered the most complex cases, x86 and arm :)


I mean, after we have split the code a bit more properly, renaming the 
additional mentions of "softmmu" to "sysemu" should not be hard.
Or, we could do the softmmu vs sysemu split first, and I rebase on top of that.


> 
> 
> Ok, if there are no other comments I would go with "sysemu", just because 
> "system" is a bit too much of a loaded word,
> and we have the precedent of include/sysemu/ .


I am all for "just getting it done", but here the i386 and arm series
become a mixture of things that I am not comfortable with; I'd prefer a
dedicated series.


>>> Also Paolo, any comments, since softmmu is all over meson?
>>>
> 
> And Peter, any comments, preference?
> 


Maybe let us give some more time for more comments to flow in?


Thanks,


Claudio






Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Alex Bennée


Cleber Rosa  writes:

> On Tue, Feb 23, 2021 at 02:01:53PM +, Alex Bennée wrote:
>> 
>> Cleber Rosa  writes:
>> 
>> > To run basic jobs on custom runners, the environment needs to be
>> > properly set up.  The most common requirement is having the right
>> > packages installed.
>> >
>> > The playbook introduced here covers the QEMU's project s390x and
>> > aarch64 machines.  At the time this is being proposed, those machines
>> > have already had this playbook applied to them.
>> >
>> > Signed-off-by: Cleber Rosa 
>> > ---
>> >  docs/devel/ci.rst  | 30 ++
>> >  scripts/ci/setup/build-environment.yml | 76 ++
>> >  scripts/ci/setup/inventory |  1 +
>> >  3 files changed, 107 insertions(+)
>> >  create mode 100644 scripts/ci/setup/build-environment.yml
>> >  create mode 100644 scripts/ci/setup/inventory
>> >
>> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
>> > index 585b7bf4b8..a556558435 100644
>> > --- a/docs/devel/ci.rst
>> > +++ b/docs/devel/ci.rst
>> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner".
>> >  The GitLab CI jobs definition for the custom runners are located under::
>> >  
>> >.gitlab-ci.d/custom-runners.yml
>> > +
>> > +Machine Setup Howto
>> > +---
>> > +
>> > +For all Linux based systems, the setup can be mostly automated by the
>> > +execution of two Ansible playbooks.  Start by adding your machines to
>> > +the ``inventory`` file under ``scripts/ci/setup``, such as this::
>> > +
>> > +  fully.qualified.domain
>> > +  other.machine.hostname
>> 
>> Is this really needed? Can't the host list be passed in the command
>> line? I find it odd to imagine users wanting to configure whole fleets
>> of runners.
>>
>
> No, it's not needed.
>
> But, in my experience, it's the most common way people use
> ansible-playbook.  As with most tools QEMU relies on, there are
> many different ways of using them.  IMO documenting more than one way
> to perform the same task makes the documentation unclear.
>
>> > +
>> > +You may need to set some variables in the inventory file itself.  One
>> > +very common need is to tell Ansible to use a Python 3 interpreter on
>> > +those hosts.  This would look like::
>> > +
>> > +  fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
>> > +  other.machine.hostname ansible_python_interpreter=/usr/bin/python3
>> > +
>> > +Build environment
>> > +~
>> > +
>> > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
>> > +set up machines with the environment needed to perform builds and run
>> > +QEMU tests.  It covers a number of different Linux distributions and
>> > +FreeBSD.
>> > +
>> > +To run the playbook, execute::
>> > +
>> > +  cd scripts/ci/setup
>> > +  ansible-playbook -i inventory build-environment.yml
>> 
>> So I got somewhat there with a direct command line invocation:
>> 
>>   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 
>> scripts/ci/setup/build-environment.yml -e 
>> 'ansible_python_interpreter=/usr/bin/python3'
>>
>
> Yes, and the "-e" is another example of the multiple ways to achieve
> the same task.
>
>> although for some reason a single host -i fails...
>> 
>> > diff --git a/scripts/ci/setup/build-environment.yml 
>> > b/scripts/ci/setup/build-environment.yml
>
> It requires a comma separated list, even if it's a list with a single
> item:
>
>
> https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-i
>
>> > new file mode 100644
>> > index 00..0197e0a48b
>> > --- /dev/null
>> > +++ b/scripts/ci/setup/build-environment.yml
>> > @@ -0,0 +1,76 @@
>> > +---
>> > +- name: Installation of basic packages to build QEMU
>> > +  hosts: all
>> > +  tasks:
>> > +- name: Update apt cache
>> > +  apt:
>> > +update_cache: yes
>> > +  when:
>> > +- ansible_facts['distribution'] == 'Ubuntu'
>> 
>> So are we limiting to Ubuntu here rather than say a Debian base?
>>
>
> You have a point, because this would certainly work and be applicable
> to Debian systems too.  But, this is a new addition on v5, and I'm
> limiting this patch to the machines that are available/connected right
> now to the QEMU project on GitLab.
>
> I can change that to "distribution_family == Debian" if you think
> it's a good idea.  But IMO it'd make more sense for a patch
> introducing the package list for Debian systems to change that.
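
For illustration, the family-based check would be a one-line change
(sketch only; the relevant Ansible fact is named os_family, whose value
is 'Debian' for both Debian and Ubuntu):

    - name: Update apt cache
      apt:
        update_cache: yes
      when:
        # 'Debian' here matches Debian, Ubuntu and other derivatives
        - ansible_facts['os_family'] == 'Debian'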
>
>> > +
>> > +- name: Install basic packages to build QEMU on Ubuntu 18.04/20.04
>> > +  package:
>> > +name:
>> > +# Originally from tests/docker/dockerfiles/ubuntu1804.docker
>> > +  - ccache
>> > +  - clang
>> > +  - gcc
>> > +  - gettext
>> > +  - git
>> > +  - glusterfs-common
>> > +  - libaio-dev
>> > +  - libattr1-dev
>> > +  - libbrlapi-dev
>> > +  - libbz2-dev
>> > +  - libcacard-dev
>> > +  - 

Re: [PATCH v5 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 06:34:07PM +0100, Philippe Mathieu-Daudé wrote:
> On 2/23/21 6:24 PM, Philippe Mathieu-Daudé wrote:
> > On 2/23/21 5:47 PM, Cleber Rosa wrote:
> >> On Tue, Feb 23, 2021 at 05:37:04PM +0100, Philippe Mathieu-Daudé wrote:
> >>> On 2/23/21 12:25 PM, Thomas Huth wrote:
>  On 19/02/2021 22.58, Cleber Rosa wrote:
> > As described in the included documentation, the "custom runner" jobs
> > extend the GitLab CI jobs already in place.  One of their primary
> > goals of catching and preventing regressions on a wider number of host
> > systems than the ones provided by GitLab's shared runners.
> >
> > This sets the stage in which other community members can add their own
> > machine configuration documentation/scripts, and accompanying job
> > definitions.  As a general rule, those newly added contributed jobs
> > should run as "non-gating", until their reliability is verified (AKA
> > "allow_failure: true").
> >
> > Signed-off-by: Cleber Rosa 
> > ---
> >   .gitlab-ci.d/custom-runners.yml | 14 ++
> >   .gitlab-ci.yml  |  1 +
> >   docs/devel/ci.rst   | 28 
> >   docs/devel/index.rst    |  1 +
> >   4 files changed, 44 insertions(+)
> >   create mode 100644 .gitlab-ci.d/custom-runners.yml
> >   create mode 100644 docs/devel/ci.rst
> >
> > diff --git a/.gitlab-ci.d/custom-runners.yml
> > b/.gitlab-ci.d/custom-runners.yml
> > new file mode 100644
> > index 00..3004da2bda
> > --- /dev/null
> > +++ b/.gitlab-ci.d/custom-runners.yml
> > @@ -0,0 +1,14 @@
> > +# The CI jobs defined here require GitLab runners installed and
> > +# registered on machines that match their operating system names,
> > +# versions and architectures.  This is in contrast to the other CI
> > +# jobs that are intended to run on GitLab's "shared" runners.
> > +
> > +# Different than the default approach on "shared" runners, based on
> > +# containers, the custom runners have no such *requirement*, as those
> > +# jobs should be capable of running on operating systems with no
> > +# compatible container implementation, or no support from
> > +# gitlab-runner.  To avoid problems that gitlab-runner can cause while
> > +# reusing the GIT repository, let's enable the recursive submodule
> > +# strategy.
> > +variables:
> > +  GIT_SUBMODULE_STRATEGY: recursive
> 
>  Is it really necessary? I thought our configure script would take care
>  of the submodules?
> >>>
> >>
> >> I've done a lot of testing on bare metal systems, and the problems
> >> that come from reusing the same system and failed cleanups can be very
> >> frustrating.  It's unfortunate that we need this, but it was the
> >> simplest and most reliable solution I found.  :/
> >>
> >> Having said that, I noticed after I posted this series that this is
> >> affecting all other jobs.  We don't need it that in the jobs based
> >> on containers (for obvious reasons), so I see two options:
> >>
> >> 1) have it enabled on all jobs for consistency
> >>
> >> 2) have it enabled only on jobs that will reuse the repo
> >>
> >>> Well, if there is a failure during the first clone (I got one network
> >>> timeout in the middle) 
> > 
> > [This network failure is pasted at the end]
> > 
> >>> then next time it doesn't work:
> >>>
> >>> Updating/initializing submodules recursively...
> >>> Synchronizing submodule url for 'capstone'
> >>> Synchronizing submodule url for 'dtc'
> >>> Synchronizing submodule url for 'meson'
> >>> Synchronizing submodule url for 'roms/QemuMacDrivers'
> >>> Synchronizing submodule url for 'roms/SLOF'
> >>> Synchronizing submodule url for 'roms/edk2'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/esaxx'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/libdivsufsort'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/MdeModulePkg/Library/BrotliCustomDecompressLib/brotli'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/MdeModulePkg/Universal/RegularExpressionDxe/oniguruma'
> >>> Synchronizing submodule url for
> >>> 'roms/edk2/UnitTestFrameworkPkg/Library/CmockaLib/cmocka'
> 
> So far, beside the repository useful for QEMU, I cloned:
> 
> - boringssl
> - krb5
> - pyca-cryptography
> - esaxx
> - libdivsufsort
> - oniguruma
> - openssl
> - brotli
> - cmocka
>

Hi Phil,

I'm not following what you meant by "I cloned"... Are you experimenting
with this on a machine of your own and manually 

Re: [PATCH 2/3] scripts/ci/gitlab-pipeline-status: give more information on failures

2021-02-23 Thread Alex Bennée


Cleber Rosa  writes:

> When an HTTP GET request fails, it's useful to go beyond the "not
> successful" message, and show the code returned by the server.
>
> Signed-off-by: Cleber Rosa 

Reviewed-by: Alex Bennée 

> ---
>  scripts/ci/gitlab-pipeline-status | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/scripts/ci/gitlab-pipeline-status 
> b/scripts/ci/gitlab-pipeline-status
> index 0c1e8bd8a7..ad62ab3cfc 100755
> --- a/scripts/ci/gitlab-pipeline-status
> +++ b/scripts/ci/gitlab-pipeline-status
> @@ -56,7 +56,9 @@ def get_json_http_response(url):
>  connection.request('GET', url=url)
>  response = connection.getresponse()
>  if response.code != http.HTTPStatus.OK:
> -raise CommunicationFailure("Failed to receive a successful response")
> +msg = "Received unsuccessful response: %s (%s)" % (response.code,
> +   response.reason)
> +raise CommunicationFailure(msg)
>  return json.loads(response.read())


-- 
Alex Bennée



Re: [PATCH 1/3] scripts/ci/gitlab-pipeline-status: split utlity function for HTTP GET

2021-02-23 Thread Alex Bennée


Cleber Rosa  writes:

> This simply splits out the code that does an HTTP GET so that it
> can be used for other API requests.
>
> Signed-off-by: Cleber Rosa 

fix sp as per Wainer, otherwise:

Reviewed-by: Alex Bennée 

> ---
>  scripts/ci/gitlab-pipeline-status | 17 -
>  1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/scripts/ci/gitlab-pipeline-status 
> b/scripts/ci/gitlab-pipeline-status
> index 78e72f6008..0c1e8bd8a7 100755
> --- a/scripts/ci/gitlab-pipeline-status
> +++ b/scripts/ci/gitlab-pipeline-status
> @@ -48,18 +48,25 @@ def get_local_branch_commit(branch):
>  return result
>  
>  
> -def get_pipeline_status(project_id, commit_sha1):
> +def get_json_http_response(url):
>  """
> -Returns the JSON content of the pipeline status API response
> +Returns the JSON content of an HTTP GET request to gitlab.com
>  """
> -url = '/api/v4/projects/{}/pipelines?sha={}'.format(project_id,
> -commit_sha1)
>  connection = http.client.HTTPSConnection('gitlab.com')
>  connection.request('GET', url=url)
>  response = connection.getresponse()
>  if response.code != http.HTTPStatus.OK:
>  raise CommunicationFailure("Failed to receive a successful response")
> -json_response = json.loads(response.read())
> +return json.loads(response.read())
> +
> +
> +def get_pipeline_status(project_id, commit_sha1):
> +"""
> +Returns the JSON content of the pipeline status API response
> +"""
> +url = '/api/v4/projects/{}/pipelines?sha={}'.format(project_id,
> +commit_sha1)
> +json_response = get_json_http_response(url)
>  
>  # As far as I can tell, there should be only one pipeline for the same
>  # project + commit. If this assumption is false, we can add further


-- 
Alex Bennée



Re: [PATCH v4 16/21] i386: track explicit 'hv-*' features enablement/disablement

2021-02-23 Thread Vitaly Kuznetsov
Igor Mammedov  writes:

> On Tue, 23 Feb 2021 16:46:50 +0100
> Vitaly Kuznetsov  wrote:
>
>> Igor Mammedov  writes:
>> 
>> > On Mon, 22 Feb 2021 11:20:34 +0100
>> > Vitaly Kuznetsov  wrote:
>> >  
>> >> Vitaly Kuznetsov  writes:
>> >>   
>> >> > Igor Mammedov  writes:
>> >> >
>> >> >>> 
>> >> >>> We need to distinguish because that would be sane.
>> >> >>> 
>> >> >>> Enlightened VMCS is an extension to VMX, it can't be used without
>> >> >>> it. Genuine Hyper-V doesn't have a knob for enabling and disabling 
>> >> >>> it,
>> >> >> ...
>> >>> That being said, if
>> >> >>> guest CPU lacks VMX it is counter-productive to expose EVMCS. However,
>> >> >>> there is a problem with explicit enablement: what should
>> >> >>> 
>> >> >>> 'hv-passthrough,hv-evmcs' option do? Just silently drop EVMCS? Doesn't
>> >> >>> sound sane to me.
>> >> based on the above I'd error out if the user asks for an unsupported option
>> >> >> i.e. no VMX -> no hv-evmcs - if explicitly asked -> error out
>> >> >
>> >> > That's what I keep telling you but you don't seem to listen. 'Scratch
>> >> > CPU' can't possibly help with this use-case because when you parse 
>> >> >
>> >> > 'hv-passthrough,hv-evmcs,vmx=off' you
>> >> >
>> >> > 1) "hv-passthrough" -> set EVMCS bit to '1' as it is supported by the
>> >> > host.
>> >> >
>> >> > 2) 'hv-evmcs' -> keep EVMCS bit '1'
>> >> >
>> >> > 3) 'vmx=off' -> you have no idea where EVMCS bit came from.
>> >> >
>> >> > We have to remember which options were acquired from the host and which
>> >> > were set explicitly by the user. 
>> >> 
>> >> Igor,
>> >> 
>> >> could you please comment on the above? In case my line of thought is
>> >> correct, and it is impossible to distinguish between e.g.
>> >> 
>> >> 'hv-passthrough,hv-evmcs,-vmx'
>> >> and
>> >> 'hv-passthrough,-vmx'
>> >> 
>> >> without a custom parser (written just exactly the way I did in this
>> >> version, for example) regardless of when 'hv-passthrough' is
>> >> expanded. E.g. we have the exact same problem with
>> >> 'hv-default,hv-evmcs,-vmx'. In that case I see no point in discussing
>> >
>> > right, if we need to distinguish between explicit and implicit hv-evmcs 
>> > set by
>> > hv-passthrough custom parser probably the way to go.
>> >
>> > However do we need actually need to do it?  
>> 
>> I think we really need that. See below ...
>> 
>> > I'd treat 'hv-passthrough,-vmx' the same way as 
>> > 'hv-passthrough,hv-evmcs,-vmx'
>> > and it applies not only to hv-evmcs but also to other features
>> > hv-passthrough might set
>> > (i.e. if whatever was [un]set by hv-passthrough in combination with other
>> > features results in invalid config, QEMU shall error out instead of 
>> > magically
>> > altering host provided hv-passthrough value).
>> >
>> > something like:
>> >   'hv-passthrough,-vmx' when hv-passthrough makes hv-evmcs bit set
>> > should result in
>> >   error_setg(errp,"'vmx' feature can't be disabled when hv-evmcs is 
>> > enabled,"
>> >  " either enable 'vmx' or disable 'hv-evmcs' along with 
>> > disabling 'vmx'"
>> >
>> > making the host's feature set *magically* mutable, depending on other
>> > user-provided features,
>> > is a bit confusing. One would never know what hv-passthrough actually 
>> > means, and if
>> > enabling/disabling 'random' feature changes it.
>> >
>> > It's cleaner to do just what the user asked (whether implicitly or
>> > explicitly) and error out in case it ends up in a nonsense configuration.
>> >  
>> 
>> I don't seem to agree this is a sane behavior, especially if you replace
'hv-passthrough' with 'hv-default' above. Removing 'vmx' from the CPU for
Windows guests is common if you want to avoid a nested configuration:
>> even without any Hyper-V guests created, Windows itself is a Hyper-V
>> partition.
>> 
>> So a sane user will do:
>> 
>> '-cpu host,hv-default,vmx=off' 
>> 
>> and on Intel he will get an error, and on AMD he won't. 
>> 
>> So what you're suggesting actually defeats the whole purpose of
>> 'hv-default' as upper-layer tools (think libvirt) will need to know that
> I'd assume it would be hard for libvirt to use 'hv-default' from a
> migration point of view. It's semi-opaque (one can find out what features
> it sets indirectly by inspecting individual hv_foo features, and mgmt will
> need to know about them). If it mutates when other features are [un]set,
> upper
> layers might need to enumerate all these permutations to know which hosts
> are compatible or compare host feature sets every time before attempting
> migration.
>

That's exactly the opposite of the goal here, which is: make it
possible for upper layers to not know anything about Hyper-V
enlightenments besides 'hv-default'. Migration should work just fine if
the rest of the guest configuration matches -- then 'hv-default' will
create the exact same feature set (e.g. if 'vmx' was disabled on the
source it has to be disabled on the destination as well, it can't be
different)


>> Intel configurations for Windows 

[Bug 1916394] Re: [git] Cannot build qemu: FAILED: target/hexagon/semantics_generated.pyinc

2021-02-23 Thread briancain
** Changed in: qemu
 Assignee: (unassigned) => briancain (brian-cain)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1916394

Title:
  [git] Cannot build qemu: FAILED:
  target/hexagon/semantics_generated.pyinc

Status in QEMU:
  New

Bug description:
  Hello.

  I'm using Archlinux and I maintain qemu-git AUR package.

  I tried to build Qemu at commit
  4115aec9af2a3de5fa89a0b1daa12debcd7741ff but it stops with this error
  message:

  Found ninja-1.10.2 at /usr/bin/ninja
  [632/9068] Generating semantics_generated.pyinc with a custom command
  FAILED: target/hexagon/semantics_generated.pyinc
  @INPUT@ target/hexagon/semantics_generated.pyinc
  /bin/sh: line 1: @INPUT@: command not found
  [637/9068] Compiling C object 
fsdev/vi...proxy-helper.p/virtfs-proxy-helper.c.o
  ninja: build stopped: subcommand failed.

  ninja version: 1.10.2
  meson version: 0.57.1

  Downgrading meson doesn't change anything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1916394/+subscriptions



Re: [PATCH v4 16/21] i386: track explicit 'hv-*' features enablement/disablement

2021-02-23 Thread Igor Mammedov
On Tue, 23 Feb 2021 16:46:50 +0100
Vitaly Kuznetsov  wrote:

> Igor Mammedov  writes:
> 
> > On Mon, 22 Feb 2021 11:20:34 +0100
> > Vitaly Kuznetsov  wrote:
> >  
> >> Vitaly Kuznetsov  writes:
> >>   
> >> > Igor Mammedov  writes:
> >> >
> >> >>> 
> >> >>> We need to distinguish because that would be sane.
> >> >>> 
> >> >>> Enlightened VMCS is an extension to VMX, it can't be used without
> >> >>> it. Genuine Hyper-V doesn't have a knob for enabling and disabling it, 
> >> >>>
> >> >> ...
> >> >>> That being said, if
> >> >>> guest CPU lacks VMX it is counter-productive to expose EVMCS. However,
> >> >>> there is a problem with explicit enablement: what should
> >> >>> 
> >> >>> 'hv-passthrough,hv-evmcs' option do? Just silently drop EVMCS? Doesn't
> >> >>> sound sane to me.
> >> >> based on the above I'd error out if the user asks for an unsupported option
> >> >> i.e. no VMX -> no hv-evmcs - if explicitly asked -> error out
> >> >
> >> > That's what I keep telling you but you don't seem to listen. 'Scratch
> >> > CPU' can't possibly help with this use-case because when you parse 
> >> >
> >> > 'hv-passthrough,hv-evmcs,vmx=off' you
> >> >
> >> > 1) "hv-passthrough" -> set EVMCS bit to '1' as it is supported by the
> >> > host.
> >> >
> >> > 2) 'hv-evmcs' -> keep EVMCS bit '1'
> >> >
> >> > 3) 'vmx=off' -> you have no idea where EVMCS bit came from.
> >> >
> >> > We have to remember which options were acquired from the host and which
> >> > were set explicitly by the user. 
> >> 
> >> Igor,
> >> 
> >> could you please comment on the above? In case my line of thought is
> >> correct, and it is impossible to distinguish between e.g.
> >> 
> >> 'hv-passthrough,hv-evmcs,-vmx'
> >> and
> >> 'hv-passthrough,-vmx'
> >> 
> >> without a custom parser (written just exactly the way I did in this
> >> version, for example) regardless of when 'hv-passthrough' is
> >> expanded. E.g. we have the exact same problem with
> >> 'hv-default,hv-evmcs,-vmx'. In that case I see no point in discussing
> >
> > right, if we need to distinguish between explicit and implicit hv-evmcs set 
> > by
> > hv-passthrough custom parser probably the way to go.
> >
> > However do we need actually need to do it?  
> 
> I think we really need that. See below ...
> 
> > I'd treat 'hv-passthrough,-vmx' the same way as 
> > 'hv-passthrough,hv-evmcs,-vmx'
> > and it applies not only to hv-evmcs but also to other features hv-passthrough might set
> > (i.e. if whatever was [un]set by hv-passthrough in combination with other
> > features results in invalid config, QEMU shall error out instead of 
> > magically
> > altering host provided hv-passthrough value).
> >
> > something like:
> >   'hv-passthrough,-vmx' when hv-passthrough makes hv-evmcs bit set
> > should result in
> >   error_setg(errp,"'vmx' feature can't be disabled when hv-evmcs is 
> > enabled,"
> >  " either enable 'vmx' or disable 'hv-evmcs' along with 
> > disabling 'vmx'"
> >
> > making the host's feature set *magically* mutable, depending on other
> > user-provided features,
> > is a bit confusing. One would never know what hv-passthrough actually 
> > means, and if
> > enabling/disabling 'random' feature changes it.
> >
> > It's cleaner to do just what the user asked (whether implicitly or
> > explicitly) and error out in case it ends up in a nonsense configuration.
> >  
> 
> I don't seem to agree this is a sane behavior, especially if you replace
> 'hv-passthrough' with 'hv-default' above. Removing 'vmx' from the CPU for
> Windows guests is common if you want to avoid a nested configuration:
> even without any Hyper-V guests created, Windows itself is a Hyper-V
> partition.
> 
> So a sane user will do:
> 
> '-cpu host,hv-default,vmx=off' 
> 
> and on Intel he will get an error, and on AMD he won't. 
> 
> So what you're suggesting actually defeats the whole purpose of
> 'hv-default' as upper-layer tools (think libvirt) will need to know that
I'd assume it would be hard for libvirt to use 'hv-default' from a
migration point of view. It's semi-opaque (one can find out what features
it sets indirectly by inspecting individual hv_foo features, and mgmt will
need to know about them). If it mutates when other features are [un]set,
upper
layers might need to enumerate all these permutations to know which hosts
are compatible or compare host feature sets every time before attempting
migration.

> Intel configurations for Windows guests are somewhat different. They'll
> need to know what 'hv-evmcs' is. We're back to where we've started.

we were talking about hv-passthrough, and if the host advertises hv-evmcs
QEMU should complain if the user disabled features it depends on
(not silently fixing up the configuration error).
But the same applies to hv-default.
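
For illustration, such a check could look something like this (a sketch
using existing QEMU feature bits; cpu/env/errp are assumed from context,
and this is not the actual patch):

    if (hyperv_feat_enabled(cpu, HYPERV_FEAT_EVMCS) &&
        !(env->features[FEAT_1_ECX] & CPUID_EXT_VMX)) {
        /* reject the invalid combination instead of silently dropping it */
        error_setg(errp, "'hv-evmcs' requires 'vmx' to be enabled;"
                   " either enable 'vmx' or drop 'hv-evmcs'");
        return;
    }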

> If we are to follow this approach let's just throw away 'hv-evmcs' from
> 'hv-default' set, it's going to be much cleaner. But again, I don't
> really believe it's the right way to go.

if desired behavior, on Intel host for above 

Re: [PATCH v5 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-02-23 Thread Daniel P . Berrangé
On Tue, Feb 23, 2021 at 11:47:18AM -0500, Cleber Rosa wrote:
> On Tue, Feb 23, 2021 at 05:37:04PM +0100, Philippe Mathieu-Daudé wrote:
> > On 2/23/21 12:25 PM, Thomas Huth wrote:
> > > On 19/02/2021 22.58, Cleber Rosa wrote:
> > >> As described in the included documentation, the "custom runner" jobs
> > >> extend the GitLab CI jobs already in place.  One of their primary
> > >> goals is catching and preventing regressions on a wider number of host
> > >> systems than the ones provided by GitLab's shared runners.
> > >>
> > >> This sets the stage in which other community members can add their own
> > >> machine configuration documentation/scripts, and accompanying job
> > >> definitions.  As a general rule, those newly added contributed jobs
> > >> should run as "non-gating", until their reliability is verified (AKA
> > >> "allow_failure: true").
> > >>
> > >> Signed-off-by: Cleber Rosa 
> > >> ---
> > >>   .gitlab-ci.d/custom-runners.yml | 14 ++
> > >>   .gitlab-ci.yml  |  1 +
> > >>   docs/devel/ci.rst   | 28 
> > >>   docs/devel/index.rst    |  1 +
> > >>   4 files changed, 44 insertions(+)
> > >>   create mode 100644 .gitlab-ci.d/custom-runners.yml
> > >>   create mode 100644 docs/devel/ci.rst
> > >>
> > >> diff --git a/.gitlab-ci.d/custom-runners.yml
> > >> b/.gitlab-ci.d/custom-runners.yml
> > >> new file mode 100644
> > >> index 00..3004da2bda
> > >> --- /dev/null
> > >> +++ b/.gitlab-ci.d/custom-runners.yml
> > >> @@ -0,0 +1,14 @@
> > >> +# The CI jobs defined here require GitLab runners installed and
> > >> +# registered on machines that match their operating system names,
> > >> +# versions and architectures.  This is in contrast to the other CI
> > >> +# jobs that are intended to run on GitLab's "shared" runners.
> > >> +
> > >> +# Different than the default approach on "shared" runners, based on
> > >> +# containers, the custom runners have no such *requirement*, as those
> > >> +# jobs should be capable of running on operating systems with no
> > >> +# compatible container implementation, or no support from
> > >> +# gitlab-runner.  To avoid problems that gitlab-runner can cause while
> > >> +# reusing the GIT repository, let's enable the recursive submodule
> > >> +# strategy.
> > >> +variables:
> > >> +  GIT_SUBMODULE_STRATEGY: recursive
> > > 
> > > Is it really necessary? I thought our configure script would take care
> > > of the submodules?
> >
> 
> I've done a lot of testing on bare metal systems, and the problems
> that come from reusing the same system and failed cleanups can be very
> frustrating.  It's unfortunate that we need this, but it was the
> simplest and most reliable solution I found.  :/

Hmmm, this makes it sound like the job is not being run in a
fresh pristine checkout. IMHO we need to guarantee that in
general, at which point submodules should "just work", unless
the runner is blocking network access?
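
If the concern is stale working trees, one alternative (a sketch, not
something tested here) would be to force a pristine clone per job instead
of relying on submodule re-sync:

    variables:
      GIT_STRATEGY: clone
      GIT_SUBMODULE_STRATEGY: recursive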



Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 03:01:50PM +, Alex Bennée wrote:
> 
> Alex Bennée  writes:
> 
> > Cleber Rosa  writes:
> >
> >> To run basic jobs on custom runners, the environment needs to be
> >> properly set up.  The most common requirement is having the right
> >> packages installed.
> >>
> 
> >
> > So I got somewhat there with a direct command line invocation:
> >
> >   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 
> > scripts/ci/setup/build-environment.yml -e 
> > 'ansible_python_interpreter=/usr/bin/python3'
> >
> > although for some reason a single host -i fails...
> >
> >> diff --git a/scripts/ci/setup/build-environment.yml 
> >> b/scripts/ci/setup/build-environment.yml
> >> new file mode 100644
> >> index 00..0197e0a48b
> >> --- /dev/null
> >> +++ b/scripts/ci/setup/build-environment.yml
> >> @@ -0,0 +1,76 @@
> >> +---
> >> +- name: Installation of basic packages to build QEMU
> >> +  hosts: all
> >> +  tasks:
> >> +- name: Update apt cache
> >> +  apt:
> >> +update_cache: yes
> >> +  when:
> >> +- ansible_facts['distribution'] == 'Ubuntu'
> >
> > So are we limiting to Ubuntu here rather than say a Debian base?
> 
> Also I'm getting:
> 
>   TASK [Update apt cache] 
> *
>   fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 
> 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error 
> while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 
> 'dict object' has no attribute 'distribution'\n\nThe error appears to have 
> been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': 
> line 5, column 7, but may\nbe elsewhere in the file depending on the exact 
> syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n- name: 
> Update apt cache\n  ^ here\n"}
> 
> which is odd given that machine is definitely an Ubuntu one.
>

It's definitely odd.  This is what I get on a fresh machine:

   TASK [Update apt cache] 
*
   [WARNING]: Updating cache and auto-installing missing dependency: python3-apt
   ok: [localhost]

Could you please let me know the output of:

   $ ansible -m setup -u $YOUR_USERNAME -i $HOSTNAME, all | grep 
ansible_distribution
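
On an Ubuntu machine that should print facts along these lines (the
values are illustrative):

   "ansible_distribution": "Ubuntu",
   "ansible_distribution_release": "focal",
   "ansible_distribution_version": "20.04",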

Thanks,
- Cleber.




RE: [PATCH v2 01/22] block: add eMMC block device type

2021-02-23 Thread Sai Pavan Boddu
Hi Philippe,

> -Original Message-
> From: Philippe Mathieu-Daudé 
> Sent: Monday, February 22, 2021 5:34 PM
> To: Sai Pavan Boddu ; Markus Armbruster
> ; Kevin Wolf ; Max Reitz
> ; Vladimir Sementsov-Ogievskiy
> ; Eric Blake ; Joel Stanley
> ; Cédric Le Goater ; Vincent Palatin
> ; Dr. David Alan Gilbert ;
> Thomas Huth ; Stefan Hajnoczi ;
> Peter Maydell ; Alistair Francis
> ; Edgar Iglesias ; Luc Michel
> ; Paolo Bonzini 
> Cc: Sai Pavan Boddu ; qemu-devel@nongnu.org; qemu-
> bl...@nongnu.org
> Subject: Re: [PATCH v2 01/22] block: add eMMC block device type
> 
> On 2/22/21 9:20 AM, Sai Pavan Boddu wrote:
> > From: Vincent Palatin 
> >
> > Add new block device type.
> >
> > Signed-off-by: Vincent Palatin 
> > [SPB: Rebased over 5.1 version]
> > Signed-off-by: Sai Pavan Boddu 
> > Signed-off-by: Joel Stanley 
> > Signed-off-by: Cédric Le Goater 
> > Reviewed-by: Alistair Francis 
> > ---
> >  include/sysemu/blockdev.h | 1 +
> >  blockdev.c| 1 +
> >  2 files changed, 2 insertions(+)
> >
> > diff --git a/include/sysemu/blockdev.h b/include/sysemu/blockdev.h
> > index 3b5fcda..eefae9f 100644
> > --- a/include/sysemu/blockdev.h
> > +++ b/include/sysemu/blockdev.h
> > @@ -24,6 +24,7 @@ typedef enum {
> >   */
> >  IF_NONE = 0,
> >  IF_IDE, IF_SCSI, IF_FLOPPY, IF_PFLASH, IF_MTD, IF_SD, IF_VIRTIO,
> > IF_XEN,
> > +IF_EMMC,
> >  IF_COUNT
> >  } BlockInterfaceType;
> >
> > diff --git a/blockdev.c b/blockdev.c
> > index cd438e6..390d43c 100644
> > --- a/blockdev.c
> > +++ b/blockdev.c
> > @@ -83,6 +83,7 @@ static const char *const if_name[IF_COUNT] = {
> >  [IF_SD] = "sd",
> >  [IF_VIRTIO] = "virtio",
> >  [IF_XEN] = "xen",
> > +[IF_EMMC] = "emmc",
> >  };
> 
> We don't need to introduce support for the legacy -drive magic.
> 
> -device should be enough for this device, right?
[Sai Pavan Boddu] I was looking to use -device for emmc. But I see we anyway
need blockdev support for this, which would require us to use -drive.

Can you give some pointers on how to approach this?

Regards,
Sai Pavan



Re: [PATCH v5 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/23/21 6:24 PM, Philippe Mathieu-Daudé wrote:
> On 2/23/21 5:47 PM, Cleber Rosa wrote:
>> On Tue, Feb 23, 2021 at 05:37:04PM +0100, Philippe Mathieu-Daudé wrote:
>>> On 2/23/21 12:25 PM, Thomas Huth wrote:
 On 19/02/2021 22.58, Cleber Rosa wrote:
> As described in the included documentation, the "custom runner" jobs
> extend the GitLab CI jobs already in place.  One of their primary
> goals is catching and preventing regressions on a wider number of host
> systems than the ones provided by GitLab's shared runners.
>
> This sets the stage in which other community members can add their own
> machine configuration documentation/scripts, and accompanying job
> definitions.  As a general rule, those newly added contributed jobs
> should run as "non-gating", until their reliability is verified (AKA
> "allow_failure: true").
>
> Signed-off-by: Cleber Rosa 
> ---
>   .gitlab-ci.d/custom-runners.yml | 14 ++
>   .gitlab-ci.yml  |  1 +
>   docs/devel/ci.rst   | 28 
>   docs/devel/index.rst    |  1 +
>   4 files changed, 44 insertions(+)
>   create mode 100644 .gitlab-ci.d/custom-runners.yml
>   create mode 100644 docs/devel/ci.rst
>
> diff --git a/.gitlab-ci.d/custom-runners.yml
> b/.gitlab-ci.d/custom-runners.yml
> new file mode 100644
> index 00..3004da2bda
> --- /dev/null
> +++ b/.gitlab-ci.d/custom-runners.yml
> @@ -0,0 +1,14 @@
> +# The CI jobs defined here require GitLab runners installed and
> +# registered on machines that match their operating system names,
> +# versions and architectures.  This is in contrast to the other CI
> +# jobs that are intended to run on GitLab's "shared" runners.
> +
> +# Different than the default approach on "shared" runners, based on
> +# containers, the custom runners have no such *requirement*, as those
> +# jobs should be capable of running on operating systems with no
> +# compatible container implementation, or no support from
> +# gitlab-runner.  To avoid problems that gitlab-runner can cause while
> +# reusing the GIT repository, let's enable the recursive submodule
> +# strategy.
> +variables:
> +  GIT_SUBMODULE_STRATEGY: recursive

 Is it really necessary? I thought our configure script would take care
 of the submodules?
>>>
>>
>> I've done a lot of testing on bare metal systems, and the problems
>> that come from reusing the same system and failed cleanups can be very
>> frustrating.  It's unfortunate that we need this, but it was the
>> simplest and most reliable solution I found.  :/
>>
>> Having said that, I noticed after I posted this series that this is
>> affecting all other jobs.  We don't need that in the jobs based
>> on containers (for obvious reasons), so I see two options:
>>
>> 1) have it enabled on all jobs for consistency
>>
>> 2) have it enabled only on jobs that will reuse the repo
>>
>>> Well, if there is a failure during the first clone (I got one network
>>> timeout in the middle) 
> 
> [This network failure is pasted at the end]
> 
>>> then next time it doesn't work:
>>>
>>> Updating/initializing submodules recursively...
>>> Synchronizing submodule url for 'capstone'
>>> Synchronizing submodule url for 'dtc'
>>> Synchronizing submodule url for 'meson'
>>> Synchronizing submodule url for 'roms/QemuMacDrivers'
>>> Synchronizing submodule url for 'roms/SLOF'
>>> Synchronizing submodule url for 'roms/edk2'
>>> Synchronizing submodule url for
>>> 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
>>> Synchronizing submodule url for
>>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli'
>>> Synchronizing submodule url for
>>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/esaxx'
>>> Synchronizing submodule url for
>>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/libdivsufsort'
>>> Synchronizing submodule url for
>>> 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl'
>>> Synchronizing submodule url for
>>> 'roms/edk2/MdeModulePkg/Library/BrotliCustomDecompressLib/brotli'
>>> Synchronizing submodule url for
>>> 'roms/edk2/MdeModulePkg/Universal/RegularExpressionDxe/oniguruma'
>>> Synchronizing submodule url for
>>> 'roms/edk2/UnitTestFrameworkPkg/Library/CmockaLib/cmocka'

So far, besides the repositories useful for QEMU, I cloned:

- boringssl
- krb5
- pyca-cryptography
- esaxx
- libdivsufsort
- oniguruma
- openssl
- brotli
- cmocka

But I reached the runner time limit of 2h.

The directory reports 3GB of source code.

I don't think the series has been tested enough before posting,
I'm stopping here my experiments.

Regards,

Phil.




Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 03:17:24PM +, Alex Bennée wrote:
> 
> Erik Skultety  writes:
> 
> > On Tue, Feb 23, 2021 at 02:01:53PM +, Alex Bennée wrote:
> >> 
> >> Cleber Rosa  writes:
> >> 
> >> > To run basic jobs on custom runners, the environment needs to be
> >> > properly set up.  The most common requirement is having the right
> >> > packages installed.
> >> >
> >> > The playbook introduced here covers the QEMU's project s390x and
> >> > aarch64 machines.  At the time this is being proposed, those machines
> >> > have already had this playbook applied to them.
> >> >
> >> > Signed-off-by: Cleber Rosa 
> >> > ---
> >> >  docs/devel/ci.rst  | 30 ++
> >> >  scripts/ci/setup/build-environment.yml | 76 ++
> >> >  scripts/ci/setup/inventory |  1 +
> >> >  3 files changed, 107 insertions(+)
> >> >  create mode 100644 scripts/ci/setup/build-environment.yml
> >> >  create mode 100644 scripts/ci/setup/inventory
> >> >
> >> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
> >> > index 585b7bf4b8..a556558435 100644
> >> > --- a/docs/devel/ci.rst
> >> > +++ b/docs/devel/ci.rst
> >> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner".
> >> >  The GitLab CI jobs definition for the custom runners are located under::
> >> >  
> >> >.gitlab-ci.d/custom-runners.yml
> >> > +
> >> > +Machine Setup Howto
> >> > +---
> >> > +
> >> > +For all Linux based systems, the setup can be mostly automated by the
> >> > +execution of two Ansible playbooks.  Start by adding your machines to
> >> > +the ``inventory`` file under ``scripts/ci/setup``, such as this::
> >> > +
> >> > +  fully.qualified.domain
> >> > +  other.machine.hostname
> >> 
> >> Is this really needed? Can't the host list be passed in the command
> >> line? I find it odd to imagine users wanting to configure whole fleets
> >> of runners.
> >
> > Why not support both, since the playbook execution is not wrapped by 
> > anything,
> > giving the option of using either an inventory or direct cmdline invocation
> > seems like the proper way to do it.
> 
> Sure - and I dare say people used to managing fleets of servers will
> want to do it properly, but in the first instance let's provide the simple
> command line option so a user can get up and running without also
> ensuring files are in the correct format.
>

Like I said before, I'm strongly in favor of more straightforward
documentation, instead of documenting multiple ways to perform the
same task.  I clearly believe that writing the inventory file (which
will later be used for the second gitlab-runner playbook) is the best
choice here.

Do you think the command line approach is clearer?  Should we switch?

Regards,
Cleber.




Re: [PATCH v5 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-02-23 Thread Philippe Mathieu-Daudé
On 2/23/21 5:47 PM, Cleber Rosa wrote:
> On Tue, Feb 23, 2021 at 05:37:04PM +0100, Philippe Mathieu-Daudé wrote:
>> On 2/23/21 12:25 PM, Thomas Huth wrote:
>>> On 19/02/2021 22.58, Cleber Rosa wrote:
 As described in the included documentation, the "custom runner" jobs
 extend the GitLab CI jobs already in place.  One of their primary
 goals is catching and preventing regressions on a wider number of host
 systems than the ones provided by GitLab's shared runners.

 This sets the stage in which other community members can add their own
 machine configuration documentation/scripts, and accompanying job
 definitions.  As a general rule, those newly added contributed jobs
 should run as "non-gating", until their reliability is verified (AKA
 "allow_failure: true").

 Signed-off-by: Cleber Rosa 
 ---
   .gitlab-ci.d/custom-runners.yml | 14 ++
   .gitlab-ci.yml  |  1 +
   docs/devel/ci.rst   | 28 
   docs/devel/index.rst    |  1 +
   4 files changed, 44 insertions(+)
   create mode 100644 .gitlab-ci.d/custom-runners.yml
   create mode 100644 docs/devel/ci.rst

 diff --git a/.gitlab-ci.d/custom-runners.yml
 b/.gitlab-ci.d/custom-runners.yml
 new file mode 100644
 index 00..3004da2bda
 --- /dev/null
 +++ b/.gitlab-ci.d/custom-runners.yml
 @@ -0,0 +1,14 @@
 +# The CI jobs defined here require GitLab runners installed and
 +# registered on machines that match their operating system names,
 +# versions and architectures.  This is in contrast to the other CI
 +# jobs that are intended to run on GitLab's "shared" runners.
 +
 +# Different than the default approach on "shared" runners, based on
 +# containers, the custom runners have no such *requirement*, as those
 +# jobs should be capable of running on operating systems with no
 +# compatible container implementation, or no support from
 +# gitlab-runner.  To avoid problems that gitlab-runner can cause while
 +# reusing the GIT repository, let's enable the recursive submodule
 +# strategy.
 +variables:
 +  GIT_SUBMODULE_STRATEGY: recursive
>>>
>>> Is it really necessary? I thought our configure script would take care
>>> of the submodules?
>>
> 
> I've done a lot of testing on bare metal systems, and the problems
> that come from reusing the same system and failed cleanups can be very
> frustrating.  It's unfortunate that we need this, but it was the
> simplest and most reliable solution I found.  :/
> 
> Having said that, I noticed after I posted this series that this is
> affecting all other jobs.  We don't need that in the jobs based
> on containers (for obvious reasons), so I see two options:
> 
> 1) have it enabled on all jobs for consistency
> 
> 2) have it enabled only on jobs that will reuse the repo
> 
>> Well, if there is a failure during the first clone (I got one network
>> timeout in the middle) 

[This network failure is pasted at the end]

>> then next time it doesn't work:
>>
>> Updating/initializing submodules recursively...
>> Synchronizing submodule url for 'capstone'
>> Synchronizing submodule url for 'dtc'
>> Synchronizing submodule url for 'meson'
>> Synchronizing submodule url for 'roms/QemuMacDrivers'
>> Synchronizing submodule url for 'roms/SLOF'
>> Synchronizing submodule url for 'roms/edk2'
>> Synchronizing submodule url for
>> 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
>> Synchronizing submodule url for
>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli'
>> Synchronizing submodule url for
>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/esaxx'
>> Synchronizing submodule url for
>> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/libdivsufsort'
>> Synchronizing submodule url for
>> 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl'
>> Synchronizing submodule url for
>> 'roms/edk2/MdeModulePkg/Library/BrotliCustomDecompressLib/brotli'
>> Synchronizing submodule url for
>> 'roms/edk2/MdeModulePkg/Universal/RegularExpressionDxe/oniguruma'
>> Synchronizing submodule url for
>> 'roms/edk2/UnitTestFrameworkPkg/Library/CmockaLib/cmocka'
>> Synchronizing submodule url for 'roms/ipxe'
>> Synchronizing submodule url for 'roms/openbios'
>> Synchronizing submodule url for 'roms/opensbi'
>> Synchronizing submodule url for 'roms/qboot'
>> Synchronizing submodule url for 'roms/qemu-palcode'
>> Synchronizing submodule url for 'roms/seabios'
>> Synchronizing submodule url for 'roms/seabios-hppa'
>> Synchronizing submodule url for 'roms/sgabios'
>> Synchronizing submodule url for 'roms/skiboot'
>> Synchronizing submodule url for 'roms/u-boot'
>> Synchronizing submodule url for 'roms/u-boot-sam460ex'
>> Synchronizing submodule url for 'roms/vbootrom'
>> Synchronizing submodule url for 'slirp'
>> Synchronizing submodule url for 'tests/fp/berkeley-softfloat-3'

Re: [PATCH v2 0/2] block: Use 'read-zeroes=true' mode by default with 'null-co' driver

2021-02-23 Thread Fam Zheng
On 2021-02-23 17:01, Max Reitz wrote:
> On 23.02.21 10:21, Fam Zheng wrote:
> > On 2021-02-22 18:55, Philippe Mathieu-Daudé wrote:
> > > On 2/22/21 6:35 PM, Fam Zheng wrote:
> > > > On 2021-02-19 15:09, Philippe Mathieu-Daudé wrote:
> > > > > On 2/19/21 12:07 PM, Max Reitz wrote:
> > > > > > On 13.02.21 22:54, Fam Zheng wrote:
> > > > > > > On 2021-02-11 15:26, Philippe Mathieu-Daudé wrote:
> > > > > > > > The null-co driver doesn't zeroize buffer in its default config,
> > > > > > > > because it is designed for testing and tests want to run fast.
> > > > > > > > However this confuses security researchers (access to uninit
> > > > > > > > buffers).
> > > > > > > 
> > > > > > > I'm a little surprised.
> > > > > > > 
> > > > > > > Is changing default the only way to fix this? I'm not opposed to
> > > > > > > changing the default but I'm not convinced this is the easiest 
> > > > > > > way.
> > > > > > > block/nvme.c also doesn't touch the memory, but defers to the 
> > > > > > > device
> > > > > > > DMA, why doesn't that confuse the security checker?
> > > > > 
> > > > > Generally speaking, there is a balance between security and 
> > > > > performance.
> > > > > We try to provide both, but when we can't, my understanding is 
> > > > > security
> > > > > is more important.
> > > > 
> > > > Why is hiding the code path behind a non-default more secure? What is
> > > > not secure now?
> > > 
> > > So we are back to the problem of having default values.
> > > 
> > > I'd like to remove the default and have the option explicit,
> > > but qemu_opt_get_bool() expects a 'default' value.
> > > 
> > > Should we rename qemu_opt_get_bool() -> qemu_opt_get_bool_with_default()
> > > and add a simpler qemu_opt_get_bool()?
> > 
> > My point is I still don't get the full context of the problem this
> > series is trying to solve. If the problem is tools are confused, I would
> > like to understand why. What is the thing that matters here, exactly?
> 
> My personal opinion is that it isn’t even about tools, it’s just about the
> fact that operating on uninitialized data is something that should generally
> be avoided.  For the null drivers, there is a reason to allow it
> (performance testing), but that should be a special case, not the default.

How do we define uninitialized data? Are buffers passed to a successful
read(2) syscall initialized? We cannot know, because it's up to the
fs/driver/storage. It's the same with bdrv_pread().

In fact block/null.c doesn't operate on uninitialized data; we should
really fix guess_disk_lchs() and similar.

> 
> > But there's always been nullblk.ko in kernel that doesn't lie in the
> > name. If we change the default we are not very honest any more about
> > what the driver actually does.
> 
> Then we’re already lying, because if we model it after /dev/null, we should
> probably return -EIO on every read.

nullblk.ko is not /dev/null, it's /dev/nullb*:

https://www.kernel.org/doc/Documentation/block/null_blk.txt

Fam




Re: [PATCH v2 4/4] utils: Deprecate inexact fractional suffix sizes

2021-02-23 Thread Daniel P. Berrangé
On Thu, Feb 11, 2021 at 02:44:38PM -0600, Eric Blake wrote:
> The value '1.1k' is inexact; 1126.4 bytes is not possible, so we
> happen to truncate it to 1126.  Our use of fractional sizes is
> intended for convenience, but when a user specifies a fraction that is
> not a clean translation to binary, truncating/rounding behind their
> backs can cause confusion.  Better is to deprecate inexact values,
> which still leaves '1.5k' as valid, but alerts the user to spell out
> their values as a precise byte number in cases where they are
> currently being rounded.
> 
> Note that values like '0.1G' in the testsuite need adjustment as a
> result.
> 
> Since qemu_strtosz() does not have an Err** parameter, and plumbing
> that in would be a much larger task, we instead go with just directly
> emitting the deprecation warning to stderr.
> 
> Signed-off-by: Eric Blake 
> 
> ---
> 
> I'm not a fan of this patch, but am proposing it for discussion purposes.

Likewise. I'm *not* in favour of this patch.

Allowing some fractions but not other fractions forces the potential
user to figure out what the exact fraction is up front, at which point
they've lost the benefit of using fractions. If users actually care
about byte exact values then they already have the option to specify
those exactly. If they've instead chosen to use fractions then they
have implicitly decided they're ok with the potentially in-exact
answer.

IMHO the only question is whether we should truncate or round, and
I don't really have a preference - either is fine as long as we
are intentionally picking one and documenting it.
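
For a concrete case: '1.7k' is 1.7 * 1024 = 1740.8 bytes, so truncating
gives 1740 while rounding gives 1741.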

> ---
>  docs/system/deprecated.rst | 9 +
>  tests/test-cutils.c| 6 +++---
>  tests/test-keyval.c| 4 ++--
>  tests/test-qemu-opts.c | 4 ++--
>  util/cutils.c  | 9 +++--
>  5 files changed, 23 insertions(+), 9 deletions(-)
> 
> diff --git a/docs/system/deprecated.rst b/docs/system/deprecated.rst
> index 113c2e933f1b..2c9cb849eec5 100644
> --- a/docs/system/deprecated.rst
> +++ b/docs/system/deprecated.rst
> @@ -154,6 +154,15 @@ Input parameters that take a size value should only use 
> a size suffix
>  the value is hexadecimal.  That is, '0x20M' is deprecated, and should
>  be written either as '32M' or as '0x200'.
> 
> +inexact sizes via scaled fractions (since 6.0)
> +''
> +
> +Input parameters that take a size value should only use a fractional
> +size (such as '1.5M') that will result in an exact byte value.  The
> +use of inexact values (such as '1.1M') that require truncation or
> +rounding is deprecated, and you should instead consider writing your
> +unusual size in bytes (here, '1153433' or '1153434' as desired).


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH v2 3/4] utils: Deprecate hex-with-suffix sizes

2021-02-23 Thread Daniel P. Berrangé
On Thu, Feb 11, 2021 at 02:44:37PM -0600, Eric Blake wrote:
> Supporting '0x20M' looks odd, particularly since we have a 'B' suffix
> that is ambiguous for bytes, as well as a less-frequently-used 'E'
> suffix for extremely large exibytes.  In practice, people using hex
> inputs are specifying values in bytes (and would have written
> 0x200, or possibly relied on default_suffix in the case of
> qemu_strtosz_MiB), and the use of scaling suffixes makes the most
> sense for inputs in decimal (where the user would write 32M).  But
> rather than outright dropping support for hex-with-suffix, let's
> follow our deprecation policy.  Sadly, since qemu_strtosz() does not
> have an Err** parameter, and plumbing that in would be a much larger
> task, we instead go with just directly emitting the deprecation
> warning to stderr.
> 
> Signed-off-by: Eric Blake 
> ---
>  docs/system/deprecated.rst |  8 
>  util/cutils.c  | 10 +-
>  2 files changed, 17 insertions(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 02:01:53PM +, Alex Bennée wrote:
> 
> Cleber Rosa  writes:
> 
> > To run basic jobs on custom runners, the environment needs to be
> > properly set up.  The most common requirement is having the right
> > packages installed.
> >
> > The playbook introduced here covers the QEMU's project s390x and
> > aarch64 machines.  At the time this is being proposed, those machines
> > have already had this playbook applied to them.
> >
> > Signed-off-by: Cleber Rosa 
> > ---
> >  docs/devel/ci.rst  | 30 ++
> >  scripts/ci/setup/build-environment.yml | 76 ++
> >  scripts/ci/setup/inventory |  1 +
> >  3 files changed, 107 insertions(+)
> >  create mode 100644 scripts/ci/setup/build-environment.yml
> >  create mode 100644 scripts/ci/setup/inventory
> >
> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
> > index 585b7bf4b8..a556558435 100644
> > --- a/docs/devel/ci.rst
> > +++ b/docs/devel/ci.rst
> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner".
> >  The GitLab CI jobs definition for the custom runners are located under::
> >  
> >.gitlab-ci.d/custom-runners.yml
> > +
> > +Machine Setup Howto
> > +---
> > +
> > +For all Linux based systems, the setup can be mostly automated by the
> > +execution of two Ansible playbooks.  Start by adding your machines to
> > +the ``inventory`` file under ``scripts/ci/setup``, such as this::
> > +
> > +  fully.qualified.domain
> > +  other.machine.hostname
> 
> Is this really needed? Can't the host list be passed in the command
> > line? I find it odd to imagine users wanting to configure whole fleets
> of runners.
>

No, it's not needed.

But, in my experience, it's the most common way people use
ansible-playbook.  As with all most tools QEMU relies on, that are
many different ways of using them.  IMO documenting more than one way
to perform the same task makes the documentation unclear.

> > +
> > +You may need to set some variables in the inventory file itself.  One
> > +very common need is to tell Ansible to use a Python 3 interpreter on
> > +those hosts.  This would look like::
> > +
> > +  fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
> > +  other.machine.hostname ansible_python_interpreter=/usr/bin/python3
> > +
> > +Build environment
> > +~
> > +
> > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
> > +set up machines with the environment needed to perform builds and run
> > +QEMU tests.  It covers a number of different Linux distributions and
> > +FreeBSD.
> > +
> > +To run the playbook, execute::
> > +
> > +  cd scripts/ci/setup
> > +  ansible-playbook -i inventory build-environment.yml
> 
> So I got somewhat there with a direct command line invocation:
> 
>   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 
> scripts/ci/setup/build-environment.yml -e 
> 'ansible_python_interpreter=/usr/bin/python3'
>

Yes, and the "-e" is another example of the multiple ways to achieve
the same task.

> although for some reason a single host -i fails...
> 
> > diff --git a/scripts/ci/setup/build-environment.yml 
> > b/scripts/ci/setup/build-environment.yml

It requires a comma-separated list, even if it's a list with a single
item:

   
https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-i
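
So a single-host invocation would look something like this (the host
address is just an example):

   ansible-playbook -u root -i 192.168.122.24, build-environment.yml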

> > new file mode 100644
> > index 00..0197e0a48b
> > --- /dev/null
> > +++ b/scripts/ci/setup/build-environment.yml
> > @@ -0,0 +1,76 @@
> > +---
> > +- name: Installation of basic packages to build QEMU
> > +  hosts: all
> > +  tasks:
> > +- name: Update apt cache
> > +  apt:
> > +update_cache: yes
> > +  when:
> > +- ansible_facts['distribution'] == 'Ubuntu'
> 
> So are we limiting to Ubuntu here rather than say a Debian base?
>

You have a point, because this would certainly work and be applicable
to Debian systems too.  But this is a new addition in v5, and I'm
limiting this patch to the machines that are available/connected right
now to the QEMU project on GitLab.

I can change that to "distribution_family == Debian" if you think
it's a good idea.  But IMO it'd make more sense for a patch
introducing the package list for Debian systems to change that.
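
For reference, a sketch of that condition using Ansible's os_family
fact (which covers both Debian and Ubuntu):

   when:
     - ansible_facts['os_family'] == 'Debian'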

> > +
> > +- name: Install basic packages to build QEMU on Ubuntu 18.04/20.04
> > +  package:
> > +name:
> > +# Originally from tests/docker/dockerfiles/ubuntu1804.docker
> > +  - ccache
> > +  - clang
> > +  - gcc
> > +  - gettext
> > +  - git
> > +  - glusterfs-common
> > +  - libaio-dev
> > +  - libattr1-dev
> > +  - libbrlapi-dev
> > +  - libbz2-dev
> > +  - libcacard-dev
> > +  - libcap-ng-dev
> > +  - libcurl4-gnutls-dev
> > +  - libdrm-dev
> > +  - libepoxy-dev
> > +  - libfdt-dev
> > +  - libgbm-dev
> > +  

Re: [PATCH v5 2/4] Jobs based on custom runners: build environment docs and playbook

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 03:51:33PM +0100, Erik Skultety wrote:
> On Tue, Feb 23, 2021 at 02:01:53PM +, Alex Bennée wrote:
> > 
> > Cleber Rosa  writes:
> > 
> > > To run basic jobs on custom runners, the environment needs to be
> > > properly set up.  The most common requirement is having the right
> > > packages installed.
> > >
> > > The playbook introduced here covers the QEMU's project s390x and
> > > aarch64 machines.  At the time this is being proposed, those machines
> > > have already had this playbook applied to them.
> > >
> > > Signed-off-by: Cleber Rosa 
> > > ---
> > >  docs/devel/ci.rst  | 30 ++
> > >  scripts/ci/setup/build-environment.yml | 76 ++
> > >  scripts/ci/setup/inventory |  1 +
> > >  3 files changed, 107 insertions(+)
> > >  create mode 100644 scripts/ci/setup/build-environment.yml
> > >  create mode 100644 scripts/ci/setup/inventory
> > >
> > > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst
> > > index 585b7bf4b8..a556558435 100644
> > > --- a/docs/devel/ci.rst
> > > +++ b/docs/devel/ci.rst
> > > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner".
> > >  The GitLab CI jobs definition for the custom runners are located under::
> > >  
> > >.gitlab-ci.d/custom-runners.yml
> > > +
> > > +Machine Setup Howto
> > > +---
> > > +
> > > +For all Linux based systems, the setup can be mostly automated by the
> > > +execution of two Ansible playbooks.  Start by adding your machines to
> > > +the ``inventory`` file under ``scripts/ci/setup``, such as this::
> > > +
> > > +  fully.qualified.domain
> > > +  other.machine.hostname
> > 
> > Is this really needed? Can't the host list be passed in the command
> > line? I find it odd to imagine users wanting to configure whole fleets
> > of runners.
> 
> Why not support both, since the playbook execution is not wrapped by anything,
> giving the option of using either an inventory or direct cmdline invocation
> seems like the proper way to do it.
>

Well, these two (and possibly many others) are supported by
ansible-playbook.  I don't think we should document more than one
though, as it leads to more confusing documentation.

> > 
> > > +
> > > +You may need to set some variables in the inventory file itself.  One
> > > +very common need is to tell Ansible to use a Python 3 interpreter on
> > > +those hosts.  This would look like::
> > > +
> > > +  fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
> > > +  other.machine.hostname ansible_python_interpreter=/usr/bin/python3
> > > +
> > > +Build environment
> > > +~
> > > +
> > > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will
> > > +set up machines with the environment needed to perform builds and run
> > > +QEMU tests.  It covers a number of different Linux distributions and
> > > +FreeBSD.
> > > +
> > > +To run the playbook, execute::
> > > +
> > > +  cd scripts/ci/setup
> > > +  ansible-playbook -i inventory build-environment.yml
> > 
> > So I got somewhat there with a direct command line invocation:
> > 
> >   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 
> > scripts/ci/setup/build-environment.yml -e 
> > 'ansible_python_interpreter=/usr/bin/python3'
> > 
> > although for some reason a single host -i fails...
> 
> The trick is to end it with a ',' like "-i host1,"
>

Yep, that is the trick!  A weird one nevertheless... :)

> Erik

Thanks for the review and comments so far Erik!

Best,
- Cleber.




Re: [PATCH v5 1/4] Jobs based on custom runners: documentation and configuration placeholder

2021-02-23 Thread Cleber Rosa
On Tue, Feb 23, 2021 at 05:37:04PM +0100, Philippe Mathieu-Daudé wrote:
> On 2/23/21 12:25 PM, Thomas Huth wrote:
> > On 19/02/2021 22.58, Cleber Rosa wrote:
> >> As described in the included documentation, the "custom runner" jobs
> >> extend the GitLab CI jobs already in place.  One of their primary
> >> goals is catching and preventing regressions on a wider number of host
> >> systems than the ones provided by GitLab's shared runners.
> >>
> >> This sets the stage in which other community members can add their own
> >> machine configuration documentation/scripts, and accompanying job
> >> definitions.  As a general rule, those newly added contributed jobs
> >> should run as "non-gating", until their reliability is verified (AKA
> >> "allow_failure: true").
> >>
> >> Signed-off-by: Cleber Rosa 
> >> ---
> >>   .gitlab-ci.d/custom-runners.yml | 14 ++
> >>   .gitlab-ci.yml  |  1 +
> >>   docs/devel/ci.rst   | 28 
> >>   docs/devel/index.rst    |  1 +
> >>   4 files changed, 44 insertions(+)
> >>   create mode 100644 .gitlab-ci.d/custom-runners.yml
> >>   create mode 100644 docs/devel/ci.rst
> >>
> >> diff --git a/.gitlab-ci.d/custom-runners.yml
> >> b/.gitlab-ci.d/custom-runners.yml
> >> new file mode 100644
> >> index 00..3004da2bda
> >> --- /dev/null
> >> +++ b/.gitlab-ci.d/custom-runners.yml
> >> @@ -0,0 +1,14 @@
> >> +# The CI jobs defined here require GitLab runners installed and
> >> +# registered on machines that match their operating system names,
> >> +# versions and architectures.  This is in contrast to the other CI
> >> +# jobs that are intended to run on GitLab's "shared" runners.
> >> +
> >> +# Different than the default approach on "shared" runners, based on
> >> +# containers, the custom runners have no such *requirement*, as those
> >> +# jobs should be capable of running on operating systems with no
> >> +# compatible container implementation, or no support from
> >> +# gitlab-runner.  To avoid problems that gitlab-runner can cause while
> >> +# reusing the GIT repository, let's enable the recursive submodule
> >> +# strategy.
> >> +variables:
> >> +  GIT_SUBMODULE_STRATEGY: recursive
> > 
> > Is it really necessary? I thought our configure script would take care
> > of the submodules?
>

I've done a lot of testing on bare metal systems, and the problems
that come from reusing the same system and failed cleanups can be very
frustrating.  It's unfortunate that we need this, but it was the
simplest and most reliable solution I found.  :/

Having said that, I noticed after I posted this series that this is
affecting all other jobs.  We don't need that in the jobs based
on containers (for obvious reasons), so I see two options:

1) have it enabled on all jobs for consistency

2) have it enabled only on jobs that will reuse the repo
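
A sketch of option 2, scoping the variable per job instead of globally
(the job name here is just a placeholder):

   some-custom-runner-job:
     variables:
       GIT_SUBMODULE_STRATEGY: recursive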

> Well, if there is a failure during the first clone (I got one network
> timeout in the middle) then next time it doesn't work:
> 
> Updating/initializing submodules recursively...
> Synchronizing submodule url for 'capstone'
> Synchronizing submodule url for 'dtc'
> Synchronizing submodule url for 'meson'
> Synchronizing submodule url for 'roms/QemuMacDrivers'
> Synchronizing submodule url for 'roms/SLOF'
> Synchronizing submodule url for 'roms/edk2'
> Synchronizing submodule url for
> 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
> Synchronizing submodule url for
> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli'
> Synchronizing submodule url for
> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/esaxx'
> Synchronizing submodule url for
> 'roms/edk2/BaseTools/Source/C/BrotliCompress/brotli/research/libdivsufsort'
> Synchronizing submodule url for
> 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl'
> Synchronizing submodule url for
> 'roms/edk2/MdeModulePkg/Library/BrotliCustomDecompressLib/brotli'
> Synchronizing submodule url for
> 'roms/edk2/MdeModulePkg/Universal/RegularExpressionDxe/oniguruma'
> Synchronizing submodule url for
> 'roms/edk2/UnitTestFrameworkPkg/Library/CmockaLib/cmocka'
> Synchronizing submodule url for 'roms/ipxe'
> Synchronizing submodule url for 'roms/openbios'
> Synchronizing submodule url for 'roms/opensbi'
> Synchronizing submodule url for 'roms/qboot'
> Synchronizing submodule url for 'roms/qemu-palcode'
> Synchronizing submodule url for 'roms/seabios'
> Synchronizing submodule url for 'roms/seabios-hppa'
> Synchronizing submodule url for 'roms/sgabios'
> Synchronizing submodule url for 'roms/skiboot'
> Synchronizing submodule url for 'roms/u-boot'
> Synchronizing submodule url for 'roms/u-boot-sam460ex'
> Synchronizing submodule url for 'roms/vbootrom'
> Synchronizing submodule url for 'slirp'
> Synchronizing submodule url for 'tests/fp/berkeley-softfloat-3'
> Synchronizing submodule url for 'tests/fp/berkeley-testfloat-3'
> Synchronizing submodule url for 'ui/keycodemapdb'
> Entering 'capstone'
> Entering 'dtc'
> 

Re: [PATCH v2 2/4] utils: Improve qemu_strtosz() to have 64 bits of precision

2021-02-23 Thread Daniel P. Berrangé
On Thu, Feb 11, 2021 at 02:44:36PM -0600, Eric Blake wrote:
> We have multiple clients of qemu_strtosz (qemu-io, the opts visitor,
> the keyval visitor), and it gets annoying that edge-case testing is
> impacted by implicit rounding to 53 bits of precision due to parsing
> with strtod().  As an example posted by Rich Jones:
>  $ nbdkit memory $(( 2**63 - 2**30 )) --run \
>'build/qemu-io -f raw "$uri" -c "w -P 3 $(( 2**63 - 2**30 - 512 )) 512" '
>  write failed: Input/output error
> 
> because 9223372035781033472 got rounded to 0x7fffc000 which is
> out of bounds.
> 
> It is also worth noting that our existing parser, by virtue of using
> strtod(), accepts decimal AND hex numbers, even though test-cutils
> previously lacked any coverage of the latter until the previous patch.
> We do have existing clients that expect a hex parse to work (for
> example, iotest 33 using qemu-io -c "write -P 0xa 0x200 0x400"), but
> strtod() parses "08" as 8 rather than as an invalid octal number, so
> we know there are no clients that depend on octal.  Our use of
> strtod() also means that "0x1.8k" would actually parse as 1536 (the
> fraction is 8/16), rather than 1843 (if the fraction were 8/10); but
> as this was not covered in the testsuite, I have no qualms forbidding
> hex fractions as invalid, so this patch declares that the use of
> fractions is only supported with decimal input, and enhances the
> testsuite to document that.
> 
> Our previous use of strtod() meant that -1 parsed as a negative; now
> that we parse with strtoull(), negative values can wrap around modulo
> 2^64, so we have to explicitly check whether the user passed in a '-';
> and make it consistent to also reject '-0'.  This has the minor effect
> of treating negative values as EINVAL (with no change to endptr)
> rather than ERANGE (with endptr advanced to what was parsed), visible
> in the updated iotest output.
> 
> We also had no testsuite coverage of "1.1e0k", which happened to parse
> under strtod() but is unlikely to occur in practice; as long as we are
> making things more robust, it is easy enough to reject the use of
> exponents in a strtod parse.
> 
> The fix is done by breaking the parse into an integer prefix (no loss
> in precision), rejecting negative values (since we can no longer rely
> on strtod() to do that), determining if a decimal or hexadecimal parse
> was intended (with the new restriction that a fractional hex parse is
> not allowed), and where appropriate, using a floating point fractional
> parse (where we also scan to reject use of exponents in the fraction).
> The bulk of the patch is then updates to the testsuite to match our
> new precision, as well as adding new cases we reject (whether they
> were rejected or inadvertently accepted before).
> 
> Signed-off-by: Eric Blake 
> 
> ---
> 
> Note that this approach differs from what has been attempted in the
> past; see the thread starting at
> https://lists.gnu.org/archive/html/qemu-devel/2019-12/msg00852.html
> That approach tried to parse both as strtoull and strtod and take
> whichever was longer, but that was harder to document.
> ---
>  tests/test-cutils.c  | 74 ++
>  tests/test-keyval.c  | 35 +
>  tests/test-qemu-opts.c   | 33 
>  util/cutils.c| 90 
>  tests/qemu-iotests/049.out   | 14 +++--
>  tests/qemu-iotests/178.out.qcow2 |  3 +-
>  tests/qemu-iotests/178.out.raw   |  3 +-
>  7 files changed, 158 insertions(+), 94 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|




[PATCH] tests/docker: Use --arch-only when building Debian cross images

2021-02-23 Thread Philippe Mathieu-Daudé
When building a Docker image based on debian10.docker on
a non-x86 host, we get:

 [2/4] RUN apt update && DEBIAN_FRONTEND=noninteractive eatmydata apt 
build-dep -yy qemu
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 Some packages could not be installed. This may mean that you have
 requested an impossible situation or if you are using the unstable
 distribution that some required packages have not yet been created
 or been moved out of Incoming.
 The following information may help to resolve the situation:

 The following packages have unmet dependencies:
  builddeps:qemu : Depends: gcc-s390x-linux-gnu but it is not installable
   Depends: gcc-alpha-linux-gnu but it is not installable
 E: Unable to correct problems, you have held broken packages.

Fix by using the --arch-only option suggested here:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1866032/comments/1

Suggested-by: Christian Ehrhardt 
Signed-off-by: Philippe Mathieu-Daudé 
---
 tests/docker/dockerfiles/debian-all-test-cross.docker | 2 +-
 tests/docker/dockerfiles/debian10.docker  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/docker/dockerfiles/debian-all-test-cross.docker 
b/tests/docker/dockerfiles/debian-all-test-cross.docker
index dedcea58b46..593b7ef1023 100644
--- a/tests/docker/dockerfiles/debian-all-test-cross.docker
+++ b/tests/docker/dockerfiles/debian-all-test-cross.docker
@@ -11,7 +11,7 @@ FROM qemu/debian10
 # What we need to build QEMU itself
 RUN apt update && \
 DEBIAN_FRONTEND=noninteractive eatmydata \
-apt build-dep -yy qemu
+apt build-dep --arch-only -yy qemu
 
 # Add the foreign architecture we want and install dependencies
 RUN DEBIAN_FRONTEND=noninteractive eatmydata \
diff --git a/tests/docker/dockerfiles/debian10.docker 
b/tests/docker/dockerfiles/debian10.docker
index 9d42b5a4b81..d034acbd256 100644
--- a/tests/docker/dockerfiles/debian10.docker
+++ b/tests/docker/dockerfiles/debian10.docker
@@ -32,6 +32,6 @@ RUN apt update && \
 psmisc \
 python3 \
 python3-sphinx \
-$(apt-get -s build-dep qemu | egrep ^Inst | fgrep '[all]' | cut -d\  
-f2)
+$(apt-get -s build-dep --arch-only qemu | egrep ^Inst | fgrep '[all]' 
| cut -d\  -f2)
 
 ENV FEATURES docs
-- 
2.26.2




Re: [PATCH v2 1/4] utils: Enhance testsuite for do_strtosz()

2021-02-23 Thread Daniel P. Berrangé
On Thu, Feb 11, 2021 at 02:44:35PM -0600, Eric Blake wrote:
> Enhance our testsuite coverage of do_strtosz() to cover some things we
> know that existing users want to continue working (hex bytes), as well
> as some things that accidentally work but shouldn't (hex fractions) or
> accidentally fail but that users want to work (64-bit precision on
> byte values).  This includes fixing a typo in the comment regarding
> our parsing near 2^64.
> 
> Signed-off-by: Eric Blake 
> ---
>  tests/test-cutils.c | 154 
>  1 file changed, 143 insertions(+), 11 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



