Re: [PATCH v4 14/15] qemu-iotests: add option to show qemu binary logs on stdout

2021-05-28 Thread Emanuele Giuseppe Esposito




+
  imgfmt = os.environ.get('IMGFMT', 'raw')
  imgproto = os.environ.get('IMGPROTO', 'file')
  output_dir = os.environ.get('OUTPUT_DIR', '.')
@@ -614,6 +616,13 @@ def _post_shutdown(self) -> None:
  super()._post_shutdown()
  self.subprocess_check_valgrind(qemu_valgrind)
+    def _pre_launch(self) -> None:
+    super()._pre_launch()
+    if qemu_print and self._qemu_log_file is not None:
+    # set QEMU binary output to stdout
+    self._qemu_log_file.close()
+    self._qemu_log_file = None
+


So, many uses of _private members actually show that the proper way of doing
this is adding an option to __init__ instead


And then add yet another bool variable in __init__ just to mark whether to use
the log file or not? At this point, if we really don't want this here, we
can just create a public function in machine.py and call that...

This can also be shared with machine.py's _post_shutdown().
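
For illustration, a minimal sketch of the __init__-option alternative being
discussed; the parameter name log_to_stdout and the stub base class are made
up, only _qemu_log_file and _pre_launch() come from the quoted patch:

    import tempfile

    class StubMachine:
        # stand-in for machine.py's QEMUMachine, just enough to run the sketch
        def __init__(self) -> None:
            self._qemu_log_file = tempfile.TemporaryFile(mode='w')

        def _pre_launch(self) -> None:
            pass

    class VM(StubMachine):
        def __init__(self, log_to_stdout: bool = False) -> None:
            super().__init__()
            self._log_to_stdout = log_to_stdout

        def _pre_launch(self) -> None:
            super()._pre_launch()
            if self._log_to_stdout and self._qemu_log_file is not None:
                # drop the log file so the spawned QEMU inherits the
                # test's own stdout/stderr instead
                self._qemu_log_file.close()
                self._qemu_log_file = None

    VM(log_to_stdout=True)._pre_launch()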



Interesting, will pylint complain about using _private members outside of 
the home class?


No, test 297 tests it and prints no warning or error.
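
If pylint ever did complain, it would be its protected-access (W0212) check;
a hypothetical example of the per-line suppression (names made up):

    class Demo:
        def hide_log(self, machine) -> None:
            # access to another object's private member; pylint's
            # protected-access warning can be silenced per line:
            machine._qemu_log_file = None  # pylint: disable=protected-access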


Thank you,
Emanuele




Re: [PATCH v4 09/15] qemu-iotests: extend the check script to support valgrind for python tests

2021-05-28 Thread Emanuele Giuseppe Esposito




  sys.exit(os.EX_USAGE)
+qemu_valgrind = []
+if os.environ.get('VALGRIND_QEMU') == "y" and \
+    os.environ.get('NO_VALGRIND') != "y":
+    valgrind_logfile = "--log-file=" + test_dir.strip()





a bit strange that you need to strip test_dir here.. Why?

Yep, it's unnecessary



+VALGRIND_QEMU -- {VALGRIND_QEMU}
  """
  args = collections.defaultdict(str, self.get_env())



commit subject says "support valgrind", but actually we only add a 
variable. valgrind is not actually used yet.. I'd reflect it in the commit 
subject somehow



will replace with "prepare supporting"

Emanuele




Re: [PATCH v4 08/15] docs/devel/testing: add -gdb option to the debugging section of QEMU iotests

2021-05-28 Thread Emanuele Giuseppe Esposito




On 28/05/2021 19:16, Vladimir Sementsov-Ogievskiy wrote:

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Signed-off-by: Emanuele Giuseppe Esposito 


Reviewed-by: Vladimir Sementsov-Ogievskiy 


---
  docs/devel/testing.rst | 11 +++
  1 file changed, 11 insertions(+)

diff --git a/docs/devel/testing.rst b/docs/devel/testing.rst
index 8144e316a4..a746cab745 100644
--- a/docs/devel/testing.rst
+++ b/docs/devel/testing.rst
@@ -229,6 +229,17 @@ Debugging a test case
  The following options to the ``check`` script can be useful when 
debugging

  a failing test:
+* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which 
waits for a
+  connection from a gdb client.  The options given to ``gdbserver`` 
(e.g. the
+  address on which to listen for connections) are taken from the 
``$GDB_OPTIONS``
+  environment variable.  By default (if ``$GDB_OPTIONS`` is empty), 
it listens on

+  ``localhost:12345``.
+  It is possible to connect to it for example with
+  ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
+  ``gdbserver`` listens on.
+  If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
+  regardless of whether it is set or not.
+
  * ``-d`` (debug) just increases the logging verbosity, showing
    for example the QMP commands and answers.



Didn't you think of an interface as simple as just wrapping the qemu binary 
with "gdb --args" and redirecting stdin and stdout directly to the terminal 
where the test is running? Wouldn't it be more convenient? So, you just start 
./check -gdb , and you are in gdb. Or do you need exactly a gdb server, 
to attach from another machine?


Keeping this possibility in mind, it's probably better to call your 
option "-gdbserver"... But we can rename it later if needed; after all, it's 
only a test framework.



See 
https://patchew.org/QEMU/20210414170352.29927-1-eespo...@redhat.com/20210414170352.29927-5-eespo...@redhat.com/ 


(penultimate email), there was a similar question:



Out of interest: Why gdbserver and not “just” gdb?  On Fedora, those are 
separate packages, so I don’t have gdbserver installed, that’s why I’m 
asking.


As far as I have tried, using only gdb with ./check is very hard to use, 
because the stdout is filtered out by the script.
So invoking gdb attaches it to QEMU, but it is not possible to start 
execution (run command) or interact with it, because of the python 
script filtering. This leaves the test hanging.


gdbserver is just something that a gdb client can attach to (for 
example, in another console or even on another host), e.g. with 
the command
# gdb -iex "target remote localhost:12345". This provides a nice and 
separate gdb monitor to the client.


At this point I should put this somewhere, either in commit message or 
in the actual documentation...


I don't know about the name. "gdb" should also be short for "gdbserver" 
in a way.


Thank you,
Emanuele




Re: [PATCH v4 14/15] qemu-iotests: add option to show qemu binary logs on stdout

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Using the flag -p, allow the qemu binary to print to stdout.

Reviewed-by: Max Reitz 
Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/check  | 4 +++-
  tests/qemu-iotests/iotests.py | 9 +
  tests/qemu-iotests/testenv.py | 9 +++--
  3 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index 2101cedfe3..51b90681ab 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check
@@ -33,6 +33,8 @@ def make_argparser() -> argparse.ArgumentParser:
 help='pretty print output for make check')
  
  p.add_argument('-d', dest='debug', action='store_true', help='debug')

+p.add_argument('-p', dest='print', action='store_true',
+help='redirects qemu\'s stdout and stderr to the test output')
  p.add_argument('-gdb', action='store_true',
 help="start gdbserver with $GDB_OPTIONS options \
  ('localhost:12345' if $GDB_OPTIONS is empty)")
@@ -117,7 +119,7 @@ if __name__ == '__main__':
aiomode=args.aiomode, cachemode=args.cachemode,
imgopts=args.imgopts, misalign=args.misalign,
debug=args.debug, valgrind=args.valgrind,
-  gdb=args.gdb)
+  gdb=args.gdb, qprint=args.print)
  
  testfinder = TestFinder(test_dir=env.source_iotests)
  
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py

index 75f1e1711c..53a3916a91 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -80,6 +80,8 @@
  if gdb_qemu_env:
  qemu_gdb = ['gdbserver'] + gdb_qemu_env.strip().split(' ')
  
+qemu_print = os.environ.get('PRINT_QEMU', False)

+
  imgfmt = os.environ.get('IMGFMT', 'raw')
  imgproto = os.environ.get('IMGPROTO', 'file')
  output_dir = os.environ.get('OUTPUT_DIR', '.')
@@ -614,6 +616,13 @@ def _post_shutdown(self) -> None:
  super()._post_shutdown()
  self.subprocess_check_valgrind(qemu_valgrind)
  
+def _pre_launch(self) -> None:

+super()._pre_launch()
+if qemu_print and self._qemu_log_file is not None:
+# set QEMU binary output to stdout
+self._qemu_log_file.close()
+self._qemu_log_file = None
+


So, many uses of _private members actually show that the proper way of doing this is 
adding an option to __init__ instead

Interesting, will pylint complain about using _private members outside of the home 
class?


  def add_object(self, opts):
  self._args.append('-object')
  self._args.append(opts)
diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index 319d29cb0c..b79ce22fe9 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -74,7 +74,7 @@ class TestEnv(ContextManager['TestEnv']):
   'AIOMODE', 'CACHEMODE', 'VALGRIND_QEMU',
   'CACHEMODE_IS_DEFAULT', 'IMGFMT_GENERIC', 
'IMGOPTSSYNTAX',
   'IMGKEYSECRET', 'QEMU_DEFAULT_MACHINE', 
'MALLOC_PERTURB_',
- 'GDB_OPTIONS']
+ 'GDB_OPTIONS', 'PRINT_QEMU']
  
  def get_env(self) -> Dict[str, str]:

  env = {}
@@ -166,7 +166,8 @@ def __init__(self, imgfmt: str, imgproto: str, aiomode: str,
   misalign: bool = False,
   debug: bool = False,
   valgrind: bool = False,
- gdb: bool = False) -> None:
+ gdb: bool = False,
+ qprint: bool = False) -> None:
  self.imgfmt = imgfmt
  self.imgproto = imgproto
  self.aiomode = aiomode
@@ -174,6 +175,9 @@ def __init__(self, imgfmt: str, imgproto: str, aiomode: str,
  self.misalign = misalign
  self.debug = debug
  
+if qprint:

+self.print_qemu = 'y'
+
  if gdb:
  self.gdb_options = os.environ.get('GDB_OPTIONS', DEF_GDB_OPTIONS)
  if not self.gdb_options:
@@ -283,6 +287,7 @@ def print_env(self) -> None:
  SOCKET_SCM_HELPER -- {SOCKET_SCM_HELPER}
  GDB_OPTIONS   -- {GDB_OPTIONS}
  VALGRIND_QEMU -- {VALGRIND_QEMU}
+PRINT_QEMU_OUTPUT -- {PRINT_QEMU}
  """
  
  args = collections.defaultdict(str, self.get_env())




5 places we need to modify to add a new option. That's not very good :( (not 
about your patch).

And I still doubt that we want to add any new variable to print_env. If we do, 
then it's simpler to print all variables here; then it's one place less to modify 
by hand.
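
A rough sketch of that "print all variables" idea, assuming only the get_env()
dict shown in the quoted testenv.py hunks (names and formatting are
illustrative):

    def print_env_all(env: dict) -> None:
        # print every exported variable instead of maintaining the list by hand
        for name, value in sorted(env.items()):
            print(f'{name:<17} -- {value}')

    print_env_all({'TEST_DIR': '/var/tmp', 'PRINT_QEMU': 'y'})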


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v4 15/15] docs/devel/testing: add -p option to the debug section of QEMU iotests

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Signed-off-by: Emanuele Giuseppe Esposito 


I'd merge it to previous patch. anyway:

Reviewed-by: Vladimir Sementsov-Ogievskiy 


---
  docs/devel/testing.rst | 4 
  1 file changed, 4 insertions(+)

diff --git a/docs/devel/testing.rst b/docs/devel/testing.rst
index d743e88746..1192d6489e 100644
--- a/docs/devel/testing.rst
+++ b/docs/devel/testing.rst
@@ -250,6 +250,10 @@ a failing test:
  * ``-d`` (debug) just increases the logging verbosity, showing
for example the QMP commands and answers.
  
+* ``-p`` (print) redirect QEMU’s stdout and stderr to the test output,

+  instead of saving it into a log file in
+  ``$TEST_DIR/qemu-machine-``.
+
  Test case groups
  
  




--
Best regards,
Vladimir



Re: [PATCH v4 13/15] docs/devel/testing: add -valgrind option to the debug section of QEMU iotests

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Reviewed-by: Max Reitz
Signed-off-by: Emanuele Giuseppe Esposito



Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v4 12/15] qemu-iotests: insert valgrind command line as wrapper for qemu binary

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

The priority will be given to the gdb command line, meaning that if both the -gdb
and -valgrind parameters are given, gdb will be wrapped around
the qemu binary.


I'd prefer to just return an error immediately if the user specifies both -gdb and 
-valgrind



Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/iotests.py | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index a06284acad..75f1e1711c 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -590,7 +590,8 @@ class VM(qtest.QEMUQtestMachine):
  def __init__(self, path_suffix=''):
  name = "qemu%s-%d" % (path_suffix, os.getpid())
  timer = 15.0 if not (qemu_gdb or qemu_valgrind) else None
-super().__init__(qemu_prog, qemu_opts, wrapper=qemu_gdb,
+wrapper = qemu_gdb if qemu_gdb else qemu_valgrind
+super().__init__(qemu_prog, qemu_opts, wrapper=wrapper,
   name=name,
   test_dir=test_dir,
   socket_scm_helper=socket_scm_helper,




Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v4 11/15] qemu-iotests: allow valgrind to read/delete the generated log file

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

When using -valgrind on the script tests, it generates a log file
in $TEST_DIR that is either read (if valgrind finds problems) or
otherwise deleted. Provide the same exact behavior when using
-valgrind on the python tests.

Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/iotests.py | 16 
  1 file changed, 16 insertions(+)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 5d75094ba6..a06284acad 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -597,6 +597,22 @@ def __init__(self, path_suffix=''):
   sock_dir=sock_dir, qmp_timer=timer)
  self._num_drives = 0
  
+def subprocess_check_valgrind(self, valgrind: List[str]) -> None:


For me just "check_valgrind" would be more intuitive.

I think you can also use the qemu_valgrind global variable directly.. What is the 
reason for passing it as a parameter?


+if not valgrind or not self._popen:
+return
+
+valgrind_filename =  f"{test_dir}/{self._popen.pid}.valgrind"


should we use os.path.join instead? I don't know.. And don't care, as I don't 
use windows anyway. Still, os.path.join is used everywhere in the file except 
for the has_working_luks() function.. So, you are going to add another exception.


+
+if self.exitcode() == 99:
+with open(valgrind_filename) as f:
+print(f.read())
+else:
+os.remove(valgrind_filename)
+
+def _post_shutdown(self) -> None:
+super()._post_shutdown()
+self.subprocess_check_valgrind(qemu_valgrind)
+
  def add_object(self, opts):
  self._args.append('-object')
  self._args.append(opts)



Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v4 10/15] qemu-iotests: extent QMP socket timeout when using valgrind

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

As with gdbserver, valgrind delays the test execution, so
the default QMP socket timeout expires too soon.


First, "Timeout" class is a generic class for timeouts, not relying to sockets. 
So,  commit message lacks information about that we modify generic context-provider class 
and why we do it.



Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/iotests.py | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 41462a80fc..5d75094ba6 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -489,12 +489,12 @@ def __init__(self, seconds, errmsg="Timeout"):
  self.seconds = seconds
  self.errmsg = errmsg
  def __enter__(self):
-if not qemu_gdb:
+if not (qemu_gdb or qemu_valgrind):
  signal.signal(signal.SIGALRM, self.timeout)
  signal.setitimer(signal.ITIMER_REAL, self.seconds)
  return self
  def __exit__(self, exc_type, value, traceback):
-if not qemu_gdb:
+if not (qemu_gdb or qemu_valgrind):


If you follow my suggestion on 05, you'll have to modify only one line instead 
of two.


  signal.setitimer(signal.ITIMER_REAL, 0)
  return False
  def timeout(self, signum, frame):
@@ -589,7 +589,7 @@ class VM(qtest.QEMUQtestMachine):
  
  def __init__(self, path_suffix=''):

  name = "qemu%s-%d" % (path_suffix, os.getpid())
-timer = 15.0 if not qemu_gdb else None
+timer = 15.0 if not (qemu_gdb or qemu_valgrind) else None
  super().__init__(qemu_prog, qemu_opts, wrapper=qemu_gdb,
   name=name,
   test_dir=test_dir,



still it should work as intended:
Reviewed-by: Vladimir Sementsov-Ogievskiy 


--
Best regards,
Vladimir



Re: [PATCH v4 09/15] qemu-iotests: extend the check script to support valgrind for python tests

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Currently, the check script only parses the option and sets the
VALGRIND_QEMU environment variable to "y".
Add another local python variable that prepares the command line,
identical to the one provided in the test scripts.

Because the python script does not know in advance the valgrind
PID to assign to the log file name, use the "%p" flag in the valgrind
log file name, which automatically puts the process PID at runtime.

Reviewed-by: Max Reitz 
Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/check  |  7 ---
  tests/qemu-iotests/iotests.py | 11 +++
  tests/qemu-iotests/testenv.py |  1 +
  3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index b9820fdaaf..2101cedfe3 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check
@@ -36,6 +36,10 @@ def make_argparser() -> argparse.ArgumentParser:
  p.add_argument('-gdb', action='store_true',
 help="start gdbserver with $GDB_OPTIONS options \
  ('localhost:12345' if $GDB_OPTIONS is empty)")
+p.add_argument('-valgrind', action='store_true',
+help='use valgrind, sets VALGRIND_QEMU environment '
+'variable')
+
  p.add_argument('-misalign', action='store_true',
 help='misalign memory allocations')
  p.add_argument('--color', choices=['on', 'off', 'auto'],
@@ -86,9 +90,6 @@ def make_argparser() -> argparse.ArgumentParser:
  g_bash.add_argument('-o', dest='imgopts',
  help='options to pass to qemu-img create/convert, '
  'sets IMGOPTS environment variable')
-g_bash.add_argument('-valgrind', action='store_true',
-help='use valgrind, sets VALGRIND_QEMU environment '
-'variable')
  
  g_sel = p.add_argument_group('test selecting options',

   'The following options specify test set '
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index c9628e6828..41462a80fc 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -97,6 +97,17 @@
  sys.stderr.write('Please run this test via the "check" script\n')
  sys.exit(os.EX_USAGE)
  
+qemu_valgrind = []

+if os.environ.get('VALGRIND_QEMU') == "y" and \
+os.environ.get('NO_VALGRIND') != "y":
+valgrind_logfile = "--log-file=" + test_dir.strip()


a bit strange that you need to strip test_dir here.. Why?


+# %p allows to put the valgrind process PID, since
+# we don't know it a priori (subprocess.Popen is
+# not yet invoked)
+valgrind_logfile += "/%p.valgrind"
+
+qemu_valgrind = ['valgrind', valgrind_logfile, '--error-exitcode=99']
+
  socket_scm_helper = os.environ.get('SOCKET_SCM_HELPER', 'socket_scm_helper')
  
  luks_default_secret_object = 'secret,id=keysec0,data=' + \

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index 49ddd586ef..319d29cb0c 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -282,6 +282,7 @@ def print_env(self) -> None:
  SOCK_DIR  -- {SOCK_DIR}
  SOCKET_SCM_HELPER -- {SOCKET_SCM_HELPER}
  GDB_OPTIONS   -- {GDB_OPTIONS}
+VALGRIND_QEMU -- {VALGRIND_QEMU}
  """
  
  args = collections.defaultdict(str, self.get_env())




commit subject says "support valgrind", but actually we only add a variable. 
valgrind is not actually used yet.. I'd reflect it in the commit subject somehow

--
Best regards,
Vladimir



Re: [PATCH v4 08/15] docs/devel/testing: add -gdb option to the debugging section of QEMU iotests

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Signed-off-by: Emanuele Giuseppe Esposito 


Reviewed-by: Vladimir Sementsov-Ogievskiy 


---
  docs/devel/testing.rst | 11 +++
  1 file changed, 11 insertions(+)

diff --git a/docs/devel/testing.rst b/docs/devel/testing.rst
index 8144e316a4..a746cab745 100644
--- a/docs/devel/testing.rst
+++ b/docs/devel/testing.rst
@@ -229,6 +229,17 @@ Debugging a test case
  The following options to the ``check`` script can be useful when debugging
  a failing test:
  
+* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which waits for a

+  connection from a gdb client.  The options given to ``gdbserver`` (e.g. the
+  address on which to listen for connections) are taken from the 
``$GDB_OPTIONS``
+  environment variable.  By default (if ``$GDB_OPTIONS`` is empty), it listens 
on
+  ``localhost:12345``.
+  It is possible to connect to it for example with
+  ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
+  ``gdbserver`` listens on.
+  If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
+  regardless of whether it is set or not.
+
  * ``-d`` (debug) just increases the logging verbosity, showing
for example the QMP commands and answers.
  



Didn't you think of an interface as simple as just wrapping the qemu binary with "gdb --args" 
and redirecting stdin and stdout directly to the terminal where the test is running? Wouldn't it be more 
convenient? So, you just start ./check -gdb , and you are in gdb. Or do you need 
exactly a gdb server, to attach from another machine?

Keeping this possibility in mind, it's probably better to call your option 
"-gdbserver"... But we can rename it later if needed; after all, it's only a test 
framework.


--
Best regards,
Vladimir



Re: [PATCH v4 07/15] qemu-iotests: add gdbserver option to script tests too

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

The only limitation here is that running a script with gdbserver
will make the test output mismatch the expected
results, making the test fail.

Signed-off-by: Emanuele Giuseppe Esposito


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v4 06/15] qemu_iotests: insert gdbserver command line as wrapper for qemu binary

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Signed-off-by: Emanuele Giuseppe Esposito


Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v4 05/15] qemu-iotests: delay QMP socket timers

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Attaching gdbserver implies that the qmp socket
should wait indefinitely for an answer from QEMU.

Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/iotests.py | 10 ++
  1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index d667fde6f8..cf1ca60376 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -478,11 +478,13 @@ def __init__(self, seconds, errmsg="Timeout"):
  self.seconds = seconds
  self.errmsg = errmsg
  def __enter__(self):
-signal.signal(signal.SIGALRM, self.timeout)
-signal.setitimer(signal.ITIMER_REAL, self.seconds)
+if not qemu_gdb:
+signal.signal(signal.SIGALRM, self.timeout)
+signal.setitimer(signal.ITIMER_REAL, self.seconds)
  return self
  def __exit__(self, exc_type, value, traceback):
-signal.setitimer(signal.ITIMER_REAL, 0)
+if not qemu_gdb:
+signal.setitimer(signal.ITIMER_REAL, 0)
  return False
  def timeout(self, signum, frame):
  raise Exception(self.errmsg)


So, you just make the class do nothing.. I'd prefer something like this:

@contextmanager
def NoTimeout:
   yield

if qemu_gdb:
  Timeout = NoTimeout
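
A runnable version of that sketch, assuming the (seconds, errmsg) signature of
iotests.Timeout shown in the quoted hunk; the module-level wiring below is only
illustrative:

    from contextlib import contextmanager

    @contextmanager
    def NoTimeout(seconds, errmsg="Timeout"):
        # same interface as Timeout, but never arms a timer
        yield

    qemu_gdb = ['gdbserver', 'localhost:12345']   # stand-in for iotests' flag
    if qemu_gdb:
        Timeout = NoTimeout

    with Timeout(15.0, "QMP timeout"):
        pass  # test code runs without an alarm while gdb is attached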


anyway:

Reviewed-by: Vladimir Sementsov-Ogievskiy 



@@ -576,7 +578,7 @@ class VM(qtest.QEMUQtestMachine):
  
  def __init__(self, path_suffix=''):

  name = "qemu%s-%d" % (path_suffix, os.getpid())
-timer = 15.0
+timer = 15.0 if not qemu_gdb else None
  super().__init__(qemu_prog, qemu_opts, name=name,
   test_dir=test_dir,
   socket_scm_helper=socket_scm_helper,




Still, it's not simple to avoid any kind of timeouts. The most annoying would 
be timeouts in event_wait() / events_wait()

--
Best regards,
Vladimir



Re: [PATCH v4 04/15] qemu-iotests: add option to attach gdbserver

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 10:52, Emanuele Giuseppe Esposito wrote:

Define the -gdb flag and GDB_OPTIONS environment variable
for python tests to attach a gdbserver to each qemu instance.
This patch only adds and parses this flag; it does not yet add
the implementation for it.

If -gdb is not provided but $GDB_OPTIONS is set, ignore the
environment variable.

Signed-off-by: Emanuele Giuseppe Esposito 
---
  tests/qemu-iotests/check  |  6 +-
  tests/qemu-iotests/iotests.py |  5 +
  tests/qemu-iotests/testenv.py | 19 ---
  3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index d1c87ceaf1..b9820fdaaf 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check
@@ -33,6 +33,9 @@ def make_argparser() -> argparse.ArgumentParser:
 help='pretty print output for make check')
  
  p.add_argument('-d', dest='debug', action='store_true', help='debug')

+p.add_argument('-gdb', action='store_true',
+   help="start gdbserver with $GDB_OPTIONS options \
+('localhost:12345' if $GDB_OPTIONS is empty)")
  p.add_argument('-misalign', action='store_true',
 help='misalign memory allocations')
  p.add_argument('--color', choices=['on', 'off', 'auto'],
@@ -112,7 +115,8 @@ if __name__ == '__main__':
  env = TestEnv(imgfmt=args.imgfmt, imgproto=args.imgproto,
aiomode=args.aiomode, cachemode=args.cachemode,
imgopts=args.imgopts, misalign=args.misalign,
-  debug=args.debug, valgrind=args.valgrind)
+  debug=args.debug, valgrind=args.valgrind,
+  gdb=args.gdb)
  
  testfinder = TestFinder(test_dir=env.source_iotests)
  
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py

index 5d78de0f0b..d667fde6f8 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -75,6 +75,11 @@
  qemu_prog = os.environ.get('QEMU_PROG', 'qemu')
  qemu_opts = os.environ.get('QEMU_OPTIONS', '').strip().split(' ')
  
+gdb_qemu_env = os.environ.get('GDB_OPTIONS')

+qemu_gdb = []
+if gdb_qemu_env:
+qemu_gdb = ['gdbserver'] + gdb_qemu_env.strip().split(' ')
+
  imgfmt = os.environ.get('IMGFMT', 'raw')
  imgproto = os.environ.get('IMGPROTO', 'file')
  output_dir = os.environ.get('OUTPUT_DIR', '.')
diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index 6d27712617..49ddd586ef 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -27,6 +27,7 @@
  import glob
  from typing import Dict, Any, Optional, ContextManager
  
+DEF_GDB_OPTIONS = 'localhost:12345'
  
  def isxfile(path: str) -> bool:

  return os.path.isfile(path) and os.access(path, os.X_OK)
@@ -72,7 +73,8 @@ class TestEnv(ContextManager['TestEnv']):
   'QEMU_NBD_OPTIONS', 'IMGOPTS', 'IMGFMT', 'IMGPROTO',
   'AIOMODE', 'CACHEMODE', 'VALGRIND_QEMU',
   'CACHEMODE_IS_DEFAULT', 'IMGFMT_GENERIC', 
'IMGOPTSSYNTAX',
- 'IMGKEYSECRET', 'QEMU_DEFAULT_MACHINE', 'MALLOC_PERTURB_']
+ 'IMGKEYSECRET', 'QEMU_DEFAULT_MACHINE', 'MALLOC_PERTURB_',
+ 'GDB_OPTIONS']
  
  def get_env(self) -> Dict[str, str]:

  env = {}
@@ -163,7 +165,8 @@ def __init__(self, imgfmt: str, imgproto: str, aiomode: str,
   imgopts: Optional[str] = None,
   misalign: bool = False,
   debug: bool = False,
- valgrind: bool = False) -> None:
+ valgrind: bool = False,
+ gdb: bool = False) -> None:
  self.imgfmt = imgfmt
  self.imgproto = imgproto
  self.aiomode = aiomode
@@ -171,6 +174,14 @@ def __init__(self, imgfmt: str, imgproto: str, aiomode: 
str,
  self.misalign = misalign
  self.debug = debug
  
+if gdb:

+self.gdb_options = os.environ.get('GDB_OPTIONS', DEF_GDB_OPTIONS)
+if not self.gdb_options:
+# cover the case 'export GDB_OPTIONS='
+self.gdb_options = DEF_GDB_OPTIONS
+elif 'GDB_OPTIONS' in os.environ:


please add a comment:

 # to not propagate it in prepare_subprocess()


+del os.environ['GDB_OPTIONS']
+
  if valgrind:
  self.valgrind_qemu = 'y'
  
@@ -269,7 +280,9 @@ def print_env(self) -> None:

  PLATFORM  -- {platform}
  TEST_DIR  -- {TEST_DIR}
  SOCK_DIR  -- {SOCK_DIR}
-SOCKET_SCM_HELPER -- {SOCKET_SCM_HELPER}"""
+SOCKET_SCM_HELPER -- {SOCKET_SCM_HELPER}
+GDB_OPTIONS   -- {GDB_OPTIONS}
+"""
  
  args = collections.defaultdict(str, self.get_env())
  



Reviewed-by: Vladimir Sementsov-Ogievskiy 

--
Best regards,
Vladimir



Re: [PATCH v2 08/33] block/backup: drop support for copy_range

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

20.05.2021 17:21, Vladimir Sementsov-Ogievskiy wrote:

copy_range is not a default behavior since 6a30f663d4c0b3c, and it's
now available only through the x-perf experimental argument, so it's OK to
drop it.

Even when backup is used to copy a disk to the same filesystem, and
the filesystem supports zero-copy copy_range, copy_range is probably not
what we want for backup: backup has the good property of making a copy of
the active disk with no impact on the active disk itself (unlike creating a
snapshot). And if copy_range, instead of copying data, adds fs-level
references, and on the next guest write a COW operation occurs, it seems
most likely that a new block will be allocated for the original VM disk,
not for the backup disk. Thus, fragmentation of the original disk will
increase.

We can simply add support back on demand. Now we want to publish the
copy-before-write filter, and instead of thinking how to pass a
use-copy-range argument to block-copy (create an x-block-copy parameter
for the new public filter driver, or maybe set it by hand after filter
node creation?), let's just drop copy-range support in
backup for now.

After this patch copy-range support in block-copy becomes unused. Let's
keep it for a while, it won't hurt:

1. If there is ever a request for supporting copy_range in backup
(and/or in a new public copy-before-write filter), it will be easy
to satisfy it.

2. Probably, qemu-img convert will reuse block-copy, and qemu-img has
an option to enable copy-range. qemu-img convert is not a backup, and
copy_range may be more reasonable for some cases in the context of
qemu-img convert.



Actually, I know one case where copy_range for the backup job may be reasonable:

Using backup in push-backup with fleecing scheme in

   [PATCH 0/6] push backup with fleecing

Of course, there is no real sense in using the push-backup-with-fleecing scheme with both 
the temp image and the final backup target being on the same file system (no benefit of 
fleecing; we can use simple backup without a temporary image).

But we absolutely don't care about fragmentation of temp disk.

Still, it doesn't make sense, as temp-image and real-backup-target should not 
be on same file-system..

Could it be some distributed filesystem, where it still makes sense to call 
copy_range? Theoretically it could.


Another thought: I'm also going to implement a RAM-cache driver, to optimize the 
push-backup-with-fleecing scheme. I'll need a way to copy data from the RAM-cache 
node to the final target. I can implement copy_range for RAM-cache, and this will 
allow not creating an extra buffer, but using the buffer that is already allocated 
and owned by RAM-cache.. Still, this behavior is obviously good; it should work 
automatically, no reason to make it optional..


Hmm, so what should be summarized:

- Actually, block-copy does copy_range. So, probably it's good to change the 
copy_range() function in qemu to fall back to read+write..

And about copy_range itself, what we want:

1. We want to control whether it influences fragmentation of the source disk. When 
copying from a temporary image we don't care. But when the source of block-copy is 
an active disk, we do care not to influence how the original disk is laid out in the 
filesystem. Probably, we even want an option for the copy_range() syscall to control 
this thing.

2. We want to be efficient with copy_size, i.e. the size of chunks to copy. We even 
have an existing issue in block-copy: write-zero is limited to 
BLOCK_COPY_MAX_BUFFER, which is obviously inefficient.

For copy-size we should have some good defaults or automatic detection logic..

For copy_range fragmentation..

If we have some internal copy_range-like optimizations, like zero-copy from 
the RAM-cache node, or maybe copying compressed data from one qcow2 node to another 
without decompression, it should be done anyway; it shouldn't be set by a user 
option. Still, for file-posix, we don't know whether the underlying filesystem 
copy_range() implementation leads to fragmentation or not. And we don't know whether 
the user is OK with it or not. So we need an option.. So, it's probably better to keep 
x-perf.copy-range for now, until we have a good idea on the interface.

--
Best regards,
Vladimir



[PATCH 2/2] block-copy: refactor copy_range handling

2021-05-28 Thread Vladimir Sementsov-Ogievskiy
Currently we update s->use_copy_range and s->copy_size in
block_copy_do_copy().

It's not very good:

1. block_copy_do_copy() is intended to be a simple function that wraps
bdrv_co_ functions for the needs of block copy. That's why we don't pass
BlockCopyTask into it. So, block_copy_do_copy() is a bad place for
manipulating the generic state of the block-copy process.

2. We are going to make block-copy thread-safe. So, it's good to move
manipulation of block-copy state to the places where we'll need
critical sections anyway, to not introduce extra synchronization
primitives in block_copy_do_copy().

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
 block/block-copy.c | 71 +++---
 1 file changed, 48 insertions(+), 23 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index f9e871b64f..c96fe31054 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -65,6 +65,7 @@ typedef struct BlockCopyTask {
 int64_t offset;
 int64_t bytes;
 bool zeroes;
+bool copy_range;
 QLIST_ENTRY(BlockCopyTask) list;
 CoQueue wait_queue; /* coroutines blocked on this task */
 } BlockCopyTask;
@@ -183,6 +184,7 @@ static BlockCopyTask *block_copy_task_create(BlockCopyState 
*s,
 .call_state = call_state,
 .offset = offset,
 .bytes = bytes,
+.copy_range = s->use_copy_range,
 };
 qemu_co_queue_init(&task->wait_queue);
 QLIST_INSERT_HEAD(&s->tasks, task, list);
@@ -342,11 +344,17 @@ static coroutine_fn int block_copy_task_run(AioTaskPool 
*pool,
  *
  * No sync here: nor bitmap neighter intersecting requests handling, only copy.
  *
+ * @copy_range is in-out argument: if *copy_range is false, copy_range is not
+ * done. If *copy_range is true, copy_range attempt is done. If copy_range
+ * attempt failed, the function falls back to usual read+write and *copy_range is
+ * set to false. *copy_range and zeroes must not be true simultaneously.
+ *
  * Returns 0 on success.
  */
 static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
int64_t offset, int64_t bytes,
-   bool zeroes, bool *error_is_read)
+   bool zeroes, bool *copy_range,
+   bool *error_is_read)
 {
 int ret;
 int64_t nbytes = MIN(offset + bytes, s->len) - offset;
@@ -359,6 +367,7 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState 
*s,
 assert(offset + bytes <= s->len ||
offset + bytes == QEMU_ALIGN_UP(s->len, s->cluster_size));
 assert(nbytes < INT_MAX);
+assert(!(*copy_range && zeroes));
 
 if (zeroes) {
 ret = bdrv_co_pwrite_zeroes(s->target, offset, nbytes, s->write_flags &
@@ -370,32 +379,15 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState 
*s,
 return ret;
 }
 
-if (s->use_copy_range) {
+if (*copy_range) {
 ret = bdrv_co_copy_range(s->source, offset, s->target, offset, nbytes,
  0, s->write_flags);
 if (ret < 0) {
 trace_block_copy_copy_range_fail(s, offset, ret);
-s->use_copy_range = false;
-s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
+*copy_range = false;
 /* Fallback to read+write with allocated buffer */
 } else {
-if (s->use_copy_range) {
-/*
- * Successful copy-range. Now increase copy_size.  copy_range
- * does not respect max_transfer (it's a TODO), so we factor
- * that in here.
- *
- * Note: we double-check s->use_copy_range for the case when
- * parallel block-copy request unsets it during previous
- * bdrv_co_copy_range call.
- */
-s->copy_size =
-MIN(MAX(s->cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
-QEMU_ALIGN_DOWN(block_copy_max_transfer(s->source,
-s->target),
-s->cluster_size));
-}
-goto out;
+return 0;
 }
 }
 
@@ -431,14 +423,44 @@ out:
 return ret;
 }
 
+static void block_copy_handle_copy_range_result(BlockCopyState *s,
+bool is_success)
+{
+if (!s->use_copy_range) {
+/* already disabled */
+return;
+}
+
+if (is_success) {
+/*
+ * Successful copy-range. Now increase copy_size.  copy_range
+ * does not respect max_transfer (it's a TODO), so we factor
+ * that in here.
+ */
+s->copy_size =
+MIN(MAX(s->cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
+QEMU_ALIGN_DOWN(block_copy_max_transfer(s->source,
+ 

[PATCH 1/2] block-copy: fix block_copy_task_entry() progress update

2021-05-28 Thread Vladimir Sementsov-Ogievskiy
Don't report successful progress on failure, when call_state->ret is
set.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
 block/block-copy.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/block/block-copy.c b/block/block-copy.c
index c2e5090412..f9e871b64f 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -439,9 +439,11 @@ static coroutine_fn int block_copy_task_entry(AioTask 
*task)
 
 ret = block_copy_do_copy(t->s, t->offset, t->bytes, t->zeroes,
 &error_is_read);
-if (ret < 0 && !t->call_state->ret) {
-t->call_state->ret = ret;
-t->call_state->error_is_read = error_is_read;
+if (ret < 0) {
+if (!t->call_state->ret) {
+t->call_state->ret = ret;
+t->call_state->error_is_read = error_is_read;
+}
 } else {
 progress_work_done(t->s->progress, t->bytes);
 }
-- 
2.29.2




[PATCH 0/2] block-copy: small fix and refactor

2021-05-28 Thread Vladimir Sementsov-Ogievskiy
Hi all!

This is my suggestion how to refactor block-copy to avoid extra atomic
operations in 
"[PATCH v2 0/7] block-copy: protect block-copy internal structures"

Vladimir Sementsov-Ogievskiy (2):
  block-copy: fix block_copy_task_entry() progress update
  block-copy: refactor copy_range handling

 block/block-copy.c | 79 +++---
 1 file changed, 53 insertions(+), 26 deletions(-)

-- 
2.29.2




Re: [PATCH v2 7/7] block-copy: protect BlockCopyState .method fields

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

28.05.2021 14:01, Paolo Bonzini wrote:

On 28/05/21 12:24, Paolo Bonzini wrote:


It's still more complicated, because you need to add some kind of

 method = s->method;


This would even have to be a separate, one-line critical section...



Hm, so we should set .use_copy_range in the task when it is initialized.




 ret = block_copy_do_copy(..., method);
 if (ret < 0 && method <= COPY_RANGE_SMALL) {
 method = COPY_RANGE_READ_WRITE;
 ret = block_copy_do_copy(..., method);
 }
 lock();
 if (method == s->method) {
 /* compute new method */
 }

which makes it more complicated than this patch IMO.  But yeah at least it's a 
viable alternative to the atomics.





OK, I'm OK with the patch as is. Finally, I can refactor it later on top if needed.. 
I'll try now to do some refactoring; you'll probably want to base on it, or 
vice versa, I'll rebase it later on top of these patches.

--
Best regards,
Vladimir



Re: [PULL 0/2] Block patches for 5.1.0-rc4

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

11.08.2020 12:54, Max Reitz wrote:

On 11.08.20 11:39, Peter Maydell wrote:

On Tue, 11 Aug 2020 at 10:35, Max Reitz  wrote:


Hi,

There is a bug in the backup job that breaks backups from images whose
size is not aligned to the job's cluster size (i.e., qemu crashes
because of a failed assertion).  If this bug makes it into the release,
it would be a regression from 5.0.

On one hand, this is probably a rare configuration that should not
happen in practice.  On the other, it is a regression, and the fix
(patch 1) is simple.  So I think it would be good to have this in 5.1.


I'm really reluctant to have to roll an rc4...


Well, that’s the information there is on this: Regression, simple fix,
but little relevance in practice, and late to the party.
If, given this, you don’t want to roll an rc4, then that’s how it is.



Recently the bug was reproduced by accidentally starting backup with source = cdrom 
(the image was not 64k-cluster aligned).

Fedora 33, Rhel/Centos 8 are affected.

Could this go to stable branch?

--
Best regards,
Vladimir



Re: [PATCH v2 7/7] block-copy: protect BlockCopyState .method fields

2021-05-28 Thread Vladimir Sementsov-Ogievskiy

28.05.2021 14:01, Paolo Bonzini wrote:

On 28/05/21 12:24, Paolo Bonzini wrote:


It's still more complicated, because you need to add some kind of

 method = s->method;


This would even have to be a separate, one-line critical section...



Or an atomic operation.. What I don't like is that all the trouble is for unused code. 
Many things may change by the moment when we actually reuse this for qemu-img 
convert.

And qemu-img convert probably doesn't need this complicated logic with different 
methods. It may be better to just return an error if copy_range failed. What's a good 
reason to fall back to a normal write if copy-range is explicitly requested by 
the user?




 ret = block_copy_do_copy(..., method);
 if (ret < 0 && method <= COPY_RANGE_SMALL) {
 method = COPY_RANGE_READ_WRITE;
 ret = block_copy_do_copy(..., method);
 }
 lock();
 if (method == s->method) {
 /* compute new method */
 }

which makes it more complicated than this patch IMO.  But yeah at least it's a 
viable alternative to the atomics.






--
Best regards,
Vladimir



Re: [PATCH] hw/nvme: add param to control auto zone transitioning to zone state closed

2021-05-28 Thread Klaus Jensen

On May 28 11:05, Niklas Cassel wrote:

From: Niklas Cassel 

In the Zoned Namespace Command Set Specification, chapter
2.5.1 Managing resources

"The controller may transition zones in the ZSIO:Implicitly Opened state
to the ZSC:Closed state for resource management purposes."

The word may in this sentence means that automatically transitioning
an implicitly opened zone to closed is completely optional.

Add a new parameter so that the user can control if this automatic
transitioning should be performed or not.

Being able to control this can help with verifying that e.g. a user-space
program behaves properly even without this optional ZNS feature.

The default value is set to true, in order to not change the existing
behavior.

Signed-off-by: Niklas Cassel 
---
hw/nvme/ctrl.c | 9 -
hw/nvme/ns.c   | 2 ++
hw/nvme/nvme.h | 1 +
3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 40a7efcea9..d00f0297a5 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -141,6 +141,11 @@
 *
 * zoned.cross_read=
 * Setting this property to true enables Read Across Zone Boundaries.
+ *
+ * zoned.auto_transition=
+ * Indicates if zones in zone state implicitly opened can be
+ * automatically transitioned to zone state closed for resource
+ * management purposes.
 */

#include "qemu/osdep.h"
@@ -1699,7 +1704,9 @@ static uint16_t nvme_zrm_open_flags(NvmeNamespace *ns, 
NvmeZone *zone,
/* fallthrough */

case NVME_ZONE_STATE_CLOSED:
-nvme_zrm_auto_transition_zone(ns);
+if (ns->params.auto_transition_zones) {
+nvme_zrm_auto_transition_zone(ns);
+}
status = nvme_aor_check(ns, act, 1);
if (status) {
return status;
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index 3fec9c6273..31dee43d30 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -531,6 +531,8 @@ static Property nvme_ns_props[] = {
   params.max_open_zones, 0),
DEFINE_PROP_UINT32("zoned.descr_ext_size", NvmeNamespace,
   params.zd_extension_size, 0),
+DEFINE_PROP_BOOL("zoned.auto_transition", NvmeNamespace,
+ params.auto_transition_zones, true),
DEFINE_PROP_END_OF_LIST(),
};

diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 81a35cda14..bd86054db2 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -100,6 +100,7 @@ typedef struct NvmeNamespaceParams {
uint32_t max_active_zones;
uint32_t max_open_zones;
uint32_t zd_extension_size;
+bool auto_transition_zones;
} NvmeNamespaceParams;

typedef struct NvmeNamespace {
--
2.31.1



Looks good Niklas!

Reviewed-by: Klaus Jensen 




[PATCH] hw/nvme: add param to control auto zone transitioning to zone state closed

2021-05-28 Thread Niklas Cassel
From: Niklas Cassel 

In the Zoned Namespace Command Set Specification, chapter
2.5.1 Managing resources

"The controller may transition zones in the ZSIO:Implicitly Opened state
to the ZSC:Closed state for resource management purposes."

The word may in this sentence means that automatically transitioning
an implicitly opened zone to closed is completely optional.

Add a new parameter so that the user can control if this automatic
transitioning should be performed or not.

Being able to control this can help with verifying that e.g. a user-space
program behaves properly even without this optional ZNS feature.

The default value is set to true, in order to not change the existing
behavior.

Signed-off-by: Niklas Cassel 
---
 hw/nvme/ctrl.c | 9 -
 hw/nvme/ns.c   | 2 ++
 hw/nvme/nvme.h | 1 +
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 40a7efcea9..d00f0297a5 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -141,6 +141,11 @@
  *
  * zoned.cross_read=
  * Setting this property to true enables Read Across Zone Boundaries.
+ *
+ * zoned.auto_transition=
+ * Indicates if zones in zone state implicitly opened can be
+ * automatically transitioned to zone state closed for resource
+ * management purposes.
  */
 
 #include "qemu/osdep.h"
@@ -1699,7 +1704,9 @@ static uint16_t nvme_zrm_open_flags(NvmeNamespace *ns, 
NvmeZone *zone,
 /* fallthrough */
 
 case NVME_ZONE_STATE_CLOSED:
-nvme_zrm_auto_transition_zone(ns);
+if (ns->params.auto_transition_zones) {
+nvme_zrm_auto_transition_zone(ns);
+}
 status = nvme_aor_check(ns, act, 1);
 if (status) {
 return status;
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index 3fec9c6273..31dee43d30 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -531,6 +531,8 @@ static Property nvme_ns_props[] = {
params.max_open_zones, 0),
 DEFINE_PROP_UINT32("zoned.descr_ext_size", NvmeNamespace,
params.zd_extension_size, 0),
+DEFINE_PROP_BOOL("zoned.auto_transition", NvmeNamespace,
+ params.auto_transition_zones, true),
 DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 81a35cda14..bd86054db2 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -100,6 +100,7 @@ typedef struct NvmeNamespaceParams {
 uint32_t max_active_zones;
 uint32_t max_open_zones;
 uint32_t zd_extension_size;
+bool auto_transition_zones;
 } NvmeNamespaceParams;
 
 typedef struct NvmeNamespace {
-- 
2.31.1



Re: [PATCH v2 7/7] block-copy: protect BlockCopyState .method fields

2021-05-28 Thread Paolo Bonzini

On 28/05/21 12:24, Paolo Bonzini wrote:


It's still more complicated, because you need to add some kind of

 method = s->method;


This would even have to be a separate, one-line critical section...

Paolo


 ret = block_copy_do_copy(..., method);
 if (ret < 0 && method <= COPY_RANGE_SMALL) {
     method = COPY_RANGE_READ_WRITE;
     ret = block_copy_do_copy(..., method);
     }
 lock();
     if (method == s->method) {
     /* compute new method */
     }

which makes it more complicated than this patch IMO.  But yeah at least 
it's a viable alternative to the atomics.






Re: [PATCH v2 5/7] block-copy: add QemuMutex lock for BlockCopyCallState list

2021-05-28 Thread Paolo Bonzini

On 18/05/21 12:07, Emanuele Giuseppe Esposito wrote:

+qemu_mutex_lock(&call_state->s->calls_lock);
 QLIST_INSERT_HEAD(&call_state->s->calls, call_state, list);
+qemu_mutex_unlock(&call_state->s->calls_lock);


Let's just use tasks_lock here (maybe even rename it to just "lock").

Paolo




Re: [PATCH v2 7/7] block-copy: protect BlockCopyState .method fields

2021-05-28 Thread Paolo Bonzini

On 26/05/21 19:18, Vladimir Sementsov-Ogievskiy wrote:


It's actually the original idea of block_copy_do_copy() function: do 
only simple copy, don't interact with the state. It's a kind of wrapper 
on top of bdrv_co.


So, actually updating s->use_copy_range here was a bad idea.


It's still more complicated, because you need to add some kind of

method = s->method;
ret = block_copy_do_copy(..., method);
if (ret < 0 && method <= COPY_RANGE_SMALL) {
method = COPY_RANGE_READ_WRITE;
ret = block_copy_do_copy(..., method);
}
lock();
if (method == s->method) {
/* compute new method */
}

which makes it more complicated than this patch IMO.  But yeah at least 
it's a viable alternative to the atomics.


Paolo




Re: [PATCH 3/3] ui/vdagent: fix clipboard info memory leak in error path

2021-05-28 Thread Gerd Hoffmann
On Wed, May 26, 2021 at 10:12:48AM +0100, Stefan Hajnoczi wrote:
> If the size of a VD_AGENT_CLIPBOARD_GRAB message is invalid we leak info
> when returning early.
> 
> Thanks to Coverity for spotting this:
> 
> *** CID 1453266:  Resource leaks  (RESOURCE_LEAK)
> /qemu/ui/vdagent.c: 465 in vdagent_chr_recv_clipboard()
> 459 info = qemu_clipboard_info_new(&vd->cbpeer, s);
> 460 if (size > sizeof(uint32_t) * 10) {
> 461 /*
> 462  * spice has 6 types as of 2021. Limiting to 10 entries
> 463  * so we we have some wiggle room.
> 464  */
> >>> CID 1453266:  Resource leaks  (RESOURCE_LEAK)
> >>> Variable "info" going out of scope leaks the storage it points to.
> 465 return;
> 466 }
> 467 while (size >= sizeof(uint32_t)) {
> 468 trace_vdagent_cb_grab_type(GET_NAME(type_name, *(uint32_t 
> *)data));
> 469 switch (*(uint32_t *)data) {
> 470 case VD_AGENT_CLIPBOARD_UTF8_TEXT:
> 
> Fixes: f0349f4d8947ad32d0fa4678cbf5dbb356fcbda1 ("ui/vdagent: add clipboard 
> support")
> Cc: Gerd Hoffmann 
> Signed-off-by: Stefan Hajnoczi 

Reviewed-by: Gerd Hoffmann