Re: [PATCH v10 3/3] iotests: test nbd reconnect

2019-10-23 Thread Eric Blake

On 10/23/19 3:33 AM, Vladimir Sementsov-Ogievskiy wrote:

23.10.2019 4:31, Eric Blake wrote:

On 10/9/19 3:41 AM, Vladimir Sementsov-Ogievskiy wrote:

Add a test which starts a backup to an NBD target and restarts the NBD
server during the backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---



+vm.qmp_log('blockdev-backup', device='drive0', sync='full', target='backup0',
+   speed=(1 * 1024 * 1024))


This starts the job throttled, to give us time...


+
+# Wait for some progress
+t = 0
+while t < wait_limit:
+    jobs = vm.qmp('query-block-jobs')['return']
+    if jobs and jobs[0]['offset'] > 0:
+        break
+    time.sleep(wait_step)
+    t += wait_step
+
+if jobs and jobs[0]['offset'] > 0:
+    log('Backup job is started')
+
+log('Kill NBD server')
+srv.kill()
+srv.wait()
+
+jobs = vm.qmp('query-block-jobs')['return']
+if jobs and jobs[0]['offset'] < jobs[0]['len']:
+    log('Backup job is still in progress')
+
+vm.qmp_log('block-job-set-speed', device='drive0', speed=0)


Ah, I overlooked this line in my late-night review.


+
+# Emulate server down time for 1 second
+time.sleep(1)


...but once we restart,...


+
+log('Start NBD server')
+srv = qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b)
+
+e = vm.event_wait('BLOCK_JOB_COMPLETED')


...should we unthrottle the job to allow the test to complete slightly faster 
after the reconnect?  But that can be done as an improvement on top, if it 
helps.


It is done above, before time.sleep(1)


Yep, so I feel better now.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v10 3/3] iotests: test nbd reconnect

2019-10-23 Thread Vladimir Sementsov-Ogievskiy
23.10.2019 4:31, Eric Blake wrote:
> On 10/9/19 3:41 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Add a test which starts a backup to an NBD target and restarts the NBD
>> server during the backup.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>> ---
>>   tests/qemu-iotests/264    | 95 +++
>>   tests/qemu-iotests/264.out    | 13 +
>>   tests/qemu-iotests/group  |  1 +
>>   tests/qemu-iotests/iotests.py | 11 
>>   4 files changed, 120 insertions(+)
>>   create mode 100755 tests/qemu-iotests/264
>>   create mode 100644 tests/qemu-iotests/264.out
>>
>> diff --git a/tests/qemu-iotests/264 b/tests/qemu-iotests/264
>> new file mode 100755
>> index 00..c8cd97ae2b
>> --- /dev/null
>> +++ b/tests/qemu-iotests/264
> 
>> +import iotests
>> +from iotests import qemu_img_create, qemu_io_silent_check, file_path, \
>> +    qemu_nbd_popen, log
>> +
>> +disk_a, disk_b, nbd_sock = file_path('disk_a', 'disk_b', 'nbd-sock')
>> +nbd_uri = 'nbd+unix:///?socket=' + nbd_sock
> 
> Needs rebasing on top of Max's patches to stick sockets in SOCK_DIR:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2019-10/msg04201.html
> 
> [or, if my pull request lands first, Max needs to add this one to the list of 
> affected tests...]
> 
> 
>> +vm.qmp_log('blockdev-add', filters=[iotests.filter_qmp_testfiles],
>> +   **{'node_name': 'backup0',
>> +  'driver': 'raw',
>> +  'file': {'driver': 'nbd',
>> +   'server': {'type': 'unix', 'path': nbd_sock},
>> +   'reconnect-delay': 10}})
>> +vm.qmp_log('blockdev-backup', device='drive0', sync='full', target='backup0',
>> +   speed=(1 * 1024 * 1024))
> 
> This starts the job throttled, to give us time...
> 
>> +
>> +# Wait for some progress
>> +t = 0
>> +while t < wait_limit:
>> +    jobs = vm.qmp('query-block-jobs')['return']
>> +    if jobs and jobs[0]['offset'] > 0:
>> +        break
>> +    time.sleep(wait_step)
>> +    t += wait_step
>> +
>> +if jobs and jobs[0]['offset'] > 0:
>> +    log('Backup job is started')
>> +
>> +log('Kill NBD server')
>> +srv.kill()
>> +srv.wait()
>> +
>> +jobs = vm.qmp('query-block-jobs')['return']
>> +if jobs and jobs[0]['offset'] < jobs[0]['len']:
>> +    log('Backup job is still in progress')
>> +
>> +vm.qmp_log('block-job-set-speed', device='drive0', speed=0)
>> +
>> +# Emulate server down time for 1 second
>> +time.sleep(1)
> 
> ...but once we restart,...
> 
>> +
>> +log('Start NBD server')
>> +srv = qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b)
>> +
>> +e = vm.event_wait('BLOCK_JOB_COMPLETED')
> 
> ...should we unthrottle the job to allow the test to complete slightly faster 
> after the reconnect?  But that can be done as an improvement on top, if it 
> helps.

It is done above, before time.sleep(1)

> 
> 
>> +++ b/tests/qemu-iotests/iotests.py
>> @@ -165,6 +165,13 @@ def qemu_io_silent(*args):
>>    (-exitcode, ' '.join(args)))
>>   return exitcode
>> +def qemu_io_silent_check(*args):
>> +    '''Run qemu-io and return True if the subprocess returned 0'''
>> +    args = qemu_io_args + list(args)
>> +    exitcode = subprocess.call(args, stdout=open('/dev/null', 'w'),
>> +   stderr=subprocess.STDOUT)
> 
> This discards the stdout data, even on failure. Is there a smarter way to 
> grab the output into a variable, but only dump it to the log on failure, 
> rather than outright discarding it?
> 
> But for the sake of getting this test in before freeze, that can be a 
> followup, if at all.
> 
> Reviewed-by: Eric Blake 
> 
> I'll send a pull request soon.
> 

Thank you!!

-- 
Best regards,
Vladimir


Re: [PATCH v10 3/3] iotests: test nbd reconnect

2019-10-22 Thread Eric Blake

On 10/9/19 3:41 AM, Vladimir Sementsov-Ogievskiy wrote:

Add a test which starts a backup to an NBD target and restarts the NBD
server during the backup.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
  tests/qemu-iotests/264| 95 +++
  tests/qemu-iotests/264.out| 13 +
  tests/qemu-iotests/group  |  1 +
  tests/qemu-iotests/iotests.py | 11 
  4 files changed, 120 insertions(+)
  create mode 100755 tests/qemu-iotests/264
  create mode 100644 tests/qemu-iotests/264.out

diff --git a/tests/qemu-iotests/264 b/tests/qemu-iotests/264
new file mode 100755
index 00..c8cd97ae2b
--- /dev/null
+++ b/tests/qemu-iotests/264



+import iotests
+from iotests import qemu_img_create, qemu_io_silent_check, file_path, \
+qemu_nbd_popen, log
+
+disk_a, disk_b, nbd_sock = file_path('disk_a', 'disk_b', 'nbd-sock')
+nbd_uri = 'nbd+unix:///?socket=' + nbd_sock


Needs rebasing on top of Max's patches to stick sockets in SOCK_DIR:

https://lists.gnu.org/archive/html/qemu-devel/2019-10/msg04201.html

[or, if my pull request lands first, Max needs to add this one to the 
list of affected tests...]
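
For reference, the kind of change I'd expect after that rebase (a
hypothetical sketch only; the exact helper and argument names depend on
how Max's series lands):

# hypothetical: create the socket under SOCK_DIR instead of TEST_DIR
disk_a, disk_b = file_path('disk_a', 'disk_b')
nbd_sock = file_path('nbd-sock', base_dir=iotests.sock_dir)
nbd_uri = 'nbd+unix:///?socket=' + nbd_sock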




+vm.qmp_log('blockdev-add', filters=[iotests.filter_qmp_testfiles],
+   **{'node_name': 'backup0',
+  'driver': 'raw',
+  'file': {'driver': 'nbd',
+   'server': {'type': 'unix', 'path': nbd_sock},
+   'reconnect-delay': 10}})
+vm.qmp_log('blockdev-backup', device='drive0', sync='full', target='backup0',
+   speed=(1 * 1024 * 1024))


This starts the job throttled, to give us time...


+
+# Wait for some progress
+t = 0
+while t < wait_limit:
+    jobs = vm.qmp('query-block-jobs')['return']
+    if jobs and jobs[0]['offset'] > 0:
+        break
+    time.sleep(wait_step)
+    t += wait_step
+
+if jobs and jobs[0]['offset'] > 0:
+    log('Backup job is started')
+
+log('Kill NBD server')
+srv.kill()
+srv.wait()
+
+jobs = vm.qmp('query-block-jobs')['return']
+if jobs and jobs[0]['offset'] < jobs[0]['len']:
+    log('Backup job is still in progress')
+
+vm.qmp_log('block-job-set-speed', device='drive0', speed=0)
+
+# Emulate server down time for 1 second
+time.sleep(1)


...but once we restart,...


+
+log('Start NBD server')
+srv = qemu_nbd_popen('-k', nbd_sock, '-f', iotests.imgfmt, disk_b)
+
+e = vm.event_wait('BLOCK_JOB_COMPLETED')


...should we unthrottle the job to allow the test to complete slightly 
faster after the reconnect?  But that can be done as an improvement on 
top, if it helps.




+++ b/tests/qemu-iotests/iotests.py
@@ -165,6 +165,13 @@ def qemu_io_silent(*args):
   (-exitcode, ' '.join(args)))
  return exitcode
  
+def qemu_io_silent_check(*args):
+    '''Run qemu-io and return True if the subprocess returned 0'''
+    args = qemu_io_args + list(args)
+    exitcode = subprocess.call(args, stdout=open('/dev/null', 'w'),
+                               stderr=subprocess.STDOUT)


This discards the stdout data, even on failure. Is there a smarter way 
to grab the output into a variable, but only dump it to the log on 
failure, rather than outright discarding it?


But for the sake of getting this test in before freeze, that can be a 
followup, if at all.
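
If it ever becomes worth doing, one possible shape for that followup
(untested sketch, assuming subprocess.run() from Python 3.5+ is fine in
iotests.py):

def qemu_io_silent_check(*args):
    '''Run qemu-io and return True if it exited with status 0'''
    args = qemu_io_args + list(args)
    result = subprocess.run(args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    if result.returncode != 0:
        # Only surface the captured output when something went wrong.
        log('qemu-io failed, output:\n%s' % result.stdout)
    return result.returncode == 0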


Reviewed-by: Eric Blake 

I'll send a pull request soon.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org