Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread Frantisek Rysanek
> adding `,bus=ide.0,unit=0` to `-device ide-hd,drive=qcow-driver` does
> the job :-)
> 
> "relaxed" `-hda file.qcow2` is (seems to be) `-blockdev 
> driver=file,node-name=file-driver,filename=file.qcow2,locking=off
> -blockdev driver=qcow2,node-name=qcow-driver,file=file-driver -device
> ide-hd,drive=qcow-driver,bus=ide.0,unit=0` ;-)

:-)
Thanks for sharing that detailed recipe. I actually find the second 
line a little convoluted, but I guess that cross-eyed expression on my 
face will fade away later tonight.
You have hereby enriched the official documentation, and the world at 
large.

Even if your trick eventually proves to be fraught with some unforeseen 
FS consistency issues, you have shown us the way.

Thanks for educating me :-)

Frank 



Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread lacsaP Patatetom
On Fri, 15 Nov 2024 at 16:11, lacsaP Patatetom wrote:

> On Fri, 15 Nov 2024 at 15:21, Frantisek Rysanek wrote:
>
>> > And, I have an idea: rather than refer to driver=qcow2 and
>> > file.filename, how about referring to the loopback device (NBD) that
>> > you already have, courtesy of qemu-nbd ? Would that perhaps circumvent
>> > the file lock? ;-)
>> >
>> > -blockdev node-name=xy,driver=raw,file.driver=host_device,\
>> > file.filename=/dev/loop0,file.locking=off
>> >
>> > -device virtio-scsi-pci -device scsi-hd,drive=xy
>> >
>>
>> I mean: the QEMU device emulation would not run on top of the QCOW
>> file directly (and the underlying filesystem, and its locking
>> feature), but would instead share a block device with your host-side
>> mount. Thus, it would also plug directly into any block-level
>> buffering going on, on the host side.
>>
>> On the guest, I'm wondering if you should mount the partition with
>> -o direct. This should prevent any write-back buffering in the guest,
>> though as you say, you will not be writing from the guest anyway.
>> On the other hand, if you make changes to the FS on the host side,
>> while the QEMU guest instance is already running, the guest probably
>> will not get to know about any changes, unless you umount and
>> remount, again with "-o direct" (to avoid local read caching in the
>> guest).
>>
>> Even if this crazy stuff works in the end, I'm wondering if it's all
>> worth the implied pitfalls :-)
>> Apparently you still need to keep stuff in sync in some way...
>>
>> Frank
>>
>>
> after reading page 17 @
> https://vmsplice.net/~stefan/qemu-block-layer-features-and-concepts.pdf,
> I'm almost there with :
>
> qemu -snapshot \
> -blockdev driver=file,node-name=file-driver,filename=file.qcow2,locking=off \
> -blockdev driver=qcow2,node-name=qcow-driver,file=file-driver \
> -device ide-hd,drive=qcow-driver \
> -hdb file2.qcow2
>
> the difference is that it's not `hda` but `hdc`: on the guest side, the
> disk appears second, after the one passed by `hdb`
>

adding `,bus=ide.0,unit=0` to `-device ide-hd,drive=qcow-driver` does the job
:-)

"relaxed" `-hda file.qcow2` is (seems to be) `-blockdev
driver=file,node-name=file-driver,filename=file.qcow2,locking=off -blockdev
driver=qcow2,node-name=qcow-driver,file=file-driver -device
ide-hd,drive=qcow-driver,bus=ide.0,unit=0` ;-)
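
in full, that gives something like this (a sketch, assuming the default
pc machine so that the ide.0 bus exists; `-snapshot` keeps guest writes
in a temporary overlay and `locking=off` skips the image-file lock):

qemu-system-x86_64 -snapshot \
  -blockdev driver=file,node-name=file-driver,filename=file.qcow2,locking=off \
  -blockdev driver=qcow2,node-name=qcow-driver,file=file-driver \
  -device ide-hd,drive=qcow-driver,bus=ide.0,unit=0 \
  -hdb file2.qcow2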


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread lacsaP Patatetom
On Fri, 15 Nov 2024 at 15:21, Frantisek Rysanek wrote:

> > And, I have an idea: rather than refer to driver=qcow2 and
> > file.filename, how about referring to the loopback device (NBD) that
> > you already have, courtesy of qemu-nbd ? Would that perhaps circumvent
> > the file lock? ;-)
> >
> > -blockdev node-name=xy,driver=raw,file.driver=host_device,\
> > file.filename=/dev/loop0,file.locking=off
> >
> > -device virtio-scsi-pci -device scsi-hd,drive=xy
> >
>
> I mean: the QEMU device emulation would not run on top of the QCOW
> file directly (and the underlying filesystem, and its locking
> feature), but would instead share a block device with your host-side
> mount. Thus, it would also plug directly into any block-level
> buffering going on, on the host side.
>
> On the guest, I'm wondering if you should mount the partition with
> -o direct. This should prevent any write-back buffering in the guest,
> though as you say, you will not be writing from the guest anyway.
> On the other hand, if you make changes to the FS on the host side,
> while the QEMU guest instance is already running, the guest probably
> will not get to know about any changes, unless you umount and
> remount, again with "-o direct" (to avoid local read caching in the
> guest).
>
> Even if this crazy stuff works in the end, I'm wondering if it's all
> worth the implied pitfalls :-)
> Apparently you still need to keep stuff in sync in some way...
>
> Frank
>
>
after reading page 17 @
https://vmsplice.net/~stefan/qemu-block-layer-features-and-concepts.pdf,
I'm almost there with :

qemu -snapshot \
-blockdev driver=file,node-name=file-driver,filename=file.qcow2,locking=off \
-blockdev driver=qcow2,node-name=qcow-driver,file=file-driver \
-device ide-hd,drive=qcow-driver \
-hdb file2.qcow2

the difference is that it's not `hda` but `hdc`: on the guest side, the
disk appears second, after the one passed by `hdb`


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread Frantisek Rysanek
> And, I have an idea: rather than refer to driver=qcow2 and 
> file.filename, how about referring to the loopback device (NBD) that
> you already have, courtesy of qemu-nbd ? Would that perhaps circumvent
> the file lock? ;-)
> 
> -blockdev node-name=xy,driver=raw,file.driver=host_device,\
> file.filename=/dev/loop0,file.locking=off
> 
> -device virtio-scsi-pci -device scsi-hd,drive=xy 
>

I mean: the QEMU device emulation would not run on top of the QCOW 
file directly (and the underlying filesystem, and its locking 
feature), but would instead share a block device with your host-side 
mount. Thus, it would also plug directly into any block-level 
buffering going on, on the host side.

On the guest, I'm wondering if you should mount the partition with 
-o direct. This should prevent any write-back buffering in the guest, 
though as you say, you will not be writing from the guest anyway.
On the other hand, if you make changes to the FS on the host side, 
while the QEMU guest instance is already running, the guest probably 
will not get to know about any changes, unless you umount and 
remount, again with "-o direct" (to avoid local read caching in the 
guest).

Even if this crazy stuff works in the end, I'm wondering if it's all 
worth the implied pitfalls :-)
Apparently you still need to keep stuff in sync in some way...

Frank



Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread Frantisek Rysanek
Hmm. At this point I'm just thinking aloud.

According to the manpage (see below), "-blockdev" is the most 
down-to-earth, modern way to specify the emulated disk - it possibly 
needs to be combined with a "-device" to specify the guest-side 
emulated interface? Curiously, I cannot find a complete example... 

The manpage says that:
-hda (etc.) is the oldest form, nowadays a macro for -drive + -device 
(controller);
-drive is a shorthand for -blockdev + -device.
So... where I'm using -drive, I might as well be using -blockdev.
Plus something more? No clue... :-)
Or perhaps use "-blockdev node-name=.." instead of "-drive id=.." ?
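
To spell out the equivalence, the same qcow2 disk should be expressible
in all three forms, roughly like this (an untested sketch; the id/node
names are arbitrary, and the default pc machine is assumed so that an
IDE bus exists):

1) the legacy shorthand:
qemu-system-x86_64 -hda file.qcow2

2) -drive plus an explicit -device:
qemu-system-x86_64 \
  -drive if=none,id=disk0,format=qcow2,file=file.qcow2 \
  -device ide-hd,drive=disk0

3) -blockdev plus an explicit -device:
qemu-system-x86_64 \
  -blockdev driver=file,node-name=disk0-file,filename=file.qcow2 \
  -blockdev driver=qcow2,node-name=disk0,file=disk0-file \
  -device ide-hd,drive=disk0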

The qemu docs chapter on "QEMU block drivers" gives this example:

-blockdev driver=qcow2,file.filename=/path/to/image,file.locking=off,file.driver=file

I.e., you should put your locking=off option into a -blockdev 
definition as you say.

And, I have an idea: rather than refer to driver=qcow2 and 
file.filename, how about referring to the loopback device (NBD) that 
you already have, courtesy of qemu-nbd ? Would that perhaps 
circumvent the file lock? ;-)

-blockdev node-name=xy,driver=raw,file.driver=host_device,\
file.filename=/dev/loop0,file.locking=off

-device virtio-scsi-pci -device scsi-hd,drive=xy 
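
For completeness, a sketch of how that idea would slot into the
qemu-nbd setup from earlier in the thread (untested; /dev/nbd0 is
assumed from the previous messages, already attached with
`qemu-nbd -c /dev/nbd0 file.qcow2` and mounted on the host):

qemu-system-x86_64 \
  -blockdev node-name=xy,driver=raw,file.driver=host_device,file.filename=/dev/nbd0,file.locking=off \
  -device virtio-scsi-pci -device scsi-hd,drive=xy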


Further reading:

https://manpages.debian.org/testing/qemu-system-x86/qemu-system-x86_64.1.en.html#Block_device_options

https://unix.stackexchange.com/questions/753092/what-is-the-difference-between-these-two-blockdev-declarations

https://www.qemu.org/docs/master/system/qemu-block-drivers.html#disk-image-file-locking


Frank



Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread lacsaP Patatetom
On Fri, 15 Nov 2024 at 14:00, lacsaP Patatetom wrote:

> On Fri, 15 Nov 2024 at 13:35, Frantisek Rysanek wrote:
>
>> > # qemu-nbd -c /dev/nbd0 file.qcow2
>> > # mount /dev/nbd0p1 /mnt -o uid=me
>> > $ # do some changes in /mnt/...
>> >
>> Are you sure the changes have made it to the underlying file?
>> If you do an umount here, a sync() is guaranteed.
>> If you do not umount, at this point you may have some dirty
>> writeback.
>> So the following QEMU instance may find the filesystem in a
>> "factually out of date" state. I dare not say "inconsistent state",
>> because this should theoretically be taken care of... maybe...
>>
>>
> I manually flush the system caches to make sure my `file.qcow2` file is up
> to date.
>
> > $ qemu-system-x86_64 -snapshot -hda file.qcow2 # with locking=off
>>
>> Read-only access is one thing. Maybe the QEMU instance will cope with
>> that.
>>
>> Do you want the QEMU instance to *write* to the filesystem, by any
>> chance?
>>
>
> no, I don't need it.
> `-snapshot` is there to prevent corruption of the mount left in place.
>
>
>> If you manage to force-mount the filesystem for writing, and you
>> write some changes to it, how does your "outer host-side instance of
>> the mounted FS" get to know?
>> This is a recipe for filesystem breakage :-)
>>
>> You know, these are exactly the sort of problems, that get avoided by
>> the locking :-)
>>
>> They also get avoided by "shared access filesystems" such as GFS2 or
>> OCFS2, or network filesystems such as NFS or CIFS=SMBFS. Maybe CEPH
>> is also remotely in this vein, although this is more of a distributed
>> clustered FS, and an overkill for your scenario.
>>
>> Frank
>>
> in any case, to avoid this kind of problem and make sure I have the right
> data, I manually clear the system caches with `sudo bash -c "sync && sysctl
> -q vm.drop_caches=3"` before calling qemu.
> `-snapshot` is there to prevent corruption of the mount left in place.
>
> regards, lacsaP.
>

after rereading the documentation (introduction @
https://qemu-project.gitlab.io/qemu/system/qemu-block-drivers.html#disk-image-file-locking),
I've just had a look at "Linux OFD" locks to try to remove the lock on my
`file.qcow2` file, but I haven't found anything there: if someone has a
clue, I'm interested...
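
as far as I know, OFD locks belong to the open file description inside the
process that holds them (qemu-nbd here, since it keeps file.qcow2 open), so
there is no standard tool to drop them from the outside - the best one can
do is inspect them (a small sketch, assuming util-linux is installed; OFD
locks show up as OFDLCK entries):

$ lslocks | grep -i qemu
$ grep OFDLCK /proc/locks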


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread lacsaP Patatetom
On Fri, 15 Nov 2024 at 13:35, Frantisek Rysanek wrote:

> > # qemu-nbd -c /dev/nbd0 file.qcow2
> > # mount /dev/nbd0p1 /mnt -o uid=me
> > $ # do some changes in /mnt/...
> >
> Are you sure the changes have made it to the underlying file?
> If you do an umount here, a sync() is guaranteed.
> If you do not umount, at this point you may have some dirty
> writeback.
> So the following QEMU instance may find the filesystem in a
> "factually out of date" state. I dare not say "inconsistent state",
> because this should theoretically be taken care of... maybe...
>
>
I manually flush the system caches to make sure my `file.qcow2` file is up
to date.

> $ qemu-system-x86_64 -snapshot -hda file.qcow2 # with locking=off
>
> Read-only access is one thing. Maybe the QEMU instance will cope with
> that.
>
> Do you want the QEMU instance to *write* to the filesystem, by any
> chance?
>

no, I don't need it.
`-snapshot` is there to prevent corruption of the mount left in place.


> If you manage to force-mount the filesystem for writing, and you
> write some changes to it, how does your "outer host-side instance of
> the mounted FS" get to know?
> This is a recipe for filesystem breakage :-)
>
> You know, these are exactly the sort of problems, that get avoided by
> the locking :-)
>
> They also get avoided by "shared access filesystems" such as GFS2 or
> OCFS2, or network filesystems such as NFS or CIFS=SMBFS. Maybe CEPH
> is also remotely in this vein, although this is more of a distributed
> clustered FS, and an overkill for your scenario.
>
> Frank
>
in any case, to avoid this kind of problem and make sure I have the right
data, I manually clear the system caches with `sudo bash -c "sync && sysctl
-q vm.drop_caches=3"` before calling qemu.
`-snapshot` is there to prevent corruption of the mount left in place.

regards, lacsaP.


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread Frantisek Rysanek
> # qemu-nbd -c /dev/nbd0 file.qcow2
> # mount /dev/nbd0p1 /mnt -o uid=me
> $ # do some changes in /mnt/...
> 
Are you sure the changes have made it to the underlying file?
If you do an umount here, a sync() is guaranteed.
If you do not umount, at this point you may have some dirty 
writeback.
So the following QEMU instance may find the filesystem in a 
"factually out of date" state. I dare not say "inconsistent state", 
because this should theoretically be taken care of... maybe...
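
If an umount is not desired, the writeback can also be forced by hand - a
small sketch, assuming a coreutils `sync` recent enough to accept a path
argument (`-f` flushes the whole filesystem that contains the given path):

$ sync -f /mnt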

> $ qemu-system-x86_64 -snapshot -hda file.qcow2 # with locking=off

Read-only access is one thing. Maybe the QEMU instance will cope with 
that.

Do you want the QEMU instance to *write* to the filesystem, by any 
chance? 
If you manage to force-mount the filesystem for writing, and you 
write some changes to it, how does your "outer host-side instance of 
the mounted FS" get to know?
This is a recipe for filesystem breakage :-)

You know, these are exactly the sort of problems, that get avoided by 
the locking :-)

They also get avoided by "shared access filesystems" such as GFS2 or 
OCFS2, or network filesystems such as NFS or CIFS=SMBFS. Maybe CEPH 
is also remotely in this vein, although this is more of a distributed 
clustered FS, and an overkill for your scenario.

Frank



Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-15 Thread lacsaP Patatetom
On Thu, 14 Nov 2024 at 17:00, Frantisek Rysanek wrote:

> > my image `fie.qcow2` is mounted elsewhere (nbd) and running
> > `qemu-system-x86_64 -snapshot -hda file.qcow2` fails with the error
> > "qemu-system-x86_64: -hda usb1.disk: Failed to get shared “write”
> > lock / Is another process using the image [usb1.disk] ?".
>
> Oh...
>
> So you want concurrent access to the block device.
> 1) have it mounted on the host=HV using qemu-nbd
> 2) while at the same time, you want to give it to a guest VM?
>
> Unless you use a filesystem such as GFS2 or OCFS2 inside the
> partitions that exist inside that QCOW2 black box, or mount it
> read-only at both ends, you are asking for trouble :-)
> That is, if QEMU permits unlocked access by a VM, parallel to the
> qemu-nbd loop-mount.
>
> Note that this is just a tangential pointer on my part. Just a start
> of a further journey. Further reading and study.
>
> If you really want concurrent access to a directory tree, consider
> using something like Samba or NFS as the simpler options.
>
> Then again, concurrent access is definitely fun to play with and grow
> :-)
>
> Frank
>
my problem is exactly this, the link between `device`, `drive` and
`blockdev`: while `device` and `drive` link quite easily, I can't link
`blockdev` to the other two...

setting aside the fact that the image is used elsewhere, my aim is to mimic
`-hda file.qcow2` but without taking the lock into account (`locking=off`).

I currently do this:

```
# qemu-nbd -c /dev/nbd0 file.qcow2
# mount /dev/nbd0p1 /mnt -o uid=me
$ # do some changes in /mnt/...
# umount /mnt
# qemu-nbd -d /dev/nbd0

$ qemu-system-x86_64 -snapshot -hda file.qcow2

# qemu-nbd -c /dev/nbd0 file.qcow2
# mount /dev/nbd0p1 /mnt -o uid=me
$ # do some changes in /mnt/...
# umount /mnt
# qemu-nbd -d /dev/nbd0

$ qemu-system-x86_64 -snapshot -hda file.qcow2

# qemu-nbd -c /dev/nbd0 file.qcow2
# mount /dev/nbd0p1 /mnt -o uid=me
$ # do some changes in /mnt/...
etc...
```

and I'd like to be able to do this :

```
# qemu-nbd -c /dev/nbd0 file.qcow2
# mount /dev/nbd0p1 /mnt -o uid=me
$ # do some changes in /mnt/...

$ qemu-system-x86_64 -snapshot -hda file.qcow2 # with locking=off

$ # do some changes in /mnt/...

$ qemu-system-x86_64 -snapshot -hda file.qcow2 # with locking=off

$ # do some changes in /mnt/...
etc...
```


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-14 Thread Frantisek Rysanek
> 
> hi,
> what is the "relaxed" equivalent of the `-hda file.qcow2` command line
> argument to control file locking (`locking=off`)? the documentation
> says that `-hda a` can be replaced by `-drive file=a` and that locking
> can be controlled with the `locking` option of the `blockdev`
> argument: how can I link `drive` and `blockdev` to obtain a "relaxed"
> `-hda file.qcow2`? regards, lacsaP.

Let me give you an example of how disk controller device and disk 
drive devices are nested, i.e. the expanded form of the "-hda" arg.
I start qemu by a script, and I use shell variables to put the long 
command line together. The following is a relevant excerpt of the 
script.
See how the "-device" (controller) record maps to the "-drive" 
records via the id= and drive= attributes:


DISK1="-drive if=none,id=ff,cache.direct=on,aio=native,discard=unmap,detect-zeroes=unmap,file=/somepath/system.qcow2"

DISK2="-drive if=none,id=fe,cache.direct=on,aio=native,discard=unmap,detect-zeroes=unmap,file=/somepath/data.qcow2"

DISK3="-drive if=none,id=fd,cache.direct=on,aio=native,discard=unmap,detect-zeroes=unmap,file=/somepath/swap.qcow2"

HDD_STUFF="$DISK1 $DISK2 $DISK3 -device virtio-scsi-pci -device scsi-hd,drive=ff -device scsi-hd,drive=fe -device scsi-hd,drive=fd"

In my script, those are four long lines.

I run this on top of a filesystem that supports sparse files and 
snapshots, and can be told to avoid FS-level COW if it can 
(i.e. to make it prefer in-place overwrites over COW).
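
For instance - purely as an illustration, assuming the filesystem were
btrfs - the directory holding the images could be marked NOCOW before the
image files are created, so that new files below it prefer in-place
overwrites:

$ chattr +C /somepath
$ lsattr -d /somepath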

Now... I've found a more advanced description of image file locking 
in the official current documentation here:

https://www.qemu.org/docs/master/system/qemu-block-drivers.html#disk-image-file-locking

...now try to combine this with my example above :-)

Not sure what you mean by "relaxed".
Do you need to map the same image file to multiple guests?
For what purpose?
HA failover?
Should both/all guests have simultaneous write access?
Similar to multipath SCSI? Are you playing with GFS2/OCFS2 or 
something?
Or do you mean to use the same image from multiple guests as a shared 
read-only disk?
I recall reading an article ages ago, on how to create "ephemeral 
fake writeable block devices" for individual guests, on top of a 
shared read-only "master image"... (as multiple COW snapshots)
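
That pattern maps nicely onto qcow2 backing files - a minimal sketch,
with hypothetical file names: one shared read-only master image, plus
one ephemeral COW overlay per guest (all writes land in the overlay):

$ qemu-img create -f qcow2 master.qcow2 20G
$ qemu-img create -f qcow2 -b master.qcow2 -F qcow2 guest1.qcow2
$ qemu-img create -f qcow2 -b master.qcow2 -F qcow2 guest2.qcow2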

Frank



Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-14 Thread Frantisek Rysanek
> my image `fie.qcow2` is mounted elsewhere (nbd) and running
> `qemu-system-x86_64 -snapshot -hda file.qcow2` fails with the error
> "qemu-system-x86_64: -hda usb1.disk: Failed to get shared “write”
> lock / Is another process using the image [usb1.disk] ?".

Oh...

So you want concurrent access to the block device.
1) have it mounted on the host=HV using qemu-nbd
2) while at the same time, you want to give it to a guest VM?

Unless you use a filesystem such as GFS2 or OCFS2 inside the
partitions that exist inside that QCOW2 black box, or mount it
read-only at both ends, you are asking for trouble :-)
That is, if QEMU permits unlocked access by a VM, parallel to the
qemu-nbd loop-mount.

Note that this is just a tangential pointer on my part. Just a start
of a further journey. Further reading and study.

If you really want concurrent access to a directory tree, consider
using something like Samba or NFS as the simpler options.

Then again, concurrent access is definitely fun to play with and grow
:-)

Frank



Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-14 Thread lacsaP Patatetom
On Thu, 14 Nov 2024 at 16:15, lacsaP Patatetom wrote:

> On Thu, 14 Nov 2024 at 15:11, lacsaP Patatetom wrote:
>
>> hi,
>> what is the "relaxed" equivalent of the `-hda file.qcow2` command line
>> argument to control file locking (`locking=off`)?
>> the documentation says that `-hda a` can be replaced by `-drive file=a`
>> and that locking can be controlled with the `locking` option of the
>> `blockdev` argument: how can I link `drive` and `blockdev` to obtain a
>> "relaxed" `-hda file.qcow2`?
>> regards, lacsaP.
>>
>
> a few (corrected) details to clarify my request...
>
> `-hda` is a (very) old way of specifying the disk to be used, but is
> extremely practical when there's nothing special to define, and is still
> supported by qemu.
>
> my image `file.qcow2` is mounted elsewhere (nbd) and running
> `qemu-system-x86_64 -snapshot -hda file.qcow2` fails with the error
> "qemu-system-x86_64: -hda file.qcow2: Failed to get shared “write” lock /
> Is another process using the image [file.qcow2] ?".
>
> I'm doing some tests and I don't want to unmount my mounts, so I'm trying
> to force qemu to start without taking into account the fact that the image
> is not free, while keeping the disk order (I'm also using `-hdb file2.qcow2`,
> but this image doesn't cause any problems: it is not mounted and is totally
> free).
>
> what should I use to replace `-hda file.qcow2` (drive/device/blockdev) to
> force qemu to boot from this first disk?
>
> regards, lacsaP.
>
>
`qemu-system-x86_64 -global file.locking=off -snapshot -hda file.qcow2 -hdb
file2.qcow2`
and
`qemu-system-x86_64 -global driver=file,property=locking,value=off
-snapshot -hda file.qcow2 -hdb file2.qcow2`
fail to get qemu to start up...


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-14 Thread lacsaP Patatetom
On Thu, 14 Nov 2024 at 15:11, lacsaP Patatetom wrote:

> hi,
> what is the "relaxed" equivalent of the `-hda file.qcow2` command line
> argument to control file locking (`locking=off`)?
> the documentation says that `-hda a` can be replaced by `-drive file=a`
> and that locking can be controlled with the `locking` option of the
> `blockdev` argument: how can I link `drive` and `blockdev` to obtain a
> "relaxed" `-hda file.qcow2`?
> regards, lacsaP.
>

a few (corrected) details to clarify my request...

`-hda` is a (very) old way of specifying the disk to be used, but is
extremely practical when there's nothing special to define, and is still
supported by qemu.

my image `file.qcow2` is mounted elsewhere (nbd) and running
`qemu-system-x86_64 -snapshot -hda file.qcow2` fails with the error
"qemu-system-x86_64: -hda file.qcow2: Failed to get shared “write” lock /
Is another process using the image [file.qcow2] ?".

I'm doing some tests and I don't want to unmount my mounts, so I'm trying
to force qemu to start without taking into account the fact that the image
is not free, while keeping the disk order (I'm also using `-hdb file2.qcow2`,
but this image doesn't cause any problems: it is not mounted and is totally
free).

what should I use to replace `-hda file.qcow2` (drive/device/blockdev) to
force qemu to boot from this first disk?

regards, lacsaP.


Re: "relaxed" `-hda file.qcow2` equivalent ?

2024-11-14 Thread lacsaP Patatetom
On Thu, 14 Nov 2024 at 15:11, lacsaP Patatetom wrote:

> hi,
> what is the "relaxed" equivalent of the `-hda file.qcow2` command line
> argument to control file locking (`locking=off`)?
> the documentation says that `-hda a` can be replaced by `-drive file=a`
> and that locking can be controlled with the `locking` option of the
> `blockdev` argument: how can I link `drive` and `blockdev` to obtain a
> "relaxed" `-hda file.qcow2`?
> regards, lacsaP.
>

a few details to clarify my request...

`-hda` is a (very) old way of specifying the disk to be used, but is
extremely practical when there's nothing special to define, and is still
supported by qemu.

my image `fie.qcow2` is mounted elsewhere (nbd) and running
`qemu-system-x86_64 -snapshot -hda file.qcow2` fails with the error
"qemu-system-x86_64: -hda usb1.disk: Failed to get shared “write” lock / Is
another process using the image [usb1.disk] ?".

I'm doing some tests and I don't want to unmount my mounts, so I'm trying
to force qemu to start without taking into account the fact that the image
is not free, while keeping the disk order (I'm also using `-hdb file2.qcow2`,
but this image doesn't cause any problems: it is not mounted and is totally
free).

what should I use to replace `-hda file.qcow2` (drive/device/blockdev) to
force qemu to boot from this first disk?

regards, lacsaP.