On 02.02.26 23:05, Frediano Ziglio wrote:
> On Mon, 2 Feb 2026 at 10:42, corubba <[email protected]> wrote:
>>
>> This patchset is a proof-of-concept implementation of power-control
>> support in the SPICE protocol, allowing some basic operations like reset
>> and power-off without a separate management protocol/connection.
>>
>> Marc-André Lureau already added power-control in virt-viewer [0], but it
>> is limited to direct-QEMU-connections since it's using QMP. This also
>> means it can't be used with libvirt, while there is virt-qemu-qmp-proxy
>> [1] it is "strongly recommended that this tool only be used to send
>> commands which query information about the running guest." In a related
>> patchset Victor Toso [2] recalls "discussions around having a way to
>> power-on/off/reboot in spice and that it was rejected to keep spice as a
>> viewer, etc.", however I couldn't find them in the various mailing list
>> archives. The also mentioned comment by Daniel P. Berrangé [3] to rather
>> have power control in SPICE was the basis for my initiative.
>>
>> The basic idea is to have a new "power" channel, which makes
>> compatibility (including backwards) easy: Using a power-capable SPICE
>> server with a power-incapable SPICE client works, and also the other way
>> around. SPICE servers announce support for individual power-control-
>> actions (like reset or power-on/off) via capabilities. SPICE clients can
>> check these to enable the appropriate UI elements. To actually trigger a
>> power action, SPICE clients send a dedicated action-message to the SPICE
>> server, which passes the request on to the underlying management plane,
>> which does whatever it needs to actually make it happen.
>>
>> To make sure the scope doesn't explode, the supported actions should be
>> strictly specified by the protocol, and only cover the most basic/
>> general ones. I would restrict the actions to "hard" ones which don't
>> require any help/interaction with the controlled target, so exclude any
>> "graceful" actions like shutdown or reboot (which can be done using the
>> input channel if I am not mistaken). The most useful is "reset" to
>> revive a frozen/hung target system, which also has the nice property of
>> not changing the power status (in contrast to e.g. power-off) so you
>> can't cut off the branch you are sitting on; you can't power-on again a
>> QEMU guest using SPICE, since there is no SPICE server when the guest is
>> not running. The other two actions I would consider are power-off and
>> power-on, especially useful for the BMC use-case. These three probably
>> already cover the majority of use-cases, and would be a significant
>> improvement in usefulness.
>>
>> The patchset should be considered PoC quality, and is not meant to be
>> merged as-is, but serve as a basis for further discussion and
>> experimentation. It contains the implementation in the SPICE-protocol
>> and -libraries as well as an example server-implementation in QEMU and
>> client-implementation in virt-viewer. The patchset spans across six git
>> repos, the individual patches are tagged and each touches exactly one
>> repo. Patches 1-6 add the new power-channel, 7-12 implement the reset
>> action and 13-18 the power-off action, 19-21 add capabilities, and
>> finally 22 some documentation.
>>
>> I implemented (only) reset and power-off, since those two already had UI
>> elements in virt-viewer. A lot of the work was done by looking at and
>> copying existing source code, adjusting it as needed. I tried my best
>> to understand how all the pieces fit together and remove any unneeded
>> parts, but expect to have missed some. While the existing documentation
>> was mostly helpful, it was also very misleading regarding the "client
>> classes" [4]: I spent quite some time searching for the factories, until
>> realizing those only existed in the old client but not in spice-gtk/
>> virt-viewer. I tested only with QEMU VMs, started directly and also
>> using libvirt (which works fine too).
>>
>>
>> TODOs and open questions:
>> - Instead of a dedicated channel you could also add the handful of
>>   required messages to the main channel, but that sounds messy and not
>>   like a good idea.
> 
> I prefer new messages in the main channel.
> A separate channel for some control messages is quite an overkill.

Will do that then.
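
Just to make that concrete, something along these lines is what I have
in mind for the main-channel variant; all names below are hypothetical
placeholders, not what the patchset currently defines:

    /* Hypothetical capability bits the server would announce on the
     * main channel; a client checks these before enabling the
     * corresponding UI elements. Real values would be appended after
     * the existing SPICE_MAIN_CAP_* entries. */
    enum {
        SPICE_MAIN_CAP_POWER_RESET,
        SPICE_MAIN_CAP_POWER_OFF,
        SPICE_MAIN_CAP_POWER_ON,
    };

    /* Hypothetical client-to-server messages, one payload-less message
     * per action; likewise appended after the existing
     * SPICE_MSGC_MAIN_* entries. */
    enum {
        SPICE_MSGC_MAIN_POWER_RESET,
        SPICE_MSGC_MAIN_POWER_OFF,
        SPICE_MSGC_MAIN_POWER_ON,
    };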

>> - Actively restrict to one power channel per connection, or allow (and
>>   handle) multiple power channels in parallel? No idea if there is
>>   actually a use case for multiple.
> 
> Usually multiple channels per connection are used for multiple devices
> (like multiple monitors or multiple usb redirection).

This becomes moot when using the main channel, since there is always
exactly one of those per connection afaik. But good to know anyway,
thanks.
 
>> - Do one (empty) message per action (like now), or one message for all
>>   with an action-id in the payload? You have to discriminate in both
>>   cases, and using messages removes one layer.
> 
> I would prefer separate messages for each action.
> 
>> - Acknowledge the successful execution of an action from the server to
>>   the client. The existing ack mechanism currently only works the other
>>   way around (client-to-server). Or just rely on the guaranteed
>>   delivery provided by the underlying TCP transport.
> 
> If the ack is for simply "I got the message" I would say TCP is enough.
> If it is for something more like "I got the message, I processed, here
> is the result", then a message back would suit.

The way it's currently implemented, the SPICE client sends an action
message to the SPICE server. If no error occurs, the server sends
nothing back; if an error occurs, it sends a notify (with only
human-readable data). So the only way for a client to make sure no
error occurred is to wait and check for the absence of a notify.
Instead you could have an answer message with an OK/NOK flag and an
optional error message, so the client would always get a response.
Thinking about it, yes, there should be a proper answer message.
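
Purely as a sketch of what I mean (the name and layout are placeholders
I just made up, nothing of this exists in the patchset):

    #include <stdint.h>

    /* Hypothetical server-to-client answer, sent in response to every
     * power action message. result == 0 means the action was handed to
     * the management plane successfully; any other value is an error,
     * optionally described by a human-readable UTF-8 string. */
    typedef struct SpiceMsgMainPowerResult {
        uint32_t result;        /* 0 = OK, otherwise an error code */
        uint32_t message_len;   /* length of the error text in bytes */
        uint8_t  message[0];    /* optional error text, not NUL-terminated */
    } SpiceMsgMainPowerResult;

That would also cover the "better error reporting" point further down.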

>> - Better error reporting from the server to the client, and also to the
>>   user. Instead of a notify may require a dedicated answer message.
>> - Report the current power status (on or off) from the server to the
>>   client. Send updates when it changes. Can be used by clients to
>>   inform the user and further restrict actions (no power-on when already
>>   running)
> 
> How could you implement the power-on, when the machine is powered down
> there's no SPICE to talk to.

This may be true in the specific case of QEMU, but not in general.
Take the BMC or KVM (as in keyboard+mouse+monitor) use-case, where you
have a little embedded server for the sole purpose of managing a bigger
server. The BMC could run a SPICE server all the time, and you could
always connect to it. If the actual server is powered down, you get no
display or input channels or anything, but the main channel would come
up, and the SPICE server would tell the SPICE client "the controlled
system is currently powered down, but I support powering it on". You
then press the "power on" button in your SPICE client, and after a few
seconds the display and input channels (and more) come up, and you are
off to the races. That is under the assumption that channels can
dynamically go up and down, plus ideally some way for a SPICE server
to inform clients of now-available channels.
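
The "controlled system is currently powered down" part is basically the
power-status reporting from my TODO list above. A hypothetical
server-to-client message for it (placeholder name and layout, nothing
of this is implemented yet) could be as simple as:

    #include <stdint.h>

    /* Hypothetical server-to-client status report, sent once when the
     * main channel comes up and again whenever the power state of the
     * controlled system changes. */
    typedef struct SpiceMsgMainPowerStatus {
        uint32_t powered_on;   /* 1 = controlled system running, 0 = off */
    } SpiceMsgMainPowerStatus;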
    
In theory you could also do this with VMs where the hypervisor provides
a persistent SPICE endpoint for all VMs (even powered off). But for    
BMCs/KVMs it's easier since they always have exactly one controlled    
system and need exactly one port for it. A hypervisor on the other hand
can control anywhere from zero to thousands of VMs, and you need one   
dedicated port for each one since (to my knowledge) there is no native 
way in SPICE to indicate the "target" you want. Interesting thought:   
You could probably build a "dispatcher" using a wildcard certificate,  
TLS SNI and a layer 4 reverse proxy.
 
>> - Should pause/continue actions be supported? Not useful for hardware
>>   targets, and I imagine seldom used with VMs.
> 
> I see them as more Qemu options. Not sure if you can talk using SPICE
> to a suspended machine.

I have not actually tried this, but it would surprise me if you
couldn't. Just because the guest is paused doesn't mean all the QEMU
machinery around it is paused too, which includes the integrated SPICE
server. Otherwise, even today, pausing a VM via QMP for long enough
would kill the SPICE connection because the SPICE server wouldn't
answer.

>> - Could this completely replace the current QMP implementation in virt-
>>   viewer?
> 
> I'm not sure, QMP is used for a lot of things.

The only occurrences of QMP in the virt-viewer codebase I found are for
the power-control. While QMP can certainly do a lot more, virt-viewer
is currently just using that specific subset; hence the question.

>> - Channel migration was ignored for simplicity, but should be easy as
>>   there is no state to maintain/transfer.
> 
> Yes, agreed.
> 
>> - Proper tests and documentation.
>>
>>
>> [0] https://lists.virt-tools.org/2018-July/015213.html
>> [1] https://libvirt.org/manpages/virt-qemu-qmp-proxy.html
>> [2] 
>> https://lists.freedesktop.org/archives/spice-devel/2018-August/045210.html
>> [3] https://lists.virt-tools.org/2018-August/015263.html
>> [4] https://www.spice-space.org/spice-for-newbies.html#_spice_client
>>
>>
>> Best regards
>> --
>> Corubba
> 
> Frediano
