> On Wed, Jun 25, 2025 at 08:03:00AM +0000, Dexuan Cui wrote:
> >> From: Stefano Garzarella <sgarz...@redhat.com>
> >> Sent: Tuesday, June 17, 2025 7:39 AM
> >>  ...
> >> Now it looks better to me. I just checked the transports: vmci and
> >> virtio/vhost return what we want, but for hyperv we have:
> >>
> >>    static s64 hvs_stream_has_data(struct vsock_sock *vsk)
> >>    {
> >>            struct hvsock *hvs = vsk->trans;
> >>            s64 ret;
> >>
> >>            if (hvs->recv_data_len > 0)
> >>                    return 1;
> >>
> >> @Hyper-V maintainers: do you know why we don't return `recv_data_len`?
> >
> >Sorry for the late response!  This is the complete code of the function:
> >
> >static s64 hvs_stream_has_data(struct vsock_sock *vsk)
> >{
> >        struct hvsock *hvs = vsk->trans;
> >        s64 ret;
> >
> >        if (hvs->recv_data_len > 0)
> >                return 1;
> >
> >        switch (hvs_channel_readable_payload(hvs->chan)) {
> >        case 1:
> >                ret = 1;
> >                break;
> >        case 0:
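> >                /* Only a zero-length packet is readable: hv_sock
> >                 * uses it to signal FIN from the peer.
> >                 */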
> >                vsk->peer_shutdown |= SEND_SHUTDOWN;
> >                ret = 0;
> >                break;
> >        default: /* -1 */
> >                ret = 0;
> >                break;
> >        }
> >
> >        return ret;
> >}
> >
> >If (hvs->recv_data_len > 0), I think we can return hvs->recv_data_len here.
> >
> >If hvs->recv_data_len is 0 and hvs_channel_readable_payload(hvs->chan)
> >returns 1, we should not return hvs->recv_data_len (which is 0 here), and
> >it's not easy to find how many bytes of payload are available in total
> >right now: each host-to-guest "packet" in the VMBus channel ringbuffer has
> >a header (which is not part of the payload data) and a trailing padding
> >field, so we would have to iterate over all the "packets" (or at least the
> >next "packet"?) to find the exact number of pending payload bytes. Please
> >see hvs_stream_dequeue() for details.
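> >
> >To make the "iterate on all the packets" point concrete, below is a rough,
> >untested sketch (illustrative only, not something I'm proposing to merge):
> >it walks the pending VMBus packets with foreach_vmbus_pkt() and sums each
> >packet's hdr.data_size, the same field hvs_stream_dequeue() reads for the
> >packet it consumes. Note that the iterator advances the channel's private
> >read index, so real code could not use it as-is:
> >
> >static s64 hvs_total_pending_payload(struct hvsock *hvs)
> >{
> >        struct vmpacket_descriptor *desc;
> >        struct hvs_recv_buf *recv_buf;
> >        s64 total = 0;
> >
> >        /* Walk every pending inbound packet; the payload length is in
> >         * the hv_sock header that follows the VMBus descriptor (the
> >         * descriptor itself and the trailing padding don't count).
> >         */
> >        foreach_vmbus_pkt(desc, hvs->chan) {
> >                recv_buf = (struct hvs_recv_buf *)(desc + 1);
> >                total += recv_buf->hdr.data_size;
> >        }
> >
> >        return total;
> >}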
> >
> >Ideally hvs_stream_has_data() should return the exact length of the pending
> >readable payload, but when the hv_sock code was written in 2017,
> >vsock_stream_has_data() -> ... -> hvs_stream_has_data() basically only
> >needed to know whether there was any data or not, i.e. the return value was
> >effectively a boolean, so hvs_stream_has_data() was written to return 1 or
> >0 for simplicity. :-)
> 
> Yeah, I see, thanks for the details! :-)
> 
> >
> >I can post the patch below (not tested yet) to fix hvs_stream_has_data() by
> >returning the payload length of the next single "packet".  Does it look good
> >to you?
> 
> Yep, LGTM! It can be best-effort, IMO.
> 
> Maybe once you have it tested, post it here as a proper patch, and Xuewei
> can include it in the next version of this series (of course with you as
> the author, etc.). That way it will be easy to test/merge, since they are
> related.
> 
> @Xuewei @Dexuan Is that okay with you?

Yeah, sounds good to me!

Thanks,
Xuewei

> Thanks,
> Stefano
> 
> >
> >--- a/net/vmw_vsock/hyperv_transport.c
> >+++ b/net/vmw_vsock/hyperv_transport.c
> >@@ -694,15 +694,25 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
> > static s64 hvs_stream_has_data(struct vsock_sock *vsk)
> > {
> >        struct hvsock *hvs = vsk->trans;
> >+       bool need_refill = !hvs->recv_desc;
> >        s64 ret;
> >
> >        if (hvs->recv_data_len > 0)
> >-               return 1;
> >+               return hvs->recv_data_len;
> >
> >        switch (hvs_channel_readable_payload(hvs->chan)) {
> >        case 1:
> >-               ret = 1;
> >-               break;
> >+               if (!need_refill)
> >+                       return -EIO;
> >+
> >+               hvs->recv_desc = hv_pkt_iter_first(hvs->chan);
> >+               if (!hvs->recv_desc)
> >+                       return -ENOBUFS;
> >+
> >+               ret = hvs_update_recv_data(hvs);
> >+               if (ret)
> >+                       return ret;
> >+               return hvs->recv_data_len;
> >        case 0:
> >                vsk->peer_shutdown |= SEND_SHUTDOWN;
> >                ret = 0;
> >
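> >For reference, the generic layer just forwards whatever the transport
> >callback returns, so with this patch the byte count (or a negative errno)
> >would reach the callers directly; roughly, from net/vmw_vsock/af_vsock.c:
> >
> >s64 vsock_stream_has_data(struct vsock_sock *vsk)
> >{
> >        return vsk->transport->stream_has_data(vsk);
> >}
> >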
> >Thanks,
> >Dexuan
