RE: [PATCH net-next] hyperv: Move state setting for link query
-----Original Message-----
From: Jason Wang [mailto:jasow...@redhat.com]
Sent: Wednesday, March 5, 2014 12:57 AM
To: Haiyang Zhang; da...@davemloft.net; net...@vger.kernel.org
Cc: KY Srinivasan; o...@aepfle.de; linux-ker...@vger.kernel.org; driverdev-de...@linuxdriverproject.org
Subject: Re: [PATCH net-next] hyperv: Move state setting for link query

> On 03/05/2014 12:57 AM, Haiyang Zhang wrote:
>> -----Original Message-----
>> From: Jason Wang [mailto:jasow...@redhat.com]
>> Sent: Monday, March 3, 2014 10:10 PM
>> To: Haiyang Zhang; da...@davemloft.net; net...@vger.kernel.org
>> Cc: KY Srinivasan; o...@aepfle.de; linux-ker...@vger.kernel.org; driverdev-de...@linuxdriverproject.org
>> Subject: Re: [PATCH net-next] hyperv: Move state setting for link query
>>
>>> On 03/04/2014 07:54 AM, Haiyang Zhang wrote:
>>>> It moves the state setting for query into
>>>> rndis_filter_receive_response(). All callbacks including
>>>> query-complete and status-callback are synchronized by
>>>> channel->inbound_lock. This prevents a potential race between them.
>>>
>>> This still looks racy to me. The problem is that the workqueue is not
>>> synchronized with those here. Consider the following case in
>>> netvsc_link_change():
>>>
>>> if (rdev->link_state) {
>>>         ... receive interrupt ...
>>>         rndis_filter_receive_response() which changes rdev->link_state
>>>         ...
>>>         netif_carrier_off()
>>> }
>>>
>>> And also it needs to schedule a work, otherwise the link status is
>>> out of sync.
>>
>> The rndis_filter_query_device_link_status() makes the query and waits
>> for the completion message, including setting the state, before
>> returning. The rndis_filter_query_device_link_status() is called from
>> rndis_filter_device_add(), which is called from either
>> netvsc_change_mtu() or netvsc_probe(). The change_mtu() and
>> netvsc_link_change() are synchronized by rtnl_lock(). In
>> netvsc_probe(), the status query completes before register_netdev(),
>> and netvsc_linkstatus_callback() schedules the work only after the
>> netdevice is registered. So there is no race in either case. The
>> carrier on/off work will be scheduled when netvsc_open() is called.
>> Then the status will be updated based on the latest link_state.
>>
>> Thanks,
>> - Haiyang
>
> I see. Then if the link status is changing during mtu changing in the
> guest, it looks like we may miss a link status update?

When changing the mtu, rndis_filter_device_add() is called, and the host
sends a link status callback immediately after connecting to NetVSP, no
matter whether the link is changing or not. This will schedule a work to
update the status.

Thanks,
- Haiyang

_______________________________________________
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel
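[Editor's note] The check-then-act race Jason describes can be sketched outside the kernel. Below is a minimal userspace model using pthreads; the struct, the lock, and the helper names (rndis_response_update, netvsc_link_change_racy/safe) are illustrative stand-ins for the hyperv driver's structures, not the actual driver code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Userspace stand-in for the driver state under discussion.  The lock
 * models channel->inbound_lock, which serializes the RNDIS callbacks. */
struct rndis_device {
	bool link_state;		/* true = link down, as in the driver */
	pthread_mutex_t lock;
};

/* Models the query-complete / status callback ("interrupt" context):
 * it updates link_state under the lock. */
void rndis_response_update(struct rndis_device *rdev, bool new_state)
{
	pthread_mutex_lock(&rdev->lock);
	rdev->link_state = new_state;
	pthread_mutex_unlock(&rdev->lock);
}

/* Racy shape of the work function: it snapshots link_state and then
 * acts on the stale copy outside any lock (check-then-act).  A callback
 * firing between the read and the carrier update can be lost. */
bool netvsc_link_change_racy(struct rndis_device *rdev)
{
	bool down = rdev->link_state;	/* snapshot, unlocked */
	/* ... a callback here may flip rdev->link_state ... */
	return down;			/* may no longer match reality */
}

/* Non-racy shape: read and decide under the same lock the callbacks
 * take, so the decision always reflects the latest state. */
bool netvsc_link_change_safe(struct rndis_device *rdev)
{
	pthread_mutex_lock(&rdev->lock);
	bool down = rdev->link_state;
	pthread_mutex_unlock(&rdev->lock);
	return down;
}
```

The thread's resolution relies on a different mechanism than the lock shown here: rtnl_lock() serializing change_mtu against link-change work, plus the host re-sending a status callback after reconnect, which re-schedules the work.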
RE: [PATCH net-next] hyperv: Move state setting for link query
-----Original Message-----
From: Jason Wang [mailto:jasow...@redhat.com]
Sent: Monday, March 3, 2014 10:10 PM
To: Haiyang Zhang; da...@davemloft.net; net...@vger.kernel.org
Cc: KY Srinivasan; o...@aepfle.de; linux-ker...@vger.kernel.org; driverdev-de...@linuxdriverproject.org
Subject: Re: [PATCH net-next] hyperv: Move state setting for link query

> On 03/04/2014 07:54 AM, Haiyang Zhang wrote:
>> It moves the state setting for query into
>> rndis_filter_receive_response(). All callbacks including
>> query-complete and status-callback are synchronized by
>> channel->inbound_lock. This prevents a potential race between them.
>
> This still looks racy to me. The problem is that the workqueue is not
> synchronized with those here. Consider the following case in
> netvsc_link_change():
>
> if (rdev->link_state) {
>         ... receive interrupt ...
>         rndis_filter_receive_response() which changes rdev->link_state
>         ...
>         netif_carrier_off()
> }
>
> And also it needs to schedule a work, otherwise the link status is
> out of sync.

The rndis_filter_query_device_link_status() makes the query and waits
for the completion message, including setting the state, before
returning. The rndis_filter_query_device_link_status() is called from
rndis_filter_device_add(), which is called from either
netvsc_change_mtu() or netvsc_probe(). The change_mtu() and
netvsc_link_change() are synchronized by rtnl_lock(). In netvsc_probe(),
the status query completes before register_netdev(), and
netvsc_linkstatus_callback() schedules the work only after the netdevice
is registered. So there is no race in either case. The carrier on/off
work will be scheduled when netvsc_open() is called. Then the status
will be updated based on the latest link_state.

Thanks,
- Haiyang
RE: [PATCH net-next] hyperv: Move state setting for link query
-----Original Message-----
From: Jason Wang [mailto:jasow...@redhat.com]
Sent: Monday, March 3, 2014 10:10 PM
To: Haiyang Zhang; da...@davemloft.net; net...@vger.kernel.org
Cc: KY Srinivasan; o...@aepfle.de; linux-ker...@vger.kernel.org; driverdev-de...@linuxdriverproject.org
Subject: Re: [PATCH net-next] hyperv: Move state setting for link query

> On 03/04/2014 07:54 AM, Haiyang Zhang wrote:
>> It moves the state setting for query into
>> rndis_filter_receive_response(). All callbacks including
>> query-complete and status-callback are synchronized by
>> channel->inbound_lock. This prevents a potential race between them.
>
> This still looks racy to me. The problem is that the workqueue is not
> synchronized with those here. Consider the following case in
> netvsc_link_change():
>
> if (rdev->link_state) {
>         ... receive interrupt ...
>         rndis_filter_receive_response() which changes rdev->link_state
>         ...
>         netif_carrier_off()
> }
>
> And also it needs to schedule a work, otherwise the link status is
> out of sync.

I will update this patch to schedule a work at the end of the probe
function.

Thanks,
- Haiyang
Re: [PATCH net-next] hyperv: Move state setting for link query
On 03/05/2014 12:57 AM, Haiyang Zhang wrote:
> -----Original Message-----
> From: Jason Wang [mailto:jasow...@redhat.com]
> Sent: Monday, March 3, 2014 10:10 PM
> To: Haiyang Zhang; da...@davemloft.net; net...@vger.kernel.org
> Cc: KY Srinivasan; o...@aepfle.de; linux-ker...@vger.kernel.org; driverdev-de...@linuxdriverproject.org
> Subject: Re: [PATCH net-next] hyperv: Move state setting for link query
>
>> On 03/04/2014 07:54 AM, Haiyang Zhang wrote:
>>> It moves the state setting for query into
>>> rndis_filter_receive_response(). All callbacks including
>>> query-complete and status-callback are synchronized by
>>> channel->inbound_lock. This prevents a potential race between them.
>>
>> This still looks racy to me. The problem is that the workqueue is not
>> synchronized with those here. Consider the following case in
>> netvsc_link_change():
>>
>> if (rdev->link_state) {
>>         ... receive interrupt ...
>>         rndis_filter_receive_response() which changes rdev->link_state
>>         ...
>>         netif_carrier_off()
>> }
>>
>> And also it needs to schedule a work, otherwise the link status is
>> out of sync.
>
> The rndis_filter_query_device_link_status() makes the query and waits
> for the completion message, including setting the state, before
> returning. The rndis_filter_query_device_link_status() is called from
> rndis_filter_device_add(), which is called from either
> netvsc_change_mtu() or netvsc_probe(). The change_mtu() and
> netvsc_link_change() are synchronized by rtnl_lock(). In
> netvsc_probe(), the status query completes before register_netdev(),
> and netvsc_linkstatus_callback() schedules the work only after the
> netdevice is registered. So there is no race in either case. The
> carrier on/off work will be scheduled when netvsc_open() is called.
> Then the status will be updated based on the latest link_state.
>
> Thanks,
> - Haiyang

I see. Then if the link status is changing during mtu changing in the
guest, it looks like we may miss a link status update?
Re: [PATCH net-next] hyperv: Move state setting for link query
On 03/04/2014 07:54 AM, Haiyang Zhang wrote:
> It moves the state setting for query into
> rndis_filter_receive_response(). All callbacks including query-complete
> and status-callback are synchronized by channel->inbound_lock. This
> prevents a potential race between them.

This still looks racy to me. The problem is that the workqueue is not
synchronized with those here. Consider the following case in
netvsc_link_change():

if (rdev->link_state) {
        ... receive interrupt ...
        rndis_filter_receive_response() which changes rdev->link_state
        ...
        netif_carrier_off()
}

And also it needs to schedule a work, otherwise the link status is out
of sync.

> Signed-off-by: Haiyang Zhang <haiya...@microsoft.com>
> ---
>  drivers/net/hyperv/rndis_filter.c |   21 ++++++++++++++++++++-
>  1 files changed, 20 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
> index f0cc8ef..6a9f602 100644
> --- a/drivers/net/hyperv/rndis_filter.c
> +++ b/drivers/net/hyperv/rndis_filter.c
> @@ -240,6 +240,22 @@ static int rndis_filter_send_request(struct rndis_device *dev,
>  	return ret;
>  }
>
> +static void rndis_set_link_state(struct rndis_device *rdev,
> +				 struct rndis_request *request)
> +{
> +	u32 link_status;
> +	struct rndis_query_complete *query_complete;
> +
> +	query_complete = &request->response_msg.msg.query_complete;
> +
> +	if (query_complete->status == RNDIS_STATUS_SUCCESS &&
> +	    query_complete->info_buflen == sizeof(u32)) {
> +		memcpy(&link_status, (void *)((unsigned long)query_complete +
> +		       query_complete->info_buf_offset), sizeof(u32));
> +		rdev->link_state = link_status != 0;
> +	}
> +}
> +
>  static void rndis_filter_receive_response(struct rndis_device *dev,
>  					  struct rndis_message *resp)
>  {
> @@ -269,6 +285,10 @@ static void rndis_filter_receive_response(struct rndis_device *dev,
>  		    sizeof(struct rndis_message) + RNDIS_EXT_LEN) {
>  			memcpy(&request->response_msg, resp, resp->msg_len);
> +			if (request->request_msg.ndis_msg_type ==
> +			    RNDIS_MSG_QUERY && request->request_msg.msg.
> +			    query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS)
> +				rndis_set_link_state(dev, request);
>  		} else {
>  			netdev_err(ndev, "rndis response buffer overflow\n");
> @@ -617,7 +637,6 @@ static int rndis_filter_query_device_link_status(struct rndis_device *dev)
>  	ret = rndis_filter_query_device(dev,
>  				      RNDIS_OID_GEN_MEDIA_CONNECT_STATUS,
>  				      &link_status, &size);
> -	dev->link_state = (link_status != 0) ? true : false;
>
>  	return ret;
>  }
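[Editor's note] The core of the quoted patch is pulling a u32 connect status out of the query-completion's info buffer and caching it as a boolean. That buffer handling can be modeled in plain C as below; the struct layout, field names, and the set_link_state() helper are simplified assumptions for illustration, not the real RNDIS message definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RNDIS_STATUS_SUCCESS 0u	/* assumed value for this sketch */

/* Simplified query-complete message: the payload lives at
 * info_buf_offset bytes from the start of the struct. */
struct query_complete {
	uint32_t status;
	uint32_t info_buflen;
	uint32_t info_buf_offset;
	uint8_t  buf[32];	/* stand-in for the variable info buffer */
};

struct rndis_device {
	bool link_state;	/* true = media disconnected */
};

/* Mirrors what the patch's rndis_set_link_state() does: only trust the
 * buffer when the query succeeded and the payload is exactly one u32,
 * then cache nonzero status as "link down". */
void set_link_state(struct rndis_device *rdev,
		    const struct query_complete *qc)
{
	uint32_t link_status;

	if (qc->status == RNDIS_STATUS_SUCCESS &&
	    qc->info_buflen == sizeof(uint32_t)) {
		memcpy(&link_status,
		       (const uint8_t *)qc + qc->info_buf_offset,
		       sizeof(uint32_t));
		rdev->link_state = link_status != 0;
	}
}
```

Because this runs from the response path, the cached state is updated under the same serialization as the other channel callbacks, which is the point of moving the assignment out of rndis_filter_query_device_link_status().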