> -----Original Message-----
> From: Jason Wang [mailto:jasow...@redhat.com]
> Sent: Monday, March 3, 2014 10:10 PM
> To: Haiyang Zhang; da...@davemloft.net; net...@vger.kernel.org
> Cc: KY Srinivasan; o...@aepfle.de; linux-kernel@vger.kernel.org; driverdev-
> de...@linuxdriverproject.org
> Subject: Re: [PATCH net-next] hyperv: Move state setting for link query
> 
> On 03/04/2014 07:54 AM, Haiyang Zhang wrote:
> > It moves the state setting for the query into rndis_filter_receive_response().
> > All callbacks, including query-complete and status-callback, are
> > synchronized by channel->inbound_lock. This prevents a potential race
> > between them.
> 
> This still looks racy to me. The problem is that the workqueue is not
> synchronized with those callbacks.
> 
> Consider the following case in netvsc_link_change():
> 
>     if (rdev->link_state) {
>     ... receive interrupt ...
>     rndis_filter_receive_response() which changes rdev->link_state
>     ...
>         netif_carrier_off()
>     }
> 
> Also, it needs to schedule a work item; otherwise the link status goes out of sync.

rndis_filter_query_device_link_status() issues the query and waits for the
completion message, which includes setting the state, before returning.
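
To illustrate, here is a minimal sketch of that issue-and-wait pattern. It
is not the actual driver code; the request struct and the send_rndis_query()
helper are hypothetical stand-ins:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical condensed form of the flow described above. */
struct rndis_request_sketch {
	struct completion wait_event;	/* signaled by the response path */
	u32 link_state;			/* filled in from the response */
};

/* Hypothetical stand-in for posting the query over the VMBus channel. */
static int send_rndis_query(struct rndis_request_sketch *req);

static int query_link_status_sketch(struct rndis_request_sketch *req)
{
	int ret;

	init_completion(&req->wait_event);

	ret = send_rndis_query(req);
	if (ret)
		return ret;

	/*
	 * Block here: rndis_filter_receive_response() runs in the
	 * channel callback, records the new link state, and then
	 * completes wait_event, so the state is already set when
	 * this function returns to the caller.
	 */
	wait_for_completion(&req->wait_event);
	return 0;
}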

rndis_filter_query_device_link_status() is called from
rndis_filter_device_add(), which in turn is called from either
netvsc_change_mtu() or netvsc_probe().

netvsc_change_mtu() and netvsc_link_change() are synchronized by rtnl_lock().
In netvsc_probe(), the status query and its completion happen before
register_netdev(), and netvsc_linkstatus_callback() schedules the work only
after the netdevice is registered. So there is no race in either case.
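
To make that ordering concrete, a rough sketch of the probe path
(illustrative shape only; the real netvsc_probe() and
rndis_filter_device_add() signatures differ):

#include <linux/netdevice.h>

/* Declarations standing in for the real driver entry points. */
struct hv_device;
int rndis_filter_device_add(struct hv_device *dev, void *additional_info);

static int netvsc_probe_sketch(struct hv_device *dev,
			       struct net_device *net,
			       void *device_info)
{
	int ret;

	/*
	 * rndis_filter_device_add() runs the link-status query and
	 * blocks until the completion message has recorded
	 * rdev->link_state, so the state is settled at this point.
	 */
	ret = rndis_filter_device_add(dev, device_info);
	if (ret)
		return ret;

	/*
	 * The netdevice becomes visible only now, so
	 * netvsc_linkstatus_callback() cannot schedule link-change
	 * work against an unregistered device.
	 */
	return register_netdev(net);
}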

The carrier on/off work is scheduled when netvsc_open() is called; it then
updates the status based on the latest link_state.
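
For example, the open path could look roughly like this (a hypothetical
shape; the context struct and its dwork member are assumptions, not the
driver's actual layout):

#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Hypothetical per-device context holding the link-change work item. */
struct netvsc_ctx_sketch {
	struct delayed_work dwork;
};

static int netvsc_open_sketch(struct net_device *net)
{
	struct netvsc_ctx_sketch *ctx = netdev_priv(net);

	netif_tx_start_all_queues(net);

	/*
	 * Kick the link-change work immediately so the carrier state
	 * is resynchronized with the latest link_state.
	 */
	schedule_delayed_work(&ctx->dwork, 0);
	return 0;
}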

Thanks,
- Haiyang
