Re: [net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info

2012-12-04 Thread Jason Wang
On Tuesday, December 04, 2012 02:13:11 PM Rusty Russell wrote:
> Jason Wang <jasow...@redhat.com> writes:
> > On Monday, December 03, 2012 12:25:42 PM Rusty Russell wrote:
> >> > +
> >> > +/* Work struct for refilling if we run low on memory. */
> >> > +struct delayed_work refill;
> >> 
> >> I can't really see the justification for a refill per queue.  Just have
> >> one work iterate all the queues if it happens, unless it happens often
> >> (in which case, we need to look harder at this anyway).
> > 
> > But during this kind of iteration, we may need to enable/disable NAPI
> > regardless of whether the receive queue has much to refill. This may
> > add extra latency.
> 
> Sure, but does it actually happen?  We only use the work when we run out
> of memory.  If this happens in normal behaviour we need to change
> something else...

True, I will change it to use a single global one.
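
For reference, a minimal sketch of that global variant, one delayed_work in
virtnet_info walking every receive queue, could look like the following. The
per-queue array (vi->rq[]), the queue-pair count and the try_fill_recv() /
virtnet_napi_enable() helpers are assumptions based on this series, not code
from it:

/* Hypothetical sketch of a single global refill work: one delayed_work
 * in virtnet_info walks every receive queue instead of one work per queue.
 */
static void refill_work(struct work_struct *work)
{
        struct virtnet_info *vi =
                container_of(work, struct virtnet_info, refill.work);
        int i;

        for (i = 0; i < vi->max_queue_pairs; i++) {
                struct receive_queue *rq = &vi->rq[i];
                bool still_empty;

                /* Stop polling this queue while it is being refilled. */
                napi_disable(&rq->napi);
                still_empty = !try_fill_recv(rq, GFP_KERNEL);
                virtnet_napi_enable(rq);

                /* Memory is still tight: retry the whole scan later. */
                if (still_empty)
                        schedule_delayed_work(&vi->refill, HZ / 2);
        }
}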

Thanks
> 
> Thanks,
> Rusty.


Re: [net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info

2012-12-04 Thread Jason Wang
On Monday, December 03, 2012 01:18:48 PM Michael S. Tsirkin wrote:
> On Mon, Dec 03, 2012 at 01:15:01PM +0800, Jason Wang wrote:
> > > > +
> > > > + /* Work struct for refilling if we run low on memory. */
> > > > + struct delayed_work refill;
> > >
> > > I can't really see the justification for a refill per queue. Just have
> > > one work iterate all the queues if it happens, unless it happens often
> > > (in which case, we need to look harder at this anyway).
> >
> > But during this kind of iteration, we may need to enable/disable NAPI
> > regardless of whether the receive queue has much to refill. This may
> > add extra latency.
> 
> We are running from the timer, so latency is not a concern I think.

Maybe; anyway, it's only called when available memory is low, so it should not
be an issue.
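
For context, the work is only queued when an atomic allocation fails in the
receive path, so it runs strictly under memory pressure. Simplified from the
existing single-queue poll handler; exact field names are approximate:

        /* In the NAPI poll handler: try to top the ring back up from
         * softirq context, and only defer to the workqueue when the
         * GFP_ATOMIC allocation fails, i.e. when memory is low.
         */
        if (vi->num < vi->max / 2) {
                if (!try_fill_recv(vi, GFP_ATOMIC))
                        schedule_delayed_work(&vi->refill, 0);
        }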


Re: [net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info

2012-12-03 Thread Rusty Russell
Jason Wang <jasow...@redhat.com> writes:
> On Monday, December 03, 2012 12:25:42 PM Rusty Russell wrote:
>> > +
>> > +  /* Work struct for refilling if we run low on memory. */
>> > +  struct delayed_work refill;
>> 
>> I can't really see the justification for a refill per queue.  Just have
>> one work iterate all the queues if it happens, unless it happens often
>> (in which case, we need to look harder at this anyway).
>
> But during this kind of iteration, we may need to enable/disable NAPI
> regardless of whether the receive queue has much to refill. This may add
> extra latency.

Sure, but does it actually happen?  We only use the work when we run out
of memory.  If this happens in normal behaviour we need to change
something else...

Thanks,
Rusty.


Re: [net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info

2012-12-03 Thread Michael S. Tsirkin
On Mon, Dec 03, 2012 at 01:15:01PM +0800, Jason Wang wrote:
> > > +
> > > + /* Work struct for refilling if we run low on memory. */
> > > + struct delayed_work refill;
> >
> > I can't really see the justification for a refill per queue. Just have
> > one work iterate all the queues if it happens, unless it happens often
> > (in which case, we need to look harder at this anyway).
>
> But during this kind of iteration, we may need to enable/disable NAPI
> regardless of whether the receive queue has much to refill. This may add
> extra latency.

We are running from the timer, so latency is not a concern I think.

-- 
MST


Re: [net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info

2012-12-02 Thread Rusty Russell
Jason Wang <jasow...@redhat.com> writes:
> To support multiqueue transmitq/receiveq, the first step is to separate the
> queue-related structures from virtnet_info. This patch introduces the
> send_queue and receive_queue structures and passes pointers to them as
> parameters to the functions handling sending/receiving.

OK, seems like a straightforward xform: a few nit-picks:

> +/* Internal representation of a receive virtqueue */
> +struct receive_queue {
> + /* Virtqueue associated with this receive_queue */
> + struct virtqueue *vq;
> +
> +struct napi_struct napi;
> +
> +/* Number of input buffers, and max we've ever had. */
> +unsigned int num, max;

Weird whitespace here.

> +
> + /* Work struct for refilling if we run low on memory. */
> + struct delayed_work refill;

I can't really see the justification for a refill per queue.  Just have
one work iterate all the queues if it happens, unless it happens often
(in which case, we need to look harder at this anyway).

>  struct virtnet_info {
>   struct virtio_device *vdev;
> - struct virtqueue *rvq, *svq, *cvq;
> + struct virtqueue *cvq;
>   struct net_device *dev;
>   struct napi_struct napi;

You leave napi here, and take it away in the next patch.  I think it's
supposed to go away now.
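
To make the nit concrete, with NAPI embedded in each receive_queue the
top-level field could already be dropped in this patch. An illustrative
layout, not the actual follow-up diff:

/* Illustrative only: the per-device napi field is gone, since each
 * receive_queue now carries its own.
 */
struct virtnet_info {
        struct virtio_device *vdev;
        struct virtqueue *cvq;
        struct net_device *dev;
        struct send_queue sq;
        struct receive_queue rq;        /* rq.napi replaces vi->napi */
        unsigned int status;
        /* ... remaining fields unchanged ... */
};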

Cheers,
Rusty.


[net-next rfc v7 1/3] virtio-net: separate fields of sending/receiving queue from virtnet_info

2012-11-27 Thread Jason Wang
To support multiqueue transmitq/receiveq, the first step is to separate the
queue-related structures from virtnet_info. This patch introduces the
send_queue and receive_queue structures and passes pointers to them as
parameters to the functions handling sending/receiving.

Signed-off-by: Krishna Kumar <krkum...@in.ibm.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
 drivers/net/virtio_net.c |  289 +-
 1 files changed, 158 insertions(+), 131 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 26c502e..7975133 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -51,16 +51,44 @@ struct virtnet_stats {
u64 rx_packets;
 };
 
+/* Internal representation of a send virtqueue */
+struct send_queue {
+   /* Virtqueue associated with this send _queue */
+   struct virtqueue *vq;
+
+   /* TX: fragments + linear part + virtio header */
+   struct scatterlist sg[MAX_SKB_FRAGS + 2];
+};
+
+/* Internal representation of a receive virtqueue */
+struct receive_queue {
+   /* Virtqueue associated with this receive_queue */
+   struct virtqueue *vq;
+
+struct napi_struct napi;
+
+/* Number of input buffers, and max we've ever had. */
+unsigned int num, max;
+
+   /* Work struct for refilling if we run low on memory. */
+   struct delayed_work refill;
+
+   /* Chain pages by the private ptr. */
+   struct page *pages;
+
+   /* RX: fragments + linear part + virtio header */
+   struct scatterlist sg[MAX_SKB_FRAGS + 2];
+};
+
 struct virtnet_info {
struct virtio_device *vdev;
-   struct virtqueue *rvq, *svq, *cvq;
+   struct virtqueue *cvq;
struct net_device *dev;
struct napi_struct napi;
+   struct send_queue sq;
+   struct receive_queue rq;
unsigned int status;
 
-   /* Number of input buffers, and max we've ever had. */
-   unsigned int num, max;
-
/* I like... big packets and I cannot lie! */
bool big_packets;
 
@@ -73,21 +101,11 @@ struct virtnet_info {
/* Active statistics */
struct virtnet_stats __percpu *stats;
 
-   /* Work struct for refilling if we run low on memory. */
-   struct delayed_work refill;
-
/* Work struct for config space updates */
struct work_struct config_work;
 
/* Lock for config space updates */
struct mutex config_lock;
-
-   /* Chain pages by the private ptr. */
-   struct page *pages;
-
-   /* fragments + linear part + virtio header */
-   struct scatterlist rx_sg[MAX_SKB_FRAGS + 2];
-   struct scatterlist tx_sg[MAX_SKB_FRAGS + 2];
 };
 
 struct skb_vnet_hdr {
@@ -117,22 +135,22 @@ static inline struct skb_vnet_hdr *skb_vnet_hdr(struct 
sk_buff *skb)
  * private is used to chain pages for big packets, put the whole
  * most recent used list in the beginning for reuse
  */
-static void give_pages(struct virtnet_info *vi, struct page *page)
+static void give_pages(struct receive_queue *rq, struct page *page)
 {
struct page *end;
 
-   /* Find end of list, sew whole thing into vi->pages. */
+   /* Find end of list, sew whole thing into vi->rq.pages. */
for (end = page; end->private; end = (struct page *)end->private);
-   end->private = (unsigned long)vi->pages;
-   vi->pages = page;
+   end->private = (unsigned long)rq->pages;
+   rq->pages = page;
 }
 
-static struct page *get_a_page(struct virtnet_info *vi, gfp_t gfp_mask)
+static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
 {
-   struct page *p = vi->pages;
+   struct page *p = rq->pages;
 
if (p) {
-   vi->pages = (struct page *)p->private;
+   rq->pages = (struct page *)p->private;
/* clear private here, it is used to chain pages */
p->private = 0;
} else
@@ -140,12 +158,12 @@ static struct page *get_a_page(struct virtnet_info *vi, 
gfp_t gfp_mask)
return p;
 }
 
-static void skb_xmit_done(struct virtqueue *svq)
+static void skb_xmit_done(struct virtqueue *vq)
 {
-   struct virtnet_info *vi = svq->vdev->priv;
+   struct virtnet_info *vi = vq->vdev->priv;
 
/* Suppress further interrupts. */
-   virtqueue_disable_cb(svq);
+   virtqueue_disable_cb(vq);
 
/* We were probably waiting for more output buffers. */
netif_wake_queue(vi->dev);
@@ -167,9 +185,10 @@ static void set_skb_frag(struct sk_buff *skb, struct page 
*page,
 }
 
 /* Called from bottom half context */
-static struct sk_buff *page_to_skb(struct virtnet_info *vi,
+static struct sk_buff *page_to_skb(struct receive_queue *rq,
   struct page *page, unsigned int len)
 {
+   struct virtnet_info *vi = rq->vq->vdev->priv;
struct sk_buff *skb;
struct skb_vnet_hdr *hdr;
unsigned int copy, hdr_len, offset;
@@ -224,12 +243,12 @@ static struct sk_buff *page_to_skb(struct 
