On Fri, Jun 08, 2012 at 11:35:25AM +0800, Jason Wang wrote:
> >>>  @@ -655,7 +695,17 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, 
> >>> struct net_device *dev)
> >>>                   kfree_skb(skb);
> >>>                   return NETDEV_TX_OK;
> >>>           }
> >>>  -        virtqueue_kick(vi->svq);
> >>>  +
> >>>  +        kick = virtqueue_kick_prepare(vi->svq);
> >>>  +        if (unlikely(kick))
> >>>  +                virtqueue_notify(vi->svq);
> >>>  +
> >>>  +        u64_stats_update_begin(&stats->syncp);
> >>>  +        if (unlikely(kick))
> >>>  +                stats->data[VIRTNET_TX_KICKS]++;
> >>>  +        stats->data[VIRTNET_TX_Q_BYTES] += skb->len;
> >>>  +        stats->data[VIRTNET_TX_Q_PACKETS]++;
> >is this statistic interesting?
> >how about decrementing when we free?
> >this way we see how many are pending..
> >
> 
> Currently we don't have per-vq statistics, only per-cpu ones, so the skb
> could be sent by one vcpu and freed by another.
> Perhaps that's another reason to use per-queue statistics.

Just to stress these things do not need to contradict:
you can have per cpu stats for each queue.

-- 
MST
_______________________________________________
Virtualization mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
