On Tue, 2007-26-06 at 13:57 -0700, David Miller wrote:
From: jamal [EMAIL PROTECTED]
Date: Tue, 26 Jun 2007 09:27:28 -0400
Back to the question: Do you recall how this number was arrived at?
128 packets will be sent out at GigE in about 80 microsecs, so from a
feel-the-wind-direction
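As a rough sanity check on that figure, it appears to assume minimum-size frames on the wire; the arithmetic below is an illustrative standalone sketch, not from the original mail:

/* Rough serialization-time check for "128 packets in ~80 usec at GigE".
 * Assumes minimum-size (64-byte) frames, i.e. ~84 bytes on the wire once
 * preamble and inter-frame gap are added. */
#include <stdio.h>

int main(void)
{
        const double gige_bytes_per_usec = 125.0;   /* 1 Gbit/s = 125 bytes/usec */
        const double wire_bytes = 84.0;             /* 64B frame + ~20B overhead */
        int pkts = 128;

        printf("%d min-size frames ~ %.0f usec on GigE\n",
               pkts, pkts * wire_bytes / gige_bytes_per_usec);
        return 0;
}

That works out to roughly 86 microseconds, in line with the ~80 microsecond figure quoted; full-size frames would of course take considerably longer.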
From: jamal [EMAIL PROTECTED]
Date: Wed, 27 Jun 2007 18:32:45 -0400
On Tue, 2007-26-06 at 13:57 -0700, David Miller wrote:
From: jamal [EMAIL PROTECTED]
Date: Tue, 26 Jun 2007 09:27:28 -0400
Back to the question: Do you recall how this number was arrived at?
128 packets will be
On Wed, 2007-27-06 at 15:54 -0700, David Miller wrote:
The thing that's really important is that the value is not so
large such that the TX ring can become empty.
In the case of batching, varying the values makes a difference.
The logic is that if you can tune it so that the driver takes
From: jamal [EMAIL PROTECTED]
Date: Wed, 27 Jun 2007 20:15:47 -0400
On Wed, 2007-27-06 at 15:54 -0700, David Miller wrote:
The thing that's really important is that the value is not so
large such that the TX ring can become empty.
In the case of batching, varying the values makes a
On Fri, 2007-22-06 at 09:26 +0800, Zhu Yi wrote:
On Thu, 2007-06-21 at 11:39 -0400, jamal wrote:
It sounds stupid that I'm still trying to convince you why we need multiqueue
support in Qdisc when everybody else is already working on the code,
If you go back historically (maybe 2 years ago on
From: jamal [EMAIL PROTECTED]
Date: Mon, 25 Jun 2007 12:47:31 -0400
On Fri, 2007-22-06 at 09:26 +0800, Zhu Yi wrote:
We don't have THL and THH in our driver. They are what you suggested.
The queue wakeup number is 1/4 of the ring size.
So how did you pick 1/4? Experimentation? If you look
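For illustration, a wakeup check built around that kind of threshold might look like this; a minimal standalone sketch where the structure and the 1/4 factor are just the numbers quoted above, not any particular driver's code:

/* Sketch of a tx-clean wakeup check using a "1/4 of ring size" threshold.
 * Not taken from any driver; the struct is invented for illustration. */
struct tx_ring {
        unsigned int size;      /* total descriptors */
        unsigned int used;      /* descriptors still owned by hardware */
        int stopped;            /* did we stop the netdev queue? */
};

/* Called after reclaiming completed descriptors on a tx interrupt. */
static void maybe_wake(struct tx_ring *r)
{
        unsigned int free = r->size - r->used;

        if (r->stopped && free >= r->size / 4) {
                r->stopped = 0;
                /* netif_wake_queue(dev) would go here in a real driver */
        }
}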
I gave you two opportunities to bail out of this discussion, i am gonna
take it that your rejection of that offer implies you, my friend, want to
get to the bottom of this, i.e. you are on a mission to find the truth.
So lets continue this.
On Wed, 2007-20-06 at 13:51 +0800, Zhu Yi wrote:
No,
On Thu, 2007-06-21 at 11:39 -0400, jamal wrote:
I gave you two opportunities to bail out of this discussion, i am gonna
take it that your rejection of that offer implies you, my friend, want to
get to the bottom of this, i.e. you are on a mission to find the truth.
So lets continue this.
It sounds
On Tue, 2007-19-06 at 10:12 +0800, Zhu Yi wrote:
Mine was much simpler. We don't need to
consider the wireless dynamic priority change case at this time. Just
tell me what you expect the driver to do (stop|start queue) when the
hardware PHL is full but PHH is empty?
I already responded to
Hello Yi,
On Mon, 2007-18-06 at 09:18 +0800, Zhu Yi wrote:
Would you respond to the question I asked earlier,
I thought i did respond to all questions you asked but some may have
been lost in the noise.
in your model how to
define the queue wakeup strategy in the driver to deal with the PHL
On Mon, 2007-06-18 at 11:16 -0400, jamal wrote:
in your model how to
define the queue wakeup strategy in the driver to deal with the PHL full
situation? Consider that 1) both high prio and low prio packets could
come (you cannot predict it beforehand)
I am assuming by come you mean
On Fri, 2007-06-15 at 06:49 -0400, jamal wrote:
Hello Yi,
On Fri, 2007-15-06 at 09:27 +0800, Zhu Yi wrote:
1. the driver becomes complicated (as the queue wakeup strategy design
is too elaborate)
I am not sure i see the complexity in the wireless driver's wakeup
strategy. I just
Hello Yi,
On Fri, 2007-15-06 at 09:27 +0800, Zhu Yi wrote:
1. the driver becomes complicated (as the queue wakeup strategy design
is too elaborate)
I am not sure i see the complexity in the wireless driver's wakeup
strategy. I just gave some suggestions to use management frames - they
Hi Yi,
On Thu, 2007-14-06 at 10:44 +0800, Zhu Yi wrote:
On Wed, 2007-06-13 at 08:32 -0400, jamal wrote:
The key argument i make (from day one actually) is to leave the
majority of the work to the driver.
But it seems it is not feasible for the Qdisc to know nothing about the
hardware rings.
On Thu, 2007-06-14 at 07:48 -0400, jamal wrote:
I dont have much time to follow up for some time to come. I have left my
answer above. To clarify, in case i wasnt clear, I am saying:
a) It is better to have the driver change via some strategy of when to
open the tx path than trying to be generic.
Zhu Yi wrote:
On Tue, 2007-06-12 at 23:17 +0200, Patrick McHardy wrote:
I've hacked up a
small multiqueue simulator device and to my big surprise my testing
showed that Jamal's suggestion of using a single queue state seems to
work better than I expected. But I've been doing mostly testing of
On Wed, 2007-13-06 at 13:56 +0800, Zhu Yi wrote:
The key argument for Jamal's solution is the NIC will send out 32
packets in the full PHL in a reasonably short time (a few microsecs per
Jamal's calculation). But for wireless, the PHL hardware has a low
probability of seizing the wireless medium
jamal writes:
The key argument i make (from day one actually) is to leave the
majority of the work to the driver.
My view of wireless WMM etc is it is a different media behavior
(compared to wired ethernet) which means a different view of strategy
for when it opens the valve to allow
Wow - Robert in the house, I cant resist i have to say something before
i run out;-
On Wed, 2007-13-06 at 15:12 +0200, Robert Olsson wrote:
Haven't got all details. IMO we need to support some bonding-like
scenario too. Where one CPU is feeding just one TX-ring. (and TX-buffers
cleared
-Original Message-
From: J Hadi Salim [mailto:[EMAIL PROTECTED] On Behalf Of jamal
For the Leonid-NIC (for lack of a better name) it may be harder to do
parallelization on rcv if you use what i said above. But you could
use a different model on receive - such as create a single
jamal writes:
I think the one described by Leonid has not just 8 tx/rx rings but also
a separate register set, MSI binding etc iirc. The only shared resources
as far as i understood Leonid are the bus and the ethernet wire.
AFAIK most new NIC will look like this...
I still lack a
I'm starting to wonder how a multi-queue NIC differs from a bunch of
bonded single-queue NICs, and if there is leverage opportunity there.
rick jones
From: jamal [EMAIL PROTECTED]
Date: Wed, 13 Jun 2007 09:33:22 -0400
So in such a case (assuming 8 rings), One model is creating 4 netdev
devices each based on single tx/rx ring and register set and then
having a mother netdev (what you call the bond) that feeds these
children netdev based on
From: jamal [EMAIL PROTECTED]
Date: Wed, 13 Jun 2007 09:33:22 -0400
So in such a case (assuming 8 rings), One model is creating 4 netdev
devices each based on single tx/rx ring and register set and then
having a mother netdev (what you call the bond) that feeds these
children
PJ Waskiewicz wrote:
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index f28bb2d..b9dc2a6 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -123,7 +123,8 @@ static inline int qdisc_restart(struct net_device *dev)
/* And
On Wed, 2007-13-06 at 11:20 -0700, David Miller wrote:
From: jamal [EMAIL PROTECTED]
Date: Wed, 13 Jun 2007 09:33:22 -0400
So in such a case (assuming 8 rings), One model is creating 4 netdev
devices each based on single tx/rx ring and register set and then
having a mother netdev (what
On Wed, 2007-06-13 at 13:34 +0200, Patrick McHardy wrote:
The key argument for Jamal's solution is the NIC will send out 32
packets in the full PHL in a reasonably short time (a few microsecs per
Jamal's calculation). But for wireless, the PHL hardware has a low
probability of seizing the
On Wed, 2007-06-13 at 08:32 -0400, jamal wrote:
The key argument i make (from day one actually) is to leave the
majority of the work to the driver.
But it seems it is not feasible for the Qdisc to know nothing about the
hardware rings.
My view of wireless WMM etc is it is a different media
On Mon, 2007-06-11 at 08:23 -0400, jamal wrote:
On Mon, 2007-11-06 at 13:58 +0200, Patrick McHardy wrote:
Thats not true. Assume PSL has lots of packets, PSH is empty. We
fill the PHL queue until there is no room left, so the driver
has to stop the queue.
Sure. Packets stashed on the
On Tue, 2007-12-06 at 11:19 +0200, Johannes Berg wrote:
On Mon, 2007-06-11 at 08:23 -0400, jamal wrote:
Sure. Packets stashed on any DMA ring are considered gone to the
wire. That is a very valid assumption to make.
Not at all! Packets could be on the DMA queue forever if you're
jamal wrote:
the qdisc has a chance to hand out either a packet
of the same priority or higher priority, but at the cost of
at worst (n - 1) * m unnecessary dequeues+requeues in case
there is only a packet of lowest priority and we need to
fully serve all higher priority HW queues before
Hi Jamal,
Here is a simple scenario (nothing here is a rare or extreme case):
- Busy wireless environment
- FTP TX on BE queue (low priority)
- Skype TX on VO queue (high priority)
The channel is busy with high priority packets hence the BE packets are
transmitted to the air rarely so the DMA/HW
On Tue, 2007-12-06 at 15:21 +0200, Patrick McHardy wrote:
jamal wrote:
Yes. Using a higher threshold reduces the overhead, but leads to
lower priority packets getting out even if higher priority packets
are present in the qdisc.
As per earlier discussion, the packets already given to
Guy,
I apologize for not responding immediately - i promise to in a few hours
when i get back (and read it over some good coffee) - seems like you
have some good stuff there; thanks for taking the time despite the
overload.
cheers,
jamal
On Tue, 2007-12-06 at 17:04 +0300, Cohen, Guy wrote:
Hi
From: Patrick McHardy [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 15:21:54 +0200
So how do we move forward?
We're going to put hw multiqueue support in, all of this discussion
has been pointless, I just watch this thread and basically laugh at
the resistance to hw multiqueue support :-)
If hardware w/ multiple queues will have the capability for different MAC
addresses, different RX filters, etc., does it make sense to add that
below the net_device level?
We will have to add all the configuration machinery at the per-queue
level that already exists at the per-netdev level.
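To make that concrete, the sort of per-queue state that would need its own configuration path might look like this; purely illustrative C, not a proposed kernel structure:

/* Illustrative only: per-queue attributes that today exist once per netdev. */
struct rxq_filter {
        unsigned char   mac[6];         /* MAC address steered to this queue */
        unsigned short  vlan_id;        /* optional VLAN match */
};

struct hw_queue_conf {
        unsigned int       index;       /* hardware queue number */
        unsigned int       ring_len;    /* descriptors in this ring */
        struct rxq_filter  filter;      /* rx steering rule */
        unsigned long long tx_packets;  /* per-queue stats */
        unsigned long long rx_packets;
};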
Jeff Garzik wrote:
If hardware w/ multiple queues will have the capability for different MAC
addresses, different RX filters, etc., does it make sense to add that
below the net_device level?
We will have to add all the configuration machinery at the per-queue
level that already exists at the
From: Ben Greear [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 14:17:44 -0700
Jeff Garzik wrote:
If hardware w/ multiple queues will have the capability for different MAC
addresses, different RX filters, etc., does it make sense to add that
below the net_device level?
We will have to add
David Miller wrote:
From: Ben Greear [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 14:17:44 -0700
Jeff Garzik wrote:
If hardware w/ multiple queues will have the capability for different MAC
addresses, different RX filters, etc., does it make sense to add that
below the net_device level?
We will have
The MAC is still very much centralized in most designs.
So one way they'll do it is to support assigning N MAC addresses,
and you configure the input filters of the chip to push packets
for each MAC to the proper receive queue.
So the MAC will accept any of those in the N MAC
From: Jeff Garzik [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 17:46:20 -0400
Not quite... You'll have to deal with multiple Rx filters, not just the
current one-filter-for-all model present in today's NICs. Pools of
queues will have separate configured characteristics. The steer
portion
From: Ben Greear [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 14:46:50 -0700
And, since the mac-vlan can work as pure software on top of any NIC that
can go promisc and send with arbitrary source MAC, it will already work
with virtually all wired ethernet devices currently in existence.
From: Jason Lunz [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 17:47:53 -0400
Are you aware of any hardware designs that allow other ways to map
packets onto rx queues? I can think of several scenarios where it could
be advantageous to map packets by IP 3- or 5-tuple to get cpu locality
all the
Roland Dreier wrote:
The MAC is still very much centralized in most designs.
So one way they'll do it is to support assigning N MAC addresses,
and you configure the input filters of the chip to push packets
for each MAC to the proper receive queue.
So the MAC will accept any of
From: Roland Dreier [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 14:52:11 -0700
I think you're misunderstanding. These NICs still have only one
physical port, so sending or receiving real packets onto a physical
wire is fundamentally serialized. The steering of packets to receive
queues is done
From: Jeff Garzik [EMAIL PROTECTED]
Date: Tue, 12 Jun 2007 17:59:43 -0400
And where shall we put the configuration machinery, to support sub-queues?
Shall we duplicate the existing configuration code for sub-queues?
What will ifconfig/ip usage look like?
How will it differ from configuring
On Tue, Jun 12, 2007 at 02:55:34PM -0700, David Miller wrote:
These chips allow this too, Microsoft defined a standard for
RX queue interrupt hashing by flow so everyone puts it, or
something like it, in hardware.
I think you're referring to RSS?
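For reference, that is the RSS model: the NIC hashes a 3- or 5-tuple of the packet and uses the hash to pick an rx queue (and hence an interrupt/CPU). A toy sketch of the idea follows; it is nothing like the keyed Toeplitz hash the real spec uses, and the hash mix here is invented for illustration:

/* Toy flow-to-queue mapping in the spirit of RSS. */
#include <stdint.h>

struct flow5 {
        uint32_t saddr, daddr;
        uint16_t sport, dport;
        uint8_t  proto;
};

static uint32_t toy_hash(const struct flow5 *f)
{
        uint32_t h = 2166136261u;               /* FNV-1a style mix */

        h = (h ^ f->saddr) * 16777619u;
        h = (h ^ f->daddr) * 16777619u;
        h = (h ^ f->sport) * 16777619u;
        h = (h ^ f->dport) * 16777619u;
        h = (h ^ f->proto) * 16777619u;
        return h;
}

static unsigned int pick_rx_queue(const struct flow5 *f, unsigned int nqueues)
{
        return toy_hash(f) % nqueues;   /* same flow always lands on same queue */
}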
David Miller wrote:
If you're asking about the virtualization scenerio, the
control node (dom0 or whatever) is the only entity which
can get at programming the filters and will set it up
properly based upon which parts of the physical device
are being exported to which guest nodes.
You're
Ben Greear wrote:
That sounds plausible for many uses, but it may also be useful to have
the virtual devices. Having 802.1Q VLANs be 'real' devices has worked out
quite well, so I think there is a place for a 'mac-vlan' as well.
Virtual devices are pretty much the only solution we have right
On Tue, Jun 12, 2007 at 02:26:58PM -0700, David Miller wrote:
The MAC is still very much centralized in most designs.
So one way they'll do it is to support assigning N MAC addresses,
and you configure the input filters of the chip to push packets
for each MAC to the proper receive queue.
Jeff Garzik wrote:
Ben Greear wrote:
That sounds plausible for many uses, but it may also be useful to have
the virtual devices. Having 802.1Q VLANs be 'real' devices has worked out
quite well, so I think there is a place for a 'mac-vlan' as well.
Virtual devices are pretty much the only
Hi Guy,
On Tue, 2007-12-06 at 17:04 +0300, Cohen, Guy wrote:
Hi Jamal,
Here is a simple scenario (nothing here is a rare or extreme case):
- Busy wireless environment
- FTP TX on BE queue (low priority)
- Skype TX on VO queue (high priority)
The channel is busy with high priority packets
Subject: Re: [PATCH] NET: Multiqueue network device support.
On Tue, Jun 12, 2007 at 02:26:58PM -0700, David Miller wrote:
The MAC is still very much centralized in most designs.
So one way they'll do it is to support assigning N MAC addresses,
and you configure the input
On Tue, 2007-06-12 at 23:17 +0200, Patrick McHardy wrote:
I've hacked up a
small multiqueue simulator device and to my big surprise my testing
showed that Jamal's suggestion of using a single queue state seems to
work better than I expected. But I've been doing mostly testing of
the device
jamal wrote:
On Wed, 2007-06-06 at 17:11 +0200, Patrick McHardy wrote:
[...]
The problem is the premise is _inaccurate_.
Since you havent followed the discussion, i will try to be brief (which
is hard).
If you want verbosity it is in my previous emails:
Consider a simple example of
jamal wrote:
On Wed, 2007-06-06 at 15:35 -0700, David Miller wrote:
The problem with this line of thinking is that it ignores the fact
that it is bad to not queue to the device when there is space
available, _even_ for lower priority packets.
So use a different scheduler. Dont use strict
Waskiewicz Jr, Peter P wrote:
If they have multiple TX queues, independently programmable, that
single lock is stupid.
We could use per-queue TX locks for such hardware, but we can't
support that currently.
There could be bad packet reordering with this (like some SMP
routers used to do).
On Mon, 2007-11-06 at 13:58 +0200, Patrick McHardy wrote:
Thats not true. Assume PSL has lots of packets, PSH is empty. We
fill the PHL queue until there is no room left, so the driver
has to stop the queue.
Sure. Packets stashed on any DMA ring are considered gone to the
wire. That is a
jamal wrote:
On Mon, 2007-11-06 at 13:58 +0200, Patrick McHardy wrote:
Thats not true. Assume PSL has lots of packets, PSH is empty. We
fill the PHL queue until there is no room left, so the driver
has to stop the queue.
Sure. Packets stashed on any DMA ring are considered gone to
On Mon, 2007-11-06 at 14:39 +0200, Patrick McHardy wrote:
jamal wrote:
On Mon, 2007-11-06 at 13:58 +0200, Patrick McHardy wrote:
Sure. Packets stashed on any DMA ring are considered gone to the
wire. That is a very valid assumption to make.
I disagree, its obviously not true
jamal wrote:
On Mon, 2007-11-06 at 14:39 +0200, Patrick McHardy wrote:
Sure. Packets stashed on any DMA ring are considered gone to the
wire. That is a very valid assumption to make.
I disagree, its obviously not true
Patrick, you are making too strong a statement.
Well, its not.
On Mon, 2007-11-06 at 15:03 +0200, Patrick McHardy wrote:
jamal wrote:
Well, its not.
I dont wanna go into those old style debates again; so lets drop this
point.
Take a step back:
When you put a packet on the DMA ring, are you ever going to take it
away at some point before it goes to
jamal wrote:
On Mon, 2007-11-06 at 15:03 +0200, Patrick McHardy wrote:
Take a step back:
When you put a packet on the DMA ring, are you ever going to take it
away at some point before it goes to the wire?
No, but its nevertheless not on the wire yet and the HW scheduler
controls when it will
Patrick McHardy wrote:
jamal wrote:
Sure - but what is wrong with that?
Nothing, this was just to illustrate why I disagree with the assumption
that the packet has hit the wire. On second thought I do agree with your
assumption for the single HW queue case, at the point we hand the
On Mon, 2007-11-06 at 16:03 +0200, Patrick McHardy wrote:
jamal wrote:
Sure - but what is wrong with that?
Nothing, this was just to illustrate why I disagree with the assumption
that the packet has hit the wire.
fair enough.
On second thought I do agree with your
assumption for the
Cohen, Guy wrote:
Patrick McHardy wrote:
jamal wrote:
Sure - but what is wrong with that?
Nothing, this was just to illustrate why I disagree with the assumption
that the packet has hit the wire. On second thought I do agree with your
assumption for the single HW queue case, at the
On Mon, 2007-11-06 at 17:30 +0300, Cohen, Guy wrote:
For WiFi devices the HW often implements the scheduling, especially when
QoS (WMM/11e/11n) is implemented. There are a few traffic queues defined
by the specs, and the selection of the next queue to transmit a packet
from is determined in
jamal wrote:
On Mon, 2007-11-06 at 16:03 +0200, Patrick McHardy wrote:
Read again what I wrote about the n > 2 case. Low priority queues might
starve high priority queues when using a single queue state for a
maximum of the time it takes to service n - 2 queues with max_qlen - 1
packets queued
On 6/11/07, jamal [EMAIL PROTECTED] wrote:
On Mon, 2007-11-06 at 17:30 +0300, Cohen, Guy wrote:
For WiFi devices the HW often implements the scheduling, especially when
QoS (WMM/11e/11n) is implemented. There are a few traffic queues defined
by the specs, and the selection of the next queue to
On Mon, 2007-11-06 at 16:49 +0200, Patrick McHardy wrote:
Let me explain with some ASCII art :)
Ok ;-
We have n empty HW queues with a maximum length of m packets per queue:
[0] empty
[1] empty
[2] empty
..
[n-1] empty
Assuming 0, i take it, is higher prio than n-1.
Now we receive
On Mon, 2007-11-06 at 18:00 +0300, Tomas Winkler wrote:
On 6/11/07, jamal [EMAIL PROTECTED] wrote:
On Mon, 2007-11-06 at 17:30 +0300, Cohen, Guy wrote:
For WiFi devices the HW often implements the scheduling, especially when
QoS (WMM/11e/11n) is implemented. There are a few traffic
jamal wrote:
On Mon, 2007-11-06 at 16:49 +0200, Patrick McHardy wrote:
We have n empty HW queues with a maximum length of m packets per queue:
[0] empty
[1] empty
[2] empty
..
[n-1] empty
Assuming 0, i take it, is higher prio than n-1.
Yes.
Now we receive m - 1 packets for each all
On Mon, 2007-11-06 at 17:12 +0200, Patrick McHardy wrote:
Ok, so let me revert that; 0 is higher prio than n-1.
Yes.
Ok, gotcha.
possibly long time is where we diverge ;-
Worst case is (n - 2) * (m - 1) + 1 full-sized packet transmission
times.
You can do the math yourself,
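Plugging in some invented but plausible numbers (say n = 4 WMM queues and m = 256 descriptors per ring, full-size frames at GigE) gives a feel for the magnitude:

/* Worst-case stall from the (n - 2) * (m - 1) + 1 formula above, with
 * example values that are invented, not from the thread. */
#include <stdio.h>

int main(void)
{
        int n = 4, m = 256;
        int pkts = (n - 2) * (m - 1) + 1;            /* = 511 */
        double usec_per_frame = 1538 * 8 / 1000.0;   /* ~12.3 usec at 1 Gbit/s */

        printf("%d packets, roughly %.1f ms at GigE\n",
               pkts, pkts * usec_per_frame / 1000.0);
        return 0;
}

That is on the order of 511 packets, or roughly 6 milliseconds of potential head-of-line blocking in this made-up configuration.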
Some more details inside regarding wireless QoS.
jamal wrote:
On Mon, 2007-11-06 at 17:30 +0300, Cohen, Guy wrote:
For WiFi devices the HW often implements the scheduling, especially when
QoS (WMM/11e/11n) is implemented. There are a few traffic queues defined
by the specs and the
jamal wrote:
On Mon, 2007-11-06 at 17:12 +0200, Patrick McHardy wrote:
Worst case is (n - 2) * (m - 1) + 1 full-sized packet transmission
times.
You can do the math yourself, but we're talking about potentially
a lot of packets.
I agree if you use the strategy of a ring shutdown down
PJ Waskiewicz wrote:
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index f28bb2d..b9dc2a6 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -123,7 +123,8 @@ static inline int qdisc_restart(struct net_device *dev)
/* And release queue
PJ Waskiewicz wrote:
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index e7367c7..8bcd870 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -215,6 +215,7 @@ typedef unsigned char *sk_buff_data_t;
* @pkt_type: Packet class
* @fclone: skbuff clone
PJ Waskiewicz wrote:
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index e7367c7..8bcd870 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -215,6 +215,7 @@ typedef unsigned char *sk_buff_data_t;
* @pkt_type: Packet class
* @fclone: skbuff clone
PJ Waskiewicz wrote:
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index f28bb2d..b9dc2a6 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -123,7 +123,8 @@ static inline int qdisc_restart(struct net_device *dev)
/* And
Waskiewicz Jr, Peter P wrote:
I think we can reuse skb->priority. Assuming only real
hardware devices use multiqueue support, there should be no user of
skb->priority after egress qdisc classification. The only reason
to preserve it in the qdisc layer is for software devices.
That would be
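The kind of reuse being discussed is essentially a prio-style band lookup; a minimal sketch of mapping skb->priority to a hardware queue follows, with table contents invented for illustration rather than the real prio2band defaults:

/* Sketch: derive a tx queue from skb->priority the way a prio-style
 * band table would. Table values are invented. */
#define NUM_TX_QUEUES 4

static const unsigned char prio2queue[16] = {
        1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1
};

static unsigned int queue_for_priority(unsigned int skb_priority)
{
        return prio2queue[skb_priority & 15] % NUM_TX_QUEUES;
}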
Waskiewicz Jr, Peter P wrote:
BTW, I couldn't find anything but a single
netif_wake_subqueue in your (old) e1000 patch. Why doesn't it
stop subqueues?
A previous e1000 patch stopped subqueues. The last e1000 patch I sent
to the list doesn't stop them, and that's a problem with that patch;
I think grepping will help more than testing :)
The only issue I can see is that packets going to a
multiqueue device that doesn't have a multiqueue aware qdisc
attached will get a random value. So you would have to
conditionally reset it before ->enqueue.
I currently clear queue_mapping
Waskiewicz Jr, Peter P wrote:
Another question is what to do about other hard_start_xmit callers.
Independant of which field is used, should the classification
that may have happend on a different device be retained (TC
actions again)?
[...] Either way, before it gets enqueued through
On Mon, 2007-11-06 at 17:44 +0200, Patrick McHardy wrote:
jamal wrote:
[..]
- let the driver shut down whenever a ring is full. Remember which ring X
shut it down.
- when you get a tx interrupt or prune tx descriptors, if a ring >= X has
transmitted a packet (or threshold of packets), then
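One possible reading of that strategy, as a standalone sketch (structures are invented, and it assumes ring 0 is the highest priority as in the ASCII-art example earlier in the thread):

/* Sketch: stop the device when a ring fills, remember which ring (X),
 * and wake once some ring >= X transmits again.  With strict-priority
 * hardware (ring 0 highest), a transmission from a ring numbered >= X
 * implies ring X has room again.  The "one packet is enough" threshold
 * is invented for illustration. */
#define NRINGS 4

struct mq_dev {
        int stopped;        /* single netdev queue state */
        int stop_ring;      /* the ring X that forced the stop */
};

static void on_ring_full(struct mq_dev *d, int ring)
{
        d->stopped = 1;
        d->stop_ring = ring;
}

/* Called from the tx interrupt / descriptor pruning path for 'ring'. */
static void on_tx_clean(struct mq_dev *d, int ring, unsigned int freed)
{
        if (d->stopped && ring >= d->stop_ring && freed > 0)
                d->stopped = 0;   /* netif_wake_queue() in a real driver */
}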
On Mon, 2007-11-06 at 18:34 +0300, Cohen, Guy wrote:
jamal wrote:
[..]
WMM is a strict prio mechanism.
The parametrization very much favors the high prio packets when the
tx opportunity to send shows up.
Sorry, but this is not as simple as you describe it. WMM is much more
complicated.
jamal wrote:
On Mon, 2007-11-06 at 17:44 +0200, Patrick McHardy wrote:
jamal wrote:
[..]
- let the driver shut down whenever a ring is full. Remember which ring X
shut it down.
- when you get a tx interrupt or prune tx descriptors, if a ring >= X has
transmitted a packet (or threshold of
jamal wrote:
On Mon, 2007-11-06 at 17:44 +0200, Patrick McHardy wrote:
At this point the qdisc might send new packets. What do you do when a
packet for a full ring arrives?
Hrm... ok, is this a trick question or am i missing the obvious? ;-
What is wrong with what any driver would do
Sorry - i was distracted elsewhere and didnt respond to your
earlier email; this one seems a superset.
On Tue, 2007-12-06 at 02:58 +0200, Patrick McHardy wrote:
jamal wrote:
On Mon, 2007-11-06 at 17:44 +0200, Patrick McHardy wrote:
[use case abbreviated..]
the use case is sensible.
the
Subject: RE: [PATCH] NET: Multiqueue network device support.
our definition of channel on linux so far is a netdev
(not a DMA ring). A netdev is the entity that can be bound to a CPU.
Link layer flow control terminates (and emanates) from the netdev.
I think we are saying
On Fri, Jun 08, 2007 at 09:12:52AM -0400, jamal wrote:
To mimic that behavior in LLTX, a driver needs to use the same lock on
both tx and receive. e1000 holds a different lock on tx path from rx
path. Maybe theres something clever i am missing; but it seems to be a
bug on e1000.
It's both
On Sat, 2007-09-06 at 21:08 +1000, Herbert Xu wrote:
It takes the tx_lock in the xmit routine as well as in the clean-up
routine. However, the lock is only taken when it updates the queue
status.
Thanks to the ring buffer structure the rest of the clean-up/xmit code
will run concurrently
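The pattern being described looks roughly like the following; this is a simplified sketch, not the actual e1000 code, and the names and thresholds are invented. The ring indices keep the xmit and clean-up paths on disjoint descriptors, so only the queue stop/wake decision needs the lock.

/* Simplified picture of an LLTX-style ring: xmit advances 'head',
 * clean-up advances 'tail', and they touch disjoint descriptors.
 * Only the queue stop/wake status is serialized. */
#include <pthread.h>

#define RING_SIZE 256

struct tx_ring {
        unsigned int head;      /* next descriptor to fill (xmit path) */
        unsigned int tail;      /* next descriptor to reclaim (clean path) */
        int stopped;
        pthread_mutex_t lock;   /* guards only the queue status */
};

static unsigned int ring_unused(const struct tx_ring *r)
{
        return RING_SIZE - (r->head - r->tail);
}

static void xmit_one(struct tx_ring *r)
{
        r->head++;                       /* fill descriptor head % RING_SIZE */

        if (ring_unused(r) < 2) {        /* nearly full: stop the queue */
                pthread_mutex_lock(&r->lock);
                r->stopped = 1;
                pthread_mutex_unlock(&r->lock);
        }
}

static void clean_tx_irq(struct tx_ring *r)
{
        r->tail = r->head;               /* pretend everything completed */

        pthread_mutex_lock(&r->lock);
        if (r->stopped && ring_unused(r) >= RING_SIZE / 4)
                r->stopped = 0;          /* wake the queue */
        pthread_mutex_unlock(&r->lock);
}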
Subject: RE: [PATCH] NET: Multiqueue network device support.
[Which of course leads to the complexity (and not optimizing
for the common - which is single ring NICs)].
The common case for 100 Mbit and older 1Gbit is single-ring NICs. Newer
PCI-X and PCIe NICs from 1Gbit to 10Gbit support multiple
On Sat, 2007-09-06 at 10:58 -0400, Leonid Grossman wrote:
IMHO, in addition to current Intel and Neterion NICs, some/most upcoming
NICs are likely to be multiqueue, since virtualization emerges as a
major driver for hw designs (there are other things of course that drive
hw, but these are
Subject: RE: [PATCH] NET: Multiqueue network device support.
On Sat, 2007-09-06 at 10:58 -0400, Leonid Grossman wrote:
IMHO, in addition to current Intel and Neterion NICs, some/most upcoming
NICs are likely to be multiqueue, since virtualization emerges as a
major driver
Leonid Grossman wrote:
But my point was that while virtualization capabilities of upcoming NICs
may be not even relevant to Linux, the multi-channel hw designs (a side
effect of virtualization push, if you will) will be there and a
non-virtualized stack can take advantage of them.
I'm looking
On Sat, 2007-09-06 at 17:23 -0400, Leonid Grossman wrote:
Not really. This is a very old presentation; you probably saw some newer
PR on Convergence Enhanced Ethernet, Congestion Free Ethernet etc.
Not been keeping up to date in that area.
These efforts are in very early stages and arguably
On Thu, Jun 07, 2007 at 09:35:36PM -0400, jamal wrote:
On Thu, 2007-07-06 at 17:31 -0700, Sridhar Samudrala wrote:
If the QDISC_RUNNING flag guarantees that only one CPU can call
dev->hard_start_xmit(), then why do we need to hold netif_tx_lock
for non-LLTX drivers?
I havent stared at
On Fri, 2007-08-06 at 20:39 +1000, Herbert Xu wrote:
It would guard against the poll routine which would acquire this lock
when cleaning the TX ring.
Ok, then i suppose we can conclude it is a bug on e1000 (holds tx_lock
on tx side and adapter queue lock on rx). Adding that lock will
certainly
On Fri, Jun 08, 2007 at 07:34:57AM -0400, jamal wrote:
On Fri, 2007-08-06 at 20:39 +1000, Herbert Xu wrote:
It would guard against the poll routine which would acquire this lock
when cleaning the TX ring.
Ok, then i suppose we can conclude it is a bug on e1000 (holds tx_lock
on tx side