On Aug 5, 2013, at 11:20 AM, Adrian Chadd wrote:
.. and I bet it's not a design pattern, and this is total conjecture on my
part:
* the original drivers weren't SMP safe;
* no one really sat down and figured out how to correctly synchronise
all of this stuff;
* people did the minimum
On Aug 7, 2013, at 10:16 PM, Warner Losh i...@bsdimp.com wrote:
.. and it's not just about saturating the port with traffic.
It's also about what happens if I shut down the MAC whilst I'm in the
process of programming in new RX/TX descriptors?
The ath(4) driver had a spectacular behaviour where, if you messed things
up the wrong way, it would quite happily DMA crap
Yup, it's an incredibly unsafe pattern. It also leads to the pattern where
auxiliary processing is handed off to a taskqueue, which then interleaves
the lock ownership with the ithread and produces out-of-order packet
reception.
Scott
On Aug 8, 2013, at 5:18 PM, Adrian Chadd adr...@freebsd.org
On Wed, Aug 7, 2013 at 5:26 AM, Mike Karels m...@karels.net wrote:
I'm replying to one of the last messages of this thread, but in part going
back to the beginning; then I'm following up on Andre's proposal.
Luigi wrote:
i am slightly unclear of what mechanisms we use to prevent races
On 07.08.2013 09:18, Luigi Rizzo wrote:
On Wed, Aug 7, 2013 at 5:26 AM, Mike Karels m...@karels.net
mailto:m...@karels.net wrote:
Jumping to (near) the end of the thread, I like most of Andre's proposal.
Running with minimal locks at this layer is an admirable goal, and I agree
with
On Aug 7, 2013, at 2:00 PM, Andre Oppermann an...@freebsd.org wrote:
On 7 August 2013 13:08, Scott Long scott4l...@yahoo.com wrote:
An even more relevant difference is that taskqueues have a much stronger
management API. Ithreads can only be scheduled by generating a hardware
interrupt,
can only be drained by calling bus_teardown_intr(), and cannot be
On 07.08.2013 22:48, Adrian Chadd wrote:
On Aug 6, 2013, at 9:43 AM, Andre Oppermann wrote:
The driver supplies a TX frame transmit function (mostly like if_transmit
today) which does all locking and multi-queue handling internally (driver
owned). This gives driver writers the freedom to better adjust to different
hardware
On 05.08.2013 23:53, Luigi Rizzo wrote:
On Mon, Aug 05, 2013 at 11:04:44PM +0200, Andre Oppermann wrote:
On 05.08.2013 19:36, Luigi Rizzo wrote:
...
[picking a post at random to reply in this thread]
tell whether or not we should bail out).
Ideally we don't want to have any locks in the
thanks for the explanations and for experimenting with the various
alternatives.
I started this thread just to understand whether
something was already in place, and to make sure that what I
do with netmap is not worse than the situation we have now.
I guess that while the best solution comes
On Aug 5, 2013, at 2:23 AM, Luigi Rizzo ri...@iet.unipi.it wrote:
i am slightly unclear of what mechanisms we use to prevent races
between interface being reconfigured (up/down/multicast setting, etc,
all causing reinitialization of the rx and tx rings) and
i) packets from the host stack
- Original Message -
i am slightly unclear of what mechanisms we use to prevent races
between interface being reconfigured (up/down/multicast setting, etc,
all causing reinitialization of the rx and tx rings) and
i) packets from the host stack being sent out;
ii) interrupts from
On 5 August 2013 07:59, Bryan Venteicher bry...@daemoninthecloset.org wrote:
What I've done in my drivers is:
* Lock the core mutex
* Clear IFF_DRV_RUNNING
* Lock/unlock each queue's lock
.. and I think that's the only sane way of doing it.
I'm going to (soon) propose something
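Bryan's recipe above can be sketched in portable C, with pthread mutexes standing in for the driver's mtx(9) locks. All the names here (`struct softc`, `core_mtx`, `drv_running`, `NQUEUES`, `driver_stop`, `queue_tx`) are illustrative, not taken from any real driver:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

#define NQUEUES 4

/* Hypothetical driver softc: one core mutex plus one mutex per queue. */
struct softc {
	pthread_mutex_t core_mtx;
	pthread_mutex_t queue_mtx[NQUEUES];
	bool            drv_running;	/* stands in for IFF_DRV_RUNNING */
};

/*
 * Stop path following the recipe: take the core lock, clear the running
 * flag, then lock/unlock each queue's lock in turn.  Any TX/RX path that
 * already holds a queue lock finishes first; any path that acquires it
 * later sees drv_running == false and bails out.
 */
static void
driver_stop(struct softc *sc)
{
	pthread_mutex_lock(&sc->core_mtx);
	sc->drv_running = false;
	for (int i = 0; i < NQUEUES; i++) {
		pthread_mutex_lock(&sc->queue_mtx[i]);
		pthread_mutex_unlock(&sc->queue_mtx[i]);
	}
	/* Safe to reset the rings here: no queue path is mid-operation. */
	pthread_mutex_unlock(&sc->core_mtx);
}

/* A queue path: bail out under the queue lock if the driver is stopping. */
static bool
queue_tx(struct softc *sc, int q)
{
	bool sent = false;

	pthread_mutex_lock(&sc->queue_mtx[q]);
	if (sc->drv_running)
		sent = true;	/* would program a TX descriptor here */
	pthread_mutex_unlock(&sc->queue_mtx[q]);
	return (sent);
}
```

The point of the lock/unlock sweep is that every TX/RX path still inside a queue finishes before the rings are torn down, and every path that enters afterwards sees the cleared flag before touching the hardware.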
On Mon, Aug 5, 2013 at 5:46 PM, Adrian Chadd adr...@freebsd.org wrote:
On 08/05/13 09:15, Luigi Rizzo wrote:
I'm travelling back to San Jose today; poke me tomorrow and I'll brain
dump what I did in ath(4) and the lessons learnt.
The TL;DR version - you don't want to grab an extra lock in the
read/write paths as that slows things down. Reuse the same per-queue
TX/RX lock and have:
* a reset flag that
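The pattern Adrian describes can be sketched the same way: reuse the per-queue lock the hot path already takes and keep a reset flag under it, rather than adding an extra mutex on the read/write paths. This is a minimal sketch under assumed names (`struct queue`, `resetting`, `ring_head`), not the actual ath(4) code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/*
 * Hypothetical per-queue state: the lock the TX/RX path already takes,
 * plus a reset flag protected by that same lock -- no extra mutex.
 */
struct queue {
	pthread_mutex_t mtx;
	bool            resetting;
	int             ring_head;	/* stand-in for descriptor ring state */
};

/* Hot path: a single lock acquisition; the flag check costs no extra lock. */
static bool
queue_rx_intr(struct queue *q)
{
	bool processed = false;

	pthread_mutex_lock(&q->mtx);
	if (!q->resetting) {
		q->ring_head++;	/* would refill/advance RX descriptors here */
		processed = true;
	}
	pthread_mutex_unlock(&q->mtx);
	return (processed);
}

/* Reset path: raise the flag under the queue lock, reinit, clear it. */
static void
queue_reset(struct queue *q)
{
	pthread_mutex_lock(&q->mtx);
	q->resetting = true;	/* new RX/TX entries now bail out early */
	pthread_mutex_unlock(&q->mtx);

	/* ...stop DMA and reprogram hardware without holding the lock... */

	pthread_mutex_lock(&q->mtx);
	q->ring_head = 0;
	q->resetting = false;
	pthread_mutex_unlock(&q->mtx);
}
```

Because the flag is only ever read or written under the queue lock the hot path already holds, the fast path pays nothing extra, and the reset path can safely drop the lock for the slow hardware work in between.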
.. and I bet it's not a design pattern, and this is total conjecture on my part:
* the original drivers weren't SMP safe;
* no one really sat down and figured out how to correctly synchronise
all of this stuff;
* people did the minimum amount of work to keep the driver from
immediately crashing,
On Aug 5, 2013, at 11:20 AM, Adrian Chadd adr...@freebsd.org wrote:
On Mon, Aug 5, 2013 at 7:17 PM, Adrian Chadd adr...@freebsd.org wrote:
Sigh, this ends up being ugly I'm afraid. I need some time to look at code
and think about it.
Jack
On Mon, Aug 5, 2013 at 7:49 PM, Jack Vogel jfvo...@gmail.com wrote:
Sigh, this ends up being ugly I'm afraid. I need some time to look at code
and think about it.
Actually the Intel drivers seem in decent shape,
especially if we reuse IFF_DRV_RUNNING as the reset flag
and the core+queue lock
No, Bryan said two things:
* the flag, protected by the core lock
* per-queue flags
-adrian
What do you think about this change?
Cheers,
Jack
On Mon, Aug 5, 2013 at 8:19 PM, Adrian Chadd adr...@freebsd.org wrote:
No, brian said two things:
* the flag, protected by the core lock
* per-queue flags
I see no mention of per-queue flags in his email.
This is the relevant part
What I've done in my drivers is:
* Lock
On 05.08.2013 16:59, Bryan Venteicher wrote:
On Mon, Aug 5, 2013 at 8:46 PM, Jack Vogel jfvo...@gmail.com wrote:
What do you think about this change?
Looks good to me, but there is no need to rush; it would be
nice if all interested parties agreed on an approach
and possibly even on naming.
I do not have any specific test case but
Right, I just
On 05.08.2013 19:36, Luigi Rizzo wrote:
On Mon, Aug 05, 2013 at 11:04:44PM +0200, Andre Oppermann wrote:
Ideally we don't want to have any locks in the RX and TX path at all.
OK, I have read