Hi,
FWIW, a "device timeout" may just be a watchdog-related race condition
rather than an actual hardware device timeout.
I have the same issues in ath(4). I need to fix a whole lot of locking
constructs before I can fix 'that'.
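The race Adrian describes can be sketched with a toy model (this is not ath(4) code; all names are invented): a watchdog that only declares a device timeout if no TX completions arrived since its last tick, and that checks this under the same lock the completion path holds, so a completion racing the tick can't be mistaken for a hang:

```python
import threading

class TxWatchdog:
    """Toy model of a race-safe TX watchdog (invented names, not driver code).
    A timeout is reported only after `timeout_ticks` consecutive ticks with
    no TX progress, observed under the same lock the completion path uses."""

    def __init__(self, timeout_ticks=5):
        self.lock = threading.Lock()
        self.completions = 0   # total TX completions seen so far
        self.last_seen = 0     # completions counter at the last tick
        self.idle_ticks = 0    # ticks in a row with no progress
        self.timeout_ticks = timeout_ticks

    def tx_complete(self):
        # Completion path: record progress under the lock.
        with self.lock:
            self.completions += 1

    def tick(self):
        """Watchdog tick; returns True only for a genuine timeout."""
        with self.lock:
            if self.completions != self.last_seen:
                # Progress happened since the last tick: not a timeout.
                self.last_seen = self.completions
                self.idle_ticks = 0
                return False
            self.idle_ticks += 1
            return self.idle_ticks >= self.timeout_ticks
```

Without the shared lock, a completion landing between the watchdog's read of the counter and its decision could still be reported as a spurious timeout, which is the kind of race being blamed on the hardware here.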
Adrian
___
Note: to view an individual PR, use:
http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).
The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.
On Friday, May 04, 2012 6:18:19 pm Konstantin Belousov wrote:
On Fri, May 04, 2012 at 11:30:22AM -0400, John Baldwin wrote:
On Tuesday, May 01, 2012 12:21:21 pm Konstantin Belousov wrote:
On Thu, Apr 12, 2012 at 09:38:49PM +0300, Konstantin Belousov wrote:
On Mon, Apr 09, 2012 at
.. I think you're misunderstanding how PMTU discovery works.
The _point_ is that you refuse to send said frame in the first place.
Any number of intermediary L2 devices may decide to drop your jumbo
TX'ed frame and not generate an ICMP error message, thus breaking PMTU
discovery anyway.
In any
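The argument above can be sketched with a toy model (no real sockets; every name here is invented): a hop that answers an oversized DF frame with an ICMP "fragmentation needed" message lets the sender converge on the path MTU, while a hop that silently drops the frame black-holes discovery, exactly as described:

```python
def send(path_mtus, size, silent_hops=()):
    """Toy model of one DF-flagged send across a path (invented API).
    path_mtus: per-hop link MTUs; silent_hops: hop indices that drop
    oversized frames without generating an ICMP error."""
    for i, mtu in enumerate(path_mtus):
        if size > mtu:
            if i in silent_hops:
                return ("blackhole", None)    # dropped, no ICMP: PMTUD breaks
            return ("icmp-too-big", mtu)      # sender learns the next-hop MTU
    return ("delivered", None)

def discover_pmtu(path_mtus, start_size, silent_hops=()):
    """Shrink the probe size on each ICMP 'too big' until it fits."""
    size = start_size
    while True:
        status, mtu = send(path_mtus, size, silent_hops)
        if status == "delivered":
            return size          # converged on the path MTU
        if status == "icmp-too-big":
            size = mtu           # retry at the reported MTU
        else:
            return None          # silent drop: discovery stalls
```

With a well-behaved path, the sender converges; mark any hop as silent and discovery simply never completes, which is why refusing to transmit the oversized frame locally is the safer behavior.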
Old Synopsis: IPv6 TCP connection hangs/drops when time/clock on the client is
stepped backwards
New Synopsis: [ip6] IPv6 TCP connection hangs/drops when time/clock on the
client is stepped backwards
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Responsible-Changed-By: linimon
Old Synopsis: IP fragment reassembly's broken: file transfer over NFSv3/UDP
fails for default NFS packet size
New Synopsis: [ip] IP fragment reassembly's broken: file transfer over
NFSv3/UDP fails for default NFS packet size
Responsible-Changed-From-To: freebsd-bugs->freebsd-net
Hi, a question for jfv@ or whoever else is familiar with the ixgbe
driver - I am looking at a system where the adapter reports a large
number of ierrors that I traced to this stat. What does this mean?
Intuitively it seems like the receive ring isn't being drained fast
enough, but I wanted to confirm.
This is the 'missed packet count'. The index has actually been misinterpreted
in the code for a while: it was mistakenly associated with queues, but it's
really per packet buffer, and there is more than one only when there are
multiple traffic classes (a la DCB). Even so, only MPC(0) should get
While we're on the subject, I've had some confusion for some time now:
On Mon, May 7, 2012 at 5:25 PM, Jack Vogel jfvo...@gmail.com wrote:
Packets are missed when the receive FIFO has insufficient space to store the
incoming packet.
This means the on-card FIFO, i.e. the fixed-size FIFO that is
Hm, I guess I can't read. It was bwn, not bwi. Working now.
On Mon, May 7, 2012 at 5:57 PM, Adam Vande More amvandem...@gmail.com wrote:
I have a laptop I'm trying to install on. This appears to be the only hangup:
none1@pci0:4:0:0: class=0x028000 card=0x000c1028 chip=0x431514e4 rev=0x01
On Mon, May 7, 2012 at 5:31 PM, Juli Mallett jmall...@freebsd.org wrote:
normal net traffic. But for now in FreeBSD it's just one, which is divided
into 3 parts: TX, RX, and FDIR (flow director).
Jack, does the sw driver control in any way the partitioning of the
FIFO? I guess enabling 2 hw queues splits the FIFO in half. But
otherwise does the driver control this in
On Mon, May 7, 2012 at 9:42 PM, Vijay Singh vijju.si...@gmail.com wrote:
Juli is correct, the FIFO is not partitioned by the driver queues as they
exist in the current driver; it's only separated into the 3 parts I
mentioned.
Jack
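Jack's definition above ("packets are missed when the receive FIFO has insufficient space") can be illustrated with a toy model (invented names, not ixgbe internals): a fixed-size on-card FIFO that increments a missed-packet counter whenever an incoming frame doesn't fit, and frees space as the host drains it:

```python
class RxFifo:
    """Toy model of the on-NIC receive FIFO (not real hardware registers).
    A packet is 'missed' (MPC incremented) when the free space left in the
    FIFO is insufficient to store it; draining by the host frees space."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.mpc = 0   # missed packet count, per packet buffer

    def rx(self, pkt_len):
        """Hardware receive: returns True if the frame was stored."""
        if self.used + pkt_len > self.capacity:
            self.mpc += 1          # dropped before ever reaching host memory
            return False
        self.used += pkt_len
        return True

    def drain(self, nbytes):
        """Host-side DMA draining the FIFO frees space for new frames."""
        self.used = max(0, self.used - nbytes)
```

In this model, a rising `mpc` with a full `used` count matches the intuition earlier in the thread: the host side isn't draining fast enough, so frames are lost on the card itself rather than in any per-queue ring.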
On Mon, May 7, 2012 at 9:55 PM, Juli Mallett jmall...@freebsd.org wrote:
On Mon, May 7, 2012 at 9:42 PM, Vijay Singh