Hi,
this is a heads-up in case anyone is relying on this in private VNET
modules or code. I am planning to merge this code to stable/8,
probably during the weekend. It should be a no-op for almost everyone,
especially if not running a VIMAGE kernel.
/bz
--
Bjoern A. Zeeb
Submitter-Id: current-users
Originator: J. Hellenthal
Organization:
Confidential: no
Synopsis: Make 220.backup-pkgdb cd(1) and backup only the package
database.
Severity: non-critical
Priority: medium
Category: conf
Class: change-request
Release: FreeBSD
On Mon, 4 Apr 2011 07:37:15 -0700 Artem Belevich a...@freebsd.org wrote
about Re: drives 2TB on mpt device:
AB You're probably out of luck as far as 2Tb+ support for 1068-based HBAs:
AB http://kb.lsi.com/KnowledgebaseArticle16399.aspx
AB
AB Newer controllers based on LSI2008 (mps driver?) should
Hi,
I tried setting the MTU on one of my ixgbe(4) Intel NICs to support
jumbo frames. This is on a box with RELENG_8 from today.
# ifconfig ix0 mtu 9198
I then get the following error:
# tail -n 1 /var/log/messages
Apr 14 12:48:43 siloneu kernel: ix0: Could not setup receive structures
How many 9k jumbo clusters are available?
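For reference, a few standard FreeBSD commands that answer this question (a sketch; these are the usual diagnostic paths on RELENG_8-era systems and read-only, so safe to run):

```shell
# Configured maximum for the 9k jumbo cluster pool:
sysctl kern.ipc.nmbjumbo9
# Per-zone current/free counts, including the jumbo cluster zones:
vmstat -z | grep -i jumbo
# Overall mbuf and cluster usage summary:
netstat -m
```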
On Thu, Apr 14, 2011 at 3:05 PM, Leon Meßner
l.mess...@physik.tu-berlin.de wrote:
Hi,
I tried setting the MTU on one of my ixgbe(4) Intel NICs to support
jumbo frames. This is on a box with RELENG_8 from today.
# ifconfig ix0 mtu 9198
I then get
On Thu, Apr 14, 2011 at 03:44:23PM +0200, K. Macy wrote:
How many 9k jumbo clusters are available?
Does this output suffice as information?
# netstat -m
8194/1031/9225 mbufs in use (current/cache/total)
8192/518/8710/25600 mbuf clusters in use (current/cache/total/max)
8192/512
That should be plenty, but how large are your receive queues?
-Kip
On Thu, Apr 14, 2011 at 4:18 PM, Leon Meßner
l.mess...@physik.tu-berlin.de wrote:
On Thu, Apr 14, 2011 at 03:44:23PM +0200, K. Macy wrote:
How many 9k jumbo clusters are available?
Does this output suffice as information?
Hi, all,
On 03.04.2011 at 15:44, Patrick M. Hausen wrote:
On 03.04.2011 at 15:36, Alexander Motin wrote:
I've noticed that some RAID BIOS'es disable write cache on their disks.
ata(4) enabled cache in such cases, but CAM doesn't now. I'll take care of
it for ATA. For now you can manage it
On Thu, Apr 14, 2011 at 04:20:19PM +0200, K. Macy wrote:
That should be plenty, but how large are your receive queues?
How do I show that? Random sysctl output follows ;)
# sysctl net.inet.tcp.recvspace
net.inet.tcp.recvspace: 65536
# sysctl -a | grep ix.0.queue\*
On Apr 14, 2011, at 2:05 PM, Leon Meßner wrote:
Hi,
I tried setting the MTU on one of my ixgbe(4) Intel NICs to support
jumbo frames. This is on a box with RELENG_8 from today.
# ifconfig ix0 mtu 9198
I then get the following error:
# tail -n 1 /var/log/messages
Apr 14 12:48:43
Also, how much memory do you have and what architecture?
On Thu, Apr 14, 2011 at 4:14 PM, Nikolay Denev nde...@gmail.com wrote:
On Apr 14, 2011, at 2:05 PM, Leon Meßner wrote:
Hi,
I tried setting the MTU on one of my ixgbe(4) Intel NICs to support
jumbo frames. This is on a box with
If you get this message it's only for one reason: you don't have enough
mbufs to fill your rings. You must do one of two things: either reduce
the number of queues, or increase the relevant mbuf pool.
Increase the 9K mbuf cluster pool.
Jack
On Thu, Apr 14, 2011 at 6:05 AM, Leon Meßner
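Jack's two options above could be tried along these lines (a sketch with assumed example values; the queue-count tunable name varies by driver version, so verify it against ixgbe(4) before relying on it):

```shell
# Option 1: enlarge the 9k jumbo cluster pool at runtime
# (16384 is an assumed example value, not a recommendation):
sysctl kern.ipc.nmbjumbo9=16384

# Option 2: reduce the number of RX queues at boot, in /boot/loader.conf
# (hw.ixgbe.num_queues is an assumed tunable name -- check ixgbe(4)):
# hw.ixgbe.num_queues=2
```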
That isn't guaranteed to work if he is KVA limited.
On Thu, Apr 14, 2011 at 6:44 PM, Jack Vogel jfvo...@gmail.com wrote:
If you get this message it's only for one reason, you don't have enough
mbufs to fill your rings. You must do one of two things, either reduce
the number of queues, or
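On Kip's point about KVA limits: raising the cluster pool only helps if the kernel memory map can actually back the extra 9k allocations (an issue mostly on i386). A hedged /boot/loader.conf sketch, with illustrative assumed values:

```shell
# /boot/loader.conf fragment -- enlarge the kernel memory map so a
# bigger jumbo cluster pool can actually be allocated (example values,
# to be sized against the machine's RAM and architecture):
vm.kmem_size="1G"
vm.kmem_size_max="1G"
```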
On Thu, Apr 14, 2011 at 08:55:17PM +0200, K. Macy wrote:
That isn't guaranteed to work if he is KVA limited.
On Thu, Apr 14, 2011 at 6:44 PM, Jack Vogel jfvo...@gmail.com wrote:
If you get this message it's only for one reason, you don't have enough
mbufs to fill your rings. You must do
On Thu, Apr 14, 2011 at 9:44 PM, Leon Meßner
l.mess...@physik.tu-berlin.de wrote:
On Thu, Apr 14, 2011 at 08:55:17PM +0200, K. Macy wrote:
That isn't guaranteed to work if he is KVA limited.
On Thu, Apr 14, 2011 at 6:44 PM, Jack Vogel jfvo...@gmail.com wrote:
If you get this message it's only
So, what do you have in mind as the real problem then?
Jack
On Thu, Apr 14, 2011 at 11:55 AM, K. Macy km...@freebsd.org wrote:
That isn't guaranteed to work if he is KVA limited.
On Thu, Apr 14, 2011 at 6:44 PM, Jack Vogel jfvo...@gmail.com wrote:
If you get this message it's only for one
On Thu, Apr 14, 2011 at 10:18 PM, Jack Vogel jfvo...@gmail.com wrote:
So, what do you have in mind as the real problem then?
The problem was the one that you provided the solution to. I was
simply observing that auto-tuning of mbuf jumbo cluster limits is in
need of improvement.
Kip
Jack
If you are using the latest code, then the RX ring size is set to be 2K
descriptors, so you will use that many 9k jumbos per queue to initialize
things. Having a spare amount free to use as you clean/refresh is needed
also.
I upped the ring size for performance reasons on 10G; it's possible to try
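Jack's numbers make the demand easy to sketch: one 9k cluster per RX descriptor, per queue, pinned at initialization (the queue count below is an assumed example; 2048 descriptors per ring is as stated above):

```shell
#!/bin/sh
# Back-of-the-envelope: 9k clusters pinned at RX ring initialization.
queues=8                        # assumed example queue count
rxd=2048                        # RX ring size stated in the thread
clusters=$((queues * rxd))      # one 9k cluster per descriptor
mbytes=$((clusters * 9 / 1024)) # ~9 KB per cluster
echo "$clusters clusters, ~$mbytes MB"
```

With these assumed values that is 16384 clusters before counting the spare amount for clean/refresh, which illustrates why a modest default 9k pool can run dry at initialization.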