On 26.08.2013 15:43, Roger Pau Monné wrote:
On 26/08/13 15:22, Andre Oppermann wrote:
On 26.08.2013 13:02, Roger Pau Monne wrote:
r254804 and r254807 changed the types of some of the members of the
mbuf struct, and introduced some compile time errors in netback
debug messages that prevented
On 26.08.2013 13:02, Roger Pau Monne wrote:
r254804 and r254807 changed the types of some of the members of the
mbuf struct, and introduced some compile time errors in netback
debug messages that prevented compiling a XENHVM kernel.
Thanks, I fixed the printf's with r254910 in a slightly differ
I want to put these mbuf changes/updates/adjustments up for objections, if any,
before committing them.
This is a moderate overhaul of the mbuf headers and fields to take us into the
next 5 years and two releases. The mbuf headers, in particular the pkthdr, have
seen a number of new uses and abu
On 07.08.2013 09:18, Luigi Rizzo wrote:
On Wed, Aug 7, 2013 at 5:26 AM, Mike Karels mailto:m...@karels.net>> wrote:
Jumping to (near) the end of the thread, I like most of Andre's proposal.
Running with minimal locks at this layer is an admirable goal, and I agree
with most of what wa
On 05.08.2013 23:53, Luigi Rizzo wrote:
On Mon, Aug 05, 2013 at 11:04:44PM +0200, Andre Oppermann wrote:
On 05.08.2013 19:36, Luigi Rizzo wrote:
...
[picking a post at random to reply in this thread]
tell whether or not we should bail out).
Ideally we don't want to have any locks i
On 05.08.2013 19:36, Luigi Rizzo wrote:
On Mon, Aug 5, 2013 at 7:17 PM, Adrian Chadd wrote:
I'm travelling back to San Jose today; poke me tomorrow and I'll brain
dump what I did in ath(4) and the lessons learnt.
The TL;DR version - you don't want to grab an extra lock in the
read/write paths
On 05.08.2013 16:59, Bryan Venteicher wrote:
- Original Message -
I am slightly unclear about what mechanisms we use to prevent races
between the interface being reconfigured (up/down/multicast settings, etc.,
all causing reinitialization of the rx and tx rings) and
i) packets from the host st
On 23.07.2013 09:28, Andrey V. Elsukov wrote:
On 21.07.2013 00:43, Taku YAMAMOTO wrote:
After r253088, systems with IPSEC and KSTACK_PAGES < 4 crash on
booting into multi-user mode.
The crash happens because `sysctl -a` in /etc/rc.d/initrandom ends in a
kernel stack overflow.
where type is str
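For readers hitting the same overflow, a kernel configuration fragment along these lines would pin the stack size explicitly (illustrative only; to my knowledge 4 pages is the amd64 default, and the report above says values below that crash with IPSEC):

```
# Illustrative kernel config fragment: keep the per-thread kernel
# stack at 4 pages; smaller values were reported to overflow with IPSEC.
options KSTACK_PAGES=4
```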
On 12.07.2013 12:56, Fabian Keil wrote:
Andre Oppermann wrote:
On 10.07.2013 15:18, Fabian Keil wrote:
Andre Oppermann wrote:
We have a SYN cookie implementation for quite some time now but it
has some limitations with current realities for window scaling and
SACK encoding in the few
On 11.07.2013 18:09, Michael Butler wrote:
On 07/11/13 12:07, Michael Butler wrote:
Seems gcc is rather fussy about propagating 'const' and fails to compile
/usr/src/sys/crypto/siphash/siphash.c after SVN r253208.
I believe the attached patch is correct but please review ..
imb
grr .
On 10.07.2013 15:18, Fabian Keil wrote:
Andre Oppermann wrote:
We have a SYN cookie implementation for quite some time now but it
has some limitations with current realities for window scaling and
SACK encoding in the few available bits.
This patch updates and improves SYN cookies mainly
On 05.07.2013 20:38, Cy Schubert wrote:
In message <20130705084649.gc67...@freebsd.org>, Gleb Smirnoff writes:
What I'd prefer to see is the following:
- commit new ipfilter untouched to vendor-sys/ipfilter
- nuke sys/contrib/ipfilter
- svn copy vendor-sys/ipfilter to sys/netpfil/ipfilter
Hav
We have a SYN cookie implementation for quite some time now but it
has some limitations with current realities for window scaling and
SACK encoding in the few available bits.
This patch updates and improves SYN cookies mainly by:
a) encoding of MSS, WSCALE (window scaling) and SACK into the
On 19.06.2013 11:34, Ian FREISLICH wrote:
Andre Oppermann wrote:
On 19.06.2013 11:10, Ian FREISLICH wrote:
Hi
I'm seeing this panic quite regularly now. Most recent sighting on r251858.
This panic message is not very informative and it is very hard to extract any
meaningful hints from it. Do you h
On 19.06.2013 11:10, Ian FREISLICH wrote:
Hi
I'm seeing this panic quite regularly now. Most recent sighting on r251858.
This panic message is not very informative and it is very hard to extract any
meaningful hints from it. Do you have a core dump and matching debug kernel?
--
Andre
cpuid = 15
KDB: st
On 31.05.2013 12:17, Florent Peterschmitt wrote:
On 31/05/2013 12:13, Andre Oppermann wrote:
Any ideas how to fix this?
# make -j8 buildworld
--- buildworld ---
make: illegal option -- J
usage: make [-BPSXeiknpqrstv] [-C directory] [-D variable]
[-d flags] [-E variable] [-f
Any ideas how to fix this?
# make -j8 buildworld
--- buildworld ---
make: illegal option -- J
usage: make [-BPSXeiknpqrstv] [-C directory] [-D variable]
[-d flags] [-E variable] [-f makefile] [-I directory]
[-j max_jobs] [-m directory] [-V variable]
[variable=value] [targe
On 08.05.2013 07:57, Andrey Smagin wrote:
I tried, by analogy with your patch, adding _padalign in all the places it was
before r250300, and in SCTP. Uptime is now 15 hours; before it was 5 minutes to
3 hours at most. I am waiting for more statistics.
my changes:
The padalign commit has been backed out in r250300 and we are in
On 25.04.2013 07:40, Olivier Cochard-Labbé wrote:
On Wed, Apr 24, 2013 at 1:46 PM, Sami Halabi wrote:
3. there is some point of improved performance (without fw) that went down
again somewhere before Clang got prod.
Found it !
It's commit 242402: "Rework the known mutexes..."
Again one has to
On 24.04.2013 12:45, Olivier Cochard-Labbé wrote:
Hi all,
here is the result of my simple-and-dummy bench script regarding
forwarding/ipfw/pf performance evolution on -current on a single-core
server with one flow only.
It's the result of more than 810 bench tests (including reboot between
each)
On 15.04.2013 05:32, Julian Elischer wrote:
On 4/11/13 5:18 PM, Andre Oppermann wrote:
Excuse me for being slightly spammy but I've received feedback that we
haven't spread this information widely enough outside the inner circles
and interested people missed the announcement.
EuroB
On 15.04.2013 19:48, Cy Schubert wrote:
I did consider a port, but given it would have to touch bits and pieces of
the source tree (/usr/src), a port would be messy and the decision was made
to work on importing it into base.
Actually it shouldn't touch many, if any, pieces of src/sys. Everything
Excuse me for being slightly spammy but I've received feedback that we
haven't spread this information widely enough outside the inner circles
and interested people missed the announcement.
EuroBSDcon 2013: September 28-29 in Malta
=========================================
EuroBSDcon is the Euro
On 15.03.2013 15:08, Rick Macklem wrote:
Lars Eggert wrote:
Hi,
this reminds me that I ran into an issue lately with the new NFS and
locking for NFSv3 mounts on a client that ran -CURRENT and a server
that ran -STABLE.
When I ran "portmaster -a" on the client, which mounted /usr/ports and
/usr
On 16.03.2013 01:44, Peter Wemm wrote:
On Fri, Mar 15, 2013 at 2:03 PM, John Baldwin wrote:
On Friday, March 15, 2013 11:24:32 am Andre Oppermann wrote:
On 15.03.2013 14:46, John Baldwin wrote:
On Friday, March 15, 2013 9:40:56 am Andre Oppermann wrote:
Hi Rick, all,
is there a plan to
On 15.03.2013 14:46, John Baldwin wrote:
On Friday, March 15, 2013 9:40:56 am Andre Oppermann wrote:
Hi Rick, all,
is there a plan to decide for one NFS implementation for FreeBSD 10.0,
or to keep both around indefinitely?
I'm talking about:
oldNFS in sys/{nfs, nfsclient, nfss
On 15.03.2013 15:01, Eggert, Lars wrote:
Hi,
this reminds me that I ran into an issue lately with the new NFS and locking
for NFSv3 mounts on a client that ran -CURRENT and a server that ran
-STABLE.
When I ran "portmaster -a" on the client, which mounted /usr/ports and
/usr/local, as well as
Hi Rick, all,
is there a plan to decide for one NFS implementation for FreeBSD 10.0,
or to keep both around indefinitely?
I'm talking about:
oldNFS in sys/{nfs, nfsclient, nfsserver} NFSv2+NFSv3
newNFS in sys/fs/{nfs, nfsclient, nfsserver} NFSv2+NFSv3+NFSv4
NewNFS supports newer NFS stan
On 08.03.2013 10:16, Konstantin Belousov wrote:
On Thu, Mar 07, 2013 at 06:03:51PM +0100, Andre Oppermann wrote:
pager_map: is used for pager IO to a storage media (disk). Not
pageable. Calculation: MAXPHYS * min(max(nbuf/4, 16), 256).
It is more versatile. The space is used for pb
On 01.02.2013 18:09, Alan Cox wrote:
On 02/01/2013 07:25, Andre Oppermann wrote:
Rebase auto-sizing of limits on the available KVM/kmem_map instead of
physical
memory. Depending on the kernel and architecture configuration these
two can
be very different.
Comments and reviews
As an outcome of the recent problems with auto-sizing and auto-tuning of
the various kernel subsystems and related memory structures I've taken a
closer look at the whole KVM inner working and initialization process.
I've found the VM and KVM initialization to be somewhat obscure and stuck
half-w
On 31.01.2013 23:25, Ian Lepore wrote:
On Thu, 2013-01-31 at 18:13 +0100, Andre Oppermann wrote:
On 28.01.2013 20:20, Alan Cox wrote:
On 01/28/2013 08:22, Ian Lepore wrote:
On Mon, 2013-01-28 at 00:09 -0600, Alan Cox wrote:
On Sun, Jan 27, 2013 at 12:11 PM, Ian Lepore wrote:
I ran into a
On 28.01.2013 20:20, Alan Cox wrote:
On 01/28/2013 08:22, Ian Lepore wrote:
On Mon, 2013-01-28 at 00:09 -0600, Alan Cox wrote:
On Sun, Jan 27, 2013 at 12:11 PM, Ian Lepore wrote:
I ran into a panic while attempting to un-tar a large file on a
DreamPlug (arm-based system) running -current. T
On 31.01.2013 12:27, Eggert, Lars wrote:
Hi,
On Jan 30, 2013, at 22:43, Craig Rodrigues wrote:
What you need to do is, before the FreeBSD kernel boots, your
loader needs to export some environment variables. This will trigger
the various behaviors in the FreeBSD mount code.
the loader can e
On 23.01.2013 00:22, Artem Belevich wrote:
On Mon, Jan 21, 2013 at 1:06 PM, Pawel Jakub Dawidek wrote:
On Fri, Jan 18, 2013 at 08:26:04AM -0800, m...@freebsd.org wrote:
Should it be set to a larger initial value based on min(physical,KVM) space
available?
It needs to be smaller than the
The autotuning work is reaching into many places of the kernel and
while trying to tie up all loose ends I've got stuck in the kmem_map
and how it works or what its limitations are.
During startup the VM is initialized and an initial kernel virtual
memory map is setup in kmem_init() covering the e
On 27.11.2012 20:16, Alan Cox wrote:
On 11/27/2012 12:43, Andre Oppermann wrote:
On 27.11.2012 19:27, Alan Cox wrote:
On 11/27/2012 12:08, Andre Oppermann wrote:
On 27.11.2012 17:42, Alan Cox wrote:
On 11/27/2012 09:06, Konstantin Belousov wrote:
On Tue, Nov 27, 2012 at 12:26:44PM +0100
On 27.11.2012 19:27, Alan Cox wrote:
On 11/27/2012 12:08, Andre Oppermann wrote:
On 27.11.2012 17:42, Alan Cox wrote:
On 11/27/2012 09:06, Konstantin Belousov wrote:
On Tue, Nov 27, 2012 at 12:26:44PM +0100, Andre Oppermann wrote:
FreeBSD bbb.ccc 10.0-CURRENT FreeBSD 10.0-CURRENT #0:
Fri Nov
On 27.11.2012 17:42, Alan Cox wrote:
On 11/27/2012 09:06, Konstantin Belousov wrote:
On Tue, Nov 27, 2012 at 12:26:44PM +0100, Andre Oppermann wrote:
FreeBSD bbb.ccc 10.0-CURRENT FreeBSD 10.0-CURRENT #0:
Fri Nov 23 17:00:40 CET 2012
a...@bbb.ccc:/usr/obj/usr/src/head/sys/GENERIC amd64
#0
On 27.11.2012 16:51, Konstantin Belousov wrote:
On Tue, Nov 27, 2012 at 04:38:12PM +0100, Andre Oppermann wrote:
On 27.11.2012 16:06, Konstantin Belousov wrote:
On Tue, Nov 27, 2012 at 12:26:44PM +0100, Andre Oppermann wrote:
FreeBSD bbb.ccc 10.0-CURRENT FreeBSD 10.0-CURRENT #0:
Fri Nov 23 17
On 27.11.2012 16:40, Andriy Gapon wrote:
on 27/11/2012 17:38 Andre Oppermann said the following:
Clang doing a manual kernel build of my work tree with "make -j8 kernel".
This sounds like a "process" that may have triggered the problem.
But is it the process that mad
On 27.11.2012 16:06, Konstantin Belousov wrote:
On Tue, Nov 27, 2012 at 12:26:44PM +0100, Andre Oppermann wrote:
FreeBSD bbb.ccc 10.0-CURRENT FreeBSD 10.0-CURRENT #0:
Fri Nov 23 17:00:40 CET 2012
a...@bbb.ccc:/usr/obj/usr/src/head/sys/GENERIC amd64
#0 doadump (textdump=-2014022336) at pcpu.h
On 27.11.2012 16:05, Attilio Rao wrote:
On Tue, Nov 27, 2012 at 11:26 AM, Andre Oppermann wrote:
FreeBSD bbb.ccc 10.0-CURRENT FreeBSD 10.0-CURRENT #0:
Fri Nov 23 17:00:40 CET 2012
a...@bbb.ccc:/usr/obj/usr/src/head/sys/GENERIC amd64
#0 doadump (textdump=-2014022336) at pcpu.h:229
#1
FreeBSD bbb.ccc 10.0-CURRENT FreeBSD 10.0-CURRENT #0:
Fri Nov 23 17:00:40 CET 2012
a...@bbb.ccc:/usr/obj/usr/src/head/sys/GENERIC amd64
#0 doadump (textdump=-2014022336) at pcpu.h:229
#1 0x8033e2d2 in db_fncall (dummy1=,
dummy2=,
dummy3=, dummy4=)
at /usr/src/head/sys/ddb/db_
Hello
I am currently working on a number of drivers for popular network
cards, extending them with automatic hybrid interrupt/polling
ithread processing with live-lock prevention (so that the driver
can't consume all CPU when under heavy load or attack).
To properly test this I need the proper hard
On 14.11.2012 03:10, Alfred Perlstein wrote:
Andre, do you think the variable "realmem" could be exported as something like
kmemsize or something?
Or maybe a function call to subr_param.c?
The reason I ask is that I would like to scale things like the number of default
SysV semaphores to
somethin
On 06.11.2012 12:02, Fabien Thomas wrote:
Hi Luigi,
do you agree on polling having outlived its usefulness in the light
of interrupt moderating NIC's and SMP complications/disadvantages?
If you have only one interface, then yes, polling is not really necessary.
If you have 10 interfaces the inter
On 06.11.2012 12:30, Luigi Rizzo wrote:
On Tue, Nov 06, 2012 at 11:23:34AM +0100, Andre Oppermann wrote:
...
Hi Luigi,
do you agree on polling having outlived its usefulness in the light
of interrupt moderating NIC's and SMP complications/disadvantages?
yes, we should let it rest in
On 06.11.2012 11:27, Poul-Henning Kamp wrote:
In message <5098e526.6070...@freebsd.org>, Andre Oppermann writes:
Hi Luigi,
do you agree on polling having outlived its usefulness in the light
of interrupt moderating NIC's and SMP complications/disadvantages?
Can I jus
On 05.11.2012 17:57, Luigi Rizzo wrote:
On Mon, Nov 05, 2012 at 04:25:36PM +, Joe Holden wrote:
Luigi Rizzo wrote:
On Mon, Nov 05, 2012 at 08:11:41AM -0500, Ryan Stone wrote:
On Mon, Nov 5, 2012 at 4:40 AM, Joe Holden wrote:
doh, running kernel wasn't as GENERIC as I thought it was, loo
On 05.11.2012 02:39, Manfred Antar wrote:
At 01:57 PM 11/4/2012, you wrote:
On 04.11.2012 21:15, Andreas Tobler wrote:
On 04.11.12 14:57, Andre Oppermann wrote:
On 04.11.2012 13:11, Kim Culhan wrote:
On Sun, November 4, 2012 6:21 am, Dimitry Andric wrote:
On 2012-11-04 02:13, Manfred Antar
On 04.11.2012 21:15, Andreas Tobler wrote:
On 04.11.12 14:57, Andre Oppermann wrote:
On 04.11.2012 13:11, Kim Culhan wrote:
On Sun, November 4, 2012 6:21 am, Dimitry Andric wrote:
On 2012-11-04 02:13, Manfred Antar wrote:
At 03:29 PM 11/3/2012, Adrian Chadd wrote:
After the commit, there
On 22.10.2012 15:28, John Baldwin wrote:
On Sunday, October 21, 2012 7:11:10 am Andre Oppermann wrote:
What's keeping kernel modules from building in parallel with
"make -j8"?
They don't for you? They do for me either via 'make buildkernel'
or the old method.
On 04.11.2012 13:11, Kim Culhan wrote:
On Sun, November 4, 2012 6:21 am, Dimitry Andric wrote:
On 2012-11-04 02:13, Manfred Antar wrote:
At 03:29 PM 11/3/2012, Adrian Chadd wrote:
On 3 November 2012 10:40, Manfred Antar wrote:
i have problem connecting to freebsd box on local network since l
On 04.11.2012 02:13, Manfred Antar wrote:
At 03:29 PM 11/3/2012, Adrian Chadd wrote:
On 3 November 2012 10:40, Manfred Antar wrote:
I have a problem connecting to a FreeBSD box on the local network since last Sunday.
the last kernel that works:
FreeBSD 10.0-CURRENT #0: Sun Oct 28 12:14:38 PDT 2012
a
On 24.10.2012 16:26, Vadim Goncharov wrote:
Hi,
We have 'maxusers' tunable which affects many other tunables, e.g. number of
network mbuf/clusters which is often too low on current machines.
The mbuf/cluster limit can be rethought as well. Since it all comes
out of UMA and is not pre-allocate
The ZERO_COPY_SOCKETS kernel option has been removed from HEAD
today. Please see the explanation in the attached commit message.
--
Andre
--- Begin Message ---
Author: andre
Date: Tue Oct 23 16:33:43 2012
New Revision: 241955
URL: http://svn.freebsd.org/changeset/base/241955
Log:
Note the rem
On 20.10.2012 15:11, Vladislav Prodan wrote:
FreeBSD 9.1-PRERELEASE #0: Wed Jul 25 01:40:56 EEST 2012
I have the server: 8 cores AMD, 16GB RAM, 4x3TB HDD in RAID10 for ZFS.
Sometimes the wheels fall off the server and the network.
Can this clean-up memory for ZFS cache?
I enclose a picture with th
What's keeping kernel modules from building in parallel with
"make -j8"?
--
Andre
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebs
On 03.10.2012 22:03, Adrian Chadd wrote:
Hi,
somaxconn is the connection queue depth. If it's sitting at a couple
hundred thousand then something else is going crazily wrong.
I understand your frustration, but there are a lot of instances where
the application just isn't doing things "right" and
On 19.04.2012 22:46, Luigi Rizzo wrote:
On Thu, Apr 19, 2012 at 10:05:37PM +0200, Andre Oppermann wrote:
On 19.04.2012 15:30, Luigi Rizzo wrote:
I have been running some performance tests on UDP sockets,
using the netsend program in tools/tools/netrate/netsend
and instrumenting the source code
On 20.04.2012 08:35, Luigi Rizzo wrote:
On Fri, Apr 20, 2012 at 12:37:21AM +0200, Andre Oppermann wrote:
On 20.04.2012 00:03, Luigi Rizzo wrote:
On Thu, Apr 19, 2012 at 11:20:00PM +0200, Andre Oppermann wrote:
On 19.04.2012 22:46, Luigi Rizzo wrote:
The allocation happens while the code has
On 20.04.2012 10:26, Alexander V. Chernikov wrote:
On 20.04.2012 01:12, Andre Oppermann wrote:
On 19.04.2012 22:34, K. Macy wrote:
If the number of peers is bounded then you can use the flowtable. Max
PPS is much higher bypassing routing lookup. However, it doesn't scale
On 20.04.2012 00:03, Luigi Rizzo wrote:
On Thu, Apr 19, 2012 at 11:20:00PM +0200, Andre Oppermann wrote:
On 19.04.2012 22:46, Luigi Rizzo wrote:
The allocation happens while the code has already an exclusive
lock on so->snd_buf so a pool of fresh buffers could be attached
there.
Ah, there
On 19.04.2012 23:17, K. Macy wrote:
This only helps if your flows aren't hitting the same rtentry.
Otherwise you still convoy on the lock for the rtentry itself to
increment and decrement the rtentry's reference count.
The rtentry lock isn't obtained anymore. While the rmlock read
lock is hel
On 19.04.2012 22:46, Luigi Rizzo wrote:
On Thu, Apr 19, 2012 at 10:05:37PM +0200, Andre Oppermann wrote:
On 19.04.2012 15:30, Luigi Rizzo wrote:
I have been running some performance tests on UDP sockets,
using the netsend program in tools/tools/netrate/netsend
and instrumenting the source code
On 19.04.2012 22:34, K. Macy wrote:
This is indeed a big problem. I'm working (rough edges remain) on
changing the routing table locking to an rmlock (read-mostly) which
This only helps if your flows aren't hitting the same rtentry.
Otherwise you still convoy on the lock for the rtentry itse
On 19.04.2012 15:30, Luigi Rizzo wrote:
I have been running some performance tests on UDP sockets,
using the netsend program in tools/tools/netrate/netsend
and instrumenting the source code and the kernel do return in
various points of the path. Here are some results which
I hope you find interes
On 11.04.2012 13:00, Luigi Rizzo wrote:
On Wed, Apr 11, 2012 at 12:35:10PM +0200, Andre Oppermann wrote:
On 11.04.2012 01:32, Luigi Rizzo wrote:
On Tue, Apr 10, 2012 at 07:05:00PM -0400, Barney Wolff wrote:
CPU cache?
Cx states?
powerd?
powerd is disabled, and i am going down to C1 at most
On 11.04.2012 01:32, Luigi Rizzo wrote:
On Tue, Apr 10, 2012 at 07:05:00PM -0400, Barney Wolff wrote:
CPU cache?
Cx states?
powerd?
powerd is disabled, and i am going down to C1 at most
> sysctl -a | grep cx
hw.acpi.cpu.cx_lowest: C1
dev.cpu.0.cx_supported: C1/1 C2/80
On 02.04.2012 18:21, Alexandre Martins wrote:
Dear,
I am currently having trouble with a basic socket stress test.
The sockets are set up to use non-blocking I/O.
During this stress test the kernel runs into mbuf exhaustion; the goal is to
see the system limits.
If the program makes a write on a soc
On 08.12.2011 16:34, Luigi Rizzo wrote:
On Fri, Dec 09, 2011 at 12:11:50AM +1100, Lawrence Stewart wrote:
On 12/08/11 05:08, Luigi Rizzo wrote:
...
I ran a bunch of tests on the ixgbe (82599) using RELENG_8 (which
seems slightly faster than HEAD) using MTU=1500 and various
combinations of card
On 08.12.2011 14:11, Lawrence Stewart wrote:
On 12/08/11 05:08, Luigi Rizzo wrote:
On Wed, Dec 07, 2011 at 11:59:43AM +0100, Andre Oppermann wrote:
On 06.12.2011 22:06, Luigi Rizzo wrote:
...
Even in my experiments there is a lot of instability in the results.
I don't know exactly wher
On 06.12.2011 22:06, Luigi Rizzo wrote:
On Tue, Dec 06, 2011 at 07:40:21PM +0200, Daniel Kalchev wrote:
I see a significant difference between the number of interrupts on the Intel and the
AMD blades. When performing a test between the Intel and AMD blades, the Intel
blade generates 20,000-35,000 in
On a newly installed development machine I installed subversion-freebsd
from ports and ended up with a huge dependency chain. Eventually I got
the usual gnu-hell (auto-*, lib*), but also python27, tcl-8.5, perl-5.12
and m4. This is a bit too much. The last four should not be required to
check out
Just saw the link to a very interesting paper on SMP scalability.
A very good read and highly relevant for our efforts as well. In
certain areas we may already fare better, in others we still have
some work to do.
An Analysis of Linux Scalability to many Cores
ABSTRACT
This paper analyzes the
On 01.10.2010 06:49, Matthew Dillon wrote:
I don't remember the reference but I read a comprehensive comparison
between various indexing methods about a year ago and the splay tree
did considerably better than a RB-tree. The RB-tree actually did
fairly poorly.
It heavily de
On 30.09.2010 19:51, Ivan Voras wrote:
On 09/30/10 18:37, Andre Oppermann wrote:
Both the vmmap and page table make use of splay trees to manage the
entries and to speed up lookups compared to long-to-traverse linked
lists or more memory-expensive hash tables. Some structures, though,
do have
On 30.09.2010 23:44, Andre Oppermann wrote:
On 30.09.2010 20:04, Roman Divacky wrote:
On Thu, Sep 30, 2010 at 07:49:00PM +0200, Roman Divacky wrote:
On Thu, Sep 30, 2010 at 07:46:32PM +0200, Andre Oppermann wrote:
On 30.09.2010 19:24, Roman Divacky wrote:
are you aware of Summer of Code 2008
On 30.09.2010 20:04, Roman Divacky wrote:
On Thu, Sep 30, 2010 at 07:49:00PM +0200, Roman Divacky wrote:
On Thu, Sep 30, 2010 at 07:46:32PM +0200, Andre Oppermann wrote:
On 30.09.2010 19:24, Roman Divacky wrote:
are you aware of Summer of Code 2008 project by Mayur Shardul?
I remember that
On 30.09.2010 20:38, Alfred Perlstein wrote:
Andre,
Your observations on the effectiveness of the splay tree
mirror the concerns I have with it when I read about it.
I have always wondered though if the splay-tree algorithm
was modified to only perform rotations when a lookup required
more than
On 30.09.2010 20:01, Alan Cox wrote:
On Thu, Sep 30, 2010 at 12:37 PM, Andre Oppermann wrote:
On 30.09.2010 18:37, Andre Oppermann wrote:
Just for the kick of it I decided to take a closer look at the use of
splay trees (inherited from Mach if I read the history correctly) in
the FreeBSD VM
On 30.09.2010 19:24, Roman Divacky wrote:
are you aware of Summer of Code 2008 project by Mayur Shardul?
I remember that there was this project but I never saw any numbers
or other outcome of it. Haven't checked p4 to look at the code
though.
--
Andre
quoting: http://www.freebsd.org/project
On 30.09.2010 19:15, Matthew Fleming wrote:
On Thu, Sep 30, 2010 at 9:37 AM, Andre Oppermann wrote:
Just for the kick of it I decided to take a closer look at the use of
splay trees (inherited from Mach if I read the history correctly) in
the FreeBSD VM system suspecting an interesting journey
On 30.09.2010 18:37, Andre Oppermann wrote:
Just for the kick of it I decided to take a closer look at the use of
splay trees (inherited from Mach if I read the history correctly) in
the FreeBSD VM system suspecting an interesting journey.
Correcting myself regarding the history: The splay
Just for the kick of it I decided to take a closer look at the use of
splay trees (inherited from Mach if I read the history correctly) in
the FreeBSD VM system suspecting an interesting journey.
The VM system has two major structures:
1) the VM map which is per process and manages its address s
On 15.09.2010 17:19, Bjoern A. Zeeb wrote:
On Mon, 13 Sep 2010, Andre Oppermann wrote:
Hey,
When a TCP connection via loopback back to localhost is made the whole
send, segmentation and receive path (with larger packets though) is still
executed. This has some considerable overhead.
To short
On 14.09.2010 18:08, Fabien Thomas wrote:
On 14 sept. 2010, at 17:41, Andre Oppermann wrote:
On 14.09.2010 11:18, Fabien Thomas wrote:
Great,
This will maybe kill the long time debate about "my loopback is slow vs linux"
To have the best of both world what about a socket option
On 14.09.2010 12:35, Maxim Dounin wrote:
Hello!
On Tue, Sep 14, 2010 at 12:12:03PM +0200, Ian FREISLICH wrote:
Fabien Thomas wrote:
Great,
This will maybe kill the long time debate about "my loopback is slow vs
linux"
To have the best of both world what about a socket option to
enable/disabl
On 14.09.2010 12:12, Ian FREISLICH wrote:
Fabien Thomas wrote:
Great,
This will maybe kill the long time debate about "my loopback is slow vs
linux"
To have the best of both world what about a socket option to
enable/disable fusing:
can be useful when you need to see some connection "packetized".
A sysctl to that effect is already in the patch.
--
Andre
Fabien
On 13 sept. 2010, at 13:33, Andre Oppermann wrote:
When a TCP connection via loopback back to localhost is made the whole
send, segmentation and receive path (with larger packets though) is still
executed. This has some c
On 13.09.2010 14:45, Poul-Henning Kamp wrote:
In message <4c8e0c1e.2020...@networx.ch>, Andre Oppermann writes:
To short-circuit the send and receive sockets on localhost TCP connections
I've made a proof-of-concept patch that directly places the data in the
other side's socke
When a TCP connection via loopback back to localhost is made the whole
send, segmentation and receive path (with larger packets though) is still
executed. This has some considerable overhead.
To short-circuit the send and receive sockets on localhost TCP connections
I've made a proof-of-concept
On 02.09.2010 00:11, ben wilber wrote:
On Sep 1, 2010, at 8:57 AM, Andre Oppermann wrote:
On 01.09.2010 01:13, ben wilber wrote:
Hi,
I just upgraded from r210042 to r212073 and keep getting the panic
introduced in r211317:
panic: tcp_output: len <= tso_segsz
Please try the attached pa
On 01.09.2010 01:13, ben wilber wrote:
Hi,
I just upgraded from r210042 to r212073 and keep getting the panic
introduced in r211317:
panic: tcp_output: len <= tso_segsz
Please try the attached patch and report back whether it
fixes the issue.
I'm a bit in a hurry right now and can respond aga
> Greetings,
>
> I found two calls of bcopy() in tcp_hostcache.c that got the source and the
> destination pointers wrong.
> # Perhaps the author got confused with memcpy(), I guess.
Indeed. Originally these were memcpy calls and I've been asked to change
them to bcopy calls. Which I did. It didn'
This is the committed fix for the lockup problems a couple of you
have experienced. If you don't have this fix already please re-
cvsup and remake the kernel.
--
Andre
--- Begin Message ---
andre 2003/11/28 08:33:03 PST
FreeBSD src repository
Modified files:
sys/netinet
"M. Warner Losh" wrote:
>
> In message: <[EMAIL PROTECTED]>
> Andre Oppermann <[EMAIL PROTECTED]> writes:
> : Please try the attached patch which should fix it.
>
> I've been having crashes all the time since the hostcache went into
&g
Hello all,
I've found what probably is going wrong in tcp_hostcache. The problem is me
cutting corners (what goes around comes around...) in tcp_hc_insert when the
bucket limit is reached. I made the #if 0 too big and the bucket was not
removed from the tailqueue when we hit the bucket limit. A