On Wed, 20 Jun 2001, Terry Lambert wrote:
Back to swapping socket structures...
You could swap them if you wanted to give up some KVA
space to be able to do it.
Which is a problem, especially for Linux. The problem
here is that there are x86 machines around with 64GB of
RAM. Linux has just
On Wed, 20 Jun 2001, Terry Lambert wrote:
assistance (John Dyson's work on the unified VM and
buffer cache predated all such non-academic work in
all commercial UNIX implementations by almost two years,
and included cache coloring, which was a brand-new
concept at the time). FreeBSD has
On Tue, 19 Jun 2001, Matt Dillon wrote:
to handle more than 250 requests/sec. With the connection load you
want to handle, the chance of the data being cacheable in RAM is
fairly low, so a disk-based caching proxy will drop connection
performance by two orders of magnitude.
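A back-of-the-envelope sketch of where a disk-bound figure like 250
requests/sec comes from, assuming roughly 8 ms per random disk I/O and
one disk I/O per cache-miss request (both illustrative assumptions, not
numbers taken from the mail):

#include <stdio.h>

/*
 * Disk-bound request-rate estimate.  The 8 ms average random-I/O
 * service time and one-I/O-per-request model are assumptions.
 */
int
main(void)
{
        double ms_per_io = 8.0;                  /* assumed seek + rotation */
        double per_spindle = 1000.0 / ms_per_io; /* ~125 cache-miss req/s */
        double target = 250.0;                   /* rate quoted above */

        printf("one spindle: ~%.0f req/s; ~%.1f spindles for %.0f req/s\n",
            per_spindle, target / per_spindle, target);
        return (0);
}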
On Wed, 20 Jun 2001, Matt Dillon wrote:
This is fairly easy to do. You can use SO_SNDBUF and SO_RCVBUF
socket options to adjust the TCP buffer space. You can make the default
small and receive-centric, and when you think you've got a good
connection you can pump it up.
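A minimal sketch of the buffer-tuning approach Matt describes, using
setsockopt(2) with SO_RCVBUF/SO_SNDBUF; the specific sizes (a small,
receive-centric default, bumped up once the connection looks healthy)
are illustrative assumptions:

#include <sys/types.h>
#include <sys/socket.h>

/* Start every connection small and receive-centric. */
static int
set_small_buffers(int s)
{
        int rcv = 4 * 1024;     /* assumed default receive buffer */
        int snd = 2 * 1024;     /* assumed default send buffer */

        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof(rcv)) == -1)
                return (-1);
        return (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &snd, sizeof(snd)));
}

/* Once the connection proves itself, pump the buffers up. */
static int
pump_up_buffers(int s)
{
        int sz = 32 * 1024;     /* assumed "good connection" size */

        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) == -1)
                return (-1);
        return (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz)));
}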
Ashutosh S. Rajekar wrote:
For the diskless case I don't know if you can make
it to a million simultaneous connections, but Terry
has gotten his boxes to do a hundred thousand, so we
know that at least is doable. But rather than spend a
Hmmm. I wonder how much TCP/IP
I guess we beat you to the punch...
We have a product which is now shipping, and which currently
supports 1,000,000 concurrent connections.
I guess quite a lot of people are at it right now; the prime one is
NetScaler. If I'm not wrong, they brag about a million connections or so,
on a
On Wed, 20 Jun 2001, Terry Lambert wrote:
Their 3200 only has 1G of RAM; you could _barely_ fit the
TCP state for 1,000,000 connections into just 1G of RAM,
and have a tiny amount left over for buffers, drivers,
the rest of your kernel, etc. I can't believe that their
3100 (only 512M of
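The arithmetic behind that claim, as a small sketch; the per-connection
structure sizes are rough assumptions for illustration, not measured
FreeBSD values:

#include <stdio.h>

int
main(void)
{
        long ram = 1L << 30;            /* 1 GB on the 3200 */
        long conns = 1000000L;
        long per_conn = ram / conns;    /* ~1073 bytes per connection */
        long state = 200 + 400 + 250;   /* assumed inpcb + tcpcb + socket */

        printf("budget per connection: %ld bytes\n", per_conn);
        printf("left for mbufs, drivers, kernel: %ld bytes\n",
            per_conn - state);
        return (0);
}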
On Wed, Jun 20, 2001 at 12:04:22AM -0700, Matt Dillon wrote:
A web proxy could be
round-robined fairly easily, but for a mail relay it is often a good
idea to split the incoming and outgoing mail into two separate round
robins (two separate groups of machines).
Why's that?
On Wed, 20 Jun 2001, Matt Dillon wrote:
I don't think this represents the biggest problem you would face,
though. It is far more likely that hung or slow connections
(e.g. the originator goes away without disconnecting the socket
or the originator is on a slow link) will
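The usual defense against such hung or slow clients is a per-connection
idle timer. A minimal user-level sketch; the structure, the fixed-size
table and the 60-second cutoff are illustrative assumptions, not
anything from the thread:

#include <time.h>
#include <unistd.h>

#define MAXCONN   1024
#define IDLE_SECS 60

struct conn {
        int    fd;              /* -1 when the slot is free */
        time_t last_active;     /* updated on every read/write */
};

static struct conn conns[MAXCONN];

/* Call periodically (e.g. once a second) from the event loop. */
static void
reap_idle(time_t now)
{
        int i;

        for (i = 0; i < MAXCONN; i++) {
                if (conns[i].fd != -1 &&
                    now - conns[i].last_active > IDLE_SECS) {
                        close(conns[i].fd);
                        conns[i].fd = -1;
                }
        }
}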
On Mon, 18 Jun 2001, Matt Dillon wrote:
Don't worry about the MMU. Tests have shown that while 4MB pages are
nice, the performance boost is relatively minor. The kernel maps itself
using 4MB pages, but normal 4K PTEs are used for kernel allocations.
What you are doing
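For scale, the page-table arithmetic behind the 4MB-page discussion; the
1 GB of kernel VA is an assumed figure for illustration:

#include <stdio.h>

int
main(void)
{
        unsigned long kva = 1UL << 30;          /* assume 1 GB of kernel VA */
        unsigned long ptes = kva / (4UL << 10); /* 4 KB mappings */
        unsigned long pdes = kva / (4UL << 20); /* 4 MB mappings */

        printf("4 KB pages: %lu PTEs\n", ptes); /* 262144 */
        printf("4 MB pages: %lu PDEs\n", pdes); /* 256 */
        return (0);
}

Far fewer entries are needed with 4MB mappings, but since most TLB misses
on a busy server tend to be on user data rather than kernel text, that may
be one reason the measured win stays as modest as Matt describes.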
:Well, we are building a web accelerator box called WebEnhance, that would
:support around a million TCP/IP connections (brag .. brag..). It would
:selectively function as a Layer 2/4/7 switch. And it's going to run a
:kernel proxy, and probably nothing significant in user mode. It might be
Hi,
I'm trying to give the kernel (4.0-RELEASE) 2GB of memory to work with. I
can afford to have 4GB of physical memory on one of my servers, and hence
the experiments.
Is it safe to play around with KERNBASE and get away without breaking
code? Is there any other advisable method if this one
On Mon, 18 Jun 2001, Matt Dillon wrote:
DG changed KERNBASE a while back to reserve a gigabyte of VM for the
kernel. This should be sufficient on a 4G machine, but it depends on where
your resources are going. If your server's resources are user-process
centric then you don't
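The underlying trade-off is purely a split of the 4GB i386 virtual
address space between user and kernel. A minimal sketch of the
arithmetic; the 0x80000000 value is the hypothetical 2GB/2GB split
being asked about (the conventional default puts KERNBASE at 0xC0000000
for a 3GB/1GB split):

#include <stdio.h>

int
main(void)
{
        unsigned long long va_top = 1ULL << 32;      /* 4 GB of VA */
        unsigned long long kernbase = 0x80000000ULL; /* assumed 2G/2G split */

        printf("kernel VA: %llu MB, user VA: %llu MB\n",
            (va_top - kernbase) >> 20, kernbase >> 20);
        return (0);
}

Lowering KERNBASE buys kernel virtual space at the direct expense of
every user process's address space; it does not change how much physical
RAM the machine has.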
:An associated question: along with this, changing the kernel to use only
:PDEs should be better for TLB performance. Mapping 4MB at a time would
:definitely be much better than 4KB. I'm talking about having the entire kernel
:(at least the code) find mappings in the TLB, and keeping 4MB mappings