(1) Feature request: Can a knob be added to turn down the verbosity of
Varnish logging? Right now on a quad-core Xeon we can service about
14k conn/s, which is good, but I wonder whether we could eke out even
more performance by quelling information that we don't need to log.
(2) HTTP/1.1
What does 'sysctl fs.file-max' say? It should be at least as large as the ulimit.
--Michael
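The two limits being compared can be checked programmatically; a minimal sketch, assuming a Linux host (it reads /proc, and falls back gracefully elsewhere):

```python
import resource

def fd_limits():
    """Return (fs.file-max, soft ulimit, hard ulimit).

    fs.file-max is the kernel-wide ceiling on open file handles; the
    RLIMIT_NOFILE soft limit is what varnishd actually runs into when
    it dies with "Too many open files".
    """
    try:
        with open("/proc/sys/fs/file-max") as f:
            file_max = int(f.read().split()[0])
    except OSError:  # no Linux-style /proc available
        file_max = None
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return file_max, soft, hard
```

If the soft limit reported here is far below what varnishd needs for its thread and connection count, raising it (as root, before dropping privileges) is the fix the thread converges on.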
On Wed, Feb 20, 2008 at 4:04 PM, Andrew Knapp [EMAIL PROTECTED] wrote:
Hello,
I'm getting this error when running varnishd:
Child said (2, 15369): Assert error in wrk_thread(), cache_pool.c line
217:
[EMAIL PROTECTED] On Behalf Of Michael S. Fischer
Sent: Thursday, February 28, 2008 1:57 PM
To: Andrew Knapp
Cc: varnish-misc@projects.linpro.no
Subject: Re: Child dying with Too many open files
Is varnishd being started as root? (even if it drops privileges
later) Only root can
On Tue, Mar 4, 2008 at 1:53 AM, Henning Stener [EMAIL PROTECTED] wrote:
Are you sending one request per connection and closing it, or are you
serving a number of requests to 10K different connections? In the latter
case, how many requests/sec are you seeing?
In our test, we sent about 200
On Fri, Mar 14, 2008 at 1:37 PM, Sascha Ottolski [EMAIL PROTECTED] wrote:
The challenge is to serve 20+ million image files, I guess with up to
1500 req/sec at peak.
A modern disk drive can service 100 random IOPS (@ 10ms/seek, that's
reasonable). Without any caching, you'd need 15 disks to
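The sizing arithmetic behind that estimate can be sketched directly (seek time and peak rate are the figures quoted above):

```python
import math

def disks_needed(peak_rps, seek_ms=10.0):
    # A drive doing one random seek per I/O services 1000/seek_ms IOPS,
    # i.e. about 100 IOPS at 10 ms per seek.
    iops_per_disk = 1000.0 / seek_ms
    return math.ceil(peak_rps / iops_per_disk)
```

disks_needed(1500) reproduces the 15-spindle figure above; any cache hit ratio reduces the requirement proportionally.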
On Sun, Mar 16, 2008 at 10:02 AM, Michael S. Fischer
[EMAIL PROTECTED] wrote:
I don't know why I'm having such a problem with this. Sigh! I think
I got it right this time.
If I were designing such a service, my choices would be:
Corrections:
(1) 4 machines, each with 4-disk RAID 0
On Mon, Mar 17, 2008 at 12:42 AM, Dag-Erling Smørgrav [EMAIL PROTECTED] wrote:
Michael S. Fischer [EMAIL PROTECTED] writes:
Dag-Erling Smørgrav [EMAIL PROTECTED] writes:
I think the default timeout on backend connections may be a little
short, though.
I assume
On Fri, Mar 21, 2008 at 3:36 AM, Ricardo Newbery [EMAIL PROTECTED] wrote:
and I'm wondering if the first part of this is unnecessary. For
example, what happens if I have this...
if (req.http.Cookie ~ "(__ac=|_ZopeId=)") {
pass;
}
but no Cookie header is present in
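In VCL, a regex match against an unset header is simply false, so with no Cookie header the `pass` is skipped and the request proceeds to a normal cache lookup. A Python sketch of the same decision (the helper name is made up for illustration):

```python
import re

COOKIE_RE = re.compile(r"(__ac=|_ZopeId=)")

def should_pass(cookie_header):
    # Mirrors the VCL test: an absent (None) Cookie header never
    # matches, so the request falls through to cache lookup.
    return cookie_header is not None and bool(COOKIE_RE.search(cookie_header))
```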
The Transfer-Encoding: header is missing from the Varnish response as well.
--Michael
On Thu, Mar 27, 2008 at 7:55 AM, Florian Engelhardt [EMAIL PROTECTED]
wrote:
Hello,
I've got a problem with the X-JSON HTTP header not being delivered by
varnish in pipe and pass mode.
My application
On Fri, Mar 28, 2008 at 4:58 AM, Florian Engelhardt [EMAIL PROTECTED]
wrote:
You could store the sessions on a separate server, for instance on a
memcache or in a database
Good idea. (Though if you use memcached, you'd probably want to
periodically copy the backing store to a file to survive
On Mon, Mar 31, 2008 at 10:34 PM, Stig Sandbeck Mathisen [EMAIL PROTECTED]
wrote:
On Mon, 31 Mar 2008 20:10:06 +0200, Sascha Ottolski [EMAIL PROTECTED]
said:
is there anything like a snapshot release that is worth giving a
try, especially if my configuration will hopefully stay simple
On Mon, Mar 31, 2008 at 11:08 AM, Sascha Ottolski [EMAIL PROTECTED] wrote:
probably not exactly the same, but maybe someone finds it useful: I
just started to dive a bit into HAProxy (http://haproxy.1wt.eu/): the
development version has the ability to calculate the load balancing
based on
On Thu, Apr 3, 2008 at 10:26 AM, Sascha Ottolski [EMAIL PROTECTED] wrote:
All this with 1.1.2. It's vital to my setup to cache as many objects as
possible, for a long time, and that they really stay in the cache. Is
there anything I could do to prevent the cache being emptied? Maybe
I've
On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski [EMAIL PROTECTED] wrote:
and I don't want upstream caches or browsers to cache that long, only
varnish, so setting headers doesn't seem to fit.
Why not? Just curious. If it's truly cacheable content, it seems as
though it would make sense
On Thu, Apr 3, 2008 at 7:37 PM, Ricardo Newbery [EMAIL PROTECTED] wrote:
URL versioning is usually not appropriate for html
pages or other primary resources that are intended to be reached directly by
the end user and whose URLs must not change.
Back to square one. Are these latter
On Fri, Apr 4, 2008 at 3:20 AM, Sascha Ottolski [EMAIL PROTECTED] wrote:
you are right, _if_ the working set is small. In my case, we're talking
20+ million small images (5-50 KB each), 400+ GB in total size, and it's
growing every day. access is very random, but there still is a good
amount
On Fri, Apr 4, 2008 at 11:05 AM, Ricardo Newbery [EMAIL PROTECTED] wrote:
Again, static content isn't only the stuff that is served from
filesystems in the classic static web server scenario. There are plenty of
dynamic applications that process content from database -- applying skins
and
On Fri, Apr 4, 2008 at 3:31 PM, Ricardo Newbery [EMAIL PROTECTED] wrote:
Again, static content isn't only the stuff that is served from
filesystems in the classic static web server scenario. There are plenty
of
dynamic applications that process content from database -- applying
skins
On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery [EMAIL PROTECTED] wrote:
+1 on stale-while-revalidate. I found this one to be real handy.
Another +1
--Michael
On Tue, Apr 8, 2008 at 4:25 PM, Michael S. Fischer [EMAIL PROTECTED] wrote:
On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery [EMAIL PROTECTED] wrote:
+1 on stale-while-revalidate. I found this one to be real handy.
Another +1
I should add a qualifier to my vote, that stale-while
On Tue, Apr 8, 2008 at 4:34 PM, Ricardo Newbery [EMAIL PROTECTED] wrote:
I should add a qualifier to my vote, that stale-while-revalidate
generally is used to mask suboptimal backend performance and so I
discourage it in favor of fixing the backend.
Of course the main premise of a
On Tue, Apr 15, 2008 at 12:25 AM, Ricardo Newbery
[EMAIL PROTECTED] wrote:
Assuming that nobody is an available user on your system, then is
the -u user option for varnishd superfluous?
Who's to say that nobody is an unprivileged user?
/etc/passwd:
nobody:*:0:0:alias for root:...
On Tue, Apr 15, 2008 at 1:16 AM, Poul-Henning Kamp [EMAIL PROTECTED] wrote:
Well-engineered software doesn't make potentially false assumptions
about the environment in which it runs.
And they don't.
Varnish for instance assumes that the administrator is not a total
madman, who would
On Tue, Apr 15, 2008 at 11:53 PM, Poul-Henning Kamp [EMAIL PROTECTED] wrote:
In message [EMAIL PROTECTED], Michael S. Fischer writes:
Varnish for instance assumes that the administrator is not a total
madman, who would do something as patently stupid as you propose
above, under
Why are you using Varnish to serve primarily images? Modern webservers
serve static files very efficiently off the filesystem.
Best regards,
--Michael
On Sun, Jun 1, 2008 at 8:58 AM, Barry Abrahamson [EMAIL PROTECTED]
wrote:
Hi,
Is anyone running multiple varnish instances per server (one
On Mon, Jun 2, 2008 at 7:57 AM, Chris Shenton [EMAIL PROTECTED]
wrote:
We have to fill out pounds of paperwork in order to take any outage on
a public server, no matter how small. Is there a way to restart
Varnish without any downtime -- to continue accepting but holding
connections until
Raising the number of threads will not significantly improve Varnish
concurrency in most cases. I did a test a few months ago using 4 CPUs on
RHEL 4.6 with very high request concurrency and a very low
request-per-connection ratio (i.e., 1:1, no keepalives) and found that the
magic number is about
On Wed, Jun 18, 2008 at 4:51 AM, Rafael Umann [EMAIL PROTECTED]
wrote:
If it is a 32-bit system, the problem is probably that your stack size
is 10 MB, so 238 * 10 MB = ~2 GB.
I decreased my stack size to 512 KB. Using 1 GB storage files I can now
open almost 1900 threads, using all the 2 GB that
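The trade-off described here is plain address-space arithmetic; a sketch (all sizes in MB, figures taken from the message above):

```python
def max_threads(addr_space_mb, stack_mb, reserved_mb=0):
    # How many thread stacks fit in a 32-bit process's usable address
    # space, after subtracting storage-file mappings and other
    # reservations. Purely illustrative; real layouts fragment.
    return int((addr_space_mb - reserved_mb) // stack_mb)
```

With a 2 GB (2048 MB) space, 512 KB stacks, and a 1 GB storage file mapped, this yields 2048 threads, in the same ballpark as the ~1900 observed.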
On Thu, Jun 19, 2008 at 5:37 AM, Rafael Umann [EMAIL PROTECTED]
wrote:
What is your request:connection ratio?
Unfortunately I don't have servers doing 2 hits/second right now, and
that's why I don't have stats for you.
Actually, it's right there in your varnishstat output:
36189916
This sounds an awful lot like a non-PAE kernel -- i.e., 32 bits and a really
old OS.
--Michael
On Fri, Jun 20, 2008 at 2:42 AM, kuku li [EMAIL PROTECTED] wrote:
Hello,
we have been running varnish for a while but noticed that varnish will just
restart itself as the virtual memory goes to
Nearly every modern webserver has optimized file transfers using
sendfile(2). You're not going to get any better performance by shifting the
burden of this task to your caching proxies.
--Michael
On Tue, Aug 12, 2008 at 12:53 AM, Sascha Ottolski [EMAIL PROTECTED] wrote:
Hi all,
I'm certain
I assume this is for logging daemon metadata/error conditions and not
actual traffic?
If this is for request/response logging, consider implementing a
bridge daemon that reads from the SHM like varnishlog or varnishncsa
does, and which then sends the output via liblogging. This will
provide the
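A skeleton of such a bridge, assuming it tails varnishncsa output and relays each line to syslog (the command invocation and priorities here are illustrative, not a documented varnish interface):

```python
import subprocess
import syslog

def forward(line, send=syslog.syslog):
    # Relay one access-log line; `send` is injectable so the
    # formatting logic can be tested without a running syslogd.
    line = line.rstrip("\n")
    if line:
        send(line)
        return True
    return False

def run_bridge(cmd=("varnishncsa",)):
    # Spawn the SHM log reader and relay everything it emits, so
    # varnishd itself never blocks on a slow logging destination.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        forward(line)
```

The key design point from the message stands either way: the bridge reads from the shared-memory log asynchronously, so logging cost is decoupled from request servicing.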
Smells like an architecture mismatch. Any chance you're running a
32-bit Varnish build?
--Michael
On Thu, Nov 20, 2008 at 1:34 AM, Paras Fadte [EMAIL PROTECTED] wrote:
Hi,
I have installed varnish 2.0.2 on openSUSE 10.3 (X86-64) , but it
doesn't seem to start and I get VCL compilation
could be the issue?
On Thu, Nov 20, 2008 at 4:12 PM, Michael S. Fischer
[EMAIL PROTECTED] wrote:
Smells like an architecture mismatch. Any chance you're running a
32-bit Varnish build?
--Michael
On Thu, Nov 20, 2008 at 1:34 AM, Paras Fadte [EMAIL PROTECTED] wrote:
Hi,
I have installed
How many CPUs (including all cores) are in your systems?
--Michael
On Nov 20, 2008, at 12:06 PM, Michael wrote:
Hi,
PF What does overflowed work requests in varnishstat signify ? If
this
PF number is large is it a bad sign ?
I have similar problem. overflowed work requests and dropped
On Jan 6, 2009, at 7:42 AM, Marcus Smith wrote:
The build system will automatically detect the availability of
epoll()
and build the corresponding cache_acceptor. It will also automatically
detect the availability of sendfile(), though its use is discouraged
(and disabled by default) due to
What about CARP-like cache routing (i.e., where multiple cache servers
themselves are hash buckets)? This would go a LONG way towards
scalability.
--Michael
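The routing idea can be sketched with rendezvous (highest-random-weight) hashing, which is the scheme CARP is built on; member names below are placeholders:

```python
import hashlib

def pick_cache(url, caches):
    # Score every cache member against the URL and route to the top
    # scorer. Each URL maps deterministically to exactly one member,
    # so N caches behave like one large hash-bucketed cache, and
    # removing a member only remaps the URLs that lived on it.
    def score(member):
        digest = hashlib.md5((member + url).encode()).hexdigest()
        return int(digest, 16)
    return max(caches, key=score)
```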
On Jan 8, 2009, at 2:29 AM, Tollef Fog Heen wrote:
Hi,
a short while before Christmas, I wrote up a small document pointing
to
On Jan 9, 2009, at 1:59 AM, Tollef Fog Heen wrote:
| What about CARP-like cache routing (i.e., where multiple cache
servers
| themselves are hash buckets)? This would go a LONG way towards
| scalability.
http://varnish.projects.linpro.no/wiki/PostTwoShoppingList second item
sounds like
On Feb 3, 2009, at 6:25 AM, Tollef Fog Heen wrote:
If it has expired, the client just won't send it, so just check
req.http.cookie for the relevant cookie and you'll be fine.
I strongly advise against this, as it could subject you to replay
attacks.
That said, the client does not include an
On Feb 12, 2009, at 3:34 AM, Poul-Henning Kamp wrote:
Well, if people in general think our defaults should be that way, we
can change them, our defaults are whatever the consensus can agree on.
I'm with the OP. Regardless of the finer details of the RFC, if I'm a
web developer and I set the
Not that I have an answer, but I'd be curious to see the differences
in 'pmap -x pid' output for the different children.
--Michael
On Apr 7, 2009, at 6:27 PM, Darryl Dixon - Winterhouse Consulting wrote:
Hi All,
I have an odd problem that I have only noticed happening since
moving from
On Apr 29, 2009, at 9:30 AM, Nick Loman wrote:
Michael S. Fischer wrote:
On Apr 29, 2009, at 9:22 AM, Poul-Henning Kamp wrote:
In message 49f87de4.3040...@loman.net, Nick Loman writes:
Has Varnish got a solution to this problem which does not involve
time-wait recycling? One thing I've
I think the lesson of these cases is pretty clear: make sure your
cacheable working set fits into the proxy server's available memory --
or, if you want to exceed your available memory, make sure your hit
ratio is sufficiently high that the cache server rarely resorts to
paging in the data.
Ok, so your average latency is 16ms. At a concurrency of 10, at most,
you can obtain 625r/s.
(1 request/connection / 0.016s = 62.5 request/s/connection * 10
connections = 625 request/s)
Try increasing your benchmark concurrency.
--Michael
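The calculation above, generalized (this assumes a closed-loop benchmark, where each connection issues its next request only after the previous response arrives):

```python
def max_rps(latency_s, concurrency, requests_per_conn=1):
    # Each connection completes requests_per_conn / latency requests
    # per second; total throughput scales with benchmark concurrency.
    return requests_per_conn / latency_s * concurrency
```

max_rps(0.016, 10) reproduces the ~625 r/s ceiling above, which is why raising concurrency, not tuning the server, is the first thing to try.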
On Jun 1, 2009, at 11:10 PM, Andreas Jung wrote:
I think you mean 1 week :)
--Michael
On Jun 15, 2009, at 11:02 AM, Jauder Ho wrote:
Well, Velocity is in 2 weeks in San Jose if anyone wants to meet.
It's short notice but probably an appropriate conference.
http://en.oreilly.com/velocity2009
--Jauder
On Mon, Jun 15, 2009 at 3:07 AM,
What's the purpose of these requirements? Just curious.
--Michael
On Jul 25, 2009, at 9:10 PM, Ryan Chan wrote:
Hello,
I have several web sites running on Apache/PHP, I want to install a
Transparent Reverse Proxy (e.g. squid, varnish) to cache the static
stuff. (By looking at expire
On Jul 28, 2009, at 2:35 PM, Rob S wrote:
Thanks Darryl. However, I don't think this solution will work in our
usage. We're running a blog. Administrators get un-cached access,
straight through varnish. Then, when they publish, we issue a purge
across the entire site. We need to do this
On Sep 20, 2009, at 6:20 AM, Nils Goroll wrote:
tcp_tw_recycle is incompatible with NAT on the server side
... because it will enforce the verification of TCP time stamps.
Unless all
clients behind a NAT (actually PAT/masquerading) device use
identical timestamps
(within a certain
amd64 refers to the architecture (AKA x86_64), not the particular CPU
vendor. (As a matter of fact, I was unaware of this limitation;
AFAIK it does not exist in FreeBSD.)
In any event, mmap()ing 340GB even on a 64GB box is a recipe for
disaster; you will probably suffer death by paging if
Are you returning a Vary: Accept-Encoding in your origin server's
response headers?
--Michael
On Nov 17, 2009, at 4:01 PM, Daniel Rodriguez wrote:
Hi guys,
I'm having a problem with a varnish implementation that we are testing
to replace an ugly appliance. We were almost ready to place
Varnish does keep a log if you ask it to.
On Jan 10, 2010, at 10:37 PM, pub crawler pubcrawler@gmail.com
wrote:
Alright, up and running with Varnish successfully. Quite happy with
Varnish. Our app servers no longer are failing / overwhelmed.
Here's our new problem...
We have a lot
On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
we are considering changing the defaults on how the cache-control header
is handled in Varnish. Currently, we only look at s-maxage and maxage
to decide if and how long an object should be cached. (We also look at
expires, but that's not
On Jan 18, 2010, at 12:58 PM, pub crawler wrote:
This is an inquiry for the Varnish community.
Wondering how many folks are using Varnish purely for binary storage
and caching (graphic files, archives, audio files, video files, etc.)?
Interested specifically in large Varnish installations
On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
In message a8edc1fb-e3e2-4be7-887a-92b0d1da9...@dynamine.net, Michael S. Fischer writes:
What VM can overcome page-thrashing incurred by constantly referencing a
working set that is significantly larger than RAM?
No VM can overcome
On Jan 18, 2010, at 2:16 PM, pub crawler wrote:
Most kernels cache recently-accessed files in RAM, and so common web servers
such as Apache can ?already serve up static objects very quickly if they
are located in the buffer cache. (Varnish's apparent speed is largely
based on the same
On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
I have a hard time believing that any difference in the total response time
of a cached static object between Varnish and a general-purpose webserver
will be statistically significant, especially considering typical Internet
network
On Jan 18, 2010, at 3:37 PM, pub crawler wrote:
Differences in latency of serving static content can vary widely based on
the web server in use, easily tens of milliseconds or more. There are
dozens of web servers out there, some written in interpreted languages, many
custom-written for a
On Jan 18, 2010, at 3:54 PM, Ken Brownfield wrote:
Adding unnecessary software overhead will add latency to requests to the
filesystem, and obviously should be avoided. However, a cache in front of a
general web server will 1) cause an object miss to have additional latency
(though small)
On Jan 18, 2010, at 4:15 PM, Ken Brownfield wrote:
Ironically and IMHO, one of the barriers to Varnish scalability is its thread
model, though this problem strikes in the thousands of connections.
Agreed. In an early thread on varnish-misc in February 2008 I concluded that
reducing
On Jan 18, 2010, at 4:35 PM, Poul-Henning Kamp wrote:
In message 97f066dd-4044-46a7-b3e1-34ce928e8...@slide.com, Ken Brownfield writes:
Ironically and IMHO, one of the barriers to Varnish scalability
is its thread model, though this problem strikes in the thousands
of connections.
On Jan 19, 2010, at 12:46 AM, Poul-Henning Kamp wrote:
In message b5ef6a23-b6bb-49a6-8eab-1043fc7bf...@dynamine.net, Michael S. Fischer writes:
Does Varnish already try to utilize CPU caches efficiently by employing
some sort of LIFO thread reuse policy or by pinning thread pools to
On Jan 24, 2010, at 7:23 AM, Angelo Höngens wrote:
What is thread_pool_max set to? Have you tried lowering it? We have
found that on systems with very high cache-hit ratios, 16 threads per
CPU is the sweet spot to avoid context-switch saturation.
[ang...@nmt-nlb-03 ~]$ varnishadm -T
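The 16-threads-per-CPU rule of thumb above, as a sketch (this is purely the heuristic from this thread, not an official varnishd default):

```python
import os

def suggested_thread_pool_max(threads_per_cpu=16):
    # Heuristic from the discussion above: on high cache-hit-ratio
    # setups, ~16 worker threads per CPU avoids context-switch
    # saturation. Tune from here with measurements, not blindly.
    ncpu = os.cpu_count() or 1
    return ncpu * threads_per_cpu
```

On a quad-CPU box like the ones discussed earlier in the thread, this would suggest a thread_pool_max around 64.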
On Jan 24, 2010, at 10:40 AM, Angelo Höngens wrote:
According to top, the CPU usage for the varnishd process is 0.0% at 400
req/sec. The load over the past 15 minutes is 0.45, probably mostly
because of haproxy running on the same machine. So I don't think load is
a problem.. My problem is