Two questions -
One, what's in the logs from when this starts?
Two, I forget the *BSD tool, but can you run the appropriate strace / truss /
dtrace tool on the process during lockups (ideally before, through the start
of, and after one)?
George William Herbert
Sent from my iPhone
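A rough sketch of what attaching a tracer to the running process might look like (process name, PID discovery, and log path are all illustrative; `truss` is the Solaris/FreeBSD analogue, `ktrace` the classic *BSD one):

```shell
# Attach a syscall tracer to a running squid during a lockup.
# Process name and log path here are illustrative.
SQUID_PID=$(pgrep -o squid 2>/dev/null || true)

if [ -n "$SQUID_PID" ]; then
  # Linux: follow forks, timestamp each call, log to a file.
  strace -f -tt -o /tmp/squid-lockup.strace -p "$SQUID_PID"
else
  echo "no squid process found"
fi

# FreeBSD/Solaris analogue:  truss -f -p <pid>
# Classic *BSD:              ktrace -p <pid> && kdump -f ktrace.out
```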
On May 15,
On first impression from this data? Check DNS resolution from the Squid to
that hostname. It sounds like a timeout / retry / recursion fail in
progress...
George William Herbert
Sent from my iPhone
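One way to test resolution from the Squid box itself, to see whether lookups are slow or retrying (`localhost` is just a placeholder; substitute the origin hostname Squid is failing on):

```shell
# Placeholder hostname that resolves everywhere; substitute the
# origin hostname the Squid is failing on.
HOST=localhost

# Timed lookup through the system resolver (NSS), roughly as a
# resolver-using process would see it:
time getent hosts "$HOST"

# If dig is installed, query the resolver directly and watch the
# "Query time:" line for timeout/retry symptoms:
# dig +tries=1 +time=2 "$HOST"
```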
On Jan 29, 2013, at 11:54 PM, Sandrini Christian (xsnd) x...@zhaw.ch
wrote:
Hi
We
http://projects.puppetlabs.com/projects/1/wiki/SSL_in_The_Year2038
32-bit date overflow, same problem as the generic UNIX Y2038 bug.
Use 64 bit systems 8-)
George William Herbert
Sent from my iPhone
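For reference, the overflow is just the signed 32-bit time_t ceiling; a quick sketch of where it lands (GNU coreutils `date` assumed):

```shell
# The ceiling of a signed 32-bit time_t:
MAX32=$(( (1 << 31) - 1 ))
echo "$MAX32"    # 2147483647

# That epoch second falls in January 2038 (GNU date assumed):
date -u -d "@$MAX32" 2>/dev/null || true
```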
On Jan 4, 2013, at 1:10 AM, Woon Khai Swen woo...@ioigroup.com wrote:
Found out the
On Dec 29, 2012, at 12:41 PM, 叶雨飞 sunyuc...@gmail.com wrote:
So you are saying that even if squid is configured to use 16384 fd, it
couldn't, because the limit is 1024?
That's kind of confusing, I tried to use ulimit -n 16384 as root to
raise FD limit now, will report back.
The ulimit settings
I still find this behavior slightly bizarre, that the ulimit in the
build environment can affect the prod envt. And it keeps biting other
people...
-george
On Tue, Oct 16, 2012 at 2:42 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 17.10.2012 03:02, Ricardo Rios wrote:
On 2012-10-16 03:17,
On Tue, Oct 16, 2012 at 3:00 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 17.10.2012 10:48, George Herbert wrote:
I still find this behavior slightly bizarre, that the ulimit in the
build environment can affect the prod envt. And it keeps biting other
people...
It's not ulimit
On Mon, Oct 1, 2012 at 6:09 AM, Graham Butler g.but...@hud.ac.uk wrote:
We are currently looking at replacing our Solaris boxes with a flavour of
Linux to run squid with a focus on Red Hat and Ubuntu. I am trying to collect
some evidence as to which OS is being used to run squid and why, before
On Mon, Jul 16, 2012 at 6:25 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 17.07.2012 04:21, Waitman Gobble wrote:
On 7/16/2012 9:08 AM, William De Luca wrote:
Hey All,
I'm thinking about building a web Cache server and I was thinking
about getting one of those cheap'o Shuttle slim
On Tue, Jan 17, 2012 at 4:25 PM, jeffrey j donovan
dono...@beth.k12.pa.us wrote:
On Jan 17, 2012, at 1:02 PM, nachot wrote:
We currently have a commercial proxy solution in place but since we increased
our bandwidth to 150meg connection, the proxy is slowing things down
considerably as it's
On Mon, Apr 25, 2011 at 2:41 PM, Steve Snyder swsny...@snydernet.net wrote:
I just upgraded from CentOS 5.5 to CentOS 5.6, while running Squid v3.1.12.1
in both environments, and somehow created a race condition in the process.
Besides updating the 200+ software packages that are the
Squid is a web content cache engine, not a filesystem cache
technology. The filesystem cache / acceleration systems are a
completely different class of technology.
If the Alfresco system is doing database-like things on the back end,
filesystem caching in front of it is unlikely to be entirely
On Tue, Mar 22, 2011 at 7:27 PM, Marcus Kool
marcus.k...@urlfilterdb.com wrote:
Dejan,
Squid is known to be CPU bound under heavy load and the
Quad core running at 1.6 GHz is not the fastest.
A 3.2 GHz dual core will give you double speed.
Second this. CPU speed - perf wasn't quite linear
Important question - Landy, what version of squid, and what OS, are
you running on?
Was it a precompiled Squid or a custom compilation? If custom, what
were the build options?
I've seen stuff like this repeatedly in the long tail chase of
3.0-StableX versions 2ish years ago, when things went
On Wed, Sep 29, 2010 at 1:43 PM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
* Andrei funactivit...@gmail.com:
These are my Squid stats. I have about 23% of cache hits.
I have four squid machines, and the Request hit rate average is at:
29.3%, 27.2%, 27.4% and 26.7% (last 24h)
So
On Wed, Sep 29, 2010 at 1:54 PM, Jordon Bedwell jor...@envygeeks.com wrote:
On 09/29/2010 03:47 PM, George Herbert wrote:
On Wed, Sep 29, 2010 at 1:43 PM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
* Andrei funactivit...@gmail.com:
These are my Squid stats. I have about 23
On Sat, Aug 28, 2010 at 5:12 PM, Amos Jeffries squ...@treenet.co.nz wrote:
Leonardo Rodrigues wrote:
[...]
For a faster internal connection and slower Internet connection you can
look towards raising the Hit Ratio, probably the byte hits specifically.
That will drop the load on the Internet
On Fri, Jun 18, 2010 at 11:40 AM, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
On Friday, June 18, 2010 at 09:47:22, Ariel wrote:
hello list, as estasn, I need your advice to the next stage
an ISP network with 500 users
I have a pentium 4 Dual Core + 4 GB ram + Sata 2 160 GB
On Fri, Jun 18, 2010 at 12:06 PM, Jakob Curdes j...@info-systems.de wrote:
an ISP network with 500 users
I have a pentium 4 Dual Core + 4 GB ram + Sata 2 160 GB
Squid 3.1.xx + bridge + tproxy + Centos 5.4 64 Bits
How many hits are you expecting, in hits/min?
if under 200 hits/min then you are
You will have to set up the system to collect a core dump, you need
that to tell where in the code it seg faulted.
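Enabling core collection might look like this (paths, the core pattern, and the binary location are all guesses; a systemd-managed squid would use LimitCORE=infinity instead):

```shell
# Allow core dumps in the shell that launches squid:
ulimit -c unlimited 2>/dev/null || echo "hard core limit is capped here"
ulimit -c

# Optional, Linux, as root: pin where cores land (pattern illustrative):
# echo '/var/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

# After a crash, pull the faulting stack (binary path is a guess):
# gdb /usr/sbin/squid /var/tmp/core.squid.12345 -ex bt -ex quit
```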
On Tue, May 25, 2010 at 5:56 AM, sameer khan khanza...@hotmail.com wrote:
Hey
squid is just dying with fatal error:
FATAL: Received Segment Violation...dying.
2010/05/25
Do this:
ulimit -Hn
If the value is 32768 that's your current kernel/sys max value and
you're stuck.
If it's more than 32768 (and my RHEL 5.3 box says 65536) then you
should be able to increase up to that value. Unless there's an
internal signed 16-bit int involved in FD tracking inside the
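A sketch of checking both ceilings, and where the persistent raise normally goes (file paths and values are illustrative):

```shell
# Hard per-process FD ceiling for this shell:
ulimit -Hn

# System-wide FD ceiling (Linux):
cat /proc/sys/fs/file-max 2>/dev/null || true

# Raising the hard limit persistently is typically done per user in
# /etc/security/limits.conf (values illustrative):
#   squid  soft  nofile  65536
#   squid  hard  nofile  65536
```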
On Wed, Mar 17, 2010 at 5:09 AM, Muhammad Sharfuddin
m.sharfud...@nds.com.pk wrote:
On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:
you might want to check out this thread
http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html
I checked, but its not clear to me
do I need to
On Wed, Feb 24, 2010 at 4:22 PM, Mr. Issa(*) xnix...@gmail.com wrote:
Dear mates. Well, the real problem is that when we have 100GB of cache
on the squid BOX, we notice that every 1 Hour exactly the connectivity
on the WAN interface of squid drops for 10 seconds.. then it comes
back again...
On Wed, Feb 10, 2010 at 8:50 AM, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
On Tuesday, February 9, 2010 at 19:34:13, Amos Jeffries wrote:
On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
On Tuesday, February 9, 2010 at 17:29:23, Landy Landy
Secret compile time gotcha - your compile needs to have the max fd set
higher during the configure, make, and compile, or it doesn't actually
end up able to use the higher maxfd limit.
I do a script with roughly ulimit -HSn 32768; ./configure (long
options string included from a file)
(On CentOS
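The script described above might look roughly like this (the CONFIG_OPTS options file is hypothetical; the guard keeps this a sketch rather than a definitive build recipe):

```shell
#!/bin/sh
# Raise soft+hard FD limits *in the build shell* so Squid's configure
# probes see the higher ceiling; the compiled-in maxfd then matches.
ulimit -HSn 32768 2>/dev/null || echo "hard limit here is below 32768"

# CONFIG_OPTS is a hypothetical file holding the long options string.
if [ -x ./configure ]; then
  ./configure --with-maxfd=32768 $(cat CONFIG_OPTS 2>/dev/null)
  make
fi
```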
Several hundred requests per second, measured by a telco provider
squid gateway system in production usage.
I have measured 400+ in the lab for 2.7 and 600+ in the lab for
3.0STABLE3 and beyond (but latest is best); I haven't benchmarked 3.1.
I have seen sustained stable performance of prod
On Fri, Jan 8, 2010 at 3:58 PM, Landy Landy landysacco...@yahoo.com wrote:
Hello.
I just want to share with the list something I experienced earlier this week.
I have installed squid 3. stable 20 on Lenny. I had a power outage, when
power was restored I didn't have anything in
To build on Shawn's comments -
I've handled peak loads in forward caching in the several hundred
requests per second per Squid server, with 3.0-STABLE13 through 17 and
some older 2.6 servers, as part of a smartphone company web interface.
Servers were 4 GB dual Xeon quad core, running FreeBSD
On the other hand - used as outbound caching proxies, for typical ISP
users, 1024 may be too small. Former client of mine had it tuned to
--with-maxfd=8192
Also note - when compiling on RHEL 5.x (and some other systems) you
need to have ulimit -n *of the configure and build environment* set to
Multiple hard disks, and spreading out Squid's logs and cache dirs
onto separate disks, helps a lot.
The big prod squid environment I was running for a while used 4 disks
- 1 OS, 1 logs, 2 separate aufs cache disks.
If you can't do that with your hardware, even adding a second hard
drive, with
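The four-disk split described above maps onto a squid.conf roughly like this (mount points and cache sizes are illustrative):

```
# squid.conf fragment -- paths and sizes are illustrative.
# Logs on their own spindle:
access_log /logs/squid/access.log squid
cache_log  /logs/squid/cache.log

# Two separate aufs cache disks (size in MB, L1 dirs, L2 dirs):
cache_dir aufs /cache1/squid 20000 16 256
cache_dir aufs /cache2/squid 20000 16 256
```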
On Mon, Sep 28, 2009 at 5:24 PM, Chris Hostetter
hossman_sq...@fucit.org wrote:
: The DNS way would indeed be nice. It's not possible in current Squid
: however, if anyone is able to sponsor some work it might be doable.
If i can demonstrate enough advantages in getting peering to work i
On Wed, Sep 23, 2009 at 1:27 PM, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
On Monday, September 7, 2009 at 01:04:49, Amos Jeffries wrote:
Luis Daniel Lucio Quiroz wrote:
Hi all,
Well, I have a really big problem, We have deployed a Squid with digest
auth + LDAP, it was
On Thu, Aug 27, 2009 at 8:37 AM, Paul Khadrarmer...@hotmail.com wrote:
Hi,
I wish to buy hardware for squid that can support internet traffic of 200
Mbps. I have read a lot of documents on the forums but none has got the
best answer.
1- Shall I go to intel or opteron?
2- I can get
On Tue, Jul 28, 2009 at 10:49 AM, Chris Robertson crobert...@gci.net wrote:
Angela Williams wrote:
Hi!
On Tuesday 28 July 2009, qwertyjjj wrote:
How much RAM would be required to run Squid Proxy for a number of users?
I realise there is no exact answer but a rough guide?
For example, I
Cool. Is there going to be a STABLE17A or something, or do we have to
hand-patch for now?
Thanks!
On Tue, Jul 28, 2009 at 12:41 AM, Amos Jeffries squ...@treenet.co.nz wrote:
martin.pichlma...@continental-corporation.com wrote:
Thank you Amos,
your patch did the trick, it now works smoothly.
On Wed, Jul 1, 2009 at 6:13 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Wed, 1 Jul 2009 20:55:06 -0400, Fulko Hew fulko@gmail.com wrote:
I'm new to squid, and I thought I could use it as a proxy to detect
transactions
that don't succeed and return a page to the browser that would
On Wed, Jun 17, 2009 at 9:31 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Wed, 17 Jun 2009 20:19:49 -0600, Brett Glass
squid-us...@brettglass.com
wrote:
Everyone:
Just this past week, our Squid cache has become balky, with long
page loads from some sites and timeouts or partial page
Most of the suggestions so far have missed the mark.
Squid - like an Apache web server etc - is essentially stateless
(transactions in progress don't make permanent changes). You can run
any number of web servers or Squid servers in parallel with requests
being freely responded to by any of
The 400 code makes sense. The HTTP/0.0 in the log (vs 1.0) doesn't, to me...
-george
On Wed, Jun 10, 2009 at 6:30 PM, Chris Robertson crobert...@gci.net wrote:
Tech W. wrote:
Hello,
I telnet to localhost's 80 port (squid-3.0.15 is running on this port),
and send a command GET / HTTP/1.0
On Wed, May 27, 2009 at 11:22 AM, Juan C. Crespo R.
jcre...@ifxnw.com.ve wrote:
Guys
I have this issue when I try to make it (build)
main.cc:1091: warning: comparison between signed and unsigned integer
expressions
What build are you trying to compile, and on what operating system?
--
On Wed, May 27, 2009 at 4:08 AM, goody goody think...@yahoo.com wrote:
In addition to the previous email, I am also receiving the following messages in
cache.log.
comm_old_accept: FD 14: (53) Software caused
connection abort
httpAccept: FD 14: accept failure: (53) Software
caused connection
On Thu, May 21, 2009 at 2:57 AM, Gavin McCullagh gavin.mccull...@gcd.ie wrote:
On Thu, 21 May 2009, Travel Factory S.r.l. wrote:
it would be nice to know your configuration (cpu/ram/disk/heap/etc etc etc)
In particular, if you could give data for this page...
I'll see if I can get clearance to post a graph or two, but a major
cellphone company which chooses not to identify itself would like to
thank the developers for the 3.0-STABLE14 release.
After a year of builds which did unfortunate things to themselves
every hour or so, 3.0-STABLE14 passed