Amos Jeffries wrote:
It should be safe enough to check that your system CA set is up to date.
There were changes as recently as a week ago.
---
My "system CA" -- when I searched for Linux CA updating, it
said that on Linux there were many possible CA locations, but going
with the top choice
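One way to narrow down the "many possible CA locations" is simply to check which of the common bundle paths exist and when they last changed. A minimal sketch -- the paths below are typical distro defaults (Debian/Ubuntu's update-ca-certificates and RHEL/Fedora's update-ca-trust targets), not guaranteed for any given system:

```shell
# Report which of the usual CA bundle paths exist on this host, and their
# modification times (a hint at when the CA set was last updated).
# Paths are common defaults, not guaranteed -- adjust for your distro.
found=0
for f in /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt; do
    if [ -f "$f" ]; then
        ls -l "$f"
        found=$((found + 1))
    fi
done
echo "bundles found: $found"
```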
Yuri Voinov wrote:
I hope so. It is difficult to make long-term plans if the software
has to die soon. :)
---
...And if the SW doesn't die "soon", but only a little later? I.e. with
Google's AI designing new encryption algorithms today (nothing
said about quality), how long before they can
Amos Jeffries wrote:
There is no such option. Never has been.
## ./configure --help | grep ssl
--enable-ssl-crtd ...
--with-openssl=PATH Compile with the OpenSSL libraries. ...
Oops... Conflated the two... back to configuring...
tnx,
-l
Linda W wrote:
ltrans -- I disabled translation -- should ltrans be getting made?
If so, where can I find xstrerr?
---
Looks like a Windows-only thing, so I assumed
my build dir was corrupt. It is no longer corrupt. ;-/
ltrans -- I disabled translation -- should ltrans be getting made?
If so, where can I find xstrerr?
Thanks!
(must be buried in *somefile*!)
libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
-Wshadow -Werror -pipe -D_REENTRANT -m64 -DOPENSSL_LOAD_CONF -O2 -m64
Alex Rousskov wrote:
On 02/18/2013 04:01 PM, Linda W wrote:
Has anyone looked at their average cached object size
lately?
At one point, I assume due to measurements, squid
set a default to 13KB / item.
About 6 or so years ago, I checked mine out:
(cd /var/cache/squid;
cachedirs
It wasn't clear until I tried it, but it seems that the
new SMP-related macros that yield names like
squid1, squid2 or cache1, or give the process numbers
1, 2, ...,
don't work in squid.conf.
I didn't get a chance to try it, as the syntax
for specifying conditionals is completely unspecified, and
Has anyone looked at their average cached object size
lately?
At one point, I assume due to measurements, squid
set a default to 13KB / item.
About 6 or so years ago, I checked mine out:
(cd /var/cache/squid;
cachedirs=( $(printf %02X {0..63}) )
echo $[$(du -sk|cut -f1)/$(find ${cachedirs[@]}
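The truncated snippet above divides total cache kilobytes by the file count to get an average object size. A minimal reconstruction of that calculation, assuming the 64 top-level hex directories (00..3F) that a cache_dir line with L1=64 creates:

```shell
# Average cached object size in KB: total cache KB / number of cache files.
# Assumes top-level dirs named 00..3F; pass the cache root as the argument.
avg_object_kb() {
    ( cd "$1" || return 1
      subdirs=( $(printf '%02X ' {0..63}) )
      total_kb=$(du -sk . | cut -f1)
      count=$(find "${subdirs[@]}" -type f 2>/dev/null | wc -l)
      [ "$count" -gt 0 ] && echo $(( total_kb / count )) )
}
# e.g.: avg_object_kb /var/cache/squid
```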
Retitling... as subject may have been ignored...
Linda W wrote:
Amos Jeffries wrote:
The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-3.2.1 release!
---
BTW ---
I finally managed to get a good (I think) stack traceback.
(N.B. - earlier versions were
Amos Jeffries wrote:
The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-3.2.1 release!
Looks good so far... compiled with my previous config out of the 'box',
and is running now at least as well as the previous version! ;-)
Amos Jeffries wrote:
The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-3.2.1 release!
---
BTW ---
I finally managed to get a good (I think) stack traceback.
(N.B. - earlier versions were dumping core, but never got the
debug symbols in correctly for a
Amos Jeffries wrote:
We are still limited to one page,
---
1 page or 1 segment/item?
Looking at the output of ipcs, it shows a max seg
size of 32768 (32K), but the units are in kbytes, not bytes,
so the real limit looks more like 32MB.
Are you sure that limit was 32K and not 32K kbytes? (i.e.
Linda W wrote:
Amos Jeffries wrote:
We are still limited to one page,
---
1 page or 1 segment/item?
I don't know who 'we' is... but on x86_64 linux, I was able to use the
perl SysV::IPC calls shmget/shmwrite/shmread/shmctl to allocate up to
my system's run-time limit (which I can up
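On Linux, the SysV shared memory ceiling can be read straight from /proc, which settles the bytes-vs-kbytes question directly; a quick check (note that on modern kernels SHMMAX defaults to an effectively unlimited value):

```shell
# SHMMAX under /proc is in bytes; `ipcs -l` reports "max seg size" in kbytes,
# so the two numbers differ by a factor of 1024 for the same limit.
shmmax_bytes=$(cat /proc/sys/kernel/shmmax)
echo "SHMMAX = $shmmax_bytes bytes"
ipcs -l 2>/dev/null | grep -i 'max seg size' || true
```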
Amos Jeffries wrote:
Hmm. Might be worth checking if those large pages are the same type of
pages SMP uses. We are still limited to one page, but if the page size
itself were changeable the two Alexes might be able to point out what
knob to change.
'twill be attempted as soon as it is
Amos Jeffries wrote:
32KB == 1 page allocation for the
particular SMP shared memory systems used.
---
Particular SMP shared memory systems?
Don't you just use something like the POSIX shm calls?
I see squid's cache in /dev/shm:
Ishtar:/dev/shm lh
-rw-rw-rw- 1 70K Jul 27
Better yet... um...
Won't the multiple processes use the same cache content if it is on disk?
Maybe I should ditch the shared memory option and just put an 8GB cache dir
in my /dev/shm dir... I mean, it is a file system.
Perms are the same as on /tmp... owner writeable
...If I can just get it
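For what it's worth, a cache_dir on tmpfs is just an ordinary disk cache_dir pointed at /dev/shm. A hypothetical squid.conf fragment -- the path is illustrative, and 8192 MB matches the 8GB mentioned above:

```
# Hypothetical squid.conf fragment: an 8 GB UFS cache dir on tmpfs.
# 8192 = cache size in MB; 16 and 256 = L1/L2 subdirectory counts (defaults).
cache_dir ufs /dev/shm/squid 8192 16 256
```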
I was looking at logs/code or something, and it seemed that if I wanted
to use the multiple-processes feature on a multi-core system, they need
to use shared memory.
So I set up 8GB for them to share -- verified by looking in /dev/shm and
seeing an 8GB file created by squid each time it
In analyzing the data in my cache, a few years ago admittedly (it would be
larger now), I came up with 38KB as my average object size, and thus
set that in my squid.conf; I also have maximum_object_size = 1024MB (1G).
Why am I seeing messages that my **MAX** object size = 32K?? -- which is not
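As a sketch of where such a ceiling may come from: the disk-cache and memory-cache limits are separate directives in squid.conf, and in SMP mode the shared memory cache obeys the in-memory one, not the disk one. The values below are illustrative, not from the original config:

```
# Illustrative squid.conf fragment -- two separate object-size limits:
maximum_object_size 1 GB              # on-disk cache ceiling
maximum_object_size_in_memory 512 KB  # memory (incl. shared) cache ceiling
```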
Ton Muller wrote:
Accessing webmail is not possible when I use name lookup; I must use the IP
address for it.
So, my question:
where did I make a mistake? I used a basic squid config, and added only
some ports for access.
Lots of possibilities --
1) I usually have clients set up to go direct to
Linda W wrote:
I forgot to include the version -- but I betcha that those who know why this
occurs know that it's special to the 3.2 series, related to sharing memory, but
I could be wrong...
In analyzing the data in my cache, a few years ago admittedly (would be
larger now), I came up with 38KB being
Amos Jeffries wrote:
On 28.03.2012 08:37, babajaga wrote:
I've read (for Linux at least) aufs is superior to diskd,
I have my doubts. First of all, it might depend upon the version of Squid
you want to use.
I use 2.7, and, looking at the source code for aufs, only reads are async.
Having some
Amos Jeffries wrote:
That 403 is Squid or something upstream blocking the requests. So the
speed of calls is likely due to badly programmed retries.
Not squid -- I kept wondering why it would keep hammering month
after month on an addr that supposedly doesn't work -- unless it
really
If I missed this, please let me know, but I was wondering why
HTTP 1.1 wasn't on the list on the roadmap? I don't know all
the details, but compression and RANGES are two that could
speed up web usage for the average user.
Ranges, it seems to me, could be kept in a binary-sized
linked-list of
Amos Jeffries wrote:
Something weird going on with office or the activation server and their
use of 1.1 then. HTTP/1.1 is explicitly designed to not break when going
through a non-1.1 middleware proxy.
---
When are they NOT doing something weird?
Amos Jeffries wrote:
NP: Squid cannot make network fetches go faster than the line speed. All
it can do is cache things locally for a repeat request to be faster. For
the best caching controls it's a toss up between 2.7 and 3.1 as of this
writing.
Doesn't HTTP 1.1 specify that content can
Does squid support HTTP 1.1 yet? I have an older version and was
wondering if there was a version that I could upgrade to that
supported 1.1. Seems like 1.1 can provide noticeable speed
improvements over 1.0, and that would be a bonus w/my connection
speed...
If it's not there yet, are there
Just recently, I tried to access forums.scifi.com (no editorializing, please, we
all slum
sometimes...) but got a Content Encoding Error.
The page you are trying to view cannot be shown because it uses an invalid or
unsupported form of compression.
Anyone else run into this -- is it just
I see a lot of these messages in my squid warning log...
Specifically, filtering off the date and sort+uniq+counting, I see:
var/log# grep 'Median response' warn | cut -c36-90 | sort | uniq -c
107 WARNING: Median response time is 57448 milliseconds
1 WARNING: Median response time is
Henrik Nordstrom wrote:
On fre, 2008-10-24 at 11:52 -0700, Linda W wrote:
I see a lot of these messages in my squid warning log...
(count=107) WARNING: Median response time is 57448 milliseconds
This can happen naturally if you at some time have only very few users
and those mostly
Is that configurable in the config script, or is it hard-coded?
Amos Jeffries wrote:
Linda W wrote:
With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up doing a busy-wait so often?
It's the most
With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up doing a busy-wait so often?
It's the most active process -- even when it is supposedly doing nothing?
I'm running it on suse10.3,
Is squid being renamed?
Or what is the relation of cacheboy to the squid project?
Or was it the 2.7 branch that was renamed to cacheboy?
(Is there one for girls? ...Not sure I want a boy caching
my web content -- but that's another matter... :-))
I'm also a bit perturbed that 3.0, which has been
GARDAIS Ionel wrote:
Beside the fact that the hit rate is low, response times are way too long
for users (cache-miss median service times are around 200ms and
cache-hits are around 3ms)
Have you tried access without a squid-proxy -- but maybe
using 'socks' (if you need to bridge
Leonardo Rodrigues Magalhães wrote:
probably the problem reported is chunked-encoding related. Please check:
http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/
Blog entry http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/; posted
on April 29, 2008 at 2:24pm says:
[
Visolve Squid wrote:
Linda W wrote:
2 TCP_SWAPFAIL_MISS/200
*TCP_SWAPFAIL_MISS* - The object was believed to be in the cache, but
could not be accessed.
---
Does this mean the cache index is out of sync with the cache
contents, or is there another interpretation?
I
I think I was unclear in my request for meanings...:-)
On Sat, Jul 15, 2006 at 11:50:37PM -0700, Linda W wrote:
I was trying to track down a problem and got distracted on squid status
codes. I was curious on how to interpret these:
~10,000 TCP_MISSes of various sorts
~ 1,500
I was trying to track down a problem and got distracted on squid status
codes. I was curious on how to interpret these. I extracted the status
codes from each line, sorted, counted and got:
1 TCP_CLIENT_REFRESH_MISS/000
955 TCP_CLIENT_REFRESH_MISS/200
511 TCP_CLIENT_REFRESH_MISS/304
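The extract-sort-count step described above can be reproduced with a short pipeline. The field position assumes Squid's default native access.log format, where the result-code/status pair (e.g. TCP_MISS/200) is the fourth field; the sample lines below are fabricated for illustration:

```shell
# Tally Squid result codes from an access.log in the native format,
# where field 4 is the result-code/HTTP-status pair.
cat > access.log <<'EOF'
1161158400.000 120 10.0.0.1 TCP_MISS/200 4512 GET http://example.com/ - DIRECT/- text/html
1161158401.000 3 10.0.0.1 TCP_HIT/200 4512 GET http://example.com/ - NONE/- text/html
1161158402.000 95 10.0.0.2 TCP_MISS/200 812 GET http://example.com/a - DIRECT/- text/html
EOF
awk '{ print $4 }' access.log | sort | uniq -c | sort -rn
```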
Henrik Nordstrom wrote:
fre 2006-03-17 klockan 19:31 -0800 skrev Linda W:
Mystery solved.
My shell _expanded_ control sequences by default in echo. (echo \1 - becomes
echo ^A).
Apparently there are literals in the configure script like \\1 \\2 that
were trying to echo a literal '\1
Henrik Nordstrom wrote:
lör 2006-03-18 klockan 14:15 -0800 skrev Linda W:
Bash added the feature to allow dropping of the leading
0, accepting strings: \0nnn, \nnn, and \xHH. I'm guessing that
most bash users run in a shell that has expansion turned off by default or
this would have
Henrik Nordstrom wrote:
The devel site is different even if you also can find a CVS repository
there.. The devel site is targeting developers, not users. The contents
of the two CVS repositories is also slightly different as some
autogenerated junk not needed by developers but quite needed by
Henrik Nordstrom wrote:
Only from the main CVS (cvs.squid-cache.org)
---
That's what I used:
export CVSROOT=':pserver:[EMAIL PROTECTED]:/squid'
cvs co [update] squid3
The error messages...
this file is generated when you run configure. Or more exactly, when
configure
Henrik Nordstrom wrote:
In it, I see a bunch of defines starting with CPPUNIT_ that are broken.
They have embedded control characters (^A and ^B) as though they were
placeholders for values some script should have filled in:
Odd..
Maybe my configure options are sufficiently weird so as
Henrik Nordstrom wrote:
Copy-pasted your configure line on my Fedora Core 5 test 3 x86_64 box,
and it compiles just fine..
---
Figures.
1. Grab the current snapshot release.
---
Ahead of you. Already done (squid-3.0-PRE3-200603017.tar.bz2).
2. Start with just ./configure, without
Mystery solved.
My shell _expanded_ control sequences by default in echo. (echo \1 - becomes
echo ^A).
Apparently there are literals in the configure script like \\1 \\2 that
were trying to echo a literal '\1' into a sed script. Instead it was
echoed in as a control-A.
Am I misremembering,
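The corruption is easy to reproduce. In the poster's shell the expansion was evidently on by default; stock bash needs `echo -e` (or `shopt -s xpg_echo`) to show it, and the leading-zero form `\01` is the one bash documents. A small demonstration, assuming bash:

```shell
# Default bash echo passes the backslash sequence through untouched;
# echo -e expands '\01' to the control character ^A (octal 001) -- which is
# exactly how a literal '\1' meant for a sed script becomes a control-A.
plain=$(echo '\01')
expanded=$(echo -e '\01')
printf 'plain:    %s\n' "$plain"                    # prints: plain:    \01
printf 'expanded: '; printf '%s' "$expanded" | od -An -b
```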
In src/Makefile.in, which I think is used to generate src/Makefile,
there are lines (~line 3065):
globals.cc: globals.h mk-globals-c.pl
$(AWK) -f $(srcdir)/mk-globals-c.awk $(srcdir)/globals.h $@
string_arrays.c: enums.h mk-string-arrays.pl
$(AWK) -f
Chris Robertson wrote:
Does this mean that your original problem (compiling Squid 3) has been
solved?
---
Solved -- not exactly, but eliminated, it was! :-)
*SNIP* Henrik did a better job of answering the remaining bits than I could
ever hope to.
---
Yeah...his post was
Chris Robertson wrote:
I have not tried compiling Squid 3, so I'm going to be of little help,
but I thought I would point out that Squid 3 is not production-ready...
---
Yes, but it's been fairly stable in day-to-day use for over a year.
I forget the exact reasons, but once
Armin Marxer wrote:
Have you tried checking up on the pools in the cachemgr.cgi?
It can indicate to you if the clients are falling into the pools
or not.
---
Sometimes I see my life as a sitcom -- w/me making silly mistakes
and some laugh track going in the background
I grabbed the June-14 tarball and config'ed it (I hope) for the local machine.
I ran into a compile error building basic/auth -- is this a samba source
synchronization error? Here are the messages up to the error point:
make[3]: Entering directory `/home/proj/squid-3.0-PRE3-20050614/src/auth'
I followed your directions. It worked.
My bad -- I hadn't deleted the old swap.state files, so it recreated the
missing directories (I'm guessing).
Anyway regarding the other questions...
Henrik Nordstrom wrote:
2) Shouldn't object distribution, in the cache, be roughly equal in all
buckets? I can see
Henrik Nordstrom wrote:
There are three conditions for items to be deleted from the cache:
1. Replaced by a newer object for the same URL
---
No brainer.
2. Removed by the removal policy to make room for new objects when the
cache is full.
---
Again, no brainer.
3. Expired
Thanks for this! I kept getting error reports from my firewall software about
Windows Update continually trying to go direct from behind my Squid proxy.
It's this type of hidden information that exemplifies my
quintessential dislike of working with MS OS's. It is on my
machine and dated Aug
machine and dated Aug
I don't know about the specific websites you are trying to access, but
I know that some streaming media types are available on alternate outgoing
ports. If you only open up '80', for example, I wouldn't think https
(on port
443) would work very well. You say you only open port 80? That means
I'm wondering if I'm not getting the best usage out of my squid cache.
I'm also wondering if there might not be a bug in squid regarding the
-z switch.
I noted some time ago that when I had 256 top level dirs in my squid
cache, and 256 dirs in each of those, only the first few entries were
being
I've never set up a squid proxy in transparent mode. Am I correct in assuming
I need to also have ip_chains in my kernel to route the traffic from my
internal net to the outside world or would simple entries to the routing
table work?
I only have 1-2 addresses that I want to transparently
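For the record, on a Linux 2.4+ kernel the interception is done with a NAT redirect rather than routing-table entries or the older ipchains. A hypothetical sketch, assuming Squid listens on port 3128; the client address 192.168.1.10 is illustrative, not from the original post:

```shell
# Hypothetical iptables rules (root required) -- intercept port-80 traffic
# from a single internal client and redirect it to a local Squid on 3128.
iptables -t nat -A PREROUTING -s 192.168.1.10 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
echo 1 > /proc/sys/net/ipv4/ip_forward   # forwarding is still needed for the rest
```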
Henrik Nordstrom wrote:
On Sat, 25 Oct 2003, Linda W. wrote:
Was about to move my squid directory off onto its own partition and was
wondering what filesystem to use, in the sense of is there a linux (x86)
filesystem that performs best for squid? Any special params for block
size? It's
Was about to move my squid directory off onto its own partition and was
wondering what filesystem to use, in the sense of: is there a linux (x86)
filesystem that performs best for squid? Any special params for block size?
It's just a single SCSI disk.
I googled and didn't find any benchmarks
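Whichever filesystem wins, one mount option helps all of them: noatime, so a cache hit doesn't cost an inode write for the access-time update. A hypothetical /etc/fstab line -- the device and filesystem choice are illustrative only:

```
# Hypothetical fstab entry for a dedicated Squid cache partition.
# noatime avoids updating the access time on every cached-object read.
/dev/sdb1  /var/cache/squid  reiserfs  noatime  0 2
```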