Hi Greg,
I use this combo to check whether a request is cached or not, and why:
debug_options 20,9 27,9 31,9 70,9 82,9 22,9 84,9 90,9
Then, in the cache.log file, you can search for 'YES' or 'NO' (uppercase) to
see whether the content is cacheable (and cached) or not, and the reason for
this decision. Hope this helps.
--
From: Pavel Kazlenka
Date: Wed, Feb 4, 2015 14:10
To: Rajkumar Prasad; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] help with regard to http/https filtering
Hi Rajkumar,
You need the SSLBump feature (http://wiki.squid-cache.org/Features/SslBump)
configured in order to use a url_regex acl against https traffic.
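For reference, a minimal SslBump setup for squid 3.3/3.4 might look like the sketch below; the certificate paths, helper path, and cache sizes are placeholders, not values from this thread:

```
# CA certificate used to sign the dynamically generated host certificates
http_port 3128 ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
# bump everything; narrow this with an acl in production
ssl_bump server-first all
# helper that generates the mimicked certificates
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
```

With something like this in place, url_regex acls are matched against the full decrypted https URL rather than just the CONNECT host.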
Best wishes,
Pavel
On 02/04/2015 09:38 AM, Rajkumar Prasad wrote:
Hi Everyone,
Have been working on very basic squid configurations and need
:27 PM, Yuri Voinov wrote:
Just read squid.conf.documented, is it? ;)
On 29.01.2015 16:26, Pavel Kazlenka wrote:
Answering my own question:
Adding the clientca= and cafile= options to https_port is enough to
trigger the client certificate request.
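A hedged example of such an https_port line (all file paths are placeholders):

```
# clientca= names the CA whose client certificates are requested/accepted;
# cafile= points at the CA certificate used to verify them
https_port 443 cert=/etc/squid/server.pem key=/etc/squid/server.key clientca=/etc/squid/clientCA.pem cafile=/etc/squid/clientCA.pem
```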
Hi babajaga,
You can add 'debug_options 20,9 27,9 31,9 70,9 82,9 22,9 84,9 90,9' to
your squid config to debug caching issues.
Search through the log for a string that contains 'NO' (in uppercase). This
string should explain why squid decided not to cache the http response.
Best wishes,
Pavel
No.
On 05/18/2014 12:29 PM, anly.zhang wrote:
Hi!
I found that things returned to normal after I cancelled the transparent proxy
settings, e.g. by changing 'http_port 3128 transparent' to 'http_port 3128'.
So it doesn't seem applicable to transparent proxying with squid 3.1.10.
Is applicable transparent and
Hi Elizer,
I'm pretty far from understanding selinux, but I have two suggestions
for you:
1) The sealert tool can be used to get human-readable output, e.g.
sealert -a /var/log/audit/audit.log > /path/to/mylogfile.txt
2) If you just want to start squid again and do not care about
Hi,
Feel free to use 24 workers. There should not be any deficiency in squid
performance.
For better performance, use the cpu_affinity_map configuration directive to
bind each squid worker to a dedicated cpu core explicitly.
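A sketch of what that could look like for a smaller worker count (the process and core numbers are illustrative):

```
# four workers, each pinned to its own core
workers 4
cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
```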
Best wishes,
Pavel
On 02/12/2014 05:29 PM, Dr.x wrote:
hi all,
I've tried
Hi,
I guess you are missing some information that is important for troubleshooting.
Can you access web sites from location 1 using proxy 1? Can you access web
sites from proxy1 directly (e.g. using curl)? Right now, I'd suspect that the
point of failure is between proxy1 and the internet.
Best wishes,
Pavel
On
Here's something similar to your problem:
http://www.squid-cache.org/mail-archive/squid-users/201005/0134.html
Do you have gcc-c++ and kernel-devel packages installed?
On 12/27/2013 06:27 PM, csn233 wrote:
I'm getting this netfilter error when compiling 3.3.11 on Centos 6.5:
checking if
A TCP (telnet) timeout means that you have a networking issue.
Check firewalls and routing, as well as whether squid is started and listening
on its port (# netstat -ntpl on the squid node).
On 12/01/2013 12:24 PM, janwen wrote:
just trying to use squid; I have been trying to set up squid for 2 days.
I use squidclient
to ubuntu.San.
Escape character is '^]'.
So squid starts ok. No firewall settings.
Then you have no IP connectivity between the client and the squid server.
Check using ping.
P.S. Please don't CC me; use the 'reply to list' action (if available in
your client).
On 2013-12-1 17:29, Pavel Kazlenka wrote:
TCP
and squid server, just start tcpdump on the squid proxy
(e.g. # tcpdump -i any 'port 3128'). Then try telnet from the client again.
If there is no connectivity, you will not see anything in the capture on the server.
On 2013-12-1 17:37, Pavel Kazlenka wrote:
On 12/01/2013 12:32 PM, janwen wrote:
thanks for your reply:
I tried all your suggestions
). However, looking at your client's
address, I'm starting to suspect incorrect NAT settings (check that the
conntrack module is loaded on your NAT box, if one exists).
On 2013-12-1 17:48, Pavel Kazlenka wrote:
On 12/01/2013 12:41 PM, janwen wrote:
thanks, telnet to the ssh port is ok: telnet ip 22
and how
On 11/30/2013 03:33 PM, Monah Baki wrote:
Hi Amos,
Thanks for the explanation. I switched to intercept, yet once I restart
squid, I am still seeing 'No forward-proxy ports configured'.
The same machine will later also be running iptables, since it has 2
NICs in it.
You need both one
Hi,
Just want to put my two pennies in. 'Slow' internet navigation through
squid is often observed in the case of incorrect DNS server settings on the
squid box. Common issues are:
- ipv6 DNS queries are performed first;
- the first DNS server in /etc/resolv.conf is not responsive.
Both of these cases
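Both symptoms can also be worked around in squid.conf itself; a hedged sketch (the nameserver addresses are placeholders):

```
# prefer ipv4 lookups over ipv6 (available in recent squid 3.x)
dns_v4_first on
# query these servers directly instead of relying on /etc/resolv.conf order
dns_nameservers 8.8.8.8 8.8.4.4
```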
Hi Ahmad,
Please see replies to your question inlined:
On 10/22/2013 11:29 AM, Ahmad wrote:
hi all,
actually I'm asking about squid 3 instances that are running at the same time
under ubuntu or debian OS.
My question is:
can we get any benefit from doing this?
I don't think so. Is there any
Hi Marko,
Squid's kerberos helper has a debug mode. Just add the '-d' switch to the
'auth_param negotiate program /usr/sbin/squid_kerb_auth' line in the
squid.conf file.
Also, here is some useful information and tips:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos#Troubleshooting_Tools
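The resulting line could look like this (the service principal is a placeholder, not from this thread):

```
# -d enables helper debug output to cache.log;
# -s names the proxy's kerberos service principal
auth_param negotiate program /usr/sbin/squid_kerb_auth -d -s HTTP/proxy.example.com
```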
Ok.
Is it possible for you to dump traffic into a file like this:
# tcpdump -i any 'port <your squid port> or port 53 or host
66.151.79.155' -w /tmp/squid.pcap
And post /tmp/squid.pcap to some public hosting?
Also, please note that your dump contains plain-text passwords. This
Could you also check the availability of the primary DNS server on the proxy
node? I suspect that it is not available, so squid makes a dns query to the
primary server, waits for the timeout (5 seconds by default, IIRC) and then
queries the secondary DNS server (which answers, and you get
your page with
Hi John,
As Amos mentioned, it would be great to see the http payload of the packets
(use the -Afnn switches for this). Also, traffic on the proxy itself is more
interesting.
IIRC, a 502 status code means that your proxy has some issue reading
data from the origin server. Do you see anything suspicious in
Hi susu,
The most common reason in this case is that the video file is too big to be
placed in the cache when the default squid disk cache settings are used.
Please show your squid.conf, especially the cache_dir line.
Also, you can add 'debug_options 20,9 27,9 31,9 70,9 22,9 90,9' to your
config file. I
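For comparison, settings that allow large video objects into the disk cache might look like this sketch (the sizes and path are illustrative; note that maximum_object_size should precede the cache_dir it applies to):

```
# raise the per-object cap (the default is only a few MB)
maximum_object_size 512 MB
# 10 GB ufs cache in /var/spool/squid
cache_dir ufs /var/spool/squid 10000 16 256
```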
Ok, these two strings matter here, I guess:
CheckQuickAbort2: YES bad content length
storeCheckCachable: NO: release requested
if (curlen > expectlen) {
    debugs(90, 3, "CheckQuickAbort2: YES bad content length");
If I understand correctly, squid downloaded more from the origin server
than
You have to understand that you are the only one who can troubleshoot
this; I can only point you at the next steps. At the moment I have no answer
to your question, but there is a chance we'll see something interesting
when comparing the captures.
On 09/17/2013 02:51 PM, susu wrote:
Hi Pavel,
I have
On 09/11/2013 07:19 AM, Mohsen Dehghani wrote:
Thanks everybody
[The problem resolved]
After adding following lines to /etc/security/limits.conf
root soft nofile 6
root hard nofile 6
but I am eager to know the rationale behind it, because squid runs as user
proxy, not root.
On
This could be an ugly troubleshooting practice, but you can try to modify
your init script (or upstart job; I'm not sure how exactly squid is being
started in ubuntu). The idea is to add 'ulimit -n >
/tmp/squid.descriptors' and see if the number is really 65k.
On 09/14/2013 09:41 AM, Mohsen Dehghani
On 09/14/2013 03:44 PM, Eliezer Croitoru wrote:
SORRY typo:
http://www.linuxtopia.org/online_books/linux_administrators_security_guide/16_Linux_Limiting_and_Monitoring_Users.html#PAM
the above can clarify more about the ulimit stuff.
The basic solution is to define the soft limit in the init
Hi gentlemen,
I'm trying to cache youtube videos following
http://wiki.squid-cache.org/Features/StoreID guide.
But it seems like squid refuses to cache the content, because the origin
server returns the header 'Cache-Control: private' and 'refresh_pattern ...
ignore-private' doesn't take effect. Here is
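For context, the kind of refresh_pattern being tested might look like the following; the pattern and times are illustrative, not the ones from this thread:

```
# try to cache video responses despite Cache-Control: private
refresh_pattern -i \.(mp4|flv)$ 10080 90% 43200 ignore-private override-expires
```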
Thank you Amos,
On 09/12/2013 02:53 PM, Amos Jeffries wrote:
On 12/09/2013 10:51 p.m., Pavel Kazlenka wrote:
Hi gentlemen,
I'm trying to cache youtube videos following
http://wiki.squid-cache.org/Features/StoreID guide.
But seems like squid rejects to cache the content because original
I don't see any logic here. Are you sure your squid is not started by root?
Does replacing 'root' with 'squid' or '*' solve the issue as well?
On 09/11/2013 07:19 AM, Mohsen Dehghani wrote:
Hi Mohsen,
Please note that there is also a system limit on the number of open file
descriptors (set by default to 1024 per user on most linux systems).
You can see/change the current limits using the ulimit tool or in the
/etc/security/limits.d/ files.
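A hedged example of raising the limit for the proxy user via a limits.d fragment (the user name, file name, and value are assumptions):

```
# /etc/security/limits.d/squid.conf
proxy soft nofile 65535
proxy hard nofile 65535
```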
Best wishes,
Pavel
On 09/10/2013
Hi Naira,
No, squid doesn't have native support for threads. See details at
http://wiki.squid-cache.org/Features/SmpScale#Why_processes.3F_Aren.27t_threads_better.3F
Best wishes,
Pavel
On 09/10/2013 07:53 PM, Naira Kaieski wrote:
Hi,
Sorry if this issue has already been addressed, but
Hi Luis,
The eCap project is not dead; however, it is not being developed as fast as
it could be. Please try ecap's mailing list
(http://www.e-cap.org/mailman/listinfo/users/) or the launchpad answers
server.
No need to ask here, as squid's mailing list
Hi Carlos,
Please note that clients' requests also consume file descriptors. Use
netstat to find the exact number.
If you use ubuntu you could be interested in this thread too:
http://www.squid-cache.org/mail-archive/squid-users/201212/0276.html
Best wishes,
Pavel
On 08/20/2013 09:57 PM,
Hi Matthew,
If squid doesn't stop any http requests/responses, then it may be that
some part of the traffic from the client goes (or tries to go) directly to the
server while the other part goes through squid. This could be caused by,
e.g., incorrect NAT settings or routing. You could install some tool like
Hi,
You can use the dstdomain acl type. See details at
http://www.squid-cache.org/Doc/config/acl/
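A minimal sketch of such an acl (the acl name and domain are illustrative):

```
# match facebook.com and all of its subdomains
acl blocked_sites dstdomain .facebook.com
http_access deny blocked_sites
```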
Best wishes,
Pavel
On 08/21/2013 03:21 AM, junio wrote:
I'm able to block facebook in the company I work for, but I can not redirect
port 443 successfully.
Hi Gaurav,
You probably have some problem with https traffic.
Does your squid work in forwarding or in interception mode? What is your
squid config? Do any other https sites work correctly (facebook, google)?
Best wishes,
Pavel
On 08/20/2013 10:57 AM, Gaurav Saxena wrote:
Hi,
I am not able
Gaurav
-Original Message-
From: Pavel Kazlenka [mailto:pavel.kazle...@measurement-factory.com]
Sent: 20 August 2013 13:26
To: Gaurav Saxena
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] office 365 not accessible via squid proxy
Hi Gaurav,
You probably have some problem
Hi,
A segmentation violation is definitely a bug. Would you mind reporting it
to squid's bug tracker (bugs.squid-cache.org)?
TIA,
Pavel
On 08/18/2013 12:56 AM, Golden Shadow wrote:
Hi,
I think I got the answer to my question and thought it would be nice if I
posted what I've concluded to
Hi Dmitry,
This is a known problem with configuration file parsing in 3.4.0.1. Just
wait for the stable version.
Details in this thread:
http://www.squid-cache.org/mail-archive/squid-users/201308/0016.html
Best wishes,
Pavel
On 08/06/2013 03:00 PM, Dmitry Melekhov wrote:
Hello!
Just tried to
Hi Gustavo,
sounds like your internet channel is not reliable enough. Anyway, if you
think the problem is in squid itself, you should consider analysing
access.log and cache.log with debug output.
Best wishes,
Pavel
On 08/01/2013 10:16 PM, Gustavo Esquivel wrote:
Hi,
actually i'm using squid
Hi Ahmad,
On 07/31/2013 03:36 PM, Ahmad wrote:
hi,
I have a question.
I have a server with 48 G of ram;
in the squid.conf file I've set the memory for squid to be only 1 G,
but my question is: why does my total memory get full after some time?
The result below is from my server: