[squid-users] problem with squid and google search engine

2014-06-09 Thread Дмитрий Шиленко

There is a very strange problem. I have a FreeBSD 9.1 gateway configured
with ipf/ipnat, and I decided to set up Squid. I installed SQUID 3.3 from
ports. As soon as I run it, google.com immediately blocks my network, and
trying to access the search engine says that my requests are being sent
automatically. As soon as I turn Squid off, everything is fine. Can you
suggest what the problem could be?

This is my config file:

http_port 127.0.0.1:3128
http_port 127.0.0.1:3129 intercept
connect_timeout 20 second
dns_v4_first on
shutdown_lifetime 1 seconds
cache deny all
#cache_mem 256 MB
#maximum_object_size_in_memory 512 KB
coredump_dir /usr/local/squid
access_log daemon:/usr/local/squid/log/access.log squid
#strip_query_terms off
log_mime_hdrs on
#forwarded_for transparent
#via off
cache_mgr root@localhost
visible_hostname proxy.localnet.local

acl localnet src 192.168.0.0/24 # RFC1918 possible internal network
acl CONNECT method CONNECT
acl AdminsIP src /usr/local/etc/squid/AccessLists/AdminsIP.txt
acl RestrictedDomains dstdomain 
/usr/local/etc/squid/AccessLists/RestrictedDomains.txt

acl MimeAudioVideo  rep_mime_type audio video
acl UrlIP url_regex -i 
^http://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/.*


http_access allow manager localhost
#http_access allow manager CacheManagerIP
http_access deny manager
# The value 'disable all' disables cache management
#cachemgr_passwd disable all

http_access deny CONNECT
http_access deny to_localhost
http_access allow AdminsIP
http_access deny RestrictedDomains
#http_access deny UrlIP
http_access allow localnet
http_access deny all
#http_reply_access allow AdminsIP
#http_reply_access deny MimeAudioVideo
http_reply_access allow all
#refresh_pattern ^ftp:  1440    20% 10080
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320



Re: [squid-users] problem with squid and google search engine

2014-06-09 Thread Amos Jeffries
On 9/06/2014 6:24 p.m., Дмитрий Шиленко wrote:
 This is my config file:
 
 http_port 127.0.0.1:3128
 http_port 127.0.0.1:3129 intercept

Okay, so Squid takes in:
 * forward-proxy traffic on port 3128
 * NAT-intercepted port 80 traffic (via port 3129)

Google does not use plain HTTP anymore; they use HTTPS almost
exclusively. That means port 443 TLS-encrypted traffic, or CONNECT
requests over port 3128.

But...

 connect_timeout 20 second
 dns_v4_first on
 shutdown_lifetime 1 seconds
 cache deny all
 #cache_mem 256 MB
 #maximum_object_size_in_memory 512 KB
 coredump_dir /usr/local/squid
 access_log daemon:/usr/local/squid/log/access.log squid
 #strip_query_terms off
 log_mime_hdrs on
 #forwarded_for transparent
 #via off
 cache_mgr root@localhost
 visible_hostname proxy.localnet.local
 
 acl localnet src 192.168.0.0/24 # RFC1918 possible internal network
 acl CONNECT method CONNECT
 acl AdminsIP src /usr/local/etc/squid/AccessLists/AdminsIP.txt
 acl RestrictedDomains dstdomain
 /usr/local/etc/squid/AccessLists/RestrictedDomains.txt
 acl MimeAudioVideo  rep_mime_type audio video
 acl UrlIP url_regex -i
 ^http://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/.*
 
 http_access allow manager localhost
 #http_access allow manager CacheManagerIP
 http_access deny manager
 # The value 'disable all' disables cache management
 #cachemgr_passwd disable all
 
 http_access deny CONNECT

... you have denied all use of CONNECT. Even to transfer HTTPS.

The default recommended config has !SSL_Ports on the end of that line
in order to permit HTTPS traffic, such as Google's, through the proxy.


Also, check that you are NOT intercepting or blocking port 443. Your
Squid is currently not set up to handle TLS/SSL.
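For reference, the default recommended squid.conf denies CONNECT only to non-SSL ports instead of denying it outright; a trimmed sketch (port list shortened):

```
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 443         # https
acl CONNECT method CONNECT

http_access deny !Safe_ports
# HTTPS tunnels (e.g. to Google) are still allowed through:
http_access deny CONNECT !SSL_ports
```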

Amos

 http_access deny to_localhost
 http_access allow AdminsIP
 http_access deny RestrictedDomains
 #http_access deny UrlIP
 http_access allow localnet
 http_access deny all
 #http_reply_access allow AdminsIP
 #http_reply_access deny MimeAudioVideo
 http_reply_access allow all
 #refresh_pattern ^ftp:  1440    20% 10080
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern .   0   20% 4320
 
 Amos Jeffries wrote 09.06.2014 04:11:
 On 9/06/2014 3:10 a.m., Дмитрий Шиленко wrote:
 There is a very strange problem. I have a FreeBSD 9.1 gateway configured
 with ipf/ipnat, and I decided to set up Squid. I installed SQUID 3.3 from
 ports. As soon as I run it, google.com immediately blocks my network, and
 trying to access the search engine says that my requests are being sent
 automatically. As soon as I turn Squid off, everything is fine. Can you
 suggest what the problem could be?


 Something in the configuration. But you omitted those details along with
 the actual error message details. So we cannot help more than that.

 Amos
 
 



[squid-users] problem with squid and google search engine

2014-06-09 Thread Дмитрий Шиленко
My mistake - I had this line commented out in the original configuration.
When I pasted the configuration into the mail, I accidentally deleted the
comment character #.



Amos Jeffries wrote 09.06.2014 10:12:




--
 Regards, Dmitry Shilenko
 Systems Engineer
 global-it.com.ua
 mob. (063)142-32-59
 office 221-55-72


[squid-users] Re: How to build ext_session_acl ?

2014-06-09 Thread babajaga
Thanks, that was the problem - dblib-dev was not installed.
Which leads me to a suggestion:
As it is general policy to include most features of Squid in a plain
./configure, which also includes _all_ external auth helpers, configure
should also check that _all_ dependencies are satisfied.
As it is, ext_session_acl is silently not built when dblib-dev is
missing.
I do not consider that very consistent.
Alternative: change the very generous policy of including almost
everything to the opposite, a minimalistic default ./configure, and then
do the appropriate dependency checking.
This might also help in tracking down bugs, as a Squid exhibiting a bug
would have a minimal set of code modules included.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-build-ext-session-acl-tp4666258p4666265.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] squid smp fails -k reconfigure

2014-06-09 Thread Fernando Lozano

Hi Alex,

First of all, thanks for the feedback.




I find it very strange that workers 6 and 5 try to get aufs cache stores.
They are supposed to be the rock store disker and the coordinator! My
squid.conf has:

workers 4
cache_dir aufs /cache/worker${process_number} 25000 16 256 min-size=31001 
max-size=346030080

AUFS store is not SMP-aware. You should not be using it in SMP
configurations IMO.
Squid configuration examples and the wiki page about SMP tell us to use 
${process_number} to set up exclusive aufs caches for each worker.


See, for example,
http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster

Which was the inspiration for my setup.

If we should not use anything except rock store for SMP this should be 
explicit in the docs/wiki/faq.






Is squid -k reconfigure working well for everyone else with SMP?

Reconfigure does not work well without SMP. It works even worse with
SMP. Log file rotation is an example of a problematic area specific to SMP.


That's also news to me. I could not find anything on squid-cache.org 
stating reconfigure is problematic and should be avoided.


Just to make sure I understand correctly: I should restart squid, instead 
of reconfiguring, when changing acls, and cope with the downtime?






Should I try to hide those directives from them?

In general, no. It is best to let your squid.conf reflect the entire
Squid instance configuration rather than sprinkle it with SMP
conditionals. Besides, many options have defaults so hiding them will
not produce the intended results. Squid will eventually learn to ignore
irrelevant settings on its own.

However, if hiding some directive works around a significant SMP
problem, hiding it may be better than waiting for others to fix the
actual problem.
So I'll try to hide the cache_dir aufs lines from the processes that 
don't need them, as the default would be having no disk cache. I'll tell 
you whether this helped or not.



[]s, Fernando Lozano



Re: [squid-users] squid smp fails -k reconfigure

2014-06-09 Thread Alex Rousskov
On 06/09/2014 08:38 AM, Fernando Lozano wrote:

 I find very strange that workers 6 and 5 try to get aufs cache stores.
 They are supposed to be the rock store disker and the coordinator! My
 squid.conf has:

 workers 4
 cache_dir aufs /cache/worker${process_number} 25000 16 256
 min-size=31001 max-size=346030080

 AUFS store is not SMP-aware. You should not be using it in SMP
 configurations IMO.

 Squid configuration examples and the wiki page about SMP tells to use
 ${process_number} to setup exclusive aufs caches for each worker.

 If we should not use anything except rock store for SMP this should be
 explicit in the docs/wiki/faq.

The opinions of what should and should not be used for SMP vary. Squid
documentation reflects the lack of consensus. I am sharing my own
opinion. YMMV.


 Is squid -k reconfigure working well for everyone else with SMP?
 Reconfigure does not work well without SMP. It works even worse with
 SMP. Log file rotation is an example of a problematic area specific to
 SMP.
 
 That's also news to me. I could not find anything on squid-cache.org
 stating reconfigure is problematic and should be avoided.

Whether restart is better than reconfigure depends on your specific
environment, needs, and fears. There is no general rule. For example,
reconfigure may leak 100 MB of RAM in some environments. That could be
OK if you have plenty of spare RAM and reconfigure infrequently. The
same could lead to disasters if you are tight on RAM or reconfigure a
few times per hour.

Moreover, Squid developers have different opinions regarding fixing some
of those leaks. For example, a recent patch fixing some of the leaks was
not committed, pending more complex patches with the same user-visible
effect.

Memory leaks are just one example. There are other problems with
reconfigure, such as rejecting some incoming connections. I do not have a
complete list, and YMMV.


 Just to make sure I understand correctly: I should restart squid, instead
 of reconfiguring, when changing acls, and cope with the downtime?

Sorry, I do not know the answer to that question because it depends on
many local factors. I recommend:

  * short-term: testing and selecting the least of the two evils;
  * long-term: helping improve reconfigure support.
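Whichever way you lean, the risk of a bad reconfigure can be reduced by parse-checking the changed configuration first (a sketch; default config path assumed):

```
squid -k parse        # syntax-check squid.conf without touching the running workers
squid -k reconfigure  # then reload, or do a full shutdown/start and accept the downtime
```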


HTH,

Alex.



[squid-users] Reverse proxy with multiple SSL sites

2014-06-09 Thread Roberto Carna
Dear all, just one question: is it possible to use a Squid reverse proxy
with several SSL sites/certificates, all listening on TCP/443 on the
same public IP?

Thanks a lot,

Roberto


Re: [squid-users] Reverse proxy with multiple SSL sites

2014-06-09 Thread Eliezer Croitoru

Hey Roberto,

Yes, but with limitations.
Squid can use only one certificate per ip:port pair.
That leaves you with the single option of using Squid with one 
certificate that covers multiple domains in the form of *.domain.com, 
which will include domain.com and its subdomains.

There is a mechanism called SNI, which Squid does not use, that lets the 
client request a specific site/domain in the first stages of the SSL 
negotiation; this allows the service to send a specific certificate as 
the default and other certificates when a domain supplied via SNI 
matches.

As far as I can tell and remember, apache and nginx support SNI.

Regards,
Eliezer

On 06/09/2014 06:15 PM, Roberto Carna wrote:

Dear, just one question...is it possible to use a Squid reverse proxy
with several SSL sites/certificates, all listening in TCP/443 in the
same public IP ???

Thanks a lot,

Roberto




Re: [squid-users] Reverse proxy with multiple SSL sites

2014-06-09 Thread dweimer

On 06/09/2014 10:31 am, Eliezer Croitoru wrote:

Hey Roberto,

Yes but with limitations.
Squid can use only one certificate per ip:port pair.
This leaves you with the only option of using squid with one
certificate that overlaps multiple domains in the form of
*.domain.com which will include all domain.com and subdomains.

There is a function which is not in use by squid that is called SNI
which allows the client to request a specific site\domain on the first
stages of the SSL negotiation which allows the service to send a
specific certificate as default and others in a case of a matched
domain from by SNI.

As far as I can tell and remember apache and nginx supports SNI.

Regards,
Eliezer

On 06/09/2014 06:15 PM, Roberto Carna wrote:

Dear, just one question...is it possible to use a Squid reverse proxy
with several SSL sites/certificates, all listening in TCP/443 in the
same public IP ???

Thanks a lot,

Roberto


There is a third option: using Subject Alternative Names on the 
certificate (sometimes called UCC, Unified Communications Certificate).  
This allows it to be valid for domain1.com, domain2.com, domain3.com, 
etc.  It is far cheaper than a *.domain.com certificate; however, the 
certificate vendor will have a limit on how many names you can use, and 
will charge more for the additional domains.  I use this option on our 
Squid reverse proxy at work (using a 15-domain UCC from GoDaddy.com); 
however, you should note that all domain names are listed on the 
certificate.  In our case we are hosting websites for multiple divisions 
of the same parent company.  It would not be wise to do this if hosting 
websites for third-party customers, as you wouldn't want to give the 
impression that company1 has something to do with company2, and so on.
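Requesting such a certificate normally means listing the alternative names in the CSR; a minimal openssl.cnf fragment (the domain names here are placeholders):

```
[ req ]
distinguished_name = req_distinguished_name
req_extensions     = v3_req

[ req_distinguished_name ]

[ v3_req ]
subjectAltName = DNS:domain1.com, DNS:www.domain1.com, DNS:domain2.com
```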


--
Thanks,
   Dean E. Weimer
   http://www.dweimer.net/


[squid-users] Squid 3.4.5 CentOS RPMs are OUT.

2014-06-09 Thread Eliezer Croitoru
I am happy to release the new RPMs of Squid 3.4.5 for CentOS 6.5, for 
32-bit and 64-bit CPUs.

Sorry for the delay, which was due to more in-depth testing of the new 
release. The new release includes a couple of bug fixes, but the daily 
fixes for a couple of log-format issues were not ported into this RPM, 
sorry.

Since I started releasing the RPMs I have tried to choose a subject to 
cover in the release announcement, beyond just the announcement itself.
- I am looking for subjects to write about that are of interest to the 
users-list participants.

I have tried not to touch StoreID subjects, due to my relationship with 
the code, but I wanted to take the time to focus on one specific StoreID 
subject.


StoreID is meant to de-duplicate exact-match content that is identified 
by different URLs on different domains/hosts.
One of the main issues, built into almost any Apache web server and many 
others, is with the ETag.
An ETag should identify an object uniquely, and Squid does try to use 
this specific header to identify stale objects in the cache.

Using an ETag when it is not really needed can make a cache's life 
harder: an object that arrives with two different ETag headers cannot be 
cached properly.

Apache uses three variables to form the ETag of a file:
- modification time
- size
- inode number in the FS

The above makes the ETag a very specific identifier for a 
resource/object, but it also makes the ETag local to the specific 
server. An example of different ETags for the same object is the case of 
multiple servers in a load-balanced cluster.
These servers can sit on the same physical network segment and still 
emit different ETags.
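Apache can be told to leave the per-server inode out of the ETag, so that identical files on every node of a cluster get identical ETags; a one-line sketch for httpd.conf:

```
# Build ETags from modification time and size only, omitting the inode number
FileETag MTime Size
```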


The good, the bad and the ugly of the issue:
Good: as a cache, you always know which server you downloaded the file 
from.
Bad: it is hard to de-duplicate with StoreID.
Ugly: there is no need for it on a Linux distro mirror, which already 
uses SHA and other hashes to verify file content.

So in general, all Linux distro mirrors can safely remove the ETag, 
unless it can be made identical across all mirrors.


And the other related subject is the new StoreID API from unveiltech.com:
The idea is very nice, and I was asked about helpers in languages other 
than PHP, so I came up with three: Perl, Python and Ruby.

The scripts can be found here:
http://www1.ngtech.co.il/squid/storeid/storeid_api.py
http://www1.ngtech.co.il/squid/storeid/storeid_api.pl
http://www1.ngtech.co.il/squid/storeid/storeid_api.rb
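For illustration, a minimal StoreID helper in the same spirit (the mirror-hostname pattern here is hypothetical) reads "channel-ID URL" lines on stdin and answers with a normalized store ID:

```python
import re
import sys

# Hypothetical pattern: collapse cdn1.example.com, cdn2.example.com, ...
# into a single internal store-ID namespace.
MIRROR = re.compile(r'^http://cdn\d+\.example\.com/(?P<path>.*)$')

def store_id(url):
    """Return a normalized store ID for known mirror URLs, else None."""
    m = MIRROR.match(url)
    if m:
        return 'http://cdn.example.com.squid.internal/' + m.group('path')
    return None

def main():
    # Squid sends "channel-ID URL [extras]" when concurrency is enabled;
    # the helper replies "channel-ID OK store-id=..." or "channel-ID ERR".
    for line in sys.stdin:
        parts = line.split()
        if len(parts) < 2:
            continue
        channel, url = parts[0], parts[1]
        sid = store_id(url)
        if sid:
            sys.stdout.write('%s OK store-id=%s\n' % (channel, sid))
        else:
            sys.stdout.write('%s ERR\n' % channel)
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```

Different mirror URLs that map to the same store ID will then hit the same cache object.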

As always, a nice video, this time about the UNIX OS, from the AT&T 
archives:
http://www.youtube.com/watch?v=tc4ROCJYbm0

The video shows the big difference between the systems of those days and 
today's: now every document can be checked against a dictionary millions 
of times, where back then it could take a couple of seconds to do the 
same trick.


* Any notes and comments are wanted and welcome!

There is a new format for the RPMs, since there is no need for a 
separate sysvinit package.
Also, to resolve a couple of helper dependency issues which cannot be 
met by the CentOS BASE repo, I made a helpers package, which also 
separates the helpers from the core of Squid.


The RPMS at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/
http://www1.ngtech.co.il/rpm/centos/6/i686/

The package includes 3 RPMs: one for the Squid core, one for the 
helpers, and one for debugging.

http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-3.4.5-1.el6.x86_64.rpm
http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-debuginfo-3.4.5-1.el6.x86_64.rpm
http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-helpers-3.4.5-1.el6.x86_64.rpm

Also note that I have released an i686 package at:
http://www1.ngtech.co.il/rpm/centos/6/i686/

The i686 package likewise includes 3 RPMs: one for the Squid core, one
for the helpers, and one for debugging:
http://www1.ngtech.co.il/rpm/centos/6/i686/squid-3.4.5-1.el6.i686.rpm
http://www1.ngtech.co.il/rpm/centos/6/i686/squid-debuginfo-3.4.5-1.el6.i686.rpm
http://www1.ngtech.co.il/rpm/centos/6/i686/squid-helpers-3.4.5-1.el6.i686.rpm

For each of them there is an *asc* file which contains a PGP signature
and MD5, SHA1, SHA256, SHA384 and SHA512 hashes.

I also released the SRPM, which is very simple, at:
http://www1.ngtech.co.il/rpm/centos/6/SRPMS/squid-3.4.5-1.el6.src.rpm

Eliezer


[squid-users] Re: Squid 3.4.x Videos/Music Booster

2014-06-09 Thread Stakres
Hi All,

Version *1.02* is released
(https://sourceforge.net/projects/squidvideosbooster/), including:
- Scripts in Perl, Ruby and Python.
- A special key 'porn' to choose whether or not to de-duplicate porn web
sites.

Bye Fred




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-x-Videos-Music-Booster-tp4666154p4666272.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] memory_cache_shared no support for atomic operations

2014-06-09 Thread Cassiano Martin
Hello there.

I'm trying to set up my Squid workers to share cache_mem, but when I
activate memory_cache_shared I get this error:

FATAL: memory_cache_shared is on, but no support for atomic operations detected

What am I missing? Does this imply something related to kernel
configuration? I've not found any tip regarding this.

thanks


Re: [squid-users] memory_cache_shared no support for atomic operations

2014-06-09 Thread Eliezer Croitoru

What OS are you using?
Is this a 32 bit OS?

Eliezer

On 06/09/2014 11:50 PM, Cassiano Martin wrote:

Hello there.

I'm trying to setup my squid workers to share cache_mem, bu when I
activate memory_cache_shared I get this error:

FATAL: memory_cache_shared is on, but no support for atomic operations detected

What I'm missing? Does this imply in something related to kernel
configuration? I've not found any tip regarding this.

thanks





[squid-users] problem whith squid and google search engine

2014-06-09 Thread Дмитрий Шиленко

Here is what Google gives me immediately after starting SQUID:

Google sorry: We're sorry... but your computer or network may be sending 
automated queries. To protect our users, we can't process your request 
right now.


and I found these lines in access.log:

1402348733.904  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 09
1402348733.910  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348733.993  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348733.999  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348734.083  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348734.089  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348734.161 31 82.230.87.168 TCP_DENIED/403 3643 CONNECT 
66.163.169.178:443 - HIER_NONE/- text/html [User-Agent: Mozilla/4.0 
(compatible; Win32; WinHttp.WinHttpR
1402348734.173  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348734.177  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348734.721 31 182.84.98.180 TCP_DENIED/403 3770 GET 
http://www.tonxshop.com/ - HIER_NONE/- text/html [Accept: */*\r\nReferer: 
http://www.baidu.com\r\nAccept-Lan
1402348735.261  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.266  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.350  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.356  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.372 33 114.45.156.32 TCP_DENIED/403 3473 GET 
http://www.google.com/ - HIER_NONE/- text/html [Host: www.google.com\r\n] 
[HTTP/1.1 403 Forbidden\r\nServer:
1402348735.409 31 46.4.101.88 TCP_DENIED/403 3779 GET 
http://us.search.yahoo.com/search? - HIER_NONE/- text/html [Accept: 
text/html\r\nUser-Agent: as_qdr=all\r\nHo
1402348735.440  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.444  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.528  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.534  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.619  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.622  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.709  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.712  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.791  0 182.84.98.180 TCP_DENIED/403 3770 GET 
http://www.tonxshop.com/ - HIER_NONE/- text/html [Accept: */*\r\nReferer: 
http://www.baidu.com\r\nAccept-Lan
1402348735.798  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad Request\r\nServer: squid/3.3.11\r\nMime-Version: 
1.0\r\nDate: Mon, 09
1402348735.802  0 94.102.49.2 NONE/400 3770 POST / - HIER_NONE/- 
text/html [] [HTTP/1.1 400 Bad Request\r\nServer: 
squid/3.3.11\r\nMime-Version: 1.0\r\nDate: Mon, 0
1402348735.887  0 94.102.49.2 NONE/400 247 HEAD / - HIER_NONE/- text/html 
[] [HTTP/1.1 400 Bad 

Re: [squid-users] memory_cache_shared no support for atomic operations

2014-06-09 Thread Eliezer Croitoru

On 06/10/2014 12:43 AM, Cassiano Martin wrote:

Yes its 32 bit custom built OS

As far as I can remember, shared memory needs a 64-bit OS and hardware.
I am not 100% sure yet.

Eliezer


[squid-users] squid with qlproxy on fedora 20 not working for https traffic

2014-06-09 Thread MrErr
Hi, I have spent two days googling and going through these forums and 
have not been able to get https filtering working. I am new to all of 
this kind of networking stuff, so I do need a lot of help :)

I have a gateway machine which is my router. On this same gateway I have
squid and qlproxy installed. I want to be able to filter both http and
https. Only http filtering works now, not https, so I am not able to
make Google default to safe search.

I am going to paste my configuration files, so my apologies for the long
files.

My squid.conf is 

acl localnet src 192.168.13.0/24
acl localnet src 127.0.0.1/8
acl wanip src  97.90.225.128
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 8080# http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access allow CONNECT SSL_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access deny to_localhost
http_access allow localnet
http_access allow wanip
http_access allow localhost
http_access deny all
http_port 192.168.13.1:3128
http_port 192.168.13.1:3129 intercept
https_port 192.168.13.1:3130 intercept ssl-bump cert=/etc/squid/myCA.pem
acl qlproxy_https_exclusions dstdomain
/etc/opt/quintolabs/qlproxy/squid/https_exclusions.conf
acl qlproxy_https_targets dstdomain
/etc/opt/quintolabs/qlproxy/squid/https_targets.conf
ssl_bump none localhost
ssl_bump server-first qlproxy_https_targets
always_direct allow all
cache_dir ufs /var/spool/squid 100 16 256
coredump_dir /var/spool/squid
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Client-Username
icap_service qlproxy1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache 0 icap://127.0.0.1:1344/respmod
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all

my iptables are

# Generated by iptables-save v1.4.19.1 on Mon Jun  9 20:03:48 2014
*nat
:PREROUTING ACCEPT [683:114416]
:INPUT ACCEPT [477:31902]
:OUTPUT ACCEPT [441:27340]
:POSTROUTING ACCEPT [2:176]
:OUTPUT_direct - [0:0]
:POSTROUTING_ZONES - [0:0]
:POSTROUTING_ZONES_SOURCE - [0:0]
:POSTROUTING_direct - [0:0]
:POST_external - [0:0]
:POST_external_allow - [0:0]
:POST_external_deny - [0:0]
:POST_external_log - [0:0]
:POST_internal - [0:0]
:POST_internal_allow - [0:0]
:POST_internal_deny - [0:0]
:POST_internal_log - [0:0]
:POST_public - [0:0]
:POST_public_allow - [0:0]
:POST_public_deny - [0:0]
:POST_public_log - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_external - [0:0]
:PRE_external_allow - [0:0]
:PRE_external_deny - [0:0]
:PRE_external_log - [0:0]
:PRE_internal - [0:0]
:PRE_internal_allow - [0:0]
:PRE_internal_deny - [0:0]
:PRE_internal_log - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A OUTPUT -j OUTPUT_direct
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING_ZONES -o p6p1 -g POST_internal
-A POSTROUTING_ZONES -o p2p1 -g POST_external
-A POSTROUTING_ZONES -g POST_public
-A POST_external -j POST_external_log
-A POST_external -j POST_external_deny
-A POST_external -j POST_external_allow
-A POST_external_allow ! -i lo -j MASQUERADE
-A POST_internal -j POST_internal_log
-A POST_internal -j POST_internal_deny
-A POST_internal -j POST_internal_allow
-A POST_public -j POST_public_log
-A POST_public -j POST_public_deny
-A POST_public -j POST_public_allow
-A POST_public_allow ! -i lo -j MASQUERADE
-A PREROUTING_ZONES -i p6p1 -g PRE_internal
-A PREROUTING_ZONES -i p2p1 -g PRE_external
-A PREROUTING_ZONES -g PRE_public
-A PREROUTING_direct -i p6p1 -p tcp -m tcp --dport 80 -j DNAT
--to-destination 192.168.13.1:3129
-A PREROUTING_direct -i p6p1 -p tcp -m tcp --dport 443 -j DNAT
--to-destination 192.168.13.1:3130
-A PREROUTING_direct -i p2p1 -p tcp -m tcp --dport 443 -j REDIRECT
--to-ports 3130
-A PREROUTING_direct -i p2p1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports
3129
-A PRE_external -j PRE_external_log
-A PRE_external -j PRE_external_deny
-A PRE_external -j PRE_external_allow
-A PRE_external_allow 

Re: [squid-users] memory_cache_shared no support for atomic operations

2014-06-09 Thread Amos Jeffries
On 10/06/2014 10:10 a.m., Eliezer Croitoru wrote:
 On 06/10/2014 12:43 AM, Cassiano Martin wrote:
 Yes its 32 bit custom built OS
 As far as I can remember the shared memory needed 64bit OS and HW.
 I am not 100% sure yet.

Yes, 64-bit atomics are required.

And for now it is also restricted to GNU-style atomic operations.
Although with a bit of patching these can be replaced with C++11
standardized atomics.

Amos



[squid-users] Re: Hotmail issue in squid 3.4.4

2014-06-09 Thread vin_krish
Hi Eliezer,

 Sorry for the late reply, as I was busy with some other issues. I
tested it long back but was not able to reply to you.
I tested with your bash script but it throws this error every time:

2014/06/10 10:33:13| Accepting HTTP Socket connections at local=[::]:3128
remote=[::] FD 19 flags=9
2014/06/10 10:33:13| Accepting NAT intercepted SSL bumped HTTPS Socket
connections at local=[::]:3129 remote=[::] FD 20 flags=41
2014/06/10 10:33:13| WARNING: ssl_crtd #Hlpr0 exited
2014/06/10 10:33:13| Too few ssl_crtd processes are running (need 1/10)
2014/06/10 10:33:13| Closing HTTP port [::]:3128
2014/06/10 10:33:13| Closing HTTPS port [::]:3129
2014/06/10 10:33:13| storeDirWriteCleanLogs: Starting...
2014/06/10 10:33:13|   Finished.  Wrote 0 entries.
2014/06/10 10:33:13|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!


and my configuration is:

http_port 3128 
https_port 3129 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=16MB  cert=/etc/squid3/ssl_cert/myCA.pem
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s
/etc/squid3/ssl_cert/ssl_db -M 16MB
sslcrtd_children 10

I have gone through the forum and searched; people mention changing the
permissions and ownership of the ssl directory to my user 'squid', but
it didn't work.
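In case it helps: the usual first step when ssl_crtd helpers crash at startup is to (re)initialize the certificate database and hand it to the user Squid runs as (paths taken from the config above; the 'squid' user/group is an assumption):

```
/usr/local/squid/libexec/ssl_crtd -c -s /etc/squid3/ssl_cert/ssl_db -M 16MB
chown -R squid:squid /etc/squid3/ssl_cert/ssl_db
```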

Can you please help me out...

Regards,
krish



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Hotmail-issue-in-squid-3-4-4-tp4666020p4666279.html
Sent from the Squid - Users mailing list archive at Nabble.com.