Re: [squid-users] https and external acl

2011-02-07 Thread Amos Jeffries

On 08/02/11 05:16, Luis Enrique Sanchez Arce wrote:


I have configured an external ACL in Squid. If the external ACL returns ERR and
the request is HTTPS, the proxy returns "connection refused". What is the
possible problem?

If the request is HTTP, Squid shows an access denied page.



The problem is malicious people attacking web browsers in ways that made 
browser vendors decide never to show the user the body of any response to 
CONNECT.


There is no way you can make the error page show up when the browser 
decides not to show it.


NP: if you want to use a special custom URL in deny_info, the newly 
released squid-3.1.11 includes support for HTTP/1.1 307 redirects to an 
error page. Some browsers (Firefox and Iceweasel so far) honour that 
response to CONNECT.
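For reference, a minimal deny_info sketch (the ACL name, domain, and URL here are placeholder assumptions, not from the original post):

```
# squid.conf sketch: send clients matching an ACL to a custom error URL
acl blocked_sites dstdomain .badexample.com
http_access deny blocked_sites
deny_info http://proxy.example.com/denied.html blocked_sites
```

With 3.1.11's new behaviour, a denied CONNECT can be answered with a 307 redirect to that URL in browsers that honour it.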


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] origin compression and squid

2011-02-07 Thread Amos Jeffries

On 08/02/11 10:47, George N. Zavalis wrote:

Hi all,

I am using squid 2.7.STABLE9, with gzip compression on the origin servers
(IIS).

Whenever a client requests a compressed resource for the first time (assume
a new URL/resource) it is cached, and the compressed version is served to
the clients requesting the compressed version of the resource.
Everything works fine up to that point, but when one client requests the
UNcompressed version of the resource/URL, no matter the TTL of the resource
(even if it is one year or more), the request goes to the origin (miss),
the uncompressed version is fetched, cached, and served to that client and
to the following clients, even if they request the compressed version.

I even tried to specifically request a compressed version of the resource
with WFETCH and the appropriate request headers (Accept-Encoding: gzip),
but I am still getting the uncompressed version of the resource (me and
the other clients requesting it).

This happens with all my resources, regardless of MIME type.



Your IIS is omitting the Vary: header on non-compressed responses. It 
should be returning that header on all responses for a URL which varies.
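The correct behaviour, for every response on a URL that varies by encoding, is a sketch like this (headers trimmed to the relevant ones):

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Vary: Accept-Encoding
```

The uncompressed variant should send the same `Vary: Accept-Encoding` line (just without `Content-Encoding: gzip`), so Squid stores both variants under one URL and selects between them by request header.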


NP: you may see that Squid 2.7 offers the broken_vary_encoding directive 
as a workaround. Using that workaround results in the non-compressed 
variant always being served. The breakage in IIS is not part of the 
use case this directive fixes.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


[squid-users] [Solved] Transparent Proxy not working for HTTPS, ftp etc.. Plz help

2011-02-07 Thread Jayakrishnan
Hello all,

Anyway, I sorted it out myself. The problem was with my IP configuration.
I had created an IP alias on my LAN NIC, so it had two IPs: one was
10.10.10.1, which is the gateway for one set of LAN clients; the other was
192.168.1.150, which I intended to use for the other clients. When I
removed the 192.168.1.150 address from the interface, it all worked fine.
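For reference, a typical transparent-interception iptables sketch (the interface names and Squid's intercept port 3128 are assumptions, not from the original post). Plain HTTP is redirected into Squid; HTTPS and everything else is simply NATed out, since Squid cannot transparently proxy those protocols:

```
# eth1 = LAN side, eth0 = internet side (assumed names)
# Redirect plain HTTP from clients to Squid's intercept port
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
# Let HTTPS (and all other traffic) bypass the proxy and be NATed out directly
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```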




On Thu, Feb 3, 2011 at 6:48 PM, Jayakrishnan  wrote:
> On Thu, Feb 3, 2011 at 5:56 PM, Amos Jeffries  wrote:
>> On 04/02/11 00:50, Jayakrishnan wrote:
>>>
>>> Dear all,
>>>
>>> I am ashamed to tell you that I have the same old problem with transparent
>>> proxying. Please help me out with this.
>>
>> Sure.
>>
>> Answer:
>>  it is not possible to "transparent proxy" any protocol other than plain
>> HTTP with Squid.
>>
>> There you go. Problem solved.
>
> Yes, I know that Squid is an HTTP proxy. But I masqueraded my HTTPS
> traffic using iptables. I do not want to cache HTTPS traffic, as I
> know doing so would require a man-in-the-middle.
>
> However, we need to allow HTTPS traffic too, right? I request you to
> kindly check my iptables configuration attached and advise what I am
> missing. As I told you, we have a NATing access point/router at the
> end, so the internet interface on my Squid box is also on a private
> net.
>
> Please advise!
>
>>
>> 
>>>
>>> Everything is working fine, but transparent proxying is not working for
>>> https/ftp traffic. However, there is no point in having a transparent
>>> proxy without https support. Is there anything to do if NATing is
>>> taking place in my WIRELESS ACCESS POINT/ROUTER?
>>>
>>
>> The point of Squid is to optimize and manage HTTP. If that alone is not
>> enough then you need other tools.
>>
>> In the case of FTP you can look at FROX (FTP proxy).
>>
>> Amos
>> --
>> Please be using
>>  Current Stable Squid 2.7.STABLE9 or 3.1.10
>>  Beta testers wanted for 3.2.0.4
>>
>
>
>
> --
> Regards,
>
> Jayakrishnan. L
>
> Visit:
> www.foralllinux.blogspot.com
> www.jayakrishnan.bravehost.com
>



-- 
Regards,

Jayakrishnan. L

Visit:
www.foralllinux.blogspot.com
www.jayakrishnan.bravehost.com



Re: [squid-users] Caching based on accept-language

2011-02-07 Thread Amos Jeffries

On 08/02/11 15:12, Jeff Gerbracht wrote:

I'm trying to set up Squid to cache several of our dynamic pages for
which we have both EN and FR translations.  We use the browser's language
setting to determine which language to return to the user, so the
URL is the same for both languages.  Is there any way to enable Squid
3.1 to use the URL in combination with the Accept-Language request
header to generate the cache key?  Currently, whichever language is
first requested is what is returned by a cache hit.  We have Apache in
front of Squid, so if Squid can't do what we need, any suggestions on
how to use Apache and Squid in combination to cache both the
English and French versions of a page?


Squid does not (yet) support that fine-grained level of smart variant 
handling. It will happily cache variants on the full text of the named 
headers, though.


What you need to do is specify the language variance in the same way you 
specify compressed/non-compressed variance.


Sent from the web server:
  Vary: Accept-Language

(it may need combining with the existing Vary header values, probably 
yielding "Vary: Accept-Language, Accept-Encoding")


With an ETag header as well wherever possible.
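Since Apache sits in front, one way to emit that header is mod_headers (a sketch; place it wherever the language negotiation actually happens):

```
# Apache httpd sketch: merge Accept-Language into the Vary header
# sent with every language-negotiated response
Header append Vary Accept-Language
```

"append" merges the value into any existing Vary header with a comma, rather than replacing it.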

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


[squid-users] Caching based on accept-language

2011-02-07 Thread Jeff Gerbracht
I'm trying to set up Squid to cache several of our dynamic pages for
which we have both EN and FR translations.  We use the browser's language
setting to determine which language to return to the user, so the
URL is the same for both languages.  Is there any way to enable Squid
3.1 to use the URL in combination with the Accept-Language request
header to generate the cache key?  Currently, whichever language is
first requested is what is returned by a cache hit.  We have Apache in
front of Squid, so if Squid can't do what we need, any suggestions on
how to use Apache and Squid in combination to cache both the
English and French versions of a page?
   Thanks
   Jeff
-- 
Jeff Gerbracht
Lead Application Developer
Neotropical Birds, Breeding Bird Atlas, eBird
Cornell Lab of Ornithology
607-254-2117


Re: [squid-users] Squid 3.1.10 Congestion Warnings

2011-02-07 Thread Amos Jeffries
On Mon, 7 Feb 2011 10:40:42 -0500, Michael Grasso wrote:
> I'm receiving the below congestion warning several times a day. I'm
> wondering if this is anything to be concerned about.
> 
> 2011/02/07 10:06:07| squidaio_queue_request: WARNING - Queue congestion
> 

It's to be expected shortly after startup if you have lots of users. It
gets printed every time Squid doubles the async I/O queue-length warning
threshold. If you are getting it regularly it is probably a sign that your
Squid is crashing or restarting.


> My squid.con file is below:
> 
> #
> # Recommended minimum configuration:
> #
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.10.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7   # RFC 4193 local private network range
> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
> machines
> 
> acl SSL_ports port 443
> acl SSL_ports port 7001
> acl Safe_ports port 80# http
> acl Safe_ports port 21# ftp
> acl Safe_ports port 443   # https
> acl Safe_ports port 70# gopher
> acl Safe_ports port 210   # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280   # http-mgmt
> acl Safe_ports port 488   # gss-http
> acl Safe_ports port 591   # filemaker
> acl Safe_ports port 777   # multiling http
> acl CONNECT method CONNECT
> 
> acl snmppublic snmp_community cadc
> acl snmpsrv src 10.10.2.202
> snmp_access allow snmppublic snmpsrv
> snmp_incoming_address 10.10.2.226
> snmp_port 3401
> 
> acl malware_block_list url_regex -i
> "/usr/local/squid/malware_block_list.txt"
> http_access deny malware_block_list
> deny_info http://intranet.cadc.circdc.dcn/malwarealert/malware.htm
> malware_block_list
> 

In an unrelated optimization...

  You may want to move this down to directly underneath the "INSERT YOUR
OWN RULE(S) HERE" marker. The Safe_ports and SSL_ports checks are more
efficient; the deciding factor is whether those checks would catch malware
requests for which you still want this reply page.
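That reordering would look roughly like this (a sketch of the relevant http_access lines only, in their new order):

```
# Cheap port-safety checks first, then the regex-based malware list
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
http_access deny malware_block_list
http_access allow localnet
http_access allow localhost
http_access deny all
```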


> #
> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager snmpsrv
> http_access deny manager
> 
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
> 
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
> 
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
> http_access allow localnet
> http_access allow localhost
> 
> # And finally deny all other access to this proxy
> http_access deny all
> 
> # Squid normally listens to port 3128
> http_port 10.10.2.226:3128
> 
> # We recommend you to use at least the following line.
> hierarchy_stoplist cgi-bin ?
> 
> # Uncomment and adjust the following to add a disk cache directory.
> cache_replacement_policy heap GDSF
> cache_dir aufs /cache1/cache 16384 16 256
> cache_dir aufs /cache2/cache 16384 16 256
> 
> # Leave coredumps in the first cache dir
> coredump_dir /usr/local/squid/var/cache
> 
> # Add any of your own refresh_pattern entries above these.
> refresh_pattern ^ftp: 1440  20%   10080
> refresh_pattern ^gopher:  1440  0%1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0%0
> refresh_pattern .   0 20%   4320
> 
> icap_enable on
> icap_send_client_ip on
> icap_send_client_username on
> icap_client_username_encode off
> icap_client_username_header X-Authenticated-User
> icap_preview_enable on
> icap_preview_size 1024
> icap_service service_req reqmod_precache bypass=1
> icap://127.0.0.1:1344/squidclamav
> adaptation_access service_req allow all
> icap_service service_resp respmod_precache bypass=1
> icap://127.0.0.1:1344/squidclamav
> adaptation_access service_resp allow all
> 
> cache_access_log none

FYI: the above directive is named just "access_log".

> cache_mgr mgra...@cadc.uscourts.gov
> ftp_user sq...@cadc.uscourts.gov
> cache_mem 512 MB
> dns_nameservers 10.10.2.214 10.10.2.215
> refresh_all_ims on
> memory_replacement_policy heap GDSF
> maximum_object_size_in_memory 1024 KB
> shutdown_lifetime 5 seconds
> client_db off
> 
> 
> The server has two dual core processors, 8 GB of RAM and two 15K hard
> drives for my aufs cache volumes.
> I just put the server into production and it has about 50 users
configured
> to use the proxy.
> 
> Any help is appreciated.


[squid-users] origin compression and squid

2011-02-07 Thread George N. Zavalis
Hi all,

I am using squid 2.7.STABLE9, with gzip compression on the origin servers
(IIS).

Whenever a client requests a compressed resource for the first time (assume
a new URL/resource) it is cached, and the compressed version is served to
the clients requesting the compressed version of the resource.
Everything works fine up to that point, but when one client requests the
UNcompressed version of the resource/URL, no matter the TTL of the resource
(even if it is one year or more), the request goes to the origin (miss),
the uncompressed version is fetched, cached, and served to that client and
to the following clients, even if they request the compressed version.

I even tried to specifically request a compressed version of the resource
with WFETCH and the appropriate request headers (Accept-Encoding: gzip),
but I am still getting the uncompressed version of the resource (me and
the other clients requesting it).

This happens with all my resources, regardless of MIME type.

Any ideas?


Thanks in advance,
George




[squid-users] https and external acl

2011-02-07 Thread Luis Enrique Sanchez Arce

I have configured an external ACL in Squid. If the external ACL returns ERR
and the request is HTTPS, the proxy returns "connection refused". What is
the possible problem?

If the request is HTTP, Squid shows an access denied page.



[squid-users] Squid 3.1.10 Congestion Warnings

2011-02-07 Thread Michael_Grasso

I'm receiving the below congestion warning several times a day. I'm
wondering if this is anything to be concerned about.

2011/02/07 10:06:07| squidaio_queue_request: WARNING - Queue congestion

My squid.con file is below:

#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.10.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl SSL_ports port 443
acl SSL_ports port 7001
acl Safe_ports port 80# http
acl Safe_ports port 21# ftp
acl Safe_ports port 443   # https
acl Safe_ports port 70# gopher
acl Safe_ports port 210   # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280   # http-mgmt
acl Safe_ports port 488   # gss-http
acl Safe_ports port 591   # filemaker
acl Safe_ports port 777   # multiling http
acl CONNECT method CONNECT

acl snmppublic snmp_community cadc
acl snmpsrv src 10.10.2.202
snmp_access allow snmppublic snmpsrv
snmp_incoming_address 10.10.2.226
snmp_port 3401

acl malware_block_list url_regex -i
"/usr/local/squid/malware_block_list.txt"
http_access deny malware_block_list
deny_info http://intranet.cadc.circdc.dcn/malwarealert/malware.htm
malware_block_list

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager snmpsrv
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 10.10.2.226:3128

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
cache_replacement_policy heap GDSF
cache_dir aufs /cache1/cache 16384 16 256
cache_dir aufs /cache2/cache 16384 16 256

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440  20%   10080
refresh_pattern ^gopher:  1440  0%1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%0
refresh_pattern .   0 20%   4320

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp allow all

cache_access_log none
cache_mgr mgra...@cadc.uscourts.gov
ftp_user sq...@cadc.uscourts.gov
cache_mem 512 MB
dns_nameservers 10.10.2.214 10.10.2.215
refresh_all_ims on
memory_replacement_policy heap GDSF
maximum_object_size_in_memory 1024 KB
shutdown_lifetime 5 seconds
client_db off


The server has two dual core processors, 8 GB of RAM and two 15K hard
drives for my aufs cache volumes.
I just put the server into production and it has about 50 users configured
to use the proxy.

Any help is appreciated.

Thank you,

Mike Grasso
Data Network Administrator
DC Circuit Court of Appeals
(202) 216-7443