[squid-users] Define two cache_peer directives with same IP but different ports

2014-07-14 Thread Klaus Reithmaier
Hello,

I have two machines, each running two squid processes. I want every process
to query the other three over HTCP to check whether they have a given element
in their cache.

So this is my setting:

 
 ------------------------------      ------------------------------
| Proxyserver1: IP 192.168.1.1 |    | Proxyserver2: IP 192.168.1.2 |
|                              |    |                              |
|   squid1: Port 8080          |    |   squid1: Port 8080          |
|   squid2: Port 8081          |    |   squid2: Port 8081          |
 ------------------------------      ------------------------------

This is the cache_peer configuration on Proxyserver1 process squid1:
-- START config --
cache_peer 192.168.1.1 sibling 8081 4828 proxy-only htcp
cache_peer 192.168.1.2 sibling 8080 4827 proxy-only htcp
cache_peer 192.168.1.2 sibling 8081 4828 proxy-only htcp
-- END config --

It's obvious that
cache_peer 192.168.1.2 sibling 8080 4827 proxy-only htcp and
cache_peer 192.168.1.2 sibling 8081 4828 proxy-only htcp
are different proxies, because they use different ports. But squid won't
start:

FATAL: ERROR: cache_peer 192.168.1.2 specified twice
Squid Cache (Version 3.3.12): Terminated abnormally.

How can I define two siblings on the same machine?

Thanks






Re: [squid-users] Define two cache_peer directives with same IP but different ports

2014-07-14 Thread Antony Stone
On Monday 14 July 2014 at 12:21:19, Klaus Reithmaier wrote:

> I have two machines, each running two squid processes. I want every
> process to query the other three over HTCP.
> [...]
> This is the cache_peer configuration on Proxyserver1 process squid1:
> -- START config --
> cache_peer 192.168.1.1 sibling 8081 4828 proxy-only htcp
> cache_peer 192.168.1.2 sibling 8080 4827 proxy-only htcp
> cache_peer 192.168.1.2 sibling 8081 4828 proxy-only htcp
> -- END config --
> [...]
> FATAL: ERROR: cache_peer 192.168.1.2 specified twice
> Squid Cache (Version 3.3.12): Terminated abnormally.
>
> How can I define two siblings on the same machine?

You could add another IP address to each machine and bind one squid process to
each IP, for example:

proxyserver1, squid1: 192.168.1.1 port 8080
proxyserver1, squid2: 192.168.1.3 port 8080
proxyserver2, squid1: 192.168.1.2 port 8080
proxyserver2, squid2: 192.168.1.4 port 8080
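
In squid.conf terms this just means binding each instance's listening socket
explicitly - an untested sketch, reusing the addresses above (the .3/.4
addresses are ones you would first have to add as aliases on the machines):

-- START config (proxyserver1, instance squid1) --
http_port 192.168.1.1:8080
-- END config --

-- START config (proxyserver1, instance squid2) --
http_port 192.168.1.3:8080
-- END config --

With a unique IP per instance, every cache_peer line then refers to a
distinct host, so the duplicate-peer check no longer triggers.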


Regards,


Antony.

-- 
“If code doesn’t receive constant love, it turns to shit.”

 - Brad Fitzpatrick, Google engineer

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] Define two cache_peer directives with same IP but different ports

2014-07-14 Thread Amos Jeffries
On 14/07/2014 10:32 p.m., Antony Stone wrote:
> On Monday 14 July 2014 at 12:21:19, Klaus Reithmaier wrote:
>
>> FATAL: ERROR: cache_peer 192.168.1.2 specified twice
>> Squid Cache (Version 3.3.12): Terminated abnormally.
>>
>> How can I define two siblings on the same machine?

Set the cache_peer name= option to a unique value on each cache_peer
line. The default name is the IP/host parameter value.
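
Applied to the config above, that would look something like this (untested
sketch; the name= values are arbitrary labels):

-- START config --
cache_peer 192.168.1.1 sibling 8081 4828 proxy-only htcp name=ps1-squid2
cache_peer 192.168.1.2 sibling 8080 4827 proxy-only htcp name=ps2-squid1
cache_peer 192.168.1.2 sibling 8081 4828 proxy-only htcp name=ps2-squid2
-- END config --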

Amos



Re: [squid-users] Define two cache_peer directives with same IP but different ports

2014-07-14 Thread Klaus Reithmaier
 

-----Amos Jeffries squ...@treenet.co.nz wrote: -----
To: squid-users@squid-cache.org
From: Amos Jeffries squ...@treenet.co.nz
Date: 14.07.2014 12:39
Subject: Re: [squid-users] Define two cache_peer directives with same IP but
different ports

> On 14/07/2014 10:32 p.m., Antony Stone wrote:
>> On Monday 14 July 2014 at 12:21:19, Klaus Reithmaier wrote:
>>> How can I define two siblings on the same machine?
>
> Define the cache_peer name= option to a unique value for each cache_peer
> line. The default name is the IP/host parameter value.
>
> Amos

Thanks Amos, exactly what I was looking for. Sorry for overlooking this in
the documentation...

Klaus






RE: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-14 Thread Martin Sperl
Hi!

I did a bit of an analysis of the data gathered so far.

Current status: 8236072 KB of memory allocated by squid since the restart of
squid on the 8th, so over about 5-6 days.

The following memory pools show most of the increase over the last 2 days
(>100 KB):
Type                  KB-20140712  KB-20140714  KB-Delta  Cnt-20140712  Cnt-20140714  Cnt-Delta
Total                     5629096      7494319   1865223      26439770      33704210    7264440
mem_node                  2375038      3192370    817332        588017        790374     202357
4K Buffer                 1138460      1499996    361536        284615        374999      90384
Short Strings              456219       606107    149888      11679188      15516319    3837131
16K Buffer                 213120       323120    110000         13320         20195       6875
HttpHeaderEntry            312495       415162    102667       5714194       7591516    1877322
2K Buffer                  249876       351226    101350        124938        175613      50675
8K Buffer                  135944       182360     46416         16993         22795       5802
HttpReply                  133991       178174     44183        490023        651607     161584
MemObject                  114845       152713     37868        490004        651575     161571
Medium Strings              90893       120859     29966        727141        966866     239725
cbdata BodyPipe (39)        65367        88238     22871        440363        594443     154080
HttpHdrCc                   41486        55327     13841        442515        590153     147638
32K Buffer                  23584        35360     11776           737          1105        368
cbdata MemBuf (13)          30627        40726     10099        490026        651615     161589
LRU policy node             46068        49871      3803       1965553       2127797     162244
64K Buffer                   1664         2240       576            26            35          9
Long Strings                 1444         2007       563          2888          4014       1126
StoreEntry                 173530       173746       216       1480781       1482628       1847

All of those increase linearly. They also show similar wavy behavior - when
one has a bump, some of the others have a bump as well.

So the pools fall into several groups:
* pools that stay constant (wordlist, ...)
* pools that show variability matching our traffic curves (Comm::Connections)
* pools that increase minimally - already at 80% of their current KB 2 days
ago (ip_cache_entry, LRU_policy_node)
* a pool that increases a bit - at 35% of its current KB 2 days ago
(fqdncache_entry)
* pools that increase a lot - below 20% of their current KB 2 days ago -
which are (sorted from biggest to smallest KB footprint):
** mem_node
** 4K Buffer
** Short Strings
** HttpHeaderEntry
** 2K Buffer
** 16K Buffer
** 8K Buffer
** HttpReply
** MemObject
** Medium Strings
** cbdata BodyPipe (39)
** HttpHdrCc
** cbdata MemBuf (13)
** 32K Buffer
** Long Strings

So there must be something that links all of those in the last group together.

If you look at the hour-by-hour deltas of the percentages, you again find
that most of these pools show a traffic-curve pattern in their increase
(which is the wavy part I was talking about earlier).
All of the pools in this specific group show similar behavior with similar
ratios.

So it seems to me that we are keeping too much information in our cache which
never gets evicted - as I said earlier: my guess would be the extra info kept
to manage Vary, possibly related to some cleanup process not evicting all the
related objects in the cache...

That is when I started to look at the variation reported in other values.

So here the values of StoreEntries for the last few days:
20140709-020001:1472007 StoreEntries
20140710-020001:1475545 StoreEntries
20140711-020001:1478025 StoreEntries
20140712-020001:1480771 StoreEntries
20140713-020001:1481721 StoreEntries
20140714-020001:1482608 StoreEntries
These stayed almost constant...

But looking at StoreEntries with MemObjects, the picture is totally different:
20140709-020001:128542 StoreEntries with MemObjects
20140710-020001:275923 StoreEntries with MemObjects
20140711-020001:387990 StoreEntries with MemObjects
20140712-020001:489994 StoreEntries with MemObjects
20140713-020001:571872 StoreEntries with MemObjects
20140714-020001:651560 StoreEntries with MemObjects

And on disk objects:
20140709-020001:1470163 on-disk objects
20140710-020001:1472215 on-disk objects
20140711-020001:1473671 on-disk objects
20140712-020001:1475614 on-disk objects
20140713-020001:1475933 on-disk objects
20140714-020001:1476291 on-disk objects
(constant again)

And Hot Object Cache Items:
20140709-020001:128532 Hot Object Cache Items
20140710-020001:275907 Hot Object Cache Items
20140711-020001:387985 Hot Object Cache Items
20140712-020001:489989

[squid-users] 502 Bad Gateway

2014-07-14 Thread ama...@tin.it
Hello,

I have a problem with:
- squid-3.3.9
- squid-3.4.5
but NO problem with:
- squid-2.7.stable9
- no proxy at all

I have tested with Firefox 24.6 and Internet Explorer 8.0.

In the browser the error displayed is:


The following error was encountered while trying to retrieve the URL: 
http://www.regione.lombardia.it/

Read Error

The system returned: (104) Connection reset by peer


An error condition occurred while reading data from the network. Please 
retry your request.

Your cache administrator is ...

In access.log:

(on ver 3.3.9)
1405342317.708      7 xxx.xxx.xxx.xxx:52686 TCP_MISS/502 4072 GET
http://www.regione.lombardia.it/ - HIER_DIRECT/www.regione.lombardia.it
text/html HTTP/1.1 - 357 4072 Mozilla/5.0 (Windows NT 6.1; rv:24.0)
Gecko/20100101 Firefox/24.0

(on ver 3.4.5)
1405342612.999      7 xxx.xxx.xxx.xxx:52718 TCP_MISS/502 4072 GET
http://www.regione.lombardia.it/ - HIER_DIRECT/www.regione.lombardia.it
text/html HTTP/1.1 - 357 4072 Mozilla/5.0 (Windows NT 6.1; rv:24.0)
Gecko/20100101 Firefox/24.0

(on 2.7)
1405339974.820    366 xxx.xxx.xxx.xxx:52473 TCP_MISS/200 148096 GET
http://www.regione.lombardia.it/shared/ccurl/683/852/banner.jpg
- DIRECT/62.101.84.131 image/jpeg HTTP/1.1
http://www.regione.lombardia.it/cs/Satellite?c=Page&childpagename=HomeSPRL/HomePageLayout&cid=1194454760265&pagename=HMSPRLWrapper&rendermode=live
528 148096 Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101
Firefox/24.0



Do you have any idea?

Regards,
 Maurizio 


Re: [squid-users] 502 Bad Gateway

2014-07-14 Thread Eliezer Croitoru

Hey There,

We don't have a clue about your setup (you haven't shared any relevant
information).


What OS are you using?
Is it a self-compiled version of squid?
Is this a reverse or a forward proxy?

You can use this script to share the relevant information:
http://www1.ngtech.co.il/squid/basic_data.sh

You can modify the file locations inside the script if needed.
The script collects more than is strictly needed in your case, but sharing
this information won't do any harm.


Eliezer

On 07/14/2014 04:14 PM, ama...@tin.it wrote:
> I have a problem with:
> - squid-3.3.9
> - squid-3.4.5
> but NO problem with:
> - squid-2.7.stable9
> - no proxy at all




Re: [squid-users] Define two cache_peer directives with same IP but different ports

2014-07-14 Thread Eliezer Croitoru

It's very easy to miss.
I think the user could be notified about it with a simple one-liner hint, but
I'm not 100% sure that's the right thing to do.


Eliezer

On 07/14/2014 02:08 PM, Klaus Reithmaier wrote:
> Thanks Amos, exactly what I was looking for. Sorry for overlooking this in
> the documentation...
>
> Klaus




[squid-users] squid head RPM for the request.

2014-07-14 Thread Eliezer Croitoru

I got a request for squid HEAD RPMs privately, and now it's public.
They can be found here:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/squid-3.5.0.003-1.el6.x86_64.rpm
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/squid-debuginfo-3.5.0.003-1.el6.x86_64.rpm
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/squid-helpers-3.5.0.003-1.el6.x86_64.rpm

It's based on the sources of:
http://www.squid-cache.org/Versions/v3/3.HEAD/squid-3.HEAD-20140714-r13496.tar.gz

I do hope to release 3.4.6 next week, but since I have been walking through
the bugs in Bugzilla, it takes time to understand what will affect the new
release and what will not.


I am considering releasing only 3.4.7, due to a couple of issues.

Eliezer


Re: [squid-users] Re: how can i get the localport in forward proxy mode?

2014-07-14 Thread Eliezer Croitoru

You are wrong.
If you authenticate using a regular HTTP login page, it will work for mobile
clients too, but they won't gain access through the proxy until they have
logged in on the splash page (or another page).

This approach is used on many WiFi networks and is very simple to implement.
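
For reference, a minimal session-ACL sketch (helper path, timeouts, database
location and the splash URL are assumptions; check your distribution's
layout):

-- START config --
external_acl_type session ttl=60 negative_ttl=0 children-max=1 %SRC \
    /usr/lib/squid/ext_session_acl -t 3600 -b /var/lib/squid/session.db
acl logged_in external session
http_access deny !logged_in
deny_info http://splash.example.com/login logged_in
-- END config --

The deny_info line sends not-yet-authenticated clients to the splash page;
once the helper marks their source IP as active, normal proxy access works.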

Good Luck,
Eliezer


On 07/13/2014 03:45 AM, freefall12 wrote:
> The 2 alternatives you've suggested don't appear to fit my plan.
>
> To my knowledge, ext_session_acl will still need a page for
> reauthentication, which doesn't work with mobile apps. Is that correct?
> Also, enlarging the port range definition may not be a practical
> solution when users grow to hundreds or thousands.





[squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread Patrick Chemla

Hi,

I have a multi-port config of squid, upgraded from version 3.1.19 to 3.3.12.
It works like a charm, but the traffic is reaching the limit of one CPU.


I want to use the SMP capabilities, with SMP workers, on my 8-CPU/64 GB RAM
Fedora 20 box.


I saw at
http://wiki.squid-cache.org/Features/SmpScale#What_can_workers_share.3F
that workers can share http_ports - right?


When I run with 'workers 1', I can see the squid-1 process listening on the
designated port with netstat.


When I run with more than one worker, I can see the processes squid-1,
squid-2 ... squid-n with 'ps -ef | fgrep squid', but no process listening on
any TCP port with 'netstat -apn' (I do see all the processes listening on
UDP ports).


I can't find any configuration example featuring the SMP workers capability
for squid 3.3.12, including the http_port lines.


Could anyone help me here?

Thanks a lot
Patrick



Re: [squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread Eliezer Croitoru

Hey There,

It depends. If you are using a UFS/AUFS cache_dir, you cannot use SMP with
it. You will need to use rock, and only rock, as a cache_dir for the time
being.
Run 'squid -k parse' to make sure your settings make sense to squid.
Other than that, look at cache.log for any output that makes sense of the
result.
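
For orientation, a minimal SMP sketch (worker count, path and sizes are
invented; rock is the only SMP-aware cache_dir in 3.3):

-- START config --
workers 4
http_port 8200
cache_dir rock /var/spool/squid/rock 4096 max-size=32768
-- END config --

By default all workers share the same http_port listening sockets, which is
why a single shared port is enough here.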

If you want us to look at your squid.conf, share it.

I won't try to spoil anything about it: Fedora is a powerful system, but not
every sysadmin will like working with it for long, due to the short life
cycle of this OS.


Eliezer

On 07/14/2014 06:30 PM, Patrick Chemla wrote:

Hi,

I have a multi-ports config of squid running from version 3.1.19
upgraded to 3.3.12. Working like a charm, but the traffic is reaching
one cpu limit.

I want to use SMP capabilities with SMP workers on my 8 cpus/64G mem
Fedora 20 box.

I saw in the
http://wiki.squid-cache.org/Features/SmpScale#What_can_workers_share.3F
that workers can share http_ports, right?

When I run with workers 1 , I can see the squid-1 process listen on the
designed port with netstat.

When I run with workers greater than 1, I can see processes squid-1,
squid-2... squid-n with ps -ef|fgrep squid, but not any  process
listening to any tcp port with netstat -apn (I see all processes
listening to udp ports).

I can't find any configuration example featuring SMP workers capability
for squid 3.3.12, including http_ports lines.

Could any one help me there?

Thanks a lot
Patrick




Re: [squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread Patrick Chemla

Hey Eliezer,

Happy to read you.

What do you mean by rock as a cache_dir?

Here is the output of 'squid -k parse':
2014/07/14 18:13:25| Startup: Initializing Authentication Schemes ...
2014/07/14 18:13:25| Startup: Initialized Authentication Scheme 'basic'
2014/07/14 18:13:25| Startup: Initialized Authentication Scheme 'digest'
2014/07/14 18:13:25| Startup: Initialized Authentication Scheme 'negotiate'
2014/07/14 18:13:25| Startup: Initialized Authentication Scheme 'ntlm'
2014/07/14 18:13:25| Startup: Initialized Authentication.
2014/07/14 18:13:25| Processing Configuration File: 
/etc/squid/squid.conf (depth 0)

2014/07/14 18:13:25| Processing: acl localhost src 127.0.0.1/32 ::1
2014/07/14 18:13:25| WARNING: (B) '127.0.0.1' is a subnetwork of (A) 
'127.0.0.1'
2014/07/14 18:13:25| WARNING: because of this '127.0.0.1' is ignored to 
keep splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '127.0.0.1' 
from the ACL named 'localhost'
2014/07/14 18:13:25| WARNING: (B) '127.0.0.1' is a subnetwork of (A) 
'127.0.0.1'
2014/07/14 18:13:25| WARNING: because of this '127.0.0.1' is ignored to 
keep splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '127.0.0.1' 
from the ACL named 'localhost'

2014/07/14 18:13:25| WARNING: (B) '::1' is a subnetwork of (A) '::1'
2014/07/14 18:13:25| WARNING: because of this '::1' is ignored to keep 
splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '::1' from the 
ACL named 'localhost'

2014/07/14 18:13:25| WARNING: (B) '::1' is a subnetwork of (A) '::1'
2014/07/14 18:13:25| WARNING: because of this '::1' is ignored to keep 
splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '::1' from the 
ACL named 'localhost'

2014/07/14 18:13:25| Processing: acl internalnet src 10.0.0.1/32
2014/07/14 18:13:25| Processing: acl to_localhost dst 127.0.0.0/8 
0.0.0.0/32 ::1
2014/07/14 18:13:25| WARNING: (B) '127.0.0.0/8' is a subnetwork of (A) 
'127.0.0.0/8'
2014/07/14 18:13:25| WARNING: because of this '127.0.0.0/8' is ignored 
to keep splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '127.0.0.0/8' 
from the ACL named 'to_localhost'

2014/07/14 18:13:25| WARNING: (B) '0.0.0.0' is a subnetwork of (A) '0.0.0.0'
2014/07/14 18:13:25| WARNING: because of this '0.0.0.0' is ignored to 
keep splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '0.0.0.0' from 
the ACL named 'to_localhost'

2014/07/14 18:13:25| WARNING: (B) '0.0.0.0' is a subnetwork of (A) '0.0.0.0'
2014/07/14 18:13:25| WARNING: because of this '0.0.0.0' is ignored to 
keep splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '0.0.0.0' from 
the ACL named 'to_localhost'

2014/07/14 18:13:25| WARNING: (B) '::1' is a subnetwork of (A) '::1'
2014/07/14 18:13:25| WARNING: because of this '::1' is ignored to keep 
splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '::1' from the 
ACL named 'to_localhost'

2014/07/14 18:13:25| WARNING: (B) '::1' is a subnetwork of (A) '::1'
2014/07/14 18:13:25| WARNING: because of this '::1' is ignored to keep 
splay tree searching predictable
2014/07/14 18:13:25| WARNING: You should probably remove '::1' from the 
ACL named 'to_localhost'
2014/07/14 18:13:25| Processing: acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
2014/07/14 18:13:25| Processing: acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
2014/07/14 18:13:25| Processing: acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
2014/07/14 18:13:25| Processing: acl localnet src fc00::/7       # RFC 4193 local private network range
2014/07/14 18:13:25| Processing: acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

2014/07/14 18:13:25| Processing: acl SSL_ports port 443
2014/07/14 18:13:25| Processing: acl Safe_ports port 80          # http
2014/07/14 18:13:25| Processing: acl Safe_ports port 21          # ftp
2014/07/14 18:13:25| Processing: acl Safe_ports port 443         # https
2014/07/14 18:13:25| Processing: acl Safe_ports port 70          # gopher
2014/07/14 18:13:25| Processing: acl Safe_ports port 210         # wais
2014/07/14 18:13:25| Processing: acl Safe_ports port 1025-65535  # unregistered ports
2014/07/14 18:13:25| Processing: acl Safe_ports port 280         # http-mgmt
2014/07/14 18:13:25| Processing: acl Safe_ports port 488         # gss-http
2014/07/14 18:13:25| Processing: acl Safe_ports port 591         # filemaker
2014/07/14 18:13:25| Processing: acl Safe_ports port 777         # multiling http

2014/07/14 18:13:25| Processing: acl CONNECT method CONNECT
2014/07/14 18:13:25| Processing: http_port 8200
2014/07/14 18:13:25| Processing: include /etc/squid/conf.d/*
2014/07/14 18:13:25| Processing Configuration File:

[squid-users] Host header forgery policy

2014-07-14 Thread Edwin Marqe
Hi all,

After an upgrade of squid3 to version 3.3.8-1ubuntu6, I got the unpleasant
surprise of what is called the Host header forgery policy.

I've read the documentation on this part, and although I understand the
motivation for its implementation, I honestly don't find it practical to
implement this without the possibility of disabling it, basically because
not all scenarios fit the requirements described in the documentation.

I have about 30 clients and I've configured squid3 as a transparent proxy on
port 3128 on a remote server. The entry point is port 8080, which is then
redirected on the same host to port 3128.

However, *any* opened URL throws the warning:

2014/07/14 19:21:52.612| SECURITY ALERT: Host header forgery detected
on local=10.10.0.1:8080 remote=10.10.0.6:59150 FD 9 flags=33 (local IP
does not match any domain IP)
2014/07/14 19:21:52.612| SECURITY ALERT: By user agent: Mozilla/5.0
(Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0
2014/07/14 19:21:52.612| SECURITY ALERT: on URL: google.com:443
2014/07/14 19:21:52.612| abandoning local=10.10.0.1:8080
remote=10.10.0.6:59150 FD 9 flags=33

I have manually configured the browsers of these clients. The problem is
that in the company's network I have my own DNS servers, while on the remote
host (where the Squid server is running) there are others; as the host is
rented from an external company which doesn't allow changing those DNS
nameservers, I wonder what to do. Is there any solution at this point?

Thanks.


Re: [squid-users] Host header forgery policy

2014-07-14 Thread Eliezer Croitoru

Hey There,

I do not know your setup, but if you run

dig domain.com

and the results differ from what the client tries to request, it looks like
host header forgery to squid. In the case of google, it seems that the name
resolves to something other than what squid expects, but I cannot know which
answer is the right one.
You know your network best, and if the client and squid use different DNS
servers, this is exactly the result you will get.


The basic fix is to use the same DNS servers for both squid and the clients.


Regards,
Eliezer

On 07/14/2014 08:46 PM, Edwin Marqe wrote:
> I have about 30 clients and I've configured squid3 as a transparent proxy
> on port 3128 on a remote server. The entry point is port 8080, which is
> then redirected on the same host to port 3128.
>
> However, *any* opened URL throws the warning:
>
> 2014/07/14 19:21:52.612| SECURITY ALERT: Host header forgery detected
> on local=10.10.0.1:8080 remote=10.10.0.6:59150 FD 9 flags=33 (local IP
> does not match any domain IP)
> [...]
> as this is hosted by an external company which doesn't allow changing
> those DNS nameservers, I wonder what to do? Is there any solution at
> this point?




Re: [squid-users] Host header forgery policy

2014-07-14 Thread Edwin Marqe
Hi Eliezer,

I understand that, but this is pretty much the point of my e-mail. In my
company we don't work with servers installed physically here; instead, we
rent servers from a company. We use 2 nameservers for our clients, while the
IT company uses others; additionally, they don't allow changing them, and
theirs are restricted to their own network... So I don't know what else I
can do.

We don't have a specific configuration for the google.com DNS entry, so I
don't really know why Squid says it's pointing to a local address. The
address appearing in the log is the local address of the client making the
request. There's no other redirection nor any complex iptables rules for
this. Any idea?

Thanks


Re: [squid-users] Host header forgery policy

2014-07-14 Thread James Lay
On Mon, 2014-07-14 at 19:23 +0100, Edwin Marqe wrote:

> We don't have a specific configuration for the google.com DNS entry,
> so I don't really know why Squid says it's pointing to a local
> address. [...] Any idea?

Per docs:

http://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery

James



Re: [squid-users] Host header forgery policy

2014-07-14 Thread Eliezer Croitoru

On 07/14/2014 09:23 PM, Edwin Marqe wrote:

> I understand that, but this is pretty much the point of my e-mail. In
> my company we don't work with servers installed physically here;
> instead, we rent servers from a company. [...] So I don't know what
> else I can do.

It's still not a squid-related issue...

> We don't have a specific configuration for the google.com DNS entry,
> so I don't really know why Squid says it's pointing to a local
> address.

It's not...
It's only referring to the client address, 10.10.0.6.

> The address appearing in the log is the local address of the client
> making the request. There's no other redirection nor any complex
> iptables rules for this. Any idea?

Indeed there is.
You can do one of two things (see the sketch below):
- use the IT company's DNS server, or
- force the clients' DNS servers on squid.

One of the above should be done; if the company won't give you this, you can
tell them it's required to operate squid, and if they don't want to let you
use or forward to their DNS server, they need to fix the issue themselves.

It's pretty simple, one way or another.
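
The second option is a one-line squid.conf directive; a sketch (the resolver
addresses are invented placeholders for the company's nameservers):

-- START config --
dns_nameservers 10.10.0.53 10.10.0.54
-- END config --

With that set, squid resolves Host headers against the same resolvers the
clients use, so the forgery check compares like with like.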

Eliezer




Re: [squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread Eliezer Croitoru

On 07/14/2014 08:42 PM, Patrick Chemla wrote:
> Hey Eliezer,
>
> Happy to read you.
>
> What do you mean by rock as a cache_dir?


Squid uses a cache_dir to store objects on disk.
If you don't know what it is, I'll refer you to the configuration page:
http://www.squid-cache.org/Doc/config/cache_dir/

Your basic issue is related to SHM and/or SELinux.
You can use the basic_data.sh script to get most of the needed information
about your system and the issue.


First, disable SELinux or put it in permissive mode.
Then make sure you have an SHM partition mounted.
Only then will squid work with SMP support.

Good Luck,
Eliezer


[squid-users] Re: Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread babajaga
Besides SMP, there is still the old-fashioned option of multiple instances
of squid, in a sandwich config:
http://wiki.squid-cache.org/MultipleInstances

Besides the port rotation described there, you can set up 3 squids, for
example: one frontend, doing just ACLs and request dispatching (CARP; see
the sketch below), and 2 backends, doing the real caching.
This variant has the advantage of avoiding double caching, which might
happen with the port rotation alternative.
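
A sketch of the frontend in such a sandwich (ports and names are invented;
note the name= option, needed here for the same duplicate-host reason
discussed in the cache_peer thread above):

-- START config (frontend) --
http_port 3128
cache deny all
cache_peer 127.0.0.1 parent 8101 0 carp no-query no-digest proxy-only name=backend1
cache_peer 127.0.0.1 parent 8102 0 carp no-query no-digest proxy-only name=backend2
never_direct allow all
-- END config --

Each backend then listens on its own port (8101/8102) and owns its own
cache_dir.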





Re: [squid-users] feature request for sslbump

2014-07-14 Thread Brendan Kearney
On Mon, 2014-07-14 at 15:57 +1200, Jason Haar wrote:
> Hi there
>
> I've started testing sslbump with ssl_bump server-first and have
> noticed something (squid-3.4.5).
>
> If your clients have the proxy CA cert installed and go to legitimate
> https websites, then everything works perfectly (excluding Chrome with
> its pinning, but there's no way around that). However, if someone goes
> to an https website with either a self-signed cert or a server cert
> signed by an unknown CA, then squid generates a legitimate SSL cert
> for the site, but shows the squid error page to the browser - telling
> them the error.
>
> The problem with that model is that it means no-one can get to websites
> using self-signed certs. Using sslproxy_cert_adapt to allow such
> self-signed certs is not a good idea - squid would then effectively be
> legitimizing the server, which may be a Very Bad Thing.
>
> So I was thinking: how about squid (upon noticing the external site
> isn't trustworthy) generating a deliberately self-signed server cert
> itself (i.e. not signed by the proxy CA)? Then the browser would see
> the untrusted cert, the user would get the popup asking if they want
> to ignore cert errors, and can then choose whether to trust it or not.
> That way the user can still get to sites using self-signed certs, and
> the proxy gets to see into the content, potentially running AVs over
> content/etc.
>
> ...or haven't I looked hard enough and this is already an option? :-)
>
> Thanks

An unnamed enterprise vendor provides Preserve Untrusted Issuer
functionality, very much like you describe. It leaves the decision whether
or not to accept the untrusted cert to the user. The cert also needs to be
valid (within its dates) and to match the URL, exactly or via wildcarding or
SAN, to be allowed. Since I have not started intercepting SSL with squid
yet, I have not run into this scenario or contemplated what I would do in
it.
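
For what it's worth, recent Squid releases document an sslproxy_cert_sign
directive with a signUntrusted algorithm that looks close to the behavior
requested here - treat the following as an assumption to verify against your
version's sslproxy_cert_sign documentation, not a confirmed answer:

-- START config --
# sign mimicked certs for servers failing CA validation with a separate,
# untrusted signing CA, so the browser shows its normal warning dialog
sslproxy_cert_sign signUntrusted ssl::certUntrusted
-- END config --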



Re: [squid-users] feature request for sslbump

2014-07-14 Thread Lawrence Pingree
Several SSL-inspecting firewalls also provide this capability.

Sent from my iPhone

> On Jul 14, 2014, at 6:10 PM, Brendan Kearney bpk...@gmail.com wrote:
>
> An unnamed enterprise vendor provides Preserve Untrusted Issuer
> functionality, very much like you describe. It leaves the decision
> whether or not to accept the untrusted cert to the user.
> [...]