Re: [squid-users] SSL bumping without faked server certificates

2015-11-23 Thread Stefan Kutzke
Hi Alex,

sorry for the late reply.

> > 2015/11/10 19:24:30.181 kid1| 33,5|...
> > 2015/11/10 19:25:30.016 kid1| 33,3| AsyncCall.cc(93) ScheduleCall:
> > IoCallback.cc(135) will call
> > ConnStateData::clientPinnedConnectionRead(local=172.31.1.15:49421
> > remote=212.45.105.89:443 FD 15 flags=1, flag=-10, data=0x19ced08)
> > [call349]>
>
> This one second gap after a successful SSL negotiation with the
> origin server is rather suspicious, but I am going to ignore it ...

This is not one second but one minute, which is simply curl's default timeout.


Nevertheless, I have built a new RPM package from the latest 3.5.11 source with 
the patch you mentioned.
The result is the same. I have reduced the curl timeout to 10 seconds:

Client:
# curl -vvv --connect-timeout 10 
https://school.bettermarks.com/static/flexclient4/bm_exerciseseries.swf -o 
/dev/null
* About to connect() to school.bettermarks.com port 443 (#0)
*   Trying 212.45.105.89... connected
* Connected to school.bettermarks.com (212.45.105.89) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* NSS error -5990
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error

Now there is a 10 second gap in Squid's cache log.

Squid:
2015/11/23 10:20:05.152 kid1| 33,5| client_side.cc(3693) httpsCreate: will 
negotate SSL on local=212.45.105.89:443 remote=10.0.0.2:41428 FD 11 flags=33
2015/11/23 10:20:05.152 kid1| 33,5| AsyncCall.cc(26) AsyncCall: The AsyncCall 
ConnStateData::requestTimeout constructed, this=0x1ff6340 [call77]
2015/11/23 10:20:14.992 kid1| 83,7| bio.cc(168) stateChanged: FD 11 now: 0x10 
UNKWN  (before/accept initialization)
2015/11/23 10:20:14.992 kid1| 83,7| bio.cc(168) stateChanged: FD 11 now: 0x2001 
UNKWN  (before/accept initialization)
2015/11/23 10:20:14.992 kid1| 83,5| bio.cc(118) read: FD 11 read 0 <= 11
2015/11/23 10:20:14.992 kid1| 83,5| bio.cc(144) readAndBuffer: read 0 out of 11 
bytes
2015/11/23 10:20:14.992 kid1| 83,2| client_side.cc(3725) Squid_SSL_accept: 
Error negotiating SSL connection on FD 11: Aborted by client: 5


I dug deeper into the traffic using Wireshark. As a reminder, my network 
setup:
Client (10.0.0.2)  <--->  (10.0.0.1) Squid (172.31.1.15)  <--->  212.45.105.89 
(Origin)

Here is the relevant packet flow. I have stripped out DNS, NTP, etc., and the 
time format is UTC (Squid's cache log above shows UTC+1):

Client:
10 2015-11-23 09:20:04.971734836 10.0.0.2 212.45.105.89 TCP 74 41428→443 [SYN] 
Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=5322725 TSecr=0 WS=128
12 2015-11-23 09:20:04.971946983 212.45.105.89 10.0.0.2 TCP 74 443→41428 [SYN, 
ACK] Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=2202045 
TSecr=5322725 WS=128
13 2015-11-23 09:20:04.971968589 10.0.0.2 212.45.105.89 TCP 66 41428→443 [ACK] 
Seq=1 Ack=1 Win=14720 Len=0 TSval=5322726 TSecr=2202045
17 2015-11-23 09:20:05.047529339 10.0.0.2 212.45.105.89 SSL 174 Client Hello
19 2015-11-23 09:20:05.047868761 212.45.105.89 10.0.0.2 TCP 66 443→41428 [ACK] 
Seq=1 Ack=109 Win=14592 Len=0 TSval=2202121 TSecr=5322801
26 2015-11-23 09:20:14.980851745 10.0.0.2 212.45.105.89 TCP 66 41428→443 [FIN, 
ACK] Seq=109 Ack=1 Win=14720 Len=0 TSval=5332735 TSecr=2202121
27 2015-11-23 09:20:14.982049717 212.45.105.89 10.0.0.2 TCP 66 443→41428 [FIN, 
ACK] Seq=1 Ack=110 Win=14592 Len=0 TSval=2212055 TSecr=5332735
28 2015-11-23 09:20:14.982087279 10.0.0.2 212.45.105.89 TCP 66 41428→443 [ACK] 
Seq=110 Ack=2 Win=14720 Len=0 TSval=5332736 TSecr=2212055

Squid:
13 2015-11-23 09:20:04.983024000 10.0.0.2 212.45.105.89 TCP 74 41428→443 [SYN] 
Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=5322725 TSecr=0 WS=128
14 2015-11-23 09:20:04.98308 212.45.105.89 10.0.0.2 TCP 74 443→41428 [SYN, 
ACK] Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=2202045 
TSecr=5322725 WS=128
17 2015-11-23 09:20:04.983252000 10.0.0.2 212.45.105.89 TCP 66 41428→443 [ACK] 
Seq=1 Ack=1 Win=14720 Len=0 TSval=5322726 TSecr=2202045
26 2015-11-23 09:20:05.058868000 10.0.0.2 212.45.105.89 SSL 174 Client Hello
27 2015-11-23 09:20:05.058927000 212.45.105.89 10.0.0.2 TCP 66 443→41428 [ACK] 
Seq=1 Ack=109 Win=14592 Len=0 TSval=2202121 TSecr=5322801
32 2015-11-23 09:20:05.060596000 172.31.1.15 212.45.105.89 TCP 74 34995→443 
[SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=2202122 TSecr=0 WS=128
33 2015-11-23 09:20:05.081926000 212.45.105.89 172.31.1.15 TCP 74 443→34995 
[SYN, ACK] Seq=0 Ack=1 Win=4380 Len=0 MSS=1460 TSval=866426570 TSecr=2202122 
SACK_PERM=1
34 2015-11-23 09:20:05.081976000 172.31.1.15 212.45.105.89 TCP 66 34995→443 
[ACK] Seq=1 Ack=1 Win=14600 Len=0 TSval=2202144 TSecr=866426570
35 2015-11-23 09:20:05.082267000 172.31.1.15 212.45.105.89 TLSv1.2 359 Client 
Hello
36 2015-11-23 09:20:05.114617000 212.45.105.89 172.31.1.15 TLSv1.2 1514 Server 
Hello
37 2015-11-23 09:20:05.114654000 172.31.1.15 212.45.105.89 TCP 66 34995→443 
[ACK] Seq=294 Ack=1449 Win=17376 Len=0 TSval=2202177 TSecr=866426602

Re: [squid-users] Store-ID documentation could be a little clearer.

2015-11-23 Thread Amos Jeffries
On 24/11/2015 1:38 p.m., 1508 wrote:
> Hello,
> 
> Thank you for your replies.  I spent a long time typing this and I would be
> grateful if you can read it all at least twice slowly before sending a
> reply.
> 
> A reminder... Give yourself a smug smile if you find a spelling mistake, my
> screen reader is used to my Typonese and my seeing eye dog can't proofread.  
> 
> Yes, I am almost blind but not daft... 
> 
> I also said 
> 
> I am not trying to pick any holes... You both are far cleverer than me. Vi
> is rocket science, Nano is my friend. I am trying to establish some facts to
> make an accurate bit of documentation... I want to do something to pay back
> many people's efforts.
> 
> Anyway, E (sorry, I can't type the rest of your name, forgive me), I looked at
> the article you found on Google. I prefer the man pages first, then the
> program's web pages and documentation. Bear in mind I use a screen reader and
> it takes ages to listen to stuff.
> 
> 
> I would like to create a working example so I intend to use the sourceforge
> example in the database. I'd pick something that is reproducible from
> Sourceforge to help the new user check the database and script are working.
> 

I think I get what you are trying to do. But we do things a little
differently in the Squid documentation.

Features/ wiki pages are for documenting the Squid feature and teaching
people about it. *NOT* for providing working configurations. There are
usually just too many moving parts for the latter.

We have ConfigExamples/ wiki pages for narrow configuration how-tos
like the one I think you are proposing.

As far as I can see, the Features/StoreID page already contains the full
and accurate information about the StoreID feature itself. It may appear to
be missing a lot of info about setting up helpers, but that is because
this is a plugin interface feature.

The helpers and everything about setting any of them up is unrelated to
StoreID itself. The expectation is that there would be a helper for
every type of DB or storage engine anyone can dream up for putting the
StoreID data into, or for generating and calculating it on the fly.

> 
> Amos, I am not being critical, one article you gave me said database entries
> are separated by whitespace, the man page says:
> 
>  so I went with the man page.

There seems to be a misunderstanding here.

The Features/StoreID/DB page contains complete and accurate information
about the patterns registry we run. This is a simple set of pattern
pairs which are known to be checked and confirmed safe to use. It is
also *just* a flat-file DB. Many different helpers could use it or the
info provided.

The helper we provide is intended to be capable of reading the data
downloaded from the wiki; if it cannot, there is a bug to be resolved,
possibly in the helper docs or its internal regex. But it is not restricted
to those datasets, nor required to use them.

In the general case, it is expected that the wiki pattern sets be
transformed into whatever DB format the helper being used needs. So the
wiki documents what format the displayed examples take, not what formats
they could potentially be mapped to. (This is also partially to clarify
how the HTML-mangled view of the datasets should be read; you will
notice the line wrapping gets screwed up in the web view.)

The helper-specific man page should document what format the DB used by
that helper takes. It is best to ensure that custom entries being added
to the helper DB meet the relevant helper's DB format, even when downloading
the dataset(s) from our wiki for use in our provided flat-file helper.
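
For example, a minimal sketch using the flat-file helper bundled with Squid
3.5 (the helper path, option values and response syntax here are assumptions;
check the helper's own man page):

  # squid.conf - assuming the bundled storeid_file_rewrite helper
  store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.db
  store_id_children 5 startup=1

  # /etc/squid/storeid.db - each line is one "regex<TAB>store-id" pair;
  # a SourceForge-mirror style entry (illustrative; the gap must be a real TAB):
  ^http:\/\/[a-zA-Z0-9\-\.]+\.dl\.sourceforge\.net\/(.*)   http://dl.sourceforge.net.squid.internal/$1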

> 
> Now I know configuration files can be anywhere depending on what you are
> running, windoze or Linux or coal-fired abacus.  The thing is, it would not
> matter where the files are if we know the filenames and give an example of
> how to find them, e.g.:  in Windows or  in Linux.
> 
> So the outline of my bit of documentation would be along the lines of (it is
> not set in stone just something I cobbled up on my brailler) 
> 

The Features/StoreID page is the central documentation for introducing
the feature and describing all this. I've split your descriptive text below
and referenced what we have. If you look carefully, the orders match.

PS. the overall layout is a templated style, so all Feature pages should
have the same section layout to make learning Squid features an easy-ish
process (though older ones need updating sometimes).

> You want to use StoreID. 

Firstly we document what StoreID *is* (the "Details" section), what it
does and what the pros and cons of using it are (the "Known Issues"
section).
We do not assume they already know about and want it. The ToC is available
if they want to skip that part.

> OK you will need a few things like Squid,

This is assumed; the reader is on the Squid website reading about Squid
functionality/features. They may not already have Squid, but assume they
are fully aware that it will be needed.

> perl 

Perl is a basic system requirement for having Squid.

Re: [squid-users] Transparent HTTPS Squid proxy with upstream parent

2015-11-23 Thread Michael Ludvig

Hi Amos

On 09/11/15 12:55, Amos Jeffries wrote:
> On 9/11/2015 11:55 a.m., Michael Ludvig wrote:
>> [client] -> HTTPS -> [my_proxy] -> SSL -> [upstream_proxy] -> HTTPS ->
>> [target]
>>
>> Can you provide some config hints for both proxies please? The
>> SSL-related bits only as that's the unclear part.
> 
> my_proxy:
>   cache_peer example.com 3129 0 ssl
> 
> upstream_proxy:
>   https_port 3129 cert=/path/to/cert


This works well when the [client] has $https_proxy set to point to 
[my_proxy] - it then talks SSL to [upstream_proxy] and things work nicely.


However with transparent proxy / sslbump on [my_proxy] I keep getting:

Failed to establish a secure connection to 10.205.28.183 (=this is 
[upstream_proxy])

The system returned:
[No Error] (TLS code: SQUID_X509_V_ERR_DOMAIN_MISMATCH)
Certificate does not match domainname: /C=NZ/O=Example 
CA/CN=parent.example.com


On [my_proxy] I've got:
https_port 8443 intercept ssl-bump generate-host-certificates=on \
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/intermediate.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

cache_peer parent.example.com parent 3129 0 no-query ssl \
sslflags=DONT_VERIFY_DOMAIN,DONT_VERIFY_PEER
sslproxy_flags DONT_VERIFY_DOMAIN,DONT_VERIFY_PEER

On the [upstream_proxy] I've got:
https_port 3129 cert=/etc/squid/parent.example.com.pem
visible_hostname parent.example.com

I've got the certificates issued to parent.example.com and the record 
for parent.example.com in /etc/hosts on [my_proxy]


What am I doing wrong, and how do I make it work for transparent SSL proxying?

Thanks!

Michael






Re: [squid-users] Transparent HTTPS Squid proxy with upstream parent

2015-11-23 Thread Amos Jeffries
On 24/11/2015 5:49 p.m., Michael Ludvig wrote:
> Hi Amos
> 
> On 09/11/15 12:55, Amos Jeffries wrote:
>> On 9/11/2015 11:55 a.m., Michael Ludvig wrote:
>>> [client] -> HTTPS -> [my_proxy] -> SSL -> [upstream_proxy] -> HTTPS ->
>>> [target]
>>>
>>> Can you provide some config hints for both proxies please? The
>>> SSL-related bits only as that's the unclear part.
>> my_proxy:
>>   cache_peer example.com 3129 0 ssl
>>
>> upstream_proxy:
>>   https_port 3129 cert=/path/to/cert
> 
> This works well when the [client] has $https_proxy set to point to
> [my_proxy] - it then talks SSL to [upstream_proxy] and things work nicely.
> 

That was for the setup you documented:
  [client] -> HTTPS -> [my_proxy]


> However with transparent proxy / sslbump on [my_proxy] I keep getting:
> 

That is two separate and entirely different traffic types:

A) [client] -> HTTP--(NAT)--> [my_proxy]

B) [client] -> TLS--(NAT)--> [my_proxy]


(A) requires "http_port ... intercept ssl-bump cert=/path/to/cert"

(B) requires "https_port ... intercept ssl-bump cert=/path/to/cert"

The above is the minimum configuration. The generate-* etc. settings you
mention below are useful as well.

> Failed to establish a secure connection to 10.205.28.183 (=this is
> [upstream_proxy])
> The system returned:
> [No Error] (TLS code: SQUID_X509_V_ERR_DOMAIN_MISMATCH)
> Certificate does not match domainname: /C=NZ/O=Example
> CA/CN=parent.example.com
> 
> On [my_proxy] I've got:
> https_port 8443 intercept ssl-bump generate-host-certificates=on \
> dynamic_cert_mem_cache_size=4MB cert=/etc/squid/intermediate.pem
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all

This is bumping with only the client details known. In order to
impersonate the server you also need to fetch the server details (peek
or stare at step2), then bump at step3.
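
A minimal squid.conf sketch of what that means (assuming stare at step2,
since peeking at step2 usually rules out bumping later):

  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  ssl_bump peek step1
  ssl_bump stare step2
  ssl_bump bump all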

Aymeric also recently found a bug in the SNI details being sent to
peers. The very latest 3.5 snapshot may be needed as well as the step2
config change.

Amos


Re: [squid-users] Store-ID documentation could be a little clearer.

2015-11-23 Thread 1508
Hello,

Thank you for your replies.  I spent a long time typing this and I would be
grateful if you can read it all at least twice slowly before sending a
reply.

A reminder... Give yourself a smug smile if you find a spelling mistake, my
screen reader is used to my Typonese and my seeing eye dog can't proofread.  

Yes, I am almost blind but not daft... 

I also said 

I am not trying to pick any holes... You both are far cleverer than me. Vi
is rocket science, Nano is my friend. I am trying to establish some facts to
make an accurate bit of documentation... I want to do something to pay back
many people's efforts.

Anyway, E (sorry, I can't type the rest of your name, forgive me), I looked at
the article you found on Google. I prefer the man pages first, then the
program's web pages and documentation. Bear in mind I use a screen reader and
it takes ages to listen to stuff.


I would like to create a working example so I intend to use the sourceforge
example in the database. I'd pick something that is reproducible from
Sourceforge to help the new user check the database and script are working.


Amos, I am not being critical, one article you gave me said database entries
are separated by whitespace, the man page says:

 so I went with the man page.

Now I know configuration files can be anywhere depending on what you are
running, windoze or Linux or coal-fired abacus.  The thing is, it would not
matter where the files are if we know the filenames and give an example of
how to find them, e.g.:  in Windows or  in Linux.


So the outline of my bit of documentation would be along the lines of (it is
not set in stone just something I cobbled up on my brailler) 

You want to use StoreID. 
OK, you will need a few things like Squid, perl, the rewrite script, a
database file, and an entry in the squid.conf file. 
You can find the rewrite script by doing .. command on linux or  on
Windows (or  on another OS if somebody has the command to tell me.) 
You need to create a database file called  and put (sourceforge example
in it) and save the file to . on Linux or . on windows. 
Make sure the entries in the database are separated by a .
(tab/whitespace). 
When you have done this you can type ...(example command to test the
script; see the hypothetical sketch after this outline) in Linux or ... in
Windows to test your database works. 
If you see ERR then something is not right. 
If you see ... congratulations. 
You MAY have to tell Squid to set up the cache directories if you have not done
it already with  and then start Squid with  (give
examples like init.d or systemctl etc. for Fedora, Ubuntu and other popular
Linuxes, Windows, abacus etc...) 
Now go to your web browser and set up the proxy settings to the ip address
of your squid server and the correct port. 
Try the (example) in your web browser to see if the page arrives and
check the ... log file to see it was dealt with correctly (miss the 1st time,
then hit after a few retries): on Linux cat ...log | grep  or on Windows use
snaketail
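
(For illustration, a hypothetical Linux test run, assuming the bundled
storeid_file_rewrite helper and a database at /etc/squid/storeid.db; the
exact paths and output would need confirming:

  echo "http://kent.dl.sourceforge.net/project/foo/foo-1.0.tar.gz" | \
    /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.db

A matching entry should print something like
"OK store-id=http://dl.sourceforge.net.squid.internal/project/foo/foo-1.0.tar.gz"
and a non-matching URL should print "ERR".)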

I can polish the documentation once WE CAN WORK TOGETHER to get the
information correct. Please give me a chance to put something back. I don't
want any credit and you can licence it any way you wish.

Best wishes,
Terry.







[squid-users] Squid3.x have issue with some sites, squid2.x not.

2015-11-23 Thread Beto Moreno
Hi guys.

I have faced an issue with Squid Cache: Version 3.4.10.

For example, these sites:

www.salud.gob.mx
www.issemym.gob.mx

I cannot access them.

Now, I have another installation running squid 2.7.x; on that network
I can access those sites without any issue.

With squid 3.x I get this error in the logs:

The error log is: TCP_MISS_ABORTED/000.

You wait for the browser and after a while you get:

Operation timed out

I have checked my squid settings but don't see any parameter that could affect this:

---begin of config-
auth_param basic
/usr/pbi/squid-amd64/local/libexec/squid/basic_ldap_auth -v 3 -b
dc=XXX,dc=local -D cn=Manager,dc=XXX,dc=local -w  -f uid=%s -u -P
192.168.2.24:389
auth_param basic realm Please enter your credentials to access the proxy
auth_param basic children 5 startup=0 idle=1 concurrency=0
auth_param basic credentialsttl 300 seconds
auth_param basic casesensitive off
authenticate_cache_garbage_interval 3600 seconds
authenticate_ttl 3600 seconds
authenticate_ip_ttl 1 seconds
acl SINDICATO_IPS src  192.168.2.142 192.168.2.143
acl SINDICATO_USRS proxy_auth  smartinez
acl password proxy_auth  REQUIRED
acl ext_manager src  192.168.2.4
acl blacklist dstdom_regex - -i
(.facebook.com)|(.twitter.com)|(.instagram.com)|(.mozilla.net)|(.skype.com)|(.skypeassets.com)
acl unrestricted_hosts src  192.168.2.1
acl HTTPS proto  HTTPS
acl HTTP proto  HTTP
acl connect method  CONNECT
acl purge method  PURGE
acl sslports port  443 563
acl safeports port  21 70 80 210 280 443 488 563 591 631 777 901 3128
3127 1025-65535 7653 9042 9049 9079 9080 9081 9082 10081
acl allsrc src  ::/0
acl dynamic urlpath_regex  (cgi-bin)|(\?)
acl localnet src  192.168.2.0/24
acl to_localhost dst  ::1 0.0.0.0 127.0.0.0/8
acl localhost src  ::1 127.0.0.1 192.168.2.24
acl manager url_regex - -i (^cache_object://) +i
(^https?://[^/]+/squid-internal-mgr/)
acl all src  ::/0
acl ssl::certSelfSigned ssl_error  X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT
acl ssl::certUntrusted ssl_error  X509_V_ERR_INVALID_CA
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN
X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY X509_V_ERR_CERT_UNTRUSTED
acl ssl::certDomainMismatch ssl_error  SQUID_X509_V_ERR_DOMAIN_MISMATCH
acl ssl::certNotYetValid ssl_error  X509_V_ERR_CERT_NOT_YET_VALID
acl ssl::certHasExpired ssl_error  X509_V_ERR_CERT_HAS_EXPIRED
follow_x_forwarded_for deny all
 acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
http_access allow manager localhost
 http_access allow manager ext_manager
 http_access deny manager
 http_access allow purge localhost
 http_access deny purge
 http_access deny !safeports
 http_access deny connect !sslports
 http_access deny blacklist
 http_access allow ING_REST_USRS ING_REST_IPS ING_REST_SITES
 http_access deny ING_REST_USRS
 http_access allow REST_USRS REST_IPS REST_SITES
 http_access deny REST_USRS REST_IPS
 http_access allow NOMINA_USRS NOMINA_IPS NOMINA_SITES
 http_access deny NOMINA_USRS NOMINA_IPS
 http_access deny allsrc
 http_port 192.168.2.4:3128 name=192.168.2.4:3128 connection-auth=on
host_verify_strict off
client_dst_passthru on
ssl_unclean_shutdown off
sslproxy_version 1
sslproxy_cert_sign signUntrusted
sslproxy_cert_sign signSelf
sslproxy_cert_sign signTrusted
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB

sslcrtd_children 32 startup=5 idle=1 concurrency=0
sslcrtvalidator_children 32 startup=5 idle=1 concurrency=1
dead_peer_timeout 10 seconds
forward_max_tries 10
cache_mem 2097152000 bytes
maximum_object_size_in_memory 262144 bytes
memory_cache_shared off
memory_cache_mode always
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
minimum_object_size 0 bytes
maximum_object_size 4194304 bytes
cache_dir aufs /var/squid/cache 64000 16 256 IOEngine=DiskThreads
store_dir_select_algorithm least-load
max_open_disk_fds 0
cache_swap_low 96
cache_swap_high 98
access_log /var/squid/logs/access.log squid
logfile_daemon /usr/local/libexec/squid/log_file_daemon
cache_store_log none
logfile_rotate 14
mime_table /usr/local/etc/squid/mime.conf
log_mime_hdrs off
pid_filename /var/run/squid/squid.pid
client_netmask :::::::
strip_query_terms on
buffered_logs off
netdb_filename /var/squid/logs/netdb.state
cache_log /var/squid/logs/cache.log
debug_options rotate=14
coredump_dir none
ftp_user Squid@
ftp_passive on
ftp_epsv_all off
ftp_epsv on
ftp_eprt on
ftp_sanitycheck on
ftp_telnet_protocol on
diskd_program /usr/local/libexec/squid/diskd
unlinkd_program /usr/local/libexec/squid/unlinkd
pinger_program /usr/pbi/squid-amd64/local/libexec/squid/pinger
pinger_enable off

url_rewrite_children 20 startup=0 idle=1 concurrency=0
url_rewrite_host_header on
url_rewrite_bypass off

store_id_children 

Re: [squid-users] file descriptors leak

2015-11-23 Thread Amos Jeffries
On 24/11/2015 7:45 a.m., André Janna wrote:
> 
> On 22/11/2015 16:25, Eliezer Croitoru wrote:
>> Hey Andre,
>>
>> There are a couple of things to this picture.
>> It's not only squid that is to "blame".
>> It depends on what your OS TCP stack settings are.
>> To verify a couple of things you can try the netstat tool.
>> Run the command "netstat -nto" to see the status of the timers.
>> You can then see how long a new connection will stay in the
>> established state.
>> It might be the squid settings, but if the client is not there it could
>> be because of some TCP tunable kernel settings.
> 
> Hi Eliezer and Amos,
> my kernel is a regular Debian Jessie kernel using the following tcp values.
> tcp_keepalive_time: 7200
> tcp_keepalive_intvl: 25
> tcp_keepalive_probes: 9
> tcp_retries1: 3
> tcp_retries2: 15
> tcp_fin_timeout: 60
> So in my understanding the longest timeout is set to 2 hours and a few
> minutes for keepalive connections.

Okay. It is not always the kernel on your Squid machine. I've seen one mobile
network where the Ethernet<->radio modem was interpreting the radio link
being alive as TCP keep-alives needing to stay alive. So just having the
phones connected to the network would keep everything active.

IIRC the only fix for that scenario is reducing Squid's client_lifetime
value.
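
A minimal squid.conf sketch (the default is 1 day; the value below is purely
illustrative):

  # shorten the maximum lifetime of a client connection
  client_lifetime 4 hours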


FYI: unless you have a specific need for 3.5 you should be fine with the
3.4 squid3 package that is available for Jessie from Debian backports.
The alternative is going the other way and upgrading right to the latest
3.5 snapshot (and/or 4.0 snapshot) to see if it is one of the CONNECT or
TLS issues we have fixed recently.

> 
> Today I monitored file descriptors 23 and 24 on my box during 5 hours
> and lsof always showed:
> squid  6574   proxy   23u IPv6 5320944 
> 0t0TCP 172.16.10.22:3126->192.168.90.35:34571 (CLOSE_WAIT)
> squid  6574   proxy   24u IPv6 5327276 
> 0t0TCP 172.16.10.22:3126->192.168.89.236:49435 (ESTABLISHED)
> while netstat always showed:
> tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571
> CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
> tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435   
> ESTABLISHED 6574/(squid-1)   off (0.00/0/0)
> 
> The "off" flag in netstat output tells that for these sockets keepalive
> and retransmission timers are disabled.

Oooh. That should mean a 30sec timeout and then RST. Not even a whole
minute of idle time.

> Right now netstat shows 15,568 connections on squid port 3126 and only
> 107 have timer set to a value other than "off".
> 
> I read that connections that are in CLOSE_WAIT state don't have any tcp
> timeout, it's Squid that must close the socket.

Squid closes the socket/FD as soon as it receives the FIN or RST that
began the CLOSE_WAIT state. Unless it was Squid's own close that began it.

> 
>  About the connections in ESTABLISHED state, I monitored the connection
> to mobile device 192.168.89.236 using "tcpdump -i eth2 -n host
> 192.168.89.236" during 2 hours and a half.
> Tcpdump didn't record any packet and netstat is still displaying:
> tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571
> CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
> tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435   
> ESTABLISHED 6574/(squid-1)   off (0.00/0/0)
> 
> So unfortunately I still don't understand why Squid or the kernel don't
> close these sockets.

Neither do I. So it is time to move away from lsof and start using packet
capture to get a full-body packet trace and find out what exact packets
are happening on at least one affected TCP connection.

If possible, identifying one of these connections from its SYN onwards
would be great, but if not then a 20min period of activity on an
existing one might still show more hints.
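
For example, a hypothetical full-body capture on the Squid box, reusing the
interface and client from your earlier tcpdump test (the file name is
arbitrary):

  tcpdump -i eth2 -n -s 0 -w fdleak.pcap host 192.168.89.236 and tcp port 3126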

Amos


Re: [squid-users] Squid3.x have issue with some sites, squid2.x not.

2015-11-23 Thread Amos Jeffries
On 24/11/2015 1:56 p.m., Beto Moreno wrote:
> Hi guys.
> 
> I have faced an issue with Squid Cache: Version 3.4.10.
> 
> For example, these sites:
> 
> www.salud.gob.mx
> www.issemym.gob.mx
> 
> I cannot access them.
> 
> Now, I have another installation running squid 2.7.x; on that network
> I can access those sites without any issue.

3.x has HTTP/1.1 support; 2.x is HTTP/1.0-only.
3.4 has about 12 years of code development difference from 2.7.
It is no surprise when they act differently (for good or bad).


> 
> With squid 3.x I get this error in the logs:
> 
> The error log is: TCP_MISS_ABORTED/000.

That is not an error; that is a log field value that says "the client
disconnected before anything was delivered to it."

The rest of the line that you omitted contains more data critical to
explaining or understanding the situation.


> 
> You wait for the browser and after a while you get:
> 
> Operation timed out
> 
> I have checked my squid settings but don't see any parameter that could affect this:
> 

> Can someone help debug this issue?

Maybe. Not with the info provided so far.

It is easier to work with just your squid.conf settings (the set of
things you added/changed from default behaviour), not the full squid
internal state of the config.

Also the full access.log line(s) and any cache.log entries.
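
For example, a hedged squid.conf sketch for gathering more cache.log detail
(section numbers taken from the logs above: 33 is client-side, 11 is HTTP;
the levels chosen are only illustrative):

  debug_options ALL,1 11,2 33,2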


Amos


Re: [squid-users] negotiate_wrapper: Return 'AF = * username

2015-11-23 Thread Amos Jeffries
On 24/11/2015 5:57 a.m., Michael Pelletier wrote:
> Hello,
> 
> I have squid in the production environment and everything is running well.
> I am building a new server that will be used as a new template of squid in
> our virtual environment.
> 
> For some reason, on the new template server I am getting negotiate_wrapper
> inserting a "*" before the username. This of course is not matching any
> users when I do group matching in LDAP.
> 
>  negotiate_wrapper: Return 'AF = * [username]
> 
> Yet, this is not happening in the production systems. Does anyone know what
> is going on?

The format of the Negotiate authentication lines is "AF" followed by a token
and the username.

Where token is the base64 encoded Negotiate/Kerberos token to be sent to
the client to confirm authentication success. "*" is used when the
client is performing Negotiate/NTLM, which does not use that token.
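
For illustration only (a sketch; the token is shortened and all names are
hypothetical):

  AF YIIFmQYGKwYBBQUCoIIFjTCC...= user@EXAMPLE.COM    <- Kerberos: real token
  AF * EXAMPLE\user                                   <- NTLM: "*", no token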

Is that "=" symbol also in the result lines? if so it is what is
screwing things up.

IIRC we fixed this problem in the helper a long while back, so please try
an upgrade. If it is occurring in the latest squid releases, please
say which exact version you are using, and provide the cache.log trace with
diagnostics enabled on the helper.

Amos



Re: [squid-users] file descriptors leak

2015-11-23 Thread André Janna


On 22/11/2015 16:25, Eliezer Croitoru wrote:

Hey Andre,

There are a couple of things to this picture.
It's not only squid that is to "blame".
It depends on what your OS TCP stack settings are.
To verify a couple of things you can try the netstat tool.
Run the command "netstat -nto" to see the status of the timers.
You can then see how long a new connection will stay in the 
established state.
It might be the squid settings, but if the client is not there it could 
be because of some TCP tunable kernel settings.


Hi Eliezer and Amos,
my kernel is a regular Debian Jessie kernel using the following tcp values.
tcp_keepalive_time: 7200
tcp_keepalive_intvl: 25
tcp_keepalive_probes: 9
tcp_retries1: 3
tcp_retries2: 15
tcp_fin_timeout: 60
So in my understanding the longest timeout is set to 2 hours and a few 
minutes for keepalive connections.


Today I monitored file descriptors 23 and 24 on my box during 5 hours 
and lsof always showed:
squid  6574   proxy   23u IPv6 5320944  
0t0TCP 172.16.10.22:3126->192.168.90.35:34571 (CLOSE_WAIT)
squid  6574   proxy   24u IPv6 5327276  
0t0TCP 172.16.10.22:3126->192.168.89.236:49435 (ESTABLISHED)

while netstat always showed:
tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571 
CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435
ESTABLISHED 6574/(squid-1)   off (0.00/0/0)


The "off" flag in netstat output tells that for these sockets keepalive 
and retransmission timers are disabled.
Right now netstat shows 15,568 connections on squid port 3126 and only 
107 have timer set to a value other than "off".


I read that connections that are in CLOSE_WAIT state don't have any tcp 
timeout, it's Squid that must close the socket.


 About the connections in ESTABLISHED state, I monitored the connection 
to mobile device 192.168.89.236 using "tcpdump -i eth2 -n host 
192.168.89.236" for two and a half hours.

Tcpdump didn't record any packet and netstat is still displaying:
tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571 
CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435
ESTABLISHED 6574/(squid-1)   off (0.00/0/0)


So unfortunately I still don't understand why Squid or the kernel don't 
close these sockets.



Regards,
  André



[squid-users] negotiate_wrapper: Return 'AF = * username

2015-11-23 Thread Michael Pelletier
Hello,

I have squid in the production environment and everything is running well.
I am building a new server that will be used as a new template of squid in
our virtual environment.

For some reason, on the new template server I am getting negotiate_wrapper
inserting a "*" before the username. This of course is not matching any
users when I do group matching in LDAP.

 negotiate_wrapper: Return 'AF = * [username]

Yet, this is not happening in the production systems. Does anyone know what
is going on?


Michael

-- 


*Disclaimer: *Under Florida law, e-mail addresses are public records. If 
you do not want your e-mail address released in response to a public 
records request, do not send electronic mail to this entity. Instead, 
contact this office by phone or in writing.



Re: [squid-users] squid intercept mode fo http & https

2015-11-23 Thread Ahmad Alzaeem

Amos,
Is it possible to make squid blind to the dst IP and look up only the domain
name in the packet?

Awaiting your reply.

Thank you

-Original Message-
From: Ahmad Alzaeem [mailto:ahmed.za...@netstream.ps] 
Sent: Sunday, November 22, 2015 9:45 AM
To: 'Amos Jeffries'
Cc: 'squid-users@lists.squid-cache.org'
Subject: RE: [squid-users] squid intercept mode fo http & https

Amos, thank you so much for your kind reply.

The topology is complex and I can't just set up the gateway to be the squid
box, so I'm forced to work via DNS.

I'm just asking: is it possible to make this work with squid?
Or
is it impossible to have it working?

I know it's weird and not popular, but I'm forced to do it that way.

So again, can we use something like redsocks or any other redirector to help
me with this issue?


If squid can work that way, do I need to add more directives to make it
work?

As I mentioned, the logs show it gets stuck looking up the destination IP:
1448121518.847  0 xx.79.120 TCP_MISS/503 4183 GET http://cnn.com/ - 
ORIGINAL_DST/10.159.144.206 text/html
1448121526.056  0 xx.79.120 TCP_MISS/503 399 HEAD http://cnn.com/ - 
ORIGINAL_DST/10.159.144.206 text/html


So if I understood correctly, I guess squid should work on the domain name,
not on the IP, and I supposed it would work, but so far I don't know why it
doesn't!

Thank you again, Amos. I appreciate all your help and the team's support;
all of you were, and still are, great helpers.


cheers

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Sunday, November 22, 2015 3:51 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid intercept mode fo http & https

On 22/11/2015 5:56 a.m., Ahmad Alzaeem wrote:
> Thanks for your reply.
> 
> I know that my DNS setup is weird.
> 
> But all I need is this:
> I have access to the DNS server, but I don't have access to the PCs to give
> them ip:port in their browsers.
> 
> So yes, I'm forced to work that way.

You should not be. Have a read through
. Notice that DNS 
weirdness is not mentioned anywhere, not even as a last-resort method.



> 
> And I want to filter my websites, and the only way to go to the internet is
> using the proxy.
> 
> So what do you suggest ?

Try the methods listed in that wiki page for WPAD/PAC auto-configuration (aka 
"transparent proxy configuration"; notice that is a 3-word phrase).
That will catch a lot of the mainstream browsers.

When that is done, set up your routers for *routing* the port 80/443 traffic
through the Squid machine, with NAT (aka "transparent interception proxy";
notice that is a different 3-word phrase).

No DNS required in any of that.
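
A rough sketch of the NAT step on the Squid box itself (the interface name
and port numbers are assumptions; see the wiki for the full recipe):

  # redirect routed port 80/443 traffic into Squid's intercept ports
  iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80  -j REDIRECT --to-port 3129
  iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-port 3130

with matching squid.conf lines:

  http_port  3129 intercept
  https_port 3130 intercept ssl-bump cert=/path/to/cert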

> 
> So again, the packet goes to squid, but inside this packet is the name of
> the website, and the dst IP is the proxy IP.

Exactly. That is all Squid is given to work with.

> 
> What settings are needed on squid to operate that way, i.e. get the info
> from the name and skip the dst IP?
> 
> If you look at the log files you will understand my idea
> 

We already understand your idea. Others have had it before. The reason it is
not popular is the extremely complicated nature of the multiple pieces of
high-performance, high-uptime hardware required just to keep it from falling
over and/or hitting the side effects you have seen so far, and many others you
have not even got close to reaching yet. When things go wrong, the clients also
need an individual reset to clear their internal DNS caches.

Route packets to Squid (no DNS involved) just like normally routed packets, as
if Squid were a border gateway, then NAT or TPROXY intercept into the proxy
itself on the same machine. FAR more robust.

Amos
