Re: [squid-users] Monitoring bandwidth usage: good and bad news

2014-07-16 Thread Eliezer Croitoru

Hey Fernando,

What would you expect this monitoring tool to do?

Eliezer

On 07/15/2014 11:11 PM, ferna...@lozano.eti.br wrote:

Hi there,

As stated in another thread, using the access log format %>st seems
ineffective to measure upload bandwidth to things like Google Drive.
Amos stated that this could be related to a CONNECT issue.

Is anyone aware of this issue? Is there a bug report?

Now the good news: I'm collecting data from squidclient mgr:usage, and
all attributes client.*kbytes_in/out and the server ones seem to be
correct, and accounting for HTTPS downloads and uploads.
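For reference, a minimal way to poll those counters (a sketch, assuming squid listens on localhost:3128; mgr:counters exposes similar client_http/server kbytes values):

squidclient -h 127.0.0.1 -p 3128 mgr:counters | egrep 'client_http.kbytes_(in|out)|server.all.kbytes_(in|out)'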

Is anyone aware of a ready-to-use monitoring tool that uses squidclient
for those metrics?


[]s, Fernando Lozano




[squid-users] Re: Basic LDAP on 2008 R2, groups and refresh time

2014-07-16 Thread masterx81
Hi!
Any idea about my problem?
For now I think I'll use the scheduled reconfigure. It's not the best
way, but it seems to work...





RE: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-16 Thread Martin Sperl
Any idea as to what could be the issue? Since the last post 2 days ago the
memory footprint has increased by 1.2GB to 9424592 kB,
with now 788561 hot objects (up from 651556).

The number of StoreEntry-pool objects (which I assume is the number of real 
objects in memory cache) has even decreased (from 173746 to 173624).

I created a full dump of mgr:vm_objects and there I find 789431 KEYS (so in 
principle close to the number of hot objects).
The question is: could we infer some information from this output?
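For reference, distributions like the ones below can be extracted from such a dump with one-liners along these lines (a sketch, assuming the mgr:vm_objects output was saved to vm_objects.txt):

# distribution of the 1st line after each KEY
awk '/^KEY /{getline; print}' vm_objects.txt | sort | uniq -c | sort -rn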

Here are some statistics on those objects:
Distribution of 1st line after KEY:
Count   line1
 718698 STORE_OK  IN_MEMORY SWAPOUT_DONE PING_DONE
  63156 STORE_OK  IN_MEMORY SWAPOUT_DONE PING_NONE
   7516 STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE
 51 STORE_OK  IN_MEMORY SWAPOUT_NONE PING_NONE
  6 STORE_OK  NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
  3 STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
  1 STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_NONE

Distribution of 2nd line after KEY:
Count   line2
 515372 REVALIDATE,CACHABLE,DISPATCHED,VALIDATED
 237538 CACHABLE,DISPATCHED,VALIDATED
  28944 CACHABLE,VALIDATED
   7048 CACHABLE,DISPATCHED,NEGCACHED,VALIDATED
468 REVALIDATE,CACHABLE,DISPATCHED,NEGCACHED,VALIDATED
 51 SPECIAL,CACHABLE,VALIDATED
  5 RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED
  2 REVALIDATE,RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED
  2 CACHABLE,DISPATCHED,PRIVATE,FWD_HDR_WAIT,VALIDATED
  1 DELAY_SENDING,RELEASE_REQUEST,PRIVATE,VALIDATED

Here is the count of objects that have the same URL:
Obj_count  number of URL occurrences
720711 1
  23276 2
   2216 3
   1134 4
588 5
283 6
214 7
111 8
 72 9
 70 10
 81 11
 37 12
 30 13
 21 14
  4 15
  2 16
  3 17
  5 18
 10 19
  1 20
  1 21
  1 22
  2 25
  2 28
(>1 would indicate a VARY policy is in place and we have multiple objects)

Objects with vary_headers in object: 40591

If I sum up the inmem_hi: values I get: 2918369522, so 2.9GB.
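That sum can be reproduced with something like the following (a sketch, assuming the dump file from above and an "inmem_hi: value" field layout):

awk '/inmem_hi:/ {sum += $2} END {print sum}' vm_objects.txt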

So it seems as if there must be some major overhead for those inmem objects...

If I look at the "locks, clients, refs" line, and there specifically at the refs
value, I get the following distribution:
Obj_count ref_val
  12240 0
 592487 1
  78355 2
  25285 3
  12901 4
   8173 5
   5787 6
   4100 7
   3143 8
   2541 9
   2318 10
   1859 11
   1725 12
   1470 13
   1275 14
   1231 15
   1042 16
867 17
853 18
723 19
643 20
669 21
631 22
574 23
496 24
469 25
431 26
423 27
464 28
394 29
357 30
368 31
350 32
315 33
330 34
280 35
299 36
239 37
264 38
218 39
...
  1 65000
  1 65017
  1 65028
  1 65074
  1 65089
  1 65183
  1 65248
  1 65299
  1 65364
  1 65521

As for expiry times - here is the expiry time in days relative to now, with the
number of objects in cache:
Obj_count EXP days in the past
  42511   -16267
  12585-6199
  1 -209
  1 -172
  1 -171
  2 -169
  1 -157
  1 -149
   1635  -85
   2233  -84
701  -83
388  -82
336  -81
234  -80
175  -79
139  -78
 88  -77
 85  -76
 63  -75
 82  -74
 58  -73
 48  -72
 49  -71
 49  -70
 32  -69
 50  -68
 20  -67
 25  -66
 32  -65
 49  -64
 22  -63
 39  -62
 32  -61
 32  -60
 19  -59
 13  -58
  9  -57
 10  -56
 24  -55
 14  -54
 47  -53
 24  -52
 27  -51
 24  -50
 17  -49
 36  -48
 75  -47
 38  -46
 58  -45
 61  -44
 14  -43
 55  -42
 23  -41
 27  -40
 42  -39
 53  -38
 46  -37
 68  -36
101  -35
 52  -34
 52  -33
 35  -32
 88  -31
 39  -30
 39  -29
 58  -28
 86  -27
 77  -26
 83  -25
 83  -24
 77  -23
 79  -22
123  -21
123  -20
176  -19
128  -18
170  -17
141  -16
153  -15
144  -14
101  -13
122  -12
342  -11
220  -10
177   -9
212   -8
  27001   -7
  61767   -6
  71550   -5
  79084   -4
  82293   -3
  91091   -2
113077   -1
102432   -0
  790680
  131971
  18
286  168
121  169
 57  170
 30  171
 17  172
114  173
610  174
656  175
325  176
169 

Re: [squid-users] Re: Basic LDAP on 2008 R2, groups and refresh time

2014-07-16 Thread Eliezer Croitoru

Hey,

I have reviewed your squid.conf and you are missing a critical piece of
configuration in the external_acl_type:

http://www.squid-cache.org/Doc/config/external_acl_type/

Notice that there are default values which you should define or change.
In your case ttl=30 or 60 should give you what you need, and it also
explains your situation.
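For example, something along these lines (the helper path, group name and ACL name are placeholders for whatever you already use):

external_acl_type ldap_group ttl=60 negative_ttl=60 %LOGIN /usr/lib/squid3/ext_ldap_group_acl [your existing helper arguments]
acl InetAllowed external ldap_group InternetUsers
http_access allow InetAllowed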


All The Bests,
Eliezer

On 07/11/2014 02:31 PM, masterx81 wrote:

I got an error on the squid.conf listing:
egrep: invalid option -- '^'
I fixed it simply by putting a space after the -v parameter.

So, I'll attach the output.

Thanks!!
log.txt
http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4666847/log.txt








Re: [squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread Amos Jeffries
On 16/07/2014 9:23 a.m., Nicolás wrote:
 Thanks! That would indeed cover the first issue :-) I initially used
 redirect because somewhere I read that it's not a good idea forwarding
 the traffic directly to the port where squid listens and it should be
 pointed to another port instead and then redirected.

Sounds like you read one of my explanations and did not quite get it.
Hope this helps clarify:

That is all true regarding *intercepted* port 80 traffic. The traffic
which is actually destined to a webserver directly.

For traffic such as your testing with (CONNECT etc) on non-80 ports the
traffic is destined to a proxy. So the NAT IP addressing does not matter
and the security checks on the interception do more harm than good.

This is why you should keep the ports separate. Because the traffic on
port 80 and the traffic destined to a proxy are quite different beasts.
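In configuration terms the separation looks something like this (a sketch; port numbers are only examples):

# squid.conf
http_port 3128              # explicit/forward proxy traffic
http_port 3129 intercept    # NAT-intercepted port 80 traffic only

# on the box doing the NAT
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129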

 However, working as
 this, it would be enough to set a firewall policy to permit just the
 client range of IPs. Let's see whether I can solve the second issue too...
 

Yes, if I am understanding you that firewall policy should be needed
regardless of whether you are dealing with explicitly configured clients
or intercepting the port 80 traffic.

Amos



Re: [squid-users] problem streaming video

2014-07-16 Thread Amos Jeffries
On 16/07/2014 4:18 p.m., Lawrence Pingree wrote:
 I have found that although RFCs state that you should have Via and
 Forwarded-For headers, firewalls and intrusion detection devices are now
 blocking (based on the organization's configuration) proxies that are
 detected using these headers as the method of detection.
 

Do you have much in the way of data on that?

My finding is that this is almost always bad code.

Systems which break internally (crash or hang, resulting in a zero-sized
reply) fairly consistently do so if they are passed "unknown" or an
IPv6 address in the XFF header. Some also fail if they are passed
multiple IPv4 addresses, or sometimes if the (optional) SP characters are omitted.

"unknown" and multiple IPv4 addresses have *always* been part of the design for
X-Forwarded-For. So the only explanation if those fail is bad code
handling the header value.
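For example, a header value that has always been legal under that design (addresses are illustrative):

X-Forwarded-For: 192.0.2.10, unknown, 2001:db8::1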

Amos



[squid-users] Re: Basic LDAP on 2008 R2, groups and refresh time

2014-07-16 Thread masterx81
Wow! It works!
I had focused on the ttl of the helpers, but I hadn't noticed that there was
also a ttl on the external_acl_type!

Really thanks for your help!!!





Re: [squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread Nicolás

El 16/07/2014 12:31, Amos Jeffries escribió:

On 16/07/2014 9:23 a.m., Nicolás wrote:

Thanks! That would indeed cover the first issue :-) I initially used
redirect because somewhere I read that it's not a good idea forwarding
the traffic directly to the port where squid listens and it should be
pointed to another port instead and then redirected.

Sounds like you read one of my explanations and did not quite get it.
Hope this helps clarify:

That is all true regarding *intercepted* port 80 traffic. The traffic
which is actually destined to a webserver directly.

For traffic such as your testing with (CONNECT etc) on non-80 ports the
traffic is destined to a proxy. So the NAT IP addressing does not matter
and the security checks on the interception do more harm than good.

This is why you should keep the ports separate. Because the traffic on
port 80 and the traffic destined to a proxy are quite different beasts.


Ok, now it's crystal clear. However, trying to reproduce the
configuration on the link that babajaga proposed, I get a loop on the
squid side on any link opened from the client side. On the client side,
I just added the OUTPUT DNAT iptables rule to make it match the 3128 IP
and port of the remote server. On the server side there are no iptables
rules, just the -j ACCEPT policy for port 3128, which is the
intercept port.


2014/07/15 23:09:46| WARNING: Forwarding loop detected for:
GET /favicon.ico HTTP/1.1
Host: www.google.es
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 
Firefox/24.0

Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay; 
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f; 
OGPC=5-25:

Via: 1.1 homeSecureProxy (squid/3.3.8)
X-Forwarded-For: 77.231.176.236
Cache-Control: max-age=259200
Connection: keep-alive

1405462555.918  0 SERVER-IP TCP_MISS/403 4285 GET http://google.es/ 
- HIER_NONE/- text/html
1405462555.918  1 CLIENT-IP TCP_MISS/403 4404 GET http://google.es/ 
- HIER_DIRECT/CLIENT-IP text/html


I just replaced the SERVER-IP and CLIENT-IP IPs.

Are there any extra rules necessary on the server side to make the
intercept mechanism work? I tried debugging it with tcpdump but I can't
see anything strange.


Thanks.


However, working as
this, it would be enough to set a firewall policy to permit just the
client range of IPs. Let's see whether I can solve the second issue too...


Yes, if I am understanding you that firewall policy should be needed
regardless of whether you are dealing with explicitly configured clients
or intercepting the port 80 traffic.

Amos





Re: [squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread Nicolás

El 16/07/2014 13:50, Nicolás escribió:

El 16/07/2014 12:31, Amos Jeffries escribió:

On 16/07/2014 9:23 a.m., Nicolás wrote:

Thanks! That would indeed cover the first issue :-) I initially used
redirect because somewhere I read that it's not a good idea forwarding
the traffic directly to the port where squid listens and it should be
pointed to another port instead and then redirected.

Sounds like you read one of my explanations and did not quite get it.
Hope this helps clarify:

That is all true regarding *intercepted* port 80 traffic. The traffic
which is actually destined to a webserver directly.

For traffic such as your testing with (CONNECT etc) on non-80 ports the
traffic is destined to a proxy. So the NAT IP addressing does not matter
and the security checks on the interception do more harm than good.

This is why you should keep the ports separate. Because the traffic on
port 80 and the traffic destined to a proxy are quite different beasts.


Ok, now it's crystal clear. However, trying to reproduce the
configuration on the link that babajaga proposed, I get a loop on the
squid side on any link opened from the client side. On the client
side, I just added the OUTPUT DNAT iptables rule to make it match the
3128 IP and port of the remote server. On the server side there are
no iptables rules, just the -j ACCEPT policy for port 3128, which
is the intercept port.


2014/07/15 23:09:46| WARNING: Forwarding loop detected for:
GET /favicon.ico HTTP/1.1
Host: www.google.es
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 
Firefox/24.0

Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay; 
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f; 
OGPC=5-25:

Via: 1.1 homeSecureProxy (squid/3.3.8)
X-Forwarded-For: 77.231.176.236
Cache-Control: max-age=259200
Connection: keep-alive

1405462555.918  0 SERVER-IP TCP_MISS/403 4285 GET 
http://google.es/ - HIER_NONE/- text/html
1405462555.918  1 CLIENT-IP TCP_MISS/403 4404 GET 
http://google.es/ - HIER_DIRECT/CLIENT-IP text/html




Sorry, this last line should be:

1405462555.918  1 CLIENT-IP TCP_MISS/403 4404 GET http://google.es/ 
- HIER_DIRECT/SERVER-IP text/html



I just replaced the SERVER-IP and CLIENT-IP IPs.

Are there any extra rules necessary on the server side to make the
intercept mechanism work? I tried debugging it with tcpdump but I
can't see anything strange.


Thanks.


However, working as
this, it would be enough to set a firewall policy to permit just the
client range of IPs. Let's see whether I can solve the second issue 
too...



Yes, if I am understanding you that firewall policy should be needed
regardless of whether you are dealing with explicitly configured clients
or intercepting the port 80 traffic.

Amos







Re: [squid-users] Monitoring bandwidth usage: good and bad news

2014-07-16 Thread fernando

Hi Eliezer,



What would you expect this monitoring tool to do?


Per-user and per-host bandwidth monitoring for both upload and 
download.


When using access log parsers like sarg and calamaris we get only
download bandwidth. It's easy to configure them to generate a parallel
set of upload reports from a parallel access.log that switches %<st to
%>st, but it looks like squid won't log upload sizes for CONNECT requests,
so the big bandwidth eaters like Google Drive won't show any upload
traffic.
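For anyone trying the same thing, the parallel log would be configured roughly like this (a sketch; the logformat name and field order are arbitrary):

logformat updown %ts.%03tu %>a %un %>st %<st %rm %ru
access_log /var/log/squid/updown.log updown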


From squidclient mgr:utilization I could get only aggregate upload and
download bandwidth, not per user or per host.


And of course I'd like to find something ready to use instead of 
hacking my own scripts to query squid, generate logs and plot graphics. 
;-)


[]s, Fernando Lozano



Eliezer

On 07/15/2014 11:11 PM, ferna...@lozano.eti.br wrote:


Hi there, As stated in another thread, using the access log format
%>st seems ineffective to measure upload bandwidth to things like
Google Drive. Amos stated that this could be related to a CONNECT
issue. Is anyone aware of this issue? Is there a bug report? Now the
good news: I'm collecting data from squidclient mgr:usage, and all
attributes client.*kbytes_in/out and the server ones seem to be
correct, and accounting for HTTPS downloads and uploads. Is anyone
aware of a ready-to-use monitoring tool that uses squidclient for those
metrics? []s, Fernando Lozano


Re: [squid-users] Three questions about Squid configuration

2014-07-16 Thread Eliezer Croitoru

Hey Nicolas,

Can we go from step 0, please?
What OS are you running?
Is it a self-compiled squid or from the OS repository?
Do you have more than one network interface on this machine?
What is the network scheme?
If it's a CentOS machine, can you run this script on it?
http://www1.ngtech.co.il/squid/basic_data.sh

The main issue you have is a loop or a wrong redirection.
You need to differentiate any local traffic coming from the local
machine and from the squid process to other users and other machines.

Depending on your OS you should have an iptables module for owner matching.
You should add it like this:
iptables -t nat -I PREROUTING --match owner --uid-owner 
squid_user_account_name_or_number_id -p tcp --dport 80 -m conntrack 
--ctstate NEW,ESTABLISHED -j ACCEPT


This should solve most of your issues when using the proper intercept port.
In case you are trying to reach other destination ports, you should
add a special ACCEPT rule like in the example, by owner id and using
the other port.


Eliezer

On 07/15/2014 10:09 PM, Nicolás wrote:

Hi there!

It's been years since I last played around with squid, so I wanted to make a
simple configuration just to see whether I remember the basic things,
and I found two problems:

I'm running:

# squid3 -v
Squid Cache: Version 3.3.8

1) My configuration is the default that the package provides, I just
added another http_port, so now I got:
  http_port 3128
  http_port 3127 intercept

  Afterwards, I set up a REDIRECT iptables rule to make anything
coming to port 8080 be redirected to one of these 2 ports. If I redirect
it to port 3128, everything works fine: squid actually behaves as a
transparent proxy, applying the http_access and acl rules correctly. But
if I redirect it to port 3127, any request results in a 111 Connection
refused error. This is the only rule in my iptables, so it cannot be
related to some other rule misconfiguration.

  iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 8080
-j REDIRECT --to-ports 312X

  I enabled debugging via the -d flag, there's absolutely nothing
regarding to these requests. The access log shows the request like this:

 1405450438.913  0 origin.ip TCP_MISS/503 3487 GET
http://www.devels.es/ - HIER_DIRECT/machine.public.ip text/html

  So at this point, I have two questions:

  1.1) What could be causing this behavior?
  1.2) If the default redirect port (3128) works as a transparent
proxy (intercept), then what's the conceptual difference between the two
configurations?

2) There are some websites using SSL that I cannot reach using squid,
resulting in a 110 Connection timed out error. One of them is Facebook:

pi@rpi ~ $ telnet machine.public.ip 8080
Trying machine.public.ip...
Connected to machine.public.ip.
Escape character is '^]'.

CONNECT www.facebook.com:443
HTTP/1.1 503 Service Unavailable
Server: squid/3.3.8
Mime-Version: 1.0
Date: Tue, 15 Jul 2014 19:00:23 GMT
Content-Type: text/html
Content-Length: 3085
X-Squid-Error: ERR_CONNECT_FAIL 110
Vary: Accept-Language
Content-Language: en

[...]

<p id="sysmsg">The system returned: <i>(110) Connection timed out</i></p>

<p>The remote host or network may be down. Please try the request
again.</p>

[...]
Connection closed by foreign host.

 However, from the server which hosts squid, I can make a wget or
curl request to facebook. I even installed the same version of squid on
a local virtual machine on my computer just to test, and it works,
replicating exactly the same squid and iptables config. What could
be the cause of this?

Thanks for the help!

Regards,

Nicolás




Re: [squid-users] Three questions about Squid configuration

2014-07-16 Thread Nicolás

Hi Eliezer,

This is Ubuntu Trusty 14.04, 64-bit, the package is from the APT
repository and there is just one network on both the client and server
side. My aim is to redirect all the outgoing client traffic to port
3128 on a remote server. So I initially did 2 steps as far as the iptables
config goes:


On the client side: iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT 
--to-destination SQUIDIP:3128
On the server side: iptables -I INPUT -p tcp -d SQUIDIP --dport 3128 -j 
ACCEPT


I tried adding this rule:

iptables -t nat -I PREROUTING --match owner --uid-owner proxy -p tcp 
--dport 3128 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT


But I get a warning like this:

[707317.691686] ip_tables: owner match: used from hooks PREROUTING, but 
only valid from OUTPUT/POSTROUTING


However, there's a thing I don't get: When the squid server receives 
packets from the clients, they go directly to port 3128. Why should 
squid3 also send requests to itself on the same port? Shouldn't they be 
redirected to the proper destination?


I also tried disabling iptables (leaving all the chains empty, so port
3128 is also open) and the same still happens. Weird...


Any hints?

Thanks!

El 16/07/2014 14:57, Eliezer Croitoru escribió:

Hey Nicolas,

Can we go from step 0, please?
What OS are you running?
Is it a self-compiled squid or from the OS repository?
Do you have more than one network interface on this machine?
What is the network scheme?
If it's a CentOS machine, can you run this script on it?
http://www1.ngtech.co.il/squid/basic_data.sh

The main issue you have is a loop or a wrong redirection.
You need to differentiate any local traffic coming from the local
machine and from the squid process to other users and other machines.

Depending on your OS you should have an iptables module for owner matching.
You should add it like this:
iptables -t nat -I PREROUTING --match owner --uid-owner 
squid_user_account_name_or_number_id -p tcp --dport 80 -m conntrack 
--ctstate NEW,ESTABLISHED -j ACCEPT


This should solve most of your issues when using the proper intercept
port.
In case you are trying to reach other destination ports, you should
add a special ACCEPT rule like in the example, by owner id and using
the other port.


Eliezer

On 07/15/2014 10:09 PM, Nicolás wrote:

Hi there!

It's been years since I last played around with squid, so I wanted to make a
simple configuration just to see whether I remember the basic things,
and I found two problems:

I'm running:

# squid3 -v
Squid Cache: Version 3.3.8

1) My configuration is the default that the package provides, I just
added another http_port, so now I got:
  http_port 3128
  http_port 3127 intercept

  Afterwards, I set up a REDIRECT iptables rule to make anything
coming to port 8080 be redirected to one of these 2 ports. If I redirect
it to port 3128, everything works fine: squid actually behaves as a
transparent proxy, applying the http_access and acl rules correctly. But
if I redirect it to port 3127, any request results in a 111 Connection
refused error. This is the only rule in my iptables, so it cannot be
related to some other rule misconfiguration.

  iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 8080
-j REDIRECT --to-ports 312X

  I enabled debugging via the -d flag, there's absolutely nothing
regarding to these requests. The access log shows the request like this:

 1405450438.913  0 origin.ip TCP_MISS/503 3487 GET
http://www.devels.es/ - HIER_DIRECT/machine.public.ip text/html

  So at this point, I have two questions:

  1.1) What could be causing this behavior?
  1.2) If the default redirect port (3128) works as a transparent
proxy (intercept), then what's the conceptual difference between the two
configurations?

2) There are some websites using SSL that I cannot reach using squid,
resulting in a 110 Connection timed out error. One of them is Facebook:

pi@rpi ~ $ telnet machine.public.ip 8080
Trying machine.public.ip...
Connected to machine.public.ip.
Escape character is '^]'.

CONNECT www.facebook.com:443
HTTP/1.1 503 Service Unavailable
Server: squid/3.3.8
Mime-Version: 1.0
Date: Tue, 15 Jul 2014 19:00:23 GMT
Content-Type: text/html
Content-Length: 3085
X-Squid-Error: ERR_CONNECT_FAIL 110
Vary: Accept-Language
Content-Language: en

[...]

<p id="sysmsg">The system returned: <i>(110) Connection timed out</i></p>


<p>The remote host or network may be down. Please try the request
again.</p>

[...]
Connection closed by foreign host.

 However, from the server which hosts squid, I can make a wget or
curl request to facebook. I even installed the same version of squid on
a local virtual machine on my computer just to test, and it works,
replicating exactly the same squid and iptables config. What could
be the cause of this?

Thanks for the help!

Regards,

Nicolás






[squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread babajaga
there is just one network in both the client and server
side.
On the client side,
I just added the OUTPUT DNAT iptables rule to make it match the 3128 IP
and port of the remote server.

Sorry, I am a bit confused.
Pls, read carefully:
#Example for squid and NAT on same machine: !!
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
SQUIDIP:3128 

This also means that the client machine (running the browser, transparently)
and the squid machine are in the same net, and that squid then forwards the
request to the real destination/server.

According to your posts, squid and NAT seem NOT to be on the same machine.






Re: [squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread Nicolás
I just realized that part 5 minutes ago... Sorry for the nuisance! In my
case I need to use a different machine as the proxy because otherwise I'd
have to set one up per client with the same rules, which does not seem very
scalable. The final schema would be this:


Client 1 \
Client 2  \
Client 3   - squid3 server - internet
Client 4  /
Client 5 /

Also, the server running squid3 as transparent proxy would be under a 
different public IP and router than the clients (a remote server... 
requirement of my company), and all of them are using just one network 
interface. What iptables rules would I need to achieve this scenario?


Thanks!

El 16/07/2014 18:38, babajaga escribió:

there is just one network in both the client and server

side.

On the client side,

I just added the OUTPUT DNAT iptables rule to make it match the 3128 IP
and port of the remote server.

Sorry, I am a bit confused.
Pls, read carefully:
#Example for squid and NAT on same machine: !!
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
SQUIDIP:3128

This also means that the client machine (running the browser, transparently)
and the squid machine are in the same net, and that squid then forwards the
request to the real destination/server.

According to your posts, squid and NAT seem NOT to be on the same machine.








Re: [squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread Eliezer Croitoru

"Will be" is one thing...
In any case just run the script I gave you to get the basic information
from the OS; it is good enough for IP addresses etc.


The rule I gave you should be in the OUTPUT chain, as iptables claims.
I am not yet sure about the network structure and therefore not sure
about the issue.
Do not try to intercept port 8080 for Google because it won't work, and
the response is a good indication of that.
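For what it's worth, the owner-match exemption then looks something like this (a sketch, assuming squid runs as user "proxy" and that an OUTPUT-chain DNAT/REDIRECT rule for port 80 exists on the same box):

iptables -t nat -I OUTPUT -m owner --uid-owner proxy -p tcp --dport 80 -j ACCEPT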


Eliezer

On 07/16/2014 08:50 PM, Nicolás wrote:

I just realized that part 5 minutes ago... Sorry for the nuisance! In my
case I need to use a different machine as the proxy because otherwise I'd
have to set one up per client with the same rules, which does not seem very
scalable. The final schema would be this:

Client 1 \
Client 2  \
Client 3   - squid3 server - internet
Client 4  /
Client 5 /

Also, the server running squid3 as transparent proxy would be under a
different public IP and router than the clients (a remote server...
requirement of my company), and all of them are using just one network
interface. What iptables rules would I need to achieve this scenario?

Thanks!




Re: [squid-users] RockStore Fatal Error

2014-07-16 Thread Alex Rousskov
On 07/12/2014 04:04 AM, Nyamul Hassan wrote:

 Alex, as per your previous suggestion, we did all the
 troubleshooting steps in the link for SmpScale.  Working on them
 removed the errors in our 1st Squid installation (original email).

Glad you are making progress.


 Now, we are facing problem on another machine.  We did all those steps
 mentioned in SmpScale, yet this machine is giving the same problems.

You may want to show exactly what problems you are seeing on the second
machine. It is difficult to guess what "the same" means after so many
back-and-forth emails. As always, please make sure you show all errors
and warnings, not just the last FATAL message.

Where does your Squid create .ipc files? Does that directory exist? Can
Squid write there?


Thank you,

Alex.


 SHM is already installed.
 
 Amos, as for file permission, the following all have permission as
 squid.squid:
 /var/run/squid
 /var/log/squid
 
 ls on /dev/shm shows:
 [root@proxy04 ~]# ll /dev/shm
 total 124912
 -rw------- 1 squid squid    7340144 Jul 12 06:57 squid-cache_mem.shm
 -rw------- 1 squid squid   68159528 Jul 12 06:57 squid-cachestore.cache1.rock.shm
 -rw------- 1 squid squid   68159528 Jul 12 06:57 squid-cachestore.cache4.rock.shm
 -rw------- 1 squid squid         16 Jul 12 06:57 squid-io_file__metadata.shm
 -rw------- 1 squid squid     262228 Jul 12 06:57 squid-io_file__queues.shm
 -rw------- 1 squid squid         84 Jul 12 06:57 squid-io_file__readers.shm
 -rw------- 1 squid squid 2295383692 Jul 12 06:57 squid-squid-page-pool.shm
 
 So, Squid process does seem to be able to read / write to SHM.
 
 This is the output of df:
 [root@proxy04 ~]# df -H
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/sda3    63G  9.2G   50G  16% /
 tmpfs       4.1G  128M  4.0G   4% /dev/shm
 /dev/sda1   204M  114M   80M  59% /boot
 /dev/sdb3   316G  235M  299G   1% /cachestore/cache1
 /dev/sdc3   316G  251M  299G   1% /cachestore/cache4
 shm         4.1G  128M  4.0G   4% /dev/shm
 
 SELINUX is disabled.
 [root@proxy04 ~]# sestatus
 SELinux status: disabled
 
 What else could be interfering with the SHM?
 
 Regards
 HASSAN
 
 
 On Sat, Jul 12, 2014 at 9:46 AM, Alex Rousskov
 rouss...@measurement-factory.com wrote:
 On 07/11/2014 07:23 PM, Nyamul Hassan wrote:
 However, whenever we start without the -N, we get the same error:
 FATAL: Rock cache_dir at /cachestore/cache1/rock/rock failed to open
 db file: (11) Resource temporarily unavailable

 Most likely, this is a side effect, not the cause. Ignore until all
 other errors are gone.


 We are also seeing these lines:
 commBind: Cannot bind socket FD 17 to [::]: (13) Permission denied

 This is a real problem. A solution may be found in the Troubleshooting
 section of http://wiki.squid-cache.org/Features/SmpScale


 HTH,

 Alex.




Re: [squid-users] RockStore Fatal Error

2014-07-16 Thread Nyamul Hassan
Hi Alex,

On Thu, Jul 17, 2014 at 5:28 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:

 Where does your Squid create .ipc files? Does that directory exist? Can
 Squid write there?


You are right once again.  Although I have the configuration directive
"pid_filename /var/run/squid/squid.pid", that
apparently only tells squid about that particular file, not the
"localstatedir".  The localstatedir is still set to what was
configured at compile time.

In my case, it was /var/local/squid-3.4.6/var/run/squid.  After
changing permission, it worked!
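For reference, the permission change amounts to something like this (assuming squid runs as user and group "squid"):

chown -R squid:squid /var/local/squid-3.4.6/var/run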

I got confused with this bug report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=710126

Thank you!

Shouldn't Squid complain that it could not write to localstatedir?

Regards
HASSAN

On Thu, Jul 17, 2014 at 5:28 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
 On 07/12/2014 04:04 AM, Nyamul Hassan wrote:

 Alex, as per your previous suggestion, we did all the
 troubleshooting steps in the link for SmpScale.  Working on them
 removed the errors in our 1st Squid installation (original email).

 Glad you are making progress.


 Now, we are facing problem on another machine.  We did all those steps
 mentioned in SmpScale, yet this machine is giving the same problems.

 You may want to show exactly what problems you are seeing on the second
 machine. It is difficult to guess what "the same" means after so many
 back-and-forth emails. As always, please make sure you show all errors
 and warnings, not just the last FATAL message.

 Where does your Squid create .ipc files? Does that directory exist? Can
 Squid write there?


 Thank you,

 Alex.


 SHM is already installed.

 Amos, as for file permission, the following all have permission as
 squid.squid:
 /var/run/squid
 /var/log/squid

 ls on /dev/shm shows:
 [root@proxy04 ~]# ll /dev/shm
 total 124912
 -rw------- 1 squid squid    7340144 Jul 12 06:57 squid-cache_mem.shm
 -rw------- 1 squid squid   68159528 Jul 12 06:57 squid-cachestore.cache1.rock.shm
 -rw------- 1 squid squid   68159528 Jul 12 06:57 squid-cachestore.cache4.rock.shm
 -rw------- 1 squid squid         16 Jul 12 06:57 squid-io_file__metadata.shm
 -rw------- 1 squid squid     262228 Jul 12 06:57 squid-io_file__queues.shm
 -rw------- 1 squid squid         84 Jul 12 06:57 squid-io_file__readers.shm
 -rw------- 1 squid squid 2295383692 Jul 12 06:57 squid-squid-page-pool.shm

 So, Squid process does seem to be able to read / write to SHM.

 This is the output of df:
 [root@proxy04 ~]# df -H
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/sda3    63G  9.2G   50G  16% /
 tmpfs       4.1G  128M  4.0G   4% /dev/shm
 /dev/sda1   204M  114M   80M  59% /boot
 /dev/sdb3   316G  235M  299G   1% /cachestore/cache1
 /dev/sdc3   316G  251M  299G   1% /cachestore/cache4
 shm         4.1G  128M  4.0G   4% /dev/shm

 SELINUX is disabled.
 [root@proxy04 ~]# sestatus
 SELinux status: disabled

 What else could be interfering with the SHM?

 Regards
 HASSAN


 On Sat, Jul 12, 2014 at 9:46 AM, Alex Rousskov
 rouss...@measurement-factory.com wrote:
 On 07/11/2014 07:23 PM, Nyamul Hassan wrote:
 However, whenever we start without the -N, we get the same error:
 FATAL: Rock cache_dir at /cachestore/cache1/rock/rock failed to open
 db file: (11) Resource temporarily unavailable

 Most likely, this is a side effect, not the cause. Ignore until all
 other errors are gone.


 We are also seeing these lines:
 commBind: Cannot bind socket FD 17 to [::]: (13) Permission denied

 This is a real problem. A solution may be found in the Troubleshooting
 section of http://wiki.squid-cache.org/Features/SmpScale


 HTH,

 Alex.




Re: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-16 Thread Alex Rousskov
On 07/14/2014 05:36 AM, Martin Sperl wrote:

 * Pools that increase a lot (starting at below 20% of the current KB 2 days
 ago) - which are (sorted from biggest to smallest KB footprint):
 ** mem_node
 ** 4K Buffer
 ** Short Strings
 ** HttpHeaderEntry
 ** 2K Buffer
 ** 16K Buffer
 ** 8K Buffer
 ** Http Reply
 ** Mem Object
 ** Medium Strings
 ** cbdata BodyPipe (39)
 ** HttpHdrCc
 ** cbdata MemBuff(13)
 ** 32K Buffer
 ** Long Strings


 So there must be something that links all of those in the last group together.

MemObject structures contain or tie together most (possibly all) of the above
objects. MemObjects are used for current transactions and non-shared
memory cache storage. The ones used for non-shared memory cache storage
are called "hot objects". However, some current transactions might
affect "hot objects" counters as well, I guess. These stats are messy and
imprecise.

Please note that every MemObject must have a StoreEntry but StoreEntries
may lack MemObject. When working with large caches, most of the
StoreEntries without MemObject would correspond to on-disk objects that
are _not_ also cached in memory.

The above is more complex for SMP-aware caches which, I think, you are
not using.


 So here the values of StoreEntries for the last few days:
 20140709-020001:1472007 StoreEntries
 20140710-020001:1475545 StoreEntries
 20140711-020001:1478025 StoreEntries
 20140712-020001:1480771 StoreEntries
 20140713-020001:1481721 StoreEntries
 20140714-020001:1482608 StoreEntries
 These stayed almost constant...

OK, the total number of unique cache entry keys (among memory and disk
caches) is not growing much.


 But looking at  StoreEntries with MemObjects the picture is totally 
 different.
 20140709-020001:128542 StoreEntries with MemObjects
 20140710-020001:275923 StoreEntries with MemObjects
 20140711-020001:387990 StoreEntries with MemObjects
 20140712-020001:489994 StoreEntries with MemObjects
 20140713-020001:571872 StoreEntries with MemObjects
 20140714-020001:651560 StoreEntries with MemObjects

OK, your memory cache is filling, possibly from swapped-in disk entries
(so that the total number of keys does not grow much)?

FWIW, the "StoreEntries with" part of the label is misleading. These are
just MemObjects. However, that distinction is only important if
MemObjects are leaking separately from StoreEntries.


 So if you look at the finer details and traffic pattern we again see that 
 traffic pattern for:
 * storeEntries with MemObjects
 * Hot Object Cache Items

Which are both about MemObjects.


 And these show similar behavior to the pools mentioned above.

Yes, the "StoreEntries with MemObjects" counter is just the MemObject
pool counter.
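(For what it's worth, that pool counter can be watched directly with something like "squidclient mgr:mem | grep -i 'mem.*object'", assuming the default pool labels.)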


 If I sum up the inmem_hi: values I get: 2918369522, so 2.9GB.
 
 So it seems as if there must be some major overhead for those inmem objects...

How do you calculate the overhead? 2.9GB is useful payload, not
overhead. Are you comparing 2.9GB with your total Squid memory footprint
of about 9GB?


 So the question is: why do we underestimate memory_object sizes by a
 factor of aproximately 2?

Sorry, you lost me here. What do you mean by memory_object sizes,
where do we estimate them, and x2 compared to what?


Please note that the above comments and questions are _not_ meant to
indicate that there is no leak or that your analysis is flawed! I am
just trying to understand if you have found a leak or still need to keep
looking [elsewhere].


Are you willing to run Squid with a tiny memory cache (e.g., 100MB) for
a while? This would remove the natural memory cache growth as a variable...
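(That test is a one-line change in squid.conf, e.g. "cache_mem 100 MB", plus a restart.)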


Thank you,

Alex.


 -Original Message-
 From: Martin Sperl 
 Sent: Freitag, 11. Juli 2014 09:06
 To: Amos Jeffries; squid-users@squid-cache.org
 Subject: RE: [squid-users] squid: Memory utilization higher than expected 
 since moving from 3.3 to 3.4 and Vary: working
 
 The basic connection stats are in the mgr:info:
 File descriptor usage for squid:
 Maximum number of file descriptors:   65536
 Largest file desc currently in use:   1351
 Number of file desc currently in use:  249
 Files queued for open:   0
 Available number of file descriptors: 65287
 Reserved number of file descriptors:   100
 Store Disk files open:   0
 
 Also: our loadbalancer will disconnect idle connections after some time and I 
 believe the config has similar settings...
 
 Will send you the hourly details since the restart in a personal email due to 
 size limits of the mailinglist.
 
 Here the current size of the process:
 squid15022  9.5 29.6 4951452 4838272 ? Sl   Jul08 317:01 (squid-1) -f 
 /opt/cw/squid/squid.conf
 
 Martin
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: Freitag, 11. Juli 2014 05:24
 To: Martin Sperl; squid-users@squid-cache.org
 Subject: Re: 

Re: [squid-users] RockStore Fatal Error

2014-07-16 Thread Alex Rousskov
On 07/16/2014 06:17 PM, Nyamul Hassan wrote:
 Shouldn't Squid complain that it could not write to localstatedir?

Yes, Squid should and does, but the error message is very misleading. So
far, nobody has taken the time to add code that would render those
errors in a more meaningful way, mentioning file system paths instead of
IPv6 "any" addresses.

http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F


Cheers,

Alex.



Re: [squid-users] RockStore Fatal Error

2014-07-16 Thread Amos Jeffries
On 17/07/2014 12:17 p.m., Nyamul Hassan wrote:
 Hi Alex,
 
 On Thu, Jul 17, 2014 at 5:28 AM, Alex Rousskov wrote:

 Where does your Squid create .ipc files? Does that directory exist? Can
 Squid write there?

 
 You are right once again.  Although I have the configuration directive
 "pid_filename /var/run/squid/squid.pid", that
 apparently only tells squid about that particular file, not the
 "localstatedir".  The localstatedir is still set to what was
 configured at compile time.

Exactly. *_filename means the *file name*.

Directives in Squid which configure directory locations end in *_dir.
Such as cache_dir, chroot_dir, coredump_dir and so forth.
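For example:

pid_filename /var/run/squid/squid.pid    # *_filename: a file
coredump_dir /var/spool/squid            # *_dir:      a directory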

 
 In my case, it was /var/local/squid-3.4.6/var/run/squid.  After
 changing permission, it worked!
 
 I got confused with this bug report:
 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=710126
 
 Thank you!
 
 Shouldn't Squid complain that it could not write to localstatedir?

The presence of localstatedir in this process is an artifact of Linux.
Other OS use other places unrelated to localstatedir. And as far as
Squid is concerned it is opening a UDS network socket. So there is not
really an easy way to reliably tell where the permission error is coming
from (and our weak attempt to do so produces the commBind FUD).


PS. when building custom builds you may want to follow the OS-specific
build instructions for your system layout. With just --prefix altered to
setup a pseudo chroot location.
 Sounds like for you that would be
http://wiki.squid-cache.org/KnowledgeBase/Debian#Compiling
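(In other words, take the ./configure options from that page verbatim and change only the prefix, e.g. ./configure --prefix=/var/local/squid-3.4.6 with the remaining options unchanged.)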

Amos



[squid-users] Re: Hotmail issue in squid 3.4.4

2014-07-16 Thread vin_krish
Hi Eliezer ,

  Please help me in solving this issue. If anyone has solved the
issue of a blank page when we open 'http://www.hotmail.com', please reply.


Regards,
krish





[squid-users] squid block email

2014-07-16 Thread vin_krish
Hi all,

 I'm using squid 3.4.4. I want to block users from sending email (e.g.
Gmail, Yahoo, etc.).
Does squid provide any option for this?


Regards,
krish





Re: [squid-users] Re: Three questions about Squid configuration

2014-07-16 Thread Nicolás

Hi Eliezer,

This would be the output of your script. This is not CentOS, so some
things have failed... and I just obscured the public-IP-related data.
I tried adding the rule you proposed (as you may see in the output), but
unfortunately it made no difference; I'm still getting the redirect loop.


 terminal type:
xterm
 SHELL type:
/bin/bash
kernel and machine info:
Linux vps81276 2.6.32-042stab092.2 #1 SMP Tue Jul 8 10:35:55 MSK 2014 
x86_64 x86_64 x86_64 GNU/Linux

./basic_data.sh: line 48: green_mesage: command not found
./basic_data.sh: line 49: sestatus: command not found
iptables rules:
# Generated by iptables-save v1.4.21 on Thu Jul 17 07:35:34 2014
*nat
:PREROUTING ACCEPT [26:1878]
:POSTROUTING ACCEPT [37:2588]
:OUTPUT ACCEPT [35:2468]
-A OUTPUT -p tcp -m owner --uid-owner 13 -m tcp --dport 3128 -m 
conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

COMMIT
# Completed on Thu Jul 17 07:35:34 2014
# Generated by iptables-save v1.4.21 on Thu Jul 17 07:35:34 2014
*mangle
:PREROUTING ACCEPT [1063:131533]
:INPUT ACCEPT [1063:131533]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [887:158471]
:POSTROUTING ACCEPT [887:158471]
COMMIT
# Completed on Thu Jul 17 07:35:34 2014
# Generated by iptables-save v1.4.21 on Thu Jul 17 07:35:34 2014
*filter
:INPUT ACCEPT [1063:131533]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [887:158471]
COMMIT
# Completed on Thu Jul 17 07:35:34 2014
tproxy module loaded?:
routes are:
10.10.0.2 dev tun0  proto kernel  scope link  src 10.10.0.1
PUBLIC-IP-GATEWAY/24 dev venet0  proto kernel  scope link  src PUBLIC-IP
10.10.0.0/24 via 10.10.0.2 dev tun0
default dev venet0  scope link
registered route tables:
255 local
254 main
253 default
0   unspec
tproxy route table:
ip policy rules:
0:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default
links info:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN mode DEFAULT

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT

link/void
3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 100

link/none
ip addresses:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

link/void
inet 127.0.0.2/32 scope host venet0
inet PUBLIC-IP/24 brd PUBLIC-IP-BROADCAST scope global venet0:0
inet6 2001:41d0:52:d00::265/56 scope global
   valid_lft forever preferred_lft forever
3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100

link/none
inet 10.10.0.1 peer 10.10.0.2/32 scope global tun0
arp list:
listening TCP sockets:
State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port

LISTEN 0 0 10.10.0.1:53 *:*  users:((named,800,24))
LISTEN 0 0 PUBLIC-IP:53 *:*  users:((named,800,23))
LISTEN 0 0 127.0.0.2:53 *:*  users:((named,800,22))
LISTEN 0 0 127.0.0.1:53 *:*  users:((named,800,21))
LISTEN 0 0 *:22 *:*  users:((sshd,713,3))
LISTEN 0 0 *:3127 *:*  users:((squid3,739,10))
LISTEN 0 0 *:3128 *:*  users:((squid3,739,9))
LISTEN 0 0 *:25 *:*  users:((smtpd,1678,6),(master,930,12))
LISTEN 0 0 127.0.0.1:953 *:*  users:((named,800,25))
LISTEN 0 0 :::53 :::*  users:((named,800,20))
LISTEN 0 0 :::22 :::*  users:((sshd,713,4))
LISTEN 0 0 :::25 :::*  users:((smtpd,1678,7),(master,930,13))
LISTEN 0 0 ::1:953 :::*  users:((named,800,26))
ulimit soft:
core file size  (blocks, -c) unlimited
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 256184
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 4096
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) unlimited
cpu time   (seconds, -t) unlimited
max user processes  (-u) 256184
virtual