Re: [squid-users] SQL DB squid.conf backend, who was it that asked about it?

2022-08-11 Thread Leonardo Rodrigues


    Hi Marcelo,

    Is this going to be released as free and open-source software, or
is it a closed project? If the first, then I might be able to help!
While I wouldn't call myself a Squid expert, I have to admit I have
some knowledge of it. And I'm also from Brazil; I noticed your .com.br
email address!



Em 10/08/2022 13:25, marcelorodr...@graminsta.com.br escreveu:

Hi Amos,

It was me indeed.
We have developed a Squid-based PHP application to create VPSs and
deliver proxies via a web panel.
It is still in development, but phase 1 is already working, with SQL
user management, VPS creation and squid.conf auto-configuration.
We are heading to phase 2, to use cache peers and IPv4/IPv6 routing
depending on the source.


squid.conf has become so complex at this point that it's getting very hard to
implement phase 2.


Lack of deep squid knowledge is still our weak spot.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid caching webpages now days?

2021-08-03 Thread Leonardo Rodrigues

Em 01/08/2021 21:01, Amos Jeffries escreveu:
Leonardo, it sounds like your decades ago decision was before squid 
gained full HTTP/1.1 caching ability. 1.0-only abilities are almost 
useless today.


Are you at least still using the memory cache? That is, squid configured
without a cache_dir but also without a "cache deny" rule.


    Hi Amos,

    You're spot on; I clearly remember deciding that (to stop
caching) before the full HTTP/1.1 support days. I haven't actually tried the
memory cache for a while, not because it wasn't working as expected or
wasn't effective; it's mainly because I'm not managing those services on
networks large enough for caching to bring real benefits.


--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid caching webpages now days?

2021-08-01 Thread Leonardo Rodrigues


Em 31/07/2021 22:48, Periko Support escreveu:

Hello guys.

With today's ISP speeds increasing, is squid caching (caching web
pages) still a good option nowadays?

I have some customers that want to set up a cache server, but I have
doubts about how much traffic will be saved, with most of the websites
running under HTTPS.

I use squid+sg acl features.

But for me, caching is not a bandwidth-saving tool anymore.



    Of course, my experience is just MY experience and others' might be
completely different :) Speaking for myself, and for some small to
medium-sized customer networks I manage, caching has been disabled for more
than a decade now. Squid is still VERY useful for applying controls and
logging, but not for caching.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] issues with sslbump and "Host header forgery detected" warnings

2020-11-09 Thread Leonardo Rodrigues

Em 07/11/2020 22:19, Eliezer Croitor escreveu:

Hey Leonardo,

I assume the best solution for you is a simple SNI proxy.
Squid also does that, and you can try to debug this issue to make sure you
understand what is wrong.
It clearly states that Squid doesn't see this specific address
(local=216.58.222.106:443) as the "real" destination address of the domain
chromesyncpasswords-pa.googleapis.com:443.

Maybe Alex or Amos remember the exact and relevant debug_options:
https://wiki.squid-cache.org/KnowledgeBase/DebugSections

I assume section 78 would be of help.
debug_options ALL,1 78,3

That is probably enough to discover what the DNS responses are and where
they come from.
On what OS are you running this Squid?




    Hi Eliezer,

    I have already tracked down the DNS side and I can confirm that squid
is resolving to a different IP address than the client is, despite both
using the same DNS server. It only happens for hosts with multiple A
records or CDN hostnames that change IP very often (every 10 seconds,
for example). It's not a bug in that regard, absolutely not: the client
connecting to a specific IP address while squid sees another IP for the
hostname caught in the TLS transaction really does happen.


    I'm running on CentOS 8 ... and after these findings, I'm starting to
realize that doing this kind of interception, even without the full
decryption part, is not trivial at all, even though it works flawlessly
(and very easily) for "regular" hostnames that resolve to a single
IP and never change it.


    I will study this a little more. Thanks for your observations and
recommendations!




--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] issues with sslbump and "Host header forgery detected" warnings

2020-11-09 Thread Leonardo Rodrigues

Em 07/11/2020 08:42, Amos Jeffries escreveu:


All we can do is minimize the occurrences (sometimes not very much). 
This wiki page has all the details of why and workarounds 
<https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery>.




    Thanks Amos, I had already found that page and it has very good
information on the subject. I also found an old thread of yours discussing
the security concerns of bypassing those checks; very good information,
thanks so much :)



--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] issues with sslbump and "Host header forgery detected" warnings

2020-11-06 Thread Leonardo Rodrigues


    Hello Everyone,

    I'm trying to set up ssl-bump for the first time (on squid-4.13) and,
at first, things seem to be working. After taking some time to
understand the new terms (splice, bump, stare, etc.), I seem to have gotten
things somewhat working.


    Actually, I'm NOT looking to completely bump (and decrypt) the
connections. During my lab studies, I found out that simply splicing the
connections is enough for me. I just want to intercept HTTPS connections
and have them logged (just the hostname), and that seems to be
achievable without even installing my certificates on the clients, as
I'm not changing anything, just taking a look at the SNI values of the
connection. The connection itself remains end-to-end protected, and
that's fine for me. I just want to have things logged. And that's working
just fine.
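
    For reference, a minimal sketch of the kind of peek-and-splice setup
described above (the port number and certificate path are placeholders,
not the actual values used) would be something like:

https_port 3129 intercept ssl-bump tls-cert=/etc/squid/dummy-cert.pem
acl step1 at_step SslBump1
# peek at the TLS ClientHello so the SNI hostname can be logged
ssl_bump peek step1
# then pass the connection through untouched (no decryption)
ssl_bump splice all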


    However, some connections are failing with the "Host header forgery 
detected" warnings. Example:


2020/11/06 18:04:21 kid1| SECURITY ALERT: Host header forgery detected 
on local=216.58.222.106:443 remote=10.4.1.123:39994 FD 73 flags=33 
(local IP does not match any domain IP)
2020/11/06 18:04:21 kid1| SECURITY ALERT: on URL: 
chromesyncpasswords-pa.googleapis.com:443


    and usually a NONE/409 (Conflict) log entry is generated for those.
Refreshing once or twice will eventually make it work.


    I have found several discussions on this and I can confirm it
happens with hostnames that resolve to several different IPs or hostnames
that somehow keep changing IPs (CDNs or something like that).


    Clients are already using the same DNS server as the squid box, as
recommended, but the problem is still happening quite a lot. For regular
hostnames that resolve to a single IP address, things are 100% working.


    Questions:

    - Without using WPAD or configuring a proxy on the client
devices, is this somehow "fixable"? The same DNS is already being used ...
    - Is there any chance the NONE/409 (Conflict) logs I'm seeing are
not related to this? Maybe these are just WARNINGs and not ERRORs, or
would they really make the intercepted connection fail?
    - Any other hint on this one without having to set a proxy, in any
way, on the clients? I just want the hostnames (and traffic generated)
logged; there's no need for full decryption (bumping) of the connections.



    Thanks !!!






--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] GENEVE?

2020-08-25 Thread Leonardo Rodrigues

Em 25/08/2020 16:21, Jonas Steinberg escreveu:

Is there any way to definitively confirm this?  Also is this something I could 
submit as a feature request via github or is it too crazy or out-of-scope for 
the roadmap?



    And please never forget that if you need some feature that is not 
there yet, you can always sponsor the dev team to develop it :)


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Need help blocking an specific HTTPS website

2019-03-04 Thread Leonardo Rodrigues

Em 04/03/2019 19:27, Felipe Arturo Polanco escreveu:

Hi,

I have been trying to block https://web.whatsapp.com/ from squid and I 
have been unable to.


So far I have this:

I can block other HTTPS websites fine
I can block www.whatsapp.com fine
I cannot block web.whatsapp.com

I have HTTPS transparent interception enabled and I am bumping all TCP 
connections, but still this one doesn't appear to get blocked by squid.


This is part of my configuration:
===
acl blockwa1 url_regex whatsapp\.com$
acl blockwa2 dstdomain .whatsapp.com
acl blockwa3 ssl::server_name .whatsapp.com
acl step1 at_step SslBump1



    blockwa1 and blockwa2 should definitely block web.whatsapp.com ..
your rules seem right.


    Can you confirm the web.whatsapp.com accesses are going through
squid? Do these accesses show up in your access.log with something other
than a DENIED status?
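
    Just as an illustration (not necessarily how your full config looks),
those ACLs usually end up feeding deny rules along these lines:

ssl_bump peek step1
# drop the TLS connection when the SNI matches the blocked name
ssl_bump terminate blockwa3
ssl_bump bump all
# and deny the decrypted requests that reach http_access
http_access deny blockwa1
http_access deny blockwa2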




--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid on openwrt: Possible to get rid of "... SECURITY ALERT: Host header forgery detected ..." msgs ?

2019-01-24 Thread Leonardo Rodrigues

Em 23/01/2019 06:22, reinerotto escreveu:

Running squid 4.4 on a very limited device, unfortunately quite a lot of
messages like "... SECURITY ALERT: Host header forgery detected ..." show up.
Unable to eliminate the real cause of this issue (even using iptables to redirect
all DNS requests to one dnsmasq does not help), these annoying messages tend
to fill up cache.log, which is kept in precious RAM.
Is there an "official" method to suppress these messages?
Or can you please give a hint on where to apply a (hopefully) simple patch?





    I have some OpenWRT boxes running squid 3.5 and cache_log simply
goes to null ... I do have the access log enabled, with scripts to rotate it,
export it to another server (where the log analysis is done) and keep just a
minimum on the box itself, as storage is a big problem on these boxes.
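
    For what it's worth, the relevant squid.conf bits of that kind of
low-storage setup (the paths here are just examples) boil down to:

# discard cache.log entirely
cache_log /dev/null
# keep a small local access log and let external scripts rotate/export it
access_log /tmp/squid/access.log squid
logfile_rotate 1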




--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] want to change squid name

2018-10-01 Thread Leonardo Rodrigues

Em 01/10/18 10:08, --Ahmad-- escreveu:

I just need to have something that is not squid to run on Linux.

I don't want squid.



    so don't run squid ?!?! If someone finding out that you're running
squid is a problem for you, don't run it, period :)



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] minimize squid memory usage

2018-07-10 Thread Leonardo Rodrigues

Em 09/07/18 20:45, Gordon Hsiao escreveu:


Assuming I need _absolutely_ no cache whatsoever (to the point of
changing compile flags to disable it, if needed) and no store-to-disk
either, i.e. no objects need to be cached at all: I just need Squid
to check a few ACLs with absolutely minimal memory usage for now. What
else am I missing to get that to work?


    If you don't need everything that squid can offer, maybe using
other proxy software would be a better option. There is other software,
with fewer options, that will surely have a smaller memory footprint.
Since you just need ACL capabilities, maybe one of those can be enough.


    Have you tried checking that?
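
    That said, if you do stay with squid, a minimal sketch of an ACL-only,
cache-less configuration (the values below are just examples) is roughly:

# never cache any response, and define no cache_dir at all (no disk store)
cache deny all
# keep the memory reserved for in-transit/hot objects small
cache_mem 8 MB
maximum_object_size_in_memory 0 KB
# hand freed memory back to the OS instead of pooling it
memory_pools off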



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Office 365 Support for Squid Proxy

2017-06-12 Thread Leonardo Rodrigues


    I have a lot of customers who access Office 365 through squid
proxies with no problem at all. Office 365 is just another website;
there's absolutely no need for special configuration for it to simply work.



Em 12/06/17 06:05, Blason R escreveu:

Hello All,

Can someone confirm whether squid works well with Office 365? If
anyone has any documentation, could you please forward it to me? I
have around 400 Office 365 users, hence I wanted to know what
configuration I might need for Office 365 traffic.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] retrieve amount of traffic by username

2017-06-06 Thread Leonardo Rodrigues

Em 06/06/17 10:45, Janis Heller escreveu:

Seems like parsing would be what I need. Are the size (consumed bandwidth) and
the usernames (the timestamp can be generated by my parser) written to this
file?
Could you show me a sample output of this file?


the already existing documentation is your friend :)

http://wiki.squid-cache.org/SquidFaq/SquidLogs


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HTTPS sites specifics URL

2017-02-06 Thread Leonardo Rodrigues


That's correct: when not using the SSL-Bump feature (that's the one
you're looking for), squid will only see the domain part. All the rest
of the URL is encrypted and visible only to the client (browser) and the
server on the other side, the only two parties involved in that crypto
session.


To enable squid to see the whole URL and be able to do full
filtering on HTTPS requests, you're looking for the SSL-Bump feature. Google
for it; there are a LOT of tutorials and mailing list messages on that.



Em 06/02/17 12:40, Dante F. B. Colò escreveu:

Hello Everyone

I have a question, probably a noob one. I'm trying to allow some
HTTPS sites with specific URLs (I mean https://domain.tld/blablabla),
but HTTPS sites are working only with the domain part. What do I have
to do to make this work?




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New skype version can't control by squid

2015-12-23 Thread Leonardo Rodrigues

Em 22/12/15 23:10, fbismc escreveu:

Hi everyone

Below is my skype control in squid.conf

#skype
acl numeric_IPs dstdom_regex
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype
acl validUserAgent browser \S+
acl skypenet dstdomain .skype.com

After the skype update to 7.17 the control fails; I need to give an
"allowed" permission (the "allowed" means having the privilege of Internet
surfing).

How should I fix this problem? Any suggestion will be appreciated.




Well ... if you want someone to be able to help you, you can
start by giving some real information on the new skype accesses that are
getting past your rules.


You have rules for the user agent, IP access on port 443 and the domain
skype.com. Which accesses are not getting caught by these? What is the
new user agent used by the new skype accesses?


Provide real information if you want real help (which, of course, is
not always guaranteed on a community mailing list). But be sure that
with no real information, you won't get any useful help at all.




--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] logging https websites

2015-12-09 Thread Leonardo Rodrigues

Em 09/12/15 13:11, George Hollingshead escreveu:
Is there a simple way to log requests made to https sites? I just want
to see the sites visited without having to set up tunneling and all this
complex stuff I'm reading about.


Hoping there's a simple way; and yes, I'm a newb, but smart enough to
have your awesome program running; hehe


If you really want a SIMPLE way, then the answer is NO, that's not
possible.


With simply configuring the proxy on the users' browsers, you'll be
able to see the hostname, but not the full URL.


A user accessing https://www.gmail.com/mail/something/INBOX
will appear in the logs just as
CONNECT www.gmail.com

and that's how it works ... the path is only visible to the
endpoints, the browser and the server; squid just carries the encrypted
tunnel between them, without knowing what's happening inside.


Is it possible to decrypt and see the full path in the logs, being
able to filter on it and everything else? YES, that's ssl-bump, but
that's FAR from being an easy setup ...




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS/200

2015-11-17 Thread Leonardo Rodrigues

Em 17/11/15 20:18, Jens Kallup escreveu:

Hello,

What does the log output TCP_MISS/200 mean?
Is it an error in the squid config?


HTTP response code 200 means 'OK, your request was processed fine';
it's the 'everything OK' return code.


TCP_MISS means there was no cached answer for that query, so it
was fetched from the origin server.


It's definitely not an error. There's absolutely nothing wrong with
seeing LOTS of those in your access.log files, as you certainly face
LOTS of 'everything OK' requests, and lots of them will not be served
from the cached objects.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] big files caching-only proxy

2015-10-22 Thread Leonardo Rodrigues

Em 22/10/15 06:08, Amos Jeffries escreveu:

On 22/10/2015 7:13 a.m., Leonardo Rodrigues wrote:

It sounds to me that you are not so much wanting to cache only big
things, you are wanting to cache only certain sites which contain mostly
big things.

The best way to configure that is with the cache directive. Just allow
those sites you want, and deny all others. Then you don't have to worry
about big vs small object size limits.

Though why you would want to avoid caching everything that was designed
to be cached is a bit mystifying. You might find better performance
providing several cache_dir with different size ranges in each for
optimal caching to be figured out by Squid.



At first that (caching only 'big' things) was the idea, but when I
looked at caching Instagram, that really changed. I know I don't have good
hardware (I/O limitation) and, having a VERY heterogeneous group of
people, hits were low when caching 'everything' and, in some cases,
access was even getting slower, as I do have a good internet pipe. But
caching Windows Update and other 'big things' (antivirus updates, Apple
updates, etc.) still looked interesting to me.


As you suggested, I further enhanced the ACLs that match 'what I
want to cache' and could get it working using cache rules. I have even,
in some cases, created two ACLs, one for the dstdomain and another for the
urlpath, matching just the extensions I want to cache. Maybe not
perfect, but it seems to be working fine after lowering the
minimum_object_size to a few KB.
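
A rough sketch of that kind of rule set (the domains and extensions
here are made-up examples, not the actual ACLs) looks like:

acl bigsites dstdomain .windowsupdate.com .update.microsoft.com
acl bigfiles urlpath_regex -i \.(cab|exe|msi|msu|psf)$
# cache only large content from the selected sites, nothing else
cache allow bigsites bigfiles
cache deny all
minimum_object_size 100 KB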



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] big files caching-only proxy

2015-10-21 Thread Leonardo Rodrigues


Hi,

    I have a running setup for caching only 'big' files, like Windows
Update, Apple updates and some other very specific URLs. That's working
just fine, no problem there.


    To avoid caching small things from the URLs I want big files
cached for, I set 'minimum_object_size' to 500 KB, for example.
That's doing just fine, working flawlessly.


    Now I'm looking at caching Instagram data. That seems easy:
Instagram videos are already being cached, but I really don't know how to
deal with the small images and thumbnails from the timeline. If I lower
the minimum_object_size too much, those will be cached, as well as unwanted
data from the other URLs.


    The question is: can the minimum_object_size be paired with some ACL?
Can I have one minimum_object_size globally and another one for specific URLs
(from an ACL), for example?


    I'm running squid 3.5.8.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Monitoring Squid using SNMP.

2015-10-21 Thread Leonardo Rodrigues

Em 20/10/15 16:26, sebastien.boulia...@cpu.ca escreveu:


When I try to do an snmpwalk, I get a timeout.

[root@bak ~]# snmpwalk xx:3401 -c cpuread -v 1

[root@bak ~]#

Does anyone monitor Squid using SNMP? Have you experienced any issues?




    You're not getting a timeout, you're getting no data, which is
completely different from a timeout.


Try giving the initial MIB number and you'll probably get the data:

[root@firewall ~]# snmpwalk -v 1 -c public localhost:3401 
.1.3.6.1.4.1.3495.1

SNMPv2-SMI::enterprises.3495.1.1.1.0 = INTEGER: 419756
SNMPv2-SMI::enterprises.3495.1.1.2.0 = INTEGER: 96398932
SNMPv2-SMI::enterprises.3495.1.1.3.0 = Timeticks: (77355691) 8 days, 
22:52:36.91

SNMPv2-SMI::enterprises.3495.1.2.1.0 = STRING: "webmaster"
SNMPv2-SMI::enterprises.3495.1.2.2.0 = STRING: "squid"
SNMPv2-SMI::enterprises.3495.1.2.3.0 = STRING: "3.5.8"


    and to make things easier, I usually configure the SNMP daemon that
runs on UDP/161 to 'proxy' requests to squid, so I don't need to worry
about specifying the correct port:


[root@firewall snmp]# grep proxy snmpd.conf
# proxying requests to squid MIB
proxy -v 1 -c public localhost:3401 .1.3.6.1.4.1.3495.1


    so I can 'snmpwalk' on the default UDP/161 port (note the lack of
the :3401 port):


[root@firewall snmp]# snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.3495.1
SNMPv2-SMI::enterprises.3495.1.1.1.0 = INTEGER: 419964
SNMPv2-SMI::enterprises.3495.1.1.2.0 = INTEGER: 96359504
SNMPv2-SMI::enterprises.3495.1.1.3.0 = Timeticks: (77370521) 8 days, 
22:55:05.21





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to allow subdomains in my config.

2015-10-13 Thread Leonardo Rodrigues

Em 13/10/15 18:14, sebastien.boulia...@cpu.ca escreveu:


cache_peer ezproxyx.reseaubiblio.ca parent 80 0 no-query 
originserver name=ezproxycqlm


acl ezproxycqlmacl dstdomain ezproxycqlm.reseaubiblio.ca

http_access allow www80 ezproxycqlmacl

cache_peer_access ezproxycqlm allow www80 ezproxycqlmacl

cache_peer_access ezproxycqlm deny all




    No guessing games would be awesome ... please post your ACL
definitions (the www80 ACL, for example) as well.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2015-09-30 Thread Leonardo Rodrigues

Em 30/09/15 16:35, Magic Link escreveu:

Hi,

I configured squid to use a cache. It seems to work because, when I tried
a software download, the second download was a TCP_HIT in the
access.log.
The question I have is: why can't the majority of requests be cached
(I have a lot of TCP_MISS/200)? I found that dynamic content is not
cached, but I don't understand it very well.




That's the way the internet works ... most of the traffic is
dynamically generated, which in default squid configurations prevents the
content from being cached. Nowadays, with 'everything HTTPS' taking
place, HTTPS is also non-cacheable (in default configurations).


And by default configurations, you must understand that they are
the 'SECURE' configuration. Tweaking refresh_pattern is usually not
recommended, except in some specific cases in which you are completely
clear that you're violating the HTTP protocol and can have problems with
that.


In short, the days of 20-30% byte hit ratios are gone and will never
come back.


Keep your default (and secure) squid configuration; there's no need
to tweak refresh_pattern except in very specific situations where you
clearly understand what you're doing.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] analyzing cache in and out files

2015-09-30 Thread Leonardo Rodrigues

Em 30/09/15 04:13, Matus UHLAR - fantomas escreveu:


the problem was iirc in caching partial objects
http://wiki.squid-cache.org/Features/PartialResponsesCaching

that problem could be avoided with properly setting range_offset_limit
http://www.squid-cache.org/Doc/config/range_offset_limit/
but that also means that whole files instead of just their parts are
fetched.

it's quite possible that Microsoft changed the windows updates to be
smaller files, but I don't know anything about this, so I wonder whether
you really do cache windows updates, and how the caching works in relation
to the information above...


yes, I'm definitely caching windows update files !!

[root@firewall ~]# cd /var/squid/
[root@firewall squid]# for i in `find . -type f`; do strings $i | head
-3 | grep "http://"; done | grep windowsupdate | wc -l

824

and yes, I had to configure range_offset_limit:

range_offset_limit 500 MB updates
minimum_object_size 500 KB
maximum_object_size 500 MB
quick_abort_min -1

('updates' being the ACL with the URLs to be cached, basically
windowsupdate and Avast definition updates - the second one required
further tweaking with StoreID rewriting for the CDN URLs)
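
For illustration only, a hypothetical definition of such an 'updates'
ACL (the actual domain list is not shown above) might look like:

# hypothetical example, not the actual ACL used
acl updates dstdomain .windowsupdate.com .download.windowsupdate.com
acl updates dstdomain .avast.com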


From access.log, I see a lot of TCP_HIT/206 (and just a few
TCP_HIT/200), so it seems squid is able to take the fully cached file and
serve the smaller pieces requested:


[root@firewall squid]# grep "TCP_HIT/" access.log | grep windowsupdate | 
wc -l

9860
[root@firewall squid]# bzcat access.log.20150927.bz2 | grep "TCP_HIT/" | 
grep windowsupdate | wc -l

38584

Having squid download the WHOLE file at the very first request
(even a partial request) may be bad, but considering it will be used
later to provide the data for other requests, even partial ones, it makes
things a little better.


(This windowsupdate caching has been running for just a few weeks; I expect
HITs to grow a little more.)



--


Atenciosamente / Sincerily,
    Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] analyzing cache in and out files

2015-09-29 Thread Leonardo Rodrigues

Em 29/09/15 07:42, Matus UHLAR - fantomas escreveu:

On 28.09.15 15:59, Leonardo Rodrigues wrote:
   I have a running squid that, until some weeks ago, was not doing
any kind of caching; it was just used for access control rules. Now I
have enabled it for caching Windows Update and some specific URLs and
it's working just fine.


windows updates are so badly designed that the only sane way to get them
cached is running a windows update server (WSUS).



WSUS works for corporate environments, not for all the others. And
caching Windows Update with squid is actually pretty trivial; it doesn't
even need URL rewriting as other services (YouTube, for example) do. And
it works just fine !!




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] analyzing cache in and out files

2015-09-29 Thread Leonardo Rodrigues

Em 29/09/15 10:46, Matus UHLAR - fantomas escreveu:


hmm, when did this change?
IIRC that was a big problem, since updates use huge files and fetch only
parts of them, which squid wasn't able to cache.
But I've been away for a few years; maybe M$ finally fixed that up...




I'm not a squid expert, but it seems that things became much easier
when squid became fully HTTP/1.1 compliant.


Caching huge files hasn't changed; that's needed for caching
Windows Update files. Storage space, however, is becoming cheaper every
year. In my setup, for example, I'm caching files up to 500 MB; I have
absolutely no intention of caching ALL Windows Update files.







--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] analyzing cache in and out files

2015-09-29 Thread Leonardo Rodrigues

Em 28/09/15 17:55, Amos Jeffries escreveu:

The store.log is the one recording what gets added and removed from
cache. It is just that there are no available tools to do the analysis
you are asking for. Most admin (and thus tools aimed at them) are more
concerned with whether cached files are re-used (HITs and near-HITs) or
not. That is recorded in the access.log and almost all analysis tools
use that log in one format or another.



That's what I was afraid of: there are no tools to analyze that data.
Anyway, thanks for the answer.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] user agent

2015-09-18 Thread Leonardo Rodrigues


I personally hate using !acl ... it's the easiest way, in my
opinion, of getting into trouble and getting things to NOT work the way
you want them to.


I always prefer to use 4-5 other 'normal' rules instead of !acl.


Em 18/09/15 06:32, joe escreveu:

Hi, I need to have 3 user-agent replacements and it's not working.
Example:
acl brs browser -i Mozilla.*Window.*
acl phone-brs browser -i Mozilla.*(Android|iPhone|iPad).*

request_header_access User-Agent deny brs !phone-brs
request_header_replace User-Agent Mozilla/5.0 (Windows NT 5.1; rv:40.0)
Gecko/20100101

request_header_access User-Agent deny phone-brs !brs
request_header_replace User-Agent Mozilla/5.0 (Android; iPhone; Mobile;)
Gecko/18.0



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Android

2015-08-12 Thread Leonardo Rodrigues


Of course you can always use 'acl aclname browser' to identify some
specific agents and, using that, try to match Android browsers.


However, that would be basically impossible to guarantee to work
100%, because software that makes HTTP requests can always send
different identification and, then, your rule will not match. Those
rules would also allow other browsers/OSs to fake their agent ID and,
by forging something that looks like an Android to you, have access
allowed without authentication.


You can try, but I would say you can never have a fully 100%
working and 100% fake-proof setup in that scenario.
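
A rough sketch of that 'browser' ACL approach (the subnet and ACL names
are invented, and an auth_param scheme is assumed to be configured
elsewhere):

acl localnet src 192.168.0.0/24
acl android_ua browser -i Android
acl authed proxy_auth REQUIRED
# anything claiming to be Android skips authentication (easily faked, as noted)
http_access allow localnet android_ua
# everyone else must authenticate
http_access allow localnet authed
http_access deny all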



Em 12/08/15 14:09, Jorgeley Junior escreveu:

Hi guys.
Is there a way to work around Android under squid authentication???
I could make an ACL for the range of addresses that my wifi router
distributes to my wifi network and skip auth for them, but I'd like to
identify the Android clients and specify that just they do not need
authentication.

Any ideas?
Thanks in advance



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Logging of 'indirect' requests, e.g. involving NAT or VPN

2015-06-24 Thread Leonardo Rodrigues

Em 24/06/15 15:28, Henry S. Thompson escreveu:

I've searched the documentation and mailing list archives w/o success,
and am not competent to read the source, so asking here: what is
logged as the 'remotehost' in Squid logs when a request that has been
encapsulated, as in from a machine on a local network behind a router
implementing NAT, or from a machine accessing the proxy via a VPN
connection?




    The logs will show the IP address that reached squid, i.e. the source
address of the connection. If that was NATted, squid will never know
(and thus cannot log) the original address from before the NAT.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Migration from squid 3.1.20 to 3.4.8

2015-06-10 Thread Leonardo Rodrigues

On 10/06/15 06:39, Diercks, Frank (VRZ Koblenz) wrote:


Hallo squid-users,

i migrated our Proxy from 3.1.20 to 3.4.8. Here are the changes I made:




    why go to 3.4 if it's already 'old' code? Why not go
straight to 3.5, which is the current release?



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Saving memory cache to disk on reboot?

2015-05-18 Thread Leonardo Rodrigues

On 18/05/15 08:55, Yan Seiner wrote:
The title says it all - is it possible to save the memory cache to 
disk on reboot?


I reboot my systems weekly and I wonder if this would be any advantage.


    Initially, let's say that a cache can ALWAYS be lost. Sometimes that
may not be desirable, but losing a cache must not create problems; the
cache will simply be repopulated again and no problems should occur.


    Losing terabytes of cache is not a good idea, as that amount of
data would take some days to be repopulated and thus, during that time,
you'll have bad hit ratios on your cache.


    As you say you're using the memory cache, I'll assume that you're
dealing with 16 GB or 32 GB of cache. We're not talking about terabytes,
we're talking about a few gigabytes.


    In that scenario, I would not worry about losing it. Unless you're
serving just a few specific pages from that cache, which is not usually
the case, your hit ratio already shouldn't be too high, so losing the
cache shouldn't be a problem; it will be populated again in a few hours,
depending on the number of clients and the traffic generated by them.


    And my only question here is: why reboot weekly? Assuming
you're running Linux or some Unix variant, that's absolutely unnecessary.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] i want to block images with size more than 40 KB

2015-03-18 Thread Leonardo Rodrigues

On 18/03/15 08:06, Amos Jeffries wrote:

On 19/03/2015 5:57 a.m., snakeeyes wrote:

I need help in blocking images that has size less than 40 KB

  

Use the Squid provided access controls to manage access to things.
http://wiki.squid-cache.org/SquidFaq/SquidAcl



    You should know that you cannot evaluate the response size using
only the request data. So, to achieve what you want, data from the reply
must be considered as well, the response size for example.


    Images can be identified by the presence of '.jpg' or '.png' in the
request URL, but images can also be generated on-the-fly by scripts,
so you won't see those extensions all the time. In that case, analyzing
the reply MIME headers can be useful as well; a reply MIME type containing
'image' is a strong indication that we're receiving an image.


    Put all that together and you'll achieve the rules you want.
But keep in mind that you'll probably break A LOT of sites that 'slice'
images, background images, menus and all sorts of things. I would call
that a VERY bad idea, but it can be achieved with a few rules.
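
    A rough sketch of those reply-side pieces (the 40 KB figure comes from
the question; the ACL name is invented):

# match replies whose Content-Type says they are images
acl image_reply rep_mime_type -i ^image/
# limit image replies to 40 KB (larger ones get an error or are cut short)
reply_body_max_size 40 KB image_reply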




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Calculate time spent on website (per ip address)

2015-02-11 Thread Leonardo Rodrigues

On 10/02/15 20:23, Yuri Voinov wrote:


HTTP is a stateless protocol (in most cases, excluding persistent
connections). So it is impossible to determine how much time a user
spent on a site, only very approximately. Right?




in most cases, probably not even close to the real deal !

--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'

2014-12-15 Thread Leonardo Rodrigues


I have several squids authenticating users using ldap_auth and it 
works fine. Users are located on the 'Users' OU and my config lines are:



(single line)
auth_param basic program /usr/lib/squid/squid_ldap_auth -P -R -b 
dc=myad,dc=domain -D cn=ProxyUser,cn=Users,dc=myad,dc=domain

-w x -f sAMAccountName=%s -h ad.ip.addr.ess

(single line)
external_acl_type ldap_group children=3 ttl=300 %LOGIN 
/usr/lib/squid/squid_ldap_group -P -R -b dc=myad,dc=domain -D 
cn=ProxyUser,
cn=Users,dc=myad,dc=domain -w xxx -f 
(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,cn=Users,

dc=myad,dc=domain)) -h ad.ip.addr.ess


On 15/12/14 21:03, Ahmed Allzaeem wrote:


Hi guys

I'm trying to use squid with Active Directory 2008 R2 as an external
authentication source.


On the DC called smart.ps

I created the user squid, gave it delegation to the DC and also put it in
the admins group in the OU=proxy.





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WARNING: there are more than 100 regular expressions

2014-11-27 Thread Leonardo Rodrigues

On 27/11/14 07:59, navari.lore...@gmail.com wrote:

Consider using less REs ... is not possible.


So don't worry about this WARNING message. It is just a warning,
not an error. If you're aware that using lots of REs can hit CPU
usage hard, just go for it.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Anybody using squid on openWRT ?

2014-08-25 Thread Leonardo Rodrigues


If you're talking about caching, then you're absolutely correct. If
you're using squid just for filtering and policy enforcement, as I'm
doing, then even a small box like the RouterBoards I'm using (32 MB RAM
and 64 MB flash disk) is enough for a 30-40 station network. Squid needs
a bit of tweaking to run on those but, once you've mastered that, it
works absolutely fine. I even have it doing authentication against Windows
ADs through LDAP authenticators!


On 22/08/14 15:16, Lawrence Pingree wrote:

Plus a wifi device is severely underpowered and lacks sufficient memory and 
storage for squid to provide any real benefit (IMHO).



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] RE: Anybody using squid on openWRT ?

2014-08-25 Thread Leonardo Rodrigues


I didn't notice any slowness at all when loading www.spiegel.de
through Squid 2.7S9 on an OpenWRT box. I'm using OpenWRT revision r42161,
compiled from scratch. The page fully loaded in about 7-8 seconds. It could
be faster, but I wouldn't call that the 'extreme slowness' you
mentioned. I'm using the Google DNS servers 8.8.8.8 and 8.8.4.4 as the DNS
for the OpenWRT box and thus for squid.


I did not find meetrics.de accesses in the log, but I found
meetrics.net, which loads just fine.


The log from my access here is:
(I tried to paste it here but the mailing list rejected it because the message
got bigger than 50k)


http://pastebin.com/zPat4EJz


On 22/08/14 10:22, babajaga wrote:

@James:
For details of my problems, pls ref. here:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Very-slow-site-via-squid-td4667243.html

Not sure that it is really squid. The effect is slow loading of objects from
ad-servers.
As I have an open-mesh AP, 64MB RAM, my squid2.7 does memory-only caching,
and some ACLs + forwarding some traffic to another upstream proxy on the
web.
One very slow page is here:
www.spiegel.de
It calls
*.meetrics.de , which loads veeery slow
So, in case you can confirm/deny slow response times to this site, I need to
look somewhere else for the bug.
Which would be great help, already.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Anybody using squid on openWRT ?

2014-08-22 Thread Leonardo Rodrigues


I do use it a lot and, despite the fact that it's outdated, it works just
fine for my cases. I have even made myself a patch to enable the
compilation of the LDAP authenticators, so I could authenticate users
through LDAP, usually against an AD server.


On 22/08/14 07:48, babajaga wrote:

Just trying to use the official package for OpenWRT, which is based on squid 2.7
only.
Having detected some DNS issues: does anybody use squid on OpenWRT, and
which squid version?




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Re: Blocking spesific url

2014-07-11 Thread Leonardo Rodrigues

Em 11/07/14 05:38, Andreas Westvik escreveu:

Here is my (working) squid.conf without the acl.

http_port 192.168.0.1:3128 transparent
#Block
acl ads dstdom_regex -i /etc/squid3/adservers
...

And here is the top of my /etc/squid3/adservers file

akamaihd\.net\/battlelog\/background-videos\/ — Not working.
rd.samsungadhub.com
ad.samsungadhub.com
http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm$




You really need to understand the ACL types. Including a full URL
in a dstdom_regex acl of course will not work. For blocking based on the full
URL (domain and path), you'll need a url_regex acl type instead, which
by coincidence is exactly the one I sent in the previous message I
replied to you with :)



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Blocking spesific url

2014-07-10 Thread Leonardo Rodrigues

Em 10/07/14 09:04, Alexandre escreveu:

Concerning blocking the specific URL: someone correct me if I am wrong,
but I don't believe you can do this with only squid.
The squid ACL system can apparently block per domain:
http://wiki.squid-cache.org/SquidFaq/SquidAcl



Of course you can block specific URLs using only squid ACL options !!

#   acl aclname url_regex [-i] ^http:// ... # regex matching on 
whole URL
#   acl aclname urlpath_regex [-i] \.gif$ ...   # regex matching 
on URL path


if the URL is:

http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

then something like:

acl blockedurl url_regex -i akamaihd\.net\/battlelog\/background-videos\/
http_access deny blockedurl

should do it ! And I did not even include the filename which, I
imagine, can change between different stages.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] How to add banner or popup window to squid 2.7

2014-05-28 Thread Leonardo Rodrigues

Em 28/05/14 18:32, Soporte Técnico escreveu:

Hi, at one job I have squid 2.7 running in transparent mode. My boss asked me
to add a popup window, banner or similar for a designated group of IPs (a
range of IPs of computers that I have), telling them the company policies,
only one time, or once every 24 hours, or similar.

Is there any idea that could help me?




The 'session' external ACL helper may help you with that.

Take a look at:
http://wiki.squid-cache.org/ConfigExamples/Portal/Splash
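
Roughly, the approach from that wiki page has the following shape for
squid 2.7 (the helper path, timeout and splash URL below are just
placeholders; check the wiki for the exact, current syntax):

external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/lib/squid/squid_session -t 7200
acl session external session
http_access deny !session
deny_info http://your.server/company-policies.html session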





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Intercept HTTPS without using certificates - Just apply a QoS on the connexion

2014-05-15 Thread Leonardo Rodrigues

Em 15/05/14 14:59, Antoine Klein escreveu:

Hi there,

I need to install squid to apply QoS in a private network with delay pools.
In fact, this network offers public WiFi, so it's not possible to
configure a proxy on the clients.

Is it possible to intercept HTTPS connections, apply a delay pool and
forward the requests without deciphering the SSL packets?



I really don't think that's possible. Anyway, you can always use
your Linux (or whatever OS you're using) QoS tools to achieve something
similar to delay pools on NATted connections. You can have squid
intercept TCP/80 connections and apply delay pools; the TCP/443 (and
indeed all other) connections can be throttled by the OS QoS tools.
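
For the squid side of that, a minimal delay-pool sketch (the subnet and
rate are placeholders) looks like:

acl wifi_clients src 10.0.0.0/24
delay_pools 1
# class 1 = one aggregate bucket shared by all matching traffic
delay_class 1 1
# restore rate / bucket size in bytes (roughly 256 KB/s here)
delay_parameters 1 262144/262144
delay_access 1 allow wifi_clients
delay_access 1 deny all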




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Squid 3.4.5 is available

2014-05-06 Thread Leonardo Rodrigues

Em 06/05/14 03:52, Martin Sperl escreveu:

I guess people would more likely stay with the older squid version
(even if it is buggy) than spend that amount of time and
hassle just to get all the dependencies compiled, or even think of
upgrading to a new OS version...




I'll be one of those ... having LOTS of CentOS 5 machines still
running, and already migrating them, in no hurry at all, to CentOS 6, I
would definitely stick to the last 'compatible' version instead of
trying to manually compile libs and compilers or use OS beta versions.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] authentication via web page

2014-01-31 Thread Leonardo Rodrigues

Em 30/01/14 20:56, Al Zick escreveu:

Hi,

I am considering switching to authentication via a web page. Are there 
examples of how to do this somewhere? What are the pros and cons of 
this configuration? I am very concerned about security with web page 
authentication.


Also, I am not really sure if it is a good idea. For example, in most 
emails the images in them are not sent as attachments, they are 
downloaded from a web server and go through the proxy. If a re-write 
was used to load the authentication page, then it would put that page 
in place of the image. How would you authenticate the proxy with this 
scenario?


I would consider that a good idea on a guest network, for example
some sort of wifi hotspot. In a corporate environment, I would never
consider it :/



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Re: how to differentiate your clients ?

2014-01-27 Thread Leonardo Rodrigues

Em 27/01/14 06:29, g35 escreveu:

Hello, thank you for your response.

Unfortunately, my client has a dynamic IP address.

My squid version is 2.7 stable 8 for windows.
Perhaps there is a way with the MAC address of my client?




MAC addresses only exist on the LAN segment of the network. In
other words, that information dies at the first router it passes, and as
the traffic passes through several routers to reach the internet, the MAC
address does not survive and, thus, is not useful information for internet
applications.


If you cannot use IP addresses, then you'll have no other way of
identifying your client than using some sort of authentication. There are
SEVERAL howtos on that; you can google them easily.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Leonardo Rodrigues


If your SSH client can use an HTTPS proxy, then it will probably
work without major changes, as the connections will be proxied as CONNECT
ones. In the case of the CONNECT method, squid already works almost as a
passthrough proxy.
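
As a hedged example (assuming OpenSSH on the client side and the
OpenBSD flavour of nc; the proxy name and port are placeholders), that
usually looks like:

ssh -o ProxyCommand='nc -X connect -x proxy.example.com:3128 %h %p' user@remote.host

# on the squid side, CONNECT to port 22 is refused by the default port
# ACLs, so something along these lines is also needed:
acl Safe_ports port 22
acl SSL_ports port 22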


If your SSH client cannot use an HTTPS proxy, then you probably won't
be able to do that, simply because squid cannot handle the SSH protocol.


Please note that 'I want to pass all traffic through squid' is
simply the wrong approach. Squid is NOT a multi-purpose proxy; it's an
HTTP/HTTPS proxy and, as an HTTPS proxy, it can deal with CONNECT
connections, which can be used to tunnel some other traffic. This
ability to deal with 'other traffic' is VERY different from
imagining it can deal with ANY traffic.




Em 15/01/14 11:17, m.shahve...@ece.ut.ac.ir escreveu:

I want to pass all traffic through squid, not only the traffic received
on port 80, and handle it in some way. Now, when I do so, SSH
requests freeze without any response!




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Leonardo Rodrigues

Em 15/01/14 11:04, m.shahve...@ece.ut.ac.ir escreveu:

Ok, so what should I do if I want to pass SSH requests through squid?


Using an SSH client that can proxy requests through an HTTP/HTTPS
proxy should do it. If your client can't do that, then it probably won't
be possible, as squid does not recognize the SSH protocol (and was never
intended to).



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Squid and unsupported request protocols

2014-01-15 Thread Leonardo Rodrigues

Em 15/01/14 12:06, m.shahve...@ece.ut.ac.ir escreveu:

So what do you mean by an SSH client that can proxy requests through an
HTTP/HTTPS proxy exactly?


I mean exactly what I wrote ... if you have an SSH client that can
proxy requests through an HTTP/HTTPS proxy, then you can use SSH through
squid. If your SSH client can't do that, which I bet it can't, then you
cannot do it.


My SSH client, for instance, which is ZOC for Mac, only supports
SOCKS proxy servers, not HTTP/HTTPS ones. So, in my case, I wouldn't be
able, with ZOC, to proxy SSH requests through squid.


There's really no other way to put what I wrote; it's plain and
clear.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Is there a precedence in the allowed sites ACL ? (UNCLASSIFIED)

2014-01-15 Thread Leonardo Rodrigues

Em 15/01/14 17:08, Raczek, Alan J CTR USARMY SEC (US) escreveu:


Just curious whether there is an order that Squid follows to match a site in
the allowed sites ACL. Top down??


Yeah ... basically top down.

http://wiki.squid-cache.org/SquidFaq/SquidAcl#Access_Lists

http_access allow|deny acl AND acl AND ...
OR
http_access allow|deny acl AND acl AND ...
OR
...


The action allow/deny will be enforced only if ALL the ACLs on the line are 
matched. On an http_access line with 3 ACLs, for example, if two match 
and the third does not, the action will not be enforced.


Note that not enforcing an 'allow' rule is different from denying. 
Not enforcing a 'deny' rule, by the same logic, is different from allowing.


If an http_access action is not enforced, squid will evaluate the next 
http_access line, and so on until it reaches the end of all http_access rules.
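
A small hedged illustration of that AND/OR logic (the ACL names and addresses below are made up, not taken from the thread):

acl office src 10.0.0.0/24
acl blocked dstdomain .example.com

# denied only when BOTH office and blocked match
http_access deny office blocked
# otherwise squid moves on to the next line
http_access allow office
http_access deny all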




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Are downloads realy faster by using squid?

2013-12-19 Thread Leonardo Rodrigues


squid is not magic, it will NEVER guarantee that downloads will be 
faster.


maybe the downloaded file asks not to be cached; its webserver can 
do that


maybe your squid configuration is not allowing files that big to be cached (as 
you mentioned a download, i'm assuming it's a fairly large file you're 
using for tests), for example because of the size limit shown below
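
As a hedged illustration only (the value is arbitrary, not taken from this thread), the relevant directive is:

maximum_object_size 512 MB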


and plenty of other possibilities



Em 19/12/13 16:11, Dirk Lehmann escreveu:

Hello everybody,

the second download of a file is not faster than the first download 
of the same file.


My Squid 3.4 is running on localhost. My browser is configured to use 
localhost http-proxy port 3128 and ftp-proxy port 3128.


Why are downloads not faster even though squid is running between client and 
server?


I was wondering if you could let me know.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Squid - Dúvidas

2013-10-16 Thread Leonardo Rodrigues

Em 16/10/13 18:38, Samuel Felipe Giehl escreveu:

Olá equipe do Squid, boa tarde!

Você poderia me tirar uma dúvida em relação a um problema que estou
enfrentando com o squid_ldap_group?

Ele não está iniciando com o Squid e quando vou analisar os logs para fazer
o troubleshooting ele somente me retorna o seguinte: WARNING: Cannot run
'/usr/lib64/squid/squid_ldap_group' process.

Você poderia me informar onde posso ver os logs mais detalhados do
squid_ldap_group para saber o que está ocorrendo?




essa lista de discussão possui o idioma inglês como idioma oficial. 
Por favor envie suas dúvidas bem como qualquer outro email para a mesma 
sempre em inglês.


this mailing list has english as its official language. Please send 
all your questions as well as any other emails using that language.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Guest network

2013-10-09 Thread Leonardo Rodrigues


No need for two instances ...

just get squid listening on as many ports as you need:

http_port port1
http_port port2
...
http_port portN

create ACLs for each port

acl port1 myport port1
acl port2 myport port2
...
acl portN myport portN


and get all your http_access rules with the appropriate port ACLs 
as well, thus giving completely different policies depending on the proxy 
port used.



http_access allow port1 other_rule
http_access deny port1 other_rule
etc etc
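
A concrete hedged sketch of the idea (the port numbers and the guest subnet are assumptions, and auth_users stands for whatever Kerberos proxy_auth ACL is already in place):

http_port 3128
http_port 3129

acl staff_port myport 3128
acl guest_port myport 3129
acl guest_net src 192.168.50.0/24

http_access allow staff_port auth_users
http_access deny staff_port
http_access allow guest_port guest_net
http_access deny guest_port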



Em 09/10/13 18:04, JC Putter escreveu:

Hi, i am using Squid 3.3.9 with Kerberos authentication on my network.
We now have a requirement where we need to give guest users access on
the same proxy. Is it possible to run squid on an additional port and
have different ACLs for those users connecting to that port?

I know ideally having a different subnet is the best option

Thanks!



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Set up a cluster of 10 squid servers using ~170GB of memory and no disk

2013-10-03 Thread Leonardo Rodrigues


Em 02/10/13 07:02, Amos Jeffries escreveu:

On 2/10/2013 10:02 p.m., Jérôme Loyet wrote:

Hello,

I'm facing a particular situation. I have to set-up a squid cluster on
10 server. Each server has a lot of RAM (192GB).

Is it possible et effective to setup squid to use only memory for
caching (about 170GB) ?


memory-only caching is the default installation configuration for 
Squid-3.2 and later.



i don't have such scenarios, nor anything close to those RAM-abundant 
machines to try it on ... but the first thing that came to my mind was 
that setting up a ramdisk (or even several) and having squid do a 
'normal' cache on those 'disks' could be an alternative to 
direct memory-only caching.


is there any logic in that?



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Best OS

2013-06-17 Thread Leonardo Rodrigues


the best one is the one you really understand: how it works, how 
to fine-tune it, how to monitor it, how to administer it


Em 15/06/13 14:57, Bilal J.Mahdi escreveu:

Dear all

Which OS is better for squid.

Debian 7 or UBUNTU 10.04 ??



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Same cache_dir for many squid process

2013-05-31 Thread Leonardo Rodrigues

Em 30/05/13 11:24, Alex Rousskov escreveu:

On 05/30/2013 01:00 AM, Sekar Duraisamy wrote:


Iam running 3 squid process on the same machine with different ports
and i would like to use same cache_dir for all the 3 processes.

Can we use same cache_dir for all the processes?

Yes, provided you use SMP Squid and Rock cache_dir.




if you can't do that, chaining squid processes is easy and, in the 
end, will provide the functionality of all squid processes using the 
same cache_dir, while being controlled by a single squid.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Same cache_dir for many squid process

2013-05-31 Thread Leonardo Rodrigues

Em 31/05/13 12:56, csn233 escreveu:

 if you can't do that, chaining squid processes is easy and, in the end,
will provide the functionality of all squid processes using the same
cache_dir, while being controlled by a single squid.

What does chaining mean, and how exactly do you do that?



http://wiki.squid-cache.org/Features/CacheHierarchy



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Same cache_dir for many squid process

2013-05-31 Thread Leonardo Rodrigues

Em 31/05/13 14:19, csn233 escreveu:

http://wiki.squid-cache.org/Features/CacheHierarchy

Not quite the same thing.

With cache siblings, you have sharing but also duplication of caches.


if you correctly configure your parents to not cache anything, then 
you'll have no duplication. In that case, only the squid you choose 
will manage the cache on disk.


it can be as simple as a

cache_dir null /tmp

in the parent squids.



it's a well-known workaround that, if correctly configured, can 
give you what was said to be native to SMP squid and rock cache_dirs.
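
For illustration only (this wiring is my assumption, not something spelled out in the thread, and it assumes the null cache_dir type was compiled in), the instance that should NOT keep its own disk cache usually ends up with something like:

cache_dir null /tmp
cache_peer 127.0.0.1 parent 3129 0 no-query no-digest default
never_direct allow all

where 3129 stands for whatever port the single disk-caching squid is listening on.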




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Transparent Proxy Authentication.

2013-04-28 Thread Leonardo Rodrigues

Em 27/04/13 07:22, James Harper escreveu:

That's not really a useful answer though, is it?

You can't use the regular http WWW-Authenticate style authentication, but you 
can redirect the user to a captive portal style page and have them authenticate to that, 
then redirect back to the original address.

Have a look at http://en.wikipedia.org/wiki/Captive_portal for some info about 
the concept, and some limitations.

Making it work with squid is an exercise for the reader, although I'm sure 
someone has described a solution somewhere before.
  
James




Depending on your scenario, especially if that's a corporate 
network, you can fairly easily have your browsing agents (browsers) 
transparently CONFIGURED, using Windows AD policies (if that's your 
case) or even WPAD.


That way, having the browsers transparently CONFIGURED (that's 
absolutely different from transparently intercepted requests), you can 
use authentication with no problem at all.
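
As a hedged aside (none of this comes from the thread itself): WPAD ultimately just serves the browsers a small PAC file, typically published as http://wpad.<your-domain>/wpad.dat, along the lines of:

function FindProxyForURL(url, host) {
    // everything goes to the proxy; hostname and port are placeholders
    return "PROXY proxy.example.local:3128; DIRECT";
}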






--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Eliminate PopUP authentication for web Windows Users

2013-03-21 Thread Leonardo Rodrigues


the basic authentication type will always prompt for username/password; 
there's nothing wrong with it and no way to avoid it or 'fix' it, 
because nothing is broken at all


if your users are already authenticated in your domain and you want squid 
to 'automagically' use those credentials for web surfing, then you'll 
have to change your authentication type to ntlm, digest or negotiate.


i have LOTS of squid boxes authenticating against AD using the ntlm 
authentication type. It's a lot more complicated to configure than the basic 
type but, once configured, it works just fine.
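
To make that concrete, a hedged sketch only (the helper path is distro-dependent, and the box must already be joined to the domain with winbind running):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
acl authenticated proxy_auth REQUIRED
http_access allow authenticated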



Em 21/03/13 18:45, Carlos Daniel Perez escreveu:

Hi,

I have a Squid server configured to make queries against an Active Directory
server through squid_ldap_group. The query is OK and authenticated users
can surf the web. But my users need to enter their username and password when
they open a browser.

[ ... ]
My squid_ldap_auth line is: auth_param basic program
/usr/lib/squid3/squid_ldap_auth -R -d -b dc=enterprise,dc=com -D
cn=support,cn=Users,dc=enterprise,dc=com -w 12345 -f sAMAccountName=%s
-h
192.168.2.1




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Caching HLS content?

2013-01-31 Thread Leonardo Rodrigues


an even better approach would be to correctly set up your webserver to 
send the appropriate expire times for the .m3u8 files, so that neither your caches 
nor anyone else's would cache them :)


a correct expire time for the .ts files could be sent as well, allowing 
them to be cached
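
Purely as a hedged illustration (the thread never says which webserver is involved; this assumes Apache with mod_headers enabled), that server-side hinting could look like:

<FilesMatch "\.m3u8$">
    Header set Cache-Control "no-cache"
</FilesMatch>

<FilesMatch "\.ts$">
    Header set Cache-Control "public, max-age=900"
</FilesMatch>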



Em 31/01/13 22:19, Scott Baker escreveu:

I want to make sure that .m3u8 files are *never* cached. Those files are
updated every 5 seconds on my server, and always have the same name.
What is the best way to make sure that they are never cached? This is
what I came up with:

refresh_pattern \.m3u8  0   0%  0

Conversely, MPEG segments are .ts files and are ALWAYS the same. The
file names roll, so once an mpeg segment is created it will *never* get
updated. Thus those files should ALWAYS be cached, and there is no
reason to ever refresh the file. How do I ensure that .ts segments are
cached, and there is no reason to re-validate them. These pieces of
content will expire after 15 minutes (it's live video), so there is no
reason to keep any .ts files that are older than 15 minutes.  This is
what I came up with:

refresh_pattern \.ts900   100%900

Currently I'm seeing a lot of TCP_REFRESH_UNMODIFIED in my logs for .ts
segments. I must be doing something wrong.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





[squid-users] checking for 'real' SSL connections

2012-12-20 Thread Leonardo Rodrigues


Hi,

Is it possible, with any version of squid, to identify REAL SSL 
connections using the CONNECT method? The idea is to block some software 
that tunnels connections through squid on port 443 but is not real 
SSL, like Skype and other P2P software.


I would like to be able to identify (and block) those on squid and, 
ideally, without having to install certificates on the clients.


I searched and found HOWTOs on how to get squid doing SSL 
inspection, enabling URL filtering on SSL requests, but that demands 
installing client certificates. I really don't need to achieve 
that; simply being able to tell real from 'not SSL' 
connections would be enough for me.


Is that possible with any current squid? Thanks for the tips!

--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] error files shipped on squid-3.2.3

2012-11-23 Thread Leonardo Rodrigues

Em 23/11/12 00:27, Amos Jeffries escreveu:


Firstly, what is wrong with them that needs fixing by text editor?

there's nothing wrong, it's just some customizations like 'click 
here to request access to this specific site which was blocked' ... 
things related to our infrastructure.


Secondly, edit the templates/ERR_* file and save over top of the other 
auto-generated one. Only the auto-generated files are in compact format.




nice to hear that, i hadn't seen the template ones. I'll edit based 
on them. Thanks for the tip!




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





[squid-users] error files shipped on squid-3.2.3

2012-11-22 Thread Leonardo Rodrigues


Hi,

I'm migrating some squid 2.7 servers directly to 3.2.3. One of the 
things that is actually annoying me on my test server is that the error files 
provided in squid-3.2 are always ONE line only, instead of the 2.7 ones, 
which were formatted similarly to regular HTML files.


The one-line-only format doesn't matter for correct display in the 
browser, i know. But it makes it a real pain to customize them.


Are these one-line-only error files needed for some reason??

I have checked several languages and it seems all files in all languages 
are one-line-only. Don't know if it matters, but i'll use the pt-br ones 
in production.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





[squid-users] clients that generates LOTS of requests per second

2012-10-18 Thread Leonardo Rodrigues


Hi,

I have some squid servers running at some companies and, from time 
to time, i face some client with some bad software that generates LOTS of 
requests per second. And by a LOT i mean, sometimes, 90-100 RPS from a 
single client. That usually happens on requests that are DENIED, so they 
are processed quickly by squid.


I was initially thinking of some kind of control on these but, hey, 
the requests are already being denied; there's nothing else i could do based 
on deny/allow.


So i was thinking of some kind of delay_pool, but based on the number of 
requests per second. The idea was that when a client (IP address) reached N 
requests per second, squid would introduce some random (or fixed) delay 
in the replies, thus making the client slow down its request rate 
a little.


I'm pretty sure that this kind of control cannot be achieved 
using only normal squid parameters, but maybe there's some script that 
can be used with an external_acl to help me with these situations.
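
Just to sketch the kind of wiring meant here (a hedged illustration: ratelimit.pl is a hypothetical helper that would answer OK for clients under a threshold and ERR, or simply delay its reply, for clients above it):

external_acl_type reqrate ttl=5 negative_ttl=5 children=5 %SRC /usr/local/bin/ratelimit.pl
acl too_fast external reqrate
http_access deny too_fast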


Have you ever faced a situation like this? If yes, what did you do?

Thanks for any advice !

Just in case: i'm running 2.7S9 but can upgrade to the latest version with no 
problem to achieve these controls.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





Re: [squid-users] Browser information in logs

2012-06-01 Thread Leonardo Rodrigues



#  TAG: useragent_log
#   Squid will write the User-Agent field from HTTP requests
#   to the filename specified here.  By default useragent_log
#   is disabled.
#
#Default:
# none


i think squid must be compiled with

--enable-useragent-log

so that you can have this option working.
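
As a hedged side note (my addition, not part of the original reply): if i recall correctly, newer squid releases (3.2 and later) dropped useragent_log, and the same data is captured with a custom logformat instead, along these lines:

logformat withagent %ts.%03tu %>a %rm %ru "%{User-Agent}>h"
access_log /var/log/squid/useragent.log withagent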




Em 01/06/12 14:24, Wladner Klimach escreveu:

Hello everyone!

Is there any way for squid logs to show which browser the clients
are using? I need this to know the reach of a GPO across my domain. I hope
someone can help me out.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Block IP based lookups

2012-04-23 Thread Leonardo Rodrigues


despite the fact that browsing by hostname is the most usual case, 
there are LOTS of pages that use IP addresses in the links they 
display and, thus, your rule would probably break LOTS of legitimate 
browsing where the user is NOT really 'typing' an IP address.


for example, even HOTMAIL does this!! This log line was grabbed 
from TODAY's log; it's not an old log, it's from TODAY:



1335178083.446   1058 192.168.0.162 TCP_MISS/200 127569 GET 
http://65.55.40.87/att/GetInline.aspx?messageid=8cefd7ba-8b2b-1fe1-b879-00237d65e98eattindex=0cp=-1attdepth=0imgsrc=cid%3aimage005.jpg%4001CD1A64.F6641BA0shared=1blob=MHxpbWFnZTAwNS5qcGd8aF1hZ2UvenBlZw_3d_3dhm__login=XXhm__domain=hotmail.comip=10.12.148.8d=d405mf=0hm__ts=Mon%2c%2023%20Apr%202012%2010%3a47%3a40%20GMTst=lleugerbhm__ha=01_f1a95b6922365947ae92542149a187a6c6f1b688c4afc76a77c422789965oneredir=1 
- DIRECT/65.55.40.87 image/jpeg




Em 23/04/12 09:36, Dean Weimer escreveu:

-Original Message-

Is it possible to block all IP based lookups from the browser with squid
acls?

If I assume you mean to match request to IP address,
http://192.168.1.1/, instead of to a hostname like
http://www.example.com, the following works quite well.

# Match By IP Requests
acl BYIP dstdom_regex ^[0-9\.:]*$



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] STABLE squid repo location?

2011-12-15 Thread Leonardo Rodrigues


How about the official squid homepage ??

http://www.squid-cache.org/Versions/v3/3.1/changesets/


Em 15/12/11 14:48, Michael Altfield escreveu:

Hi,

Can someone please tell me where I can browse the code repository for the STABLE releases of squid?

Specifically, I'm trying to find all of the changes that occurred between squid-3.1.16 (Oct 13) and squid-3.1.18 (Dec 3).

I think I might have found it here (https://code.launchpad.net/~squid/squid/3.1), but I'm not sure if this is the STABLE repository. If it is, can someone please explicitly say so in the README of the repo or on the wiki (http://wiki.squid-cache.org/BzrInstructions). If not, please let me know where to find it.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] How to set the IP of the real originator in HTTP requests (instead of Squid's IP)?

2011-11-29 Thread Leonardo Rodrigues


tcp_outgoing_address is probably what you're looking for.

from the default squid.conf:


#  TAG: tcp_outgoing_address
#   Allows you to map requests to different outgoing IP addresses
#   based on the username or source address of the user making
#   the request.
#
#   tcp_outgoing_address ipaddr [[!]aclname] ...
#
#   Example where requests from 10.0.0.0/24 will be forwarded
#   with source address 10.1.0.1, 10.0.2.0/24 forwarded with
#   source address 10.1.0.2 and the rest will be forwarded with
#   source address 10.1.0.3.
#
#   acl normal_service_net src 10.0.0.0/24
#   acl good_service_net src 10.0.1.0/24 10.0.2.0/24
#   tcp_outgoing_address 10.1.0.1 normal_service_net
#   tcp_outgoing_address 10.1.0.2 good_service_net
#   tcp_outgoing_address 10.1.0.3
#
#   Processing proceeds in the order specified, and stops at first fully
#   matching line.
#
#   Note: The use of this directive using client dependent ACLs is
#   incompatible with the use of server side persistent connections. To
#   ensure correct results it is best to set 
server_persistent_connections

#   to off when using this directive in such configurations.
#
#Default:
# none




Em 29/11/11 14:35, Leonardo escreveu:

Dear all,

We have a Cisco ASA firewall between our internal network and the
Internet.  Our Squid transparent proxy (v3.1.7) is just behind the
firewall.

Our problem concerns IP address translation from private to public.
Specifically, we would like that clients go out on the Web with a
public IP address which depends on the subnet the client is in.
However, we can't differentiate the addresses as the Cisco ASA sees
only the IP private address of the Squid as originator of all HTTP
requests.
I haven't set the directive forwarded_for in my Squid config, which
should mean that, by default, the real originator is passed in a
X-Forwarded-For header.

I'd like to know if there is something else that can be done on the
Squid side, or if now I need to work solely on the config of the Cisco
ASA (as I believe).

Thanks for your time and your answers,

L.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] block TOR

2011-11-18 Thread Leonardo Rodrigues


i don't know if this is valid for TOR ... but at least Ultrasurf, 
which i have analyzed a bit further, encapsulates traffic over squid 
always using the CONNECT method and connecting to an IP address. That's 
basically different from normal HTTPS traffic, which also uses the CONNECT 
method but almost always (i have found 2-3 exceptions over the years) 
connects to a FQDN.


So, at least with Ultrasurf, i could handle it in squid simply by 
blocking CONNECT connections that try to connect to an IP address 
instead of a FQDN.
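
A hedged sketch of that kind of rule (my wording, not taken from the original message; the regex is deliberately naive and may need tuning for your traffic):

# CONNECT is usually already defined in the default squid.conf
acl CONNECT method CONNECT
acl rawip_dst dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$
http_access deny CONNECT rawip_dst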


Of course, Ultrasurf (and i suppose TOR) tries to encapsulate 
traffic through the browser-configured proxy only as a last resort. If it finds a 
NAT-opened network, it will always try to go direct instead of through 
the proxy. So, it's mandatory that you do NOT have a NAT-opened network, 
especially on ports TCP/80 and TCP/443. If you have those ports opened 
in your NAT rules, then i really think you'll never get rid of those 
services, like TOR and Ultrasurf.





Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:

So, like I see, we (the admin) have no way to block it !!

On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee <bodycar...@live.com> wrote:



Date: Thu, 29 Sep 2011 11:24:55 -0400
From: charlie@gmail.com
To: squid-users@squid-cache.org
Subject: [squid-users] block TOR

There is any way to block TOR with my Squid ?

How do you get it working with tor in the first place?

I really tried for one of our users. Even used Amos's custom squid with SOCKS 
option but no go.

Jenny



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] squid 2.7 ... memory leak?

2011-10-12 Thread Leonardo Rodrigues

Em 12/10/11 00:07, jiluspo escreveu:

Care to explain how you were able to figure that out right away?
Anyway, after removing ubuntu's libcap2-dev ... leaks gone.




i've never used Ubuntu, so i may be wrong ... but at least on CentOS 
(and redhat-based distros), the -devel (apparently your -dev) packages 
are needed for compiling software that links against that library. Removing 
the -devel, on redhat-based distros, would make it impossible to 
recompile the software, but wouldn't prevent it from running properly, as at 
runtime it uses the libcap2 (in your case) package, not the libcap2-dev one.


so, removing the -dev probably won't affect squid's ability to run, 
but it would prevent you from being able to recompile it.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] reading external acl from squid.conf

2011-08-16 Thread Leonardo Rodrigues


of course !!!

acl bk src "/path/to/your/file.txt"

file.txt would be


192.168.1.2
192.168.2.38/32
192.168.20.0/24
10.8.0.0/16


(note the /32 is not needed; if / is not specified, it's automatically /32)

and after modifying the .txt file, you'll have to issue the command

squid -k reconfigure

to ask squid to re-read external files


Em 16/08/11 14:18, alexus escreveu:

is there a way to have this

acl bk src XX.XXX.XX.XX/32
acl bk src XXX.XX.XXX.XX/32

in a external file and have squid.conf reference to it?




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] delay_pool

2011-07-29 Thread Leonardo Rodrigues


because that download is certainly not passing through this squid 
box, for some reason we can't pinpoint because you haven't given any detail about 
your environment.



Em 29/07/11 14:32, Carlos Manuel Trepeu Pupo escreveu:

In my squid 3.0 STABLE1 I have the following configuration:

delay_pools 1

delay_class 1 1
delay_parameters 1 1024/1024
delay_access 1 allow all

But one user are downloading at 120 Kb/s

Why it's that ?



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Squid SDK

2011-06-27 Thread Leonardo Rodrigues


squid is a completely open-source project; you can simply grab its 
entire source code and make whatever modifications you need to achieve 
your goals.


if you're using squid installed by your distro, or downloaded in some 
binary format, and don't have its source, you can go to


www.squid-cache.org

and download it !

Em 27/06/11 04:33, Mohsen Pahlevanzadeh escreveu:

Dear all,

I know that squid doesn't release an SDK, but i need to use its
calls. What do i do? Do you know a good way of using squid's calls?




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Squid SDK

2011-06-27 Thread Leonardo Rodrigues


i think you can analyse the squidclient command line utility, which 
is in the squid source code, and find out what the '-m PURGE' option 
calls ... that would be what you need.


you can use that utility for PURGing URLs from the command line, for 
example:


squidclient -m PURGE http://whatever/path/file.txt


and i'm sorry i cannot assist you any further than that ... i'm 
really not a developer, i don't have a clue what the function names are 
or how to help you with more tech details.




Em 27/06/11 09:42, Mohsen Pahlevanzadeh escreveu:

I know it, and compiled it, But can i get hook or i must hack it for a
syscall? i need to on demand delete Object from cache same PURGE, But
want to use it in my code.
After it, I need to Push to cache in my code.
Can you get me name of those func instead of hack?




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Best method to refresh individual file

2011-05-12 Thread Leonardo Rodrigues



squidclient -m PURGE http://www.domain.com/something.js

will promptly exclude that file from squid caches and force a new 
fetch on the next access.
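
One hedged note from my side (not in the original reply): squid only honours PURGE if it is explicitly allowed, typically with something like:

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE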



Em 12/05/11 17:48, Andy Nagai escreveu:

What is the best way to make a single css or .js file stale so it can
immediately push the changed file to the client browser? Is the only way to change
the filename each time?





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Squid as a Hotspot ?

2011-04-20 Thread Leonardo Rodrigues


you need much more than an http proxy to achieve that. There are 
ready-made projects for that. Please check:

http://nocat.net/



Em 20/04/11 09:39, Daniel Shelton escreveu:

Does anyone know?  Can Squid be set up as a wifi Hotspot?

For example, with a splash page that users will see before connecting?

Dan




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] compile squid with large files option

2011-04-15 Thread Leonardo Rodrigues


i'm pretty sure that the --with-large-files is needed on 32 bit 
installations (x86). Large file support is the default on x86_64 machines.


please someone correct me if i'm wrong ...

Em 15/04/11 09:23, Helmut Hullen escreveu:

Hallo, Tóth,

Du meintest am 15.04.11:


What is the option when I compile squid to cache files over 2GB ?

 --with-large-files


How can I see what are the default compile options for an apt-get
based debian installation?


 squid -v




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Using login data of the user

2011-04-15 Thread Leonardo Rodrigues

Em 15/04/11 21:30, Joachim Wiedorn escreveu:

Hello,

for some days now I have been searching for a way to use the login data of the
user on his computer (client) for the authentication check while he is using
his browser.

As I understand it, if I activate authentication in /etc/squid3/squid.conf
then the browser asks the user for a username and password at the first web
access. But the user has already logged in on this client computer,
so why must I start this second authentication check of the user?

This way would be useful for use with LDAP or AD, but also with PAM
authentication.

Does anywhere know the solution?



if your users have already logged in on your AD network, you can 
have squid configured to use those authentication credentials for 
logging and filtering web access *WITHOUT* asking again for 
username/password.


squid has several authentication methods, and not all of them do this 
'transparent' authentication. The most basic squid authentication 
method, the 'basic' one, doesn't: 'basic' authentication will ALWAYS 
pop up an authentication prompt. To achieve transparent 
authentication, you'll probably have to use the ntlm, digest or negotiate 
authentication methods. Using these authentication methods *AFTER* 
having your linux box correctly joined to your AD network, you can have 
transparent authentication working. Users will open the browser, no 
authentication window will pop up and, even then, the username will be 
logged in the squid logs and can be used for filtering purposes.
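
For the negotiate (Kerberos) case, a hedged sketch only; the helper path, the service principal and the keytab setup are assumptions that vary by distro and domain:

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth_users proxy_auth REQUIRED
http_access allow auth_users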


***PLEASE*** do not confuse transparent authentication with 
transparent proxying. No authentication method will work on transparently 
intercepted requests (transparent proxy). For ANY authentication 
method to work, the proxy **WILL HAVE TO BE** correctly configured in the 
browser.


Google for 'squid ntlm_auth' or 'squid squid_kerb_auth' for plenty 
of documentation on how to configure and use these authentication 
methods. Google as well for documentation on joining your linux box to 
your AD network; this will be needed for those authentication methods to 
work.





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] question on recompiling

2011-04-11 Thread Leonardo Rodrigues

Em 09/04/11 22:28, Amos Jeffries escreveu:


 -L is the library .so location parameter.
 -I is the local library header files location parameter

In 2.7 this should be as simple as:

  LDFLAGS=-L/usr/local/lib64/ \
  CFLAGS=-I/usr/local/include/ \
  ./configure
  cd helpers/negotiate_auth/squid_kerb_auth/
  make


Hi Amos,

i've tried the steps you provided but the squid_kerb_auth binary is 
still getting linked against the system kerberos libraries



[root@mtzsvmsquid squid_kerb_auth]# ls /usr/local/include
com_err.h  gssapi  gssapi.h  gssrpc  kadm5  kdb.h  krb5  krb5.h  profile.h
[root@mtzsvmsquid squid_kerb_auth]#

[root@mtzsvmsquid squid_kerb_auth]# ls /usr/local/lib
krb5   libgssapi_krb5.so  libgssrpc.so.4
libk5crypto.so.3.1   libkadm5clnt.so libkadm5srv.so  
libkrb5.so libkrb5support.so.0
libcom_err.so  libgssapi_krb5.so.2libgssrpc.so.4.1  
libkadm5clnt_mit.so  libkadm5srv_mit.so  libkdb5.so  
libkrb5.so.3   libkrb5support.so.0.1
libcom_err.so.3libgssapi_krb5.so.2.2  libk5crypto.so
libkadm5clnt_mit.so.8libkadm5srv_mit.so.8libkdb5.so.5
libkrb5.so.3.3
libcom_err.so.3.0  libgssrpc.so   libk5crypto.so.3  
libkadm5clnt_mit.so.8.0  libkadm5srv_mit.so.8.0  libkdb5.so.5.0  
libkrb5support.so

[root@mtzsvmsquid squid_kerb_auth]#


[root@mtzsvmsquid squid_kerb_auth]# set | grep FLAGS
CFLAGS=-I/usr/local/include/
LDFLAGS=-L/usr/local/lib/
[root@mtzsvmsquid squid_kerb_auth]#

[root@mtzsvmsquid squid_kerb_auth]# ldd squid_kerb_auth | grep krb
libgssapi_krb5.so.2 = /usr/lib64/libgssapi_krb5.so.2 
(0x0036e920)

libkrb5.so.3 = /usr/lib64/libkrb5.so.3 (0x0036eb60)
libkrb5support.so.0 = /usr/lib64/libkrb5support.so.0 
(0x0036ec20)

[root@mtzsvmsquid squid_kerb_auth]#
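
One thing that may be worth trying here, assuming GNU binutils (a hedged guess on my side, not something confirmed in this thread): embed an rpath so the runtime linker prefers /usr/local/lib, e.g.

LDFLAGS="-L/usr/local/lib -Wl,-rpath,/usr/local/lib" \
CFLAGS="-I/usr/local/include" \
./configure
cd helpers/negotiate_auth/squid_kerb_auth/
make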





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






[squid-users] question on recompiling

2011-04-07 Thread Leonardo Rodrigues


Hi,

i have squid 2.7-stable9 compiled and running just fine on a CentOS 
5.5 x86_64 box.


because of some bugs in the krb5-libs shipped with CentOS, i need to 
recompile a single helper (negotiate/squid_kerb_auth), linking it against an 
updated krb5 lib which is already compiled and installed in /usr/local.


i'm having a hard time trying to do this recompilation ... i'm 
changing the -I flags and the Makefile INCs, but the final binary is still being 
linked against the /usr/lib64/libkrb* files:


[root@mtzsquid2 squid_kerb_auth]# ldd squid_kerb_auth | grep krb
libgssapi_krb5.so.2 = /usr/lib64/libgssapi_krb5.so.2 
(0x2b3d1836c000)

libkrb5.so.3 = /usr/lib64/libkrb5.so.3 (0x2b3d1859a000)
libkrb5support.so.0 = /usr/lib64/libkrb5support.so.0 
(0x2b3d18fae000)

[root@mtzsquid2 squid_kerb_auth]#


question: what would be the correct way to recompile just this 
squid_kerb_auth helper and link it against the kerberos libraries found in /usr/local 
instead of the system ones?


Thanks !

--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] question on recompiling

2011-04-07 Thread Leonardo Rodrigues


Hi Chad,

but doing that, even temporarily, would break other software that 
is using the krb5 libraries at that moment, wouldn't it?


and even if it doesn't break anything running, when i restore the original 
versions the binary will be linked against the wrong library files (the original 
ones, while it was compiled against the newer ones), won't it?



Em 07/04/11 10:21, Chad Naugle escreveu:

Correction -- Do NOT move the original versions, because it can break
things, just re-link the new copy under /usr/lib64 and see if everything
is working fine.


Chad Naugle <chad.nau...@travimp.com>  4/7/2011 9:17 AM

You can temporary move the /usr/lib64 versions, and copy the version
from /usr/local to /usr/lib64, just make sure they are linked
correctly.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Squid 3 with AD Integration has Sharepoint Access problem!!

2011-03-16 Thread Leonardo Rodrigues

Em 16/03/11 11:10, Amos Jeffries escreveu:

On 17/03/11 02:41, Go Wow wrote:

Squid 3 Stable 19



So a 3.0 series release. It will not work with relayed NTLM credentials.

You need to upgrade to 3.1 before further testing is worth doing.



squid 2.7 works fine as well with relayed NTLM credentials. If 
3.1 is not an option for you, for any reason, 2.7 would be OK as well, 
unless you have some specific requirement that demands the 3.x 
series. Both (2.7 and 3.1) support relayed NTLM credentials.





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] download quota

2011-03-11 Thread Leonardo Rodrigues

Em 11/03/11 10:28, Helmut Hullen escreveu:

Hallo, squid-users,

is there a simple ACL for setting a download quota for users or for MAC
addresses?

squid 3.1.11 and/or 3.2.0.5

squidquota seems to be dead, squid quota manager (sqm) too. Delay
pools don't work per user.

Viele Gruesse!
Helmut


no, there's no easy, 'squid-only' way to provide that. It can 
be achieved using external_acls and probably some coding.


Please check the mailing list archives, as this subject has been 
discussed several times.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] debugging ERR_INVALID_REQ condition

2011-03-03 Thread Leonardo Rodrigues

Em 02/03/11 18:41, Amos Jeffries escreveu:


In the Log I spy 411 Length Required being the status.

For a POST with no content-length header this is invalid according to 
HTTP/1.0 and extremely dangerous to permit.


HTTP/1.1 chunking makes this okay, I see the client has attempted to 
do that. Unfortunately squid-2.7 is HTTP/1.0 with basic support for a 
few 1.1 features and only really supports chunking on GET.


You could try altering the client app, so that it uses a HTTP/1.0 
compliant request without chunking its POSTs. Or upgrading to 
squid-3.1.10 or later.





Nice to hear that Amos, thanks for your analysis on the logs.

Well ... as this is government software that i have to use, 
there's no chance of changing it for another one. It's nice to hear that 
it's not an invalid request, but that it's considered invalid because of the 
basic HTTP/1.1 capabilities of squid 2.7.


I'll try, at first, to bypass those connections and not 
let them get caught by the transparent proxy rules. And i'll look into 
squid 3.1; it's already time for me to start studying/using 3.1.


Thanks again for the infos.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






[squid-users] debugging ERR_INVALID_REQ condition

2011-03-02 Thread Leonardo Rodrigues



Hi,

I have squid 2.7 stable9 running and i'm having problems with 
some software that i'm required to use by the Brazilian government. 
These programs generate HTTP connections, which are transparently intercepted 
(linux box) and directed to squid. I have enabled full debugging (ALL,9) 
and captured the connection, but i really can't understand what's wrong 
and, if it's something squid related, what i can do to work around it. 
The only interesting thing is that it seems to be triggering an 
ERR_INVALID_REQ error ...


would you mind helping me analyze the following connection log, 
generated with ALL,9 ?


is the connection really invalid HTTP/1.0 or HTTP/1.1? is there 
something i can tweak in squid to get this working? would bypassing these 
connections and letting them go through NAT instead of the transparent proxy 
be the only solution here?



Thanks ...

log is at:
http://pastebin.com/jzEt4Wjh


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it







Re: [squid-users] RE: Trunk grouping

2011-02-18 Thread Leonardo Rodrigues

Em 18/02/11 08:45, Malvin Rito escreveu:

Hi List,

We're upgrading our network switches and need to create multiple VLAN groups,
but since our Squid Proxy (Transparent Proxy) server should be accessible to
all VLAN groups, we need to set up trunk grouping on our Squid Proxy
box.
I have a VLAN-capable switch to manage and create the VLANs. Since the Squid
box is the one providing the internet connection to all users on the different VLAN
groups, Squid should be accessible on every VLAN group.

Does anyone have documentation or code on how to implement trunk grouping?




VLAN configuration has nothing to do with squid. If you need some 
VLAN configuration, that would be in your OS network stack, not in squid.


Just get your box into all your VLANs, using the needed switch and 
OS VLAN configurations, and squid will be fine with that.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Log file analysis

2011-01-21 Thread Leonardo Rodrigues

Em 21/01/11 09:49, Roberto escreveu:


For both functions, I need the reports/analysis to be available 
through web pages, so GUI softwares are out for me. For the first 
task, squeezer seems to be appropriate. For the second one, Sarg seems 
fine. Would those two be the appropriate choices? Are there better 
ones for these?





maybe not better, and maybe it won't replace other tools ... but the 
cachemgr.cgi shipped with squid can give you VERY interesting 
information about what's happening, performance counters, etc.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Squid HTTP compression

2011-01-11 Thread Leonardo Rodrigues


squid 2.7 cannot handle HTTP/1.1, which is needed for, among other 
things, this compression (chunked encoding) scenario. You'll simply not be able 
to get it working with squid 2.7.


i know squid 3.1 has made great improvements in HTTP/1.1 support, 
but as i don't use it, i cannot guarantee what's working and what's not.


check the squid 3.1 changelogs for HTTP/1.1-related items. 
Also, i'd suggest you search this mailing list's archives, as HTTP/1.1 
support in squid has been widely discussed here by other users.



Em 11/01/11 14:07, karj escreveu:

Hello everyone,

I have a small problem with squid (Squid 2.7.STABLE9).

I'm trying to optimize our IIS6 web portal. I activated HTTP compression
both for static and dynamic content, and this works OK when the browser asks
for content directly from the IIS server ...

BUT not when the browser asks for content through Squid:
content is effectively served by IIS, but the Content-Encoding
header is missing, so neither IE nor Firefox can handle it!



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Cache dir in tmpfs partition

2011-01-04 Thread Leonardo Rodrigues


i don't have a clue how squid would deal with 2 cache_dirs, one 
being bigger and the other being smaller but on faster storage ...


i really don't have a clue what to expect in that scenario; i have 
never done anything similar to that.


sorry, but i cannot help you with this idea ...

Em 02/01/11 20:56, David Touzeau escreveu:

Thanks Leonardo

If i create 2 caches :

one (the first) with 3GB of tmpfs memory,
a second with 500GB of hard disk.

Do you think that squid will increase performance using the first one,
or does it make no sense?



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Cache dir in tmpfs partition

2011-01-02 Thread Leonardo Rodrigues


squid needs some (a lot, in fact) RAM to keep the indexes of the objects 
stored in the cache. You can use RAM to create a tmpfs and use it for the 
cache, of course you can. But you can't forget that squid uses RAM for other 
things as well.


Giving 6 of the 8 GB to the tmpfs would probably leave very little RAM 
for squid (and all the rest of the processes/OS stuff) and, eventually, 
squid (and the whole OS) would start swapping, and then your performance 
would become VERY bad.


With the correct calculations of RAM usage by squid and other 
processes, as well as some monitoring, you can surely use some of the 
available RAM to create a tmpfs and have squid caching things there. But 
6 out of 8 is, i think, simply too much.
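
Just to illustrate the mechanics (a hedged sketch; the mount point, sizes and ownership below are assumptions, not values from the thread):

mount -t tmpfs -o size=2g tmpfs /var/spool/squid-ram
chown squid:squid /var/spool/squid-ram

and then, in squid.conf, a cache_dir somewhat smaller than the tmpfs itself:

cache_dir aufs /var/spool/squid-ram 1800 16 256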



Em 02/01/11 10:24, David Touzeau escreveu:

Dear

I have a server with 8GB of memory.

I would like to know if there is any benefit in creating a 6GB cache in
memory with a tmpfs partition?

best regards




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Getting a very specific source code version of squid?

2010-12-29 Thread Leonardo Rodrigues


2.6.26-2-amd64 is not an 'official' release name. This is the 
release name of the package your distro shipped, not the squid team's. And 
your searches took you to the most interesting piece of information: there was NO 
squid 2.6.26; the last 2.6 release was STABLE23.


So, 2.6.26-2-amd64 seems to really be your KERNEL version, not your 
squid version.



Em 28/12/10 20:23, Roberto Franchesco escreveu:

I'm not sure if this is the right place to ask this or not but...

I need to get a hold the source code for a very specific version of
Squid (Version 2.6.26-2-amd64)  and I can only find 2.6.23Stable on
the squid website.

Can anyone tell me where I could find more versions of the source code?

-Rob



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] priority rules in squid.conf

2010-11-22 Thread Leonardo Rodrigues

Em 22/11/10 10:01, Riccardo Castellani escreveu:

Regarding the precedence of rule evaluation: when I open my browser, which rule does
Squid analyze?

I think rule 6, but how does Squid know whether the client has to use LDAP
authentication or whether to look in the file 'onlyforip' to grant Internet
access by IP address?
I think Squid first has to look at rules 9 and 10, so I think there is
a priority of rules which is not dependent on the rule sequence?!

I'd like to clear up my doubt.


Rules are evaluated in the exact order you configure them in 
squid.conf. There's no magic and no trick involved: if you want some 
rule A to be evaluated before another rule B, simply put A before B in 
squid.conf.


If you want to allow some IPs without authentication, simply put that 
rule (or rules) before the ones that require authentication.


evaluation is linear; just order your rules logically to achieve 
what you need.
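
A tiny hedged illustration of that ordering (the names and addresses are made up):

acl noauth_hosts src 192.168.1.10 192.168.1.11
acl authed proxy_auth REQUIRED

http_access allow noauth_hosts
http_access allow authed
http_access deny all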



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] possible bug on 2.7S9

2010-11-08 Thread Leonardo Rodrigues


Em 07/11/2010 01:45, Amos Jeffries escreveu:


Indicating that your NAT rules are incorrect.

The above line is simply forcing Squid to send from 127.0.0.1. It 
would only have any effect if your NAT intercept rules were forcing 
all localhost traffic back into Squid.


Removing the above line may mean that you are simply shifting the 
problem from your Squid to some web server elsewhere. Your Squid will 
be passing it requests for http://localhost:8080/...;. The upside is 
that at least it will not be a DoS flood when it arrives there.



Hi Amos,

Thanks for your tips, they made me realize that i was doing 
some 'dangerous' configuration. I have just adjusted things here, 
changed the transparent port and made the http_access 
rules a little more secure to protect localhost_to access.


Thanks !


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






[squid-users] possible bug on 2.7S9

2010-11-06 Thread Leonardo Rodrigues


Hi,

i'll try to describe, with as much detail as i can, what i think is 
something like a forwarding-loop-detection bug in 2.7S9.


i have squid 2.7S9 running on a CentOS 5.5 x64 box which has 4 
NICs. 3 NICs are for internal networks (192.168.x) and 1 NIC is for 
the internet (189.73.x.x). It was built with:


[r...@firewall squid]# squid -v
Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/usr' '--exec-prefix=/usr/bin' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--libexecdir=/usr/bin' 
'--sysconfdir=/etc/squid' '--datadir=/var/squid' '--localstatedir=/var' 
'--enable-removal-policies=heap,lru' '--enable-storeio=ufs,aufs,null' 
'--enable-delay-pools' '--enable-http-violations' '--with-maxfd=8192' 
'--enable-async-io=8' '--enable-err-languages=Portuguese English' 
'--enable-default-err-language=Portuguese' '--enable-snmp' 
'--disable-ident-lookups' '--enable-linux-netfilter' 
'--enable-auth=basic digest ntlm negotiate' 
'--enable-basic-auth-helpers=DB LDAP NCSA SMB' 
'--enable-digest-auth-helpers=password ldap' 
'--enable-external-acl-helpers=ip_user ldap_group session wbinfo_group' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-ntlm-auth-helpers=fakeauth no_check' '--enable-useragent-log' 
'--enable-referer-log' '--disable-wccp' '--disable-wccpv2' 
'--enable-arp-acl' '--with-large-files' '--enable-large-cache-files' 
'--enable-ssl' '--enable-icmp'



i've setup squid with something like:

acl localhost src 127.0.0.1/255.255.255.255
acl localhost_to dst 127.0.0.1/255.255.255.255

acl network1 src 192.168.1.0/255.255.255.0
acl network1_to dst 192.168.1.0/255.255.255.0

acl network2 src 192.168.2.0/255.255.255.0
acl network2_to dst 192.168.2.0/255.255.255.0

acl network3 src 192.168.3.0/255.255.255.0
acl network3_to dst 192.168.3.0/255.255.255.0

http_port 8080 transparent
http_port 3128 transparent

tcp_outgoing_address 127.0.0.1 localhost_to
tcp_outgoing_address 192.168.1.1 network1_to
tcp_outgoing_address 192.168.2.1 network2_to
tcp_outgoing_address 192.168.3.1 network3_to
tcp_outgoing_address 189.73.x.x all



config is OK, it runs just fine.

the problem is that, on a given day, squid stops responding to new connections 
and i have to stop it (service squid stop). After searching the logs, i 
found some interesting requests:



1288136326.944  48437 192.168.2.15 TCP_MISS/000 0 GET 
http://localhost:8080/sync/sis/index.php - DIRECT/127.0.0.1 -
1288136326.944  48426 127.0.0.1 TCP_MISS/000 0 GET 
http://localhost:8080/sync/sis/index.php - DIRECT/127.0.0.1 -

(and this second line repeated about 13000 times)

and during these, i got also on cache.log:

2010/10/26 21:37:59| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:15| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:31| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:38:48| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:39:04| WARNING! Your cache is running out of filedescriptors
2010/10/26 21:39:20| WARNING! Your cache is running out of filedescriptors

i'm running with 8192 file descriptors on a 150-client network; 
that's more than enough file descriptors for normal usage.


(from cache.log)
2010/10/31 12:27:50| Starting Squid Cache version 2.7.STABLE9 for 
x86_64-unknown-linux-gnu...

2010/10/31 12:27:50| Process ID 16093
2010/10/31 12:27:50| With 8192 file descriptors available


Well ... after finding that, i tried to reproduce it by doing some 
requests to localhost:8080 on the 8080 squid port, and i could successfully 
reproduce it, every time, with the above squid.conf configuration.


after some tries, i have found that:

1) removing the:
tcp_outgoing_address 127.0.0.1 localhost_to

would avoid the problem and make the forwarding-loop detection 
work fine


2) removing the transparent from
http_port 8080 transparent

would avoid the problem too, even with the tcp_outgoing_address 
127.0.0.1 active



the question is ... should squid NOT detecting this forwarding loop be 
expected with this transparent + tcp_outgoing_address combination? 
Are we talking about a bug or about expected behavior? Is 
there any other information i could provide to help track this down?





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] when will squid-2.7.STABLE10 be released?

2010-10-01 Thread Leonardo Rodrigues


have you filed a bug report for that? Trying to help find the 
problem usually means faster resolution times ...



Em 01/10/2010 10:16, Paul Khadra escreveu:

I hope that this release will fix the memory leak problem that I have with
squid-2.7.STABLE8.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Server Load

2010-09-30 Thread Leonardo Rodrigues

 Em 30/09/2010 08:01, Jordon Bedwell escreveu:

On 09/30/2010 04:27 AM, Mr. Issa(*) wrote:

Dear All,
I am facing a server load of 2.0 almost all the time, and the whole
32GB of RAM is consumed on the server used for squid production.
My question is: why is the 32GB of RAM consumed, and why is the server
LOADING all the time?
The squid is used in transparent mode, and cache_replacement_policy 
heap LFUDA

memory_replacement_policy heap GDSF



client_http.hits = 45.562854/sec 


with that request rate, logs could be a problem as well. Do you use 
your logs for something? If not, try disabling them, especially 
store.log and access.log


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Can squid be configured as SMTP/SMTPS proxy?

2010-09-30 Thread Leonardo Rodrigues

 Em 30/09/2010 13:38, Alona Rossen escreveu:

Can squid be configured as SMTP/SMTPS proxy?




squid is NOT a native smtp proxy.

although, with the CONNECT method, mostly used for https connections in the 
squid context, you can connect to any port and, indeed, you can have an 
SMTP session through squid. For that you would need:


1) ACLs that allow CONNECT to TCP port 25 ... the default configuration, 
the SSL_ports acl specifically, does not allow that
2) your smtp/smtps/whatever client would need to know how to tunnel 
connections through an https proxy


If you can achieve both (#1 is easy, it only takes a few configuration 
lines, sketched below), you could successfully tunnel ANY protocol through 
Squid using the CONNECT method, including your SMTP and SMTPS.
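
A minimal sketch of point #1, assuming the stock squid.conf ACL names 
(Safe_ports and SSL_ports) are still in place:

# add TCP port 25 to the ports Squid is willing to contact at all
acl Safe_ports port 25
# add TCP port 25 to the ports a CONNECT tunnel may target
acl SSL_ports port 25
# the stock "http_access deny !Safe_ports" and
# "http_access deny CONNECT !SSL_ports" rules then stop blocking port 25

Needless to say, open this up only for the clients that really need it, 
for exactly the abuse reason described below.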


In the past (and probably still today) there were several 
viruses/zombies that scanned for open HTTP/HTTPS proxy machines and, 
when they found one, sent spam through them using exactly this: CONNECT 
to TCP port 25 of mail servers and, once connected, delivering the spam. 
This is REAL, it happened and probably still happens today. That said, 
with a capable SMTP/SMTPS/POP3/POP3S/IMAP4/IMAP4S/whatever client, you 
CAN successfully tunnel those connections through Squid!





--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] performance question, 1 or 2 NIC's?

2010-08-28 Thread Leonardo Rodrigues

Em 28/08/2010 12:29, Andrei escreveu:


I'm setting up a transparent Squid box for 300 users. All requests
from the router are sent to the Squid box. Squid box has one NIC,
eth0. This box receives requests (from clients) and catches content
from the web using this one NIC on its one WAN port, eth0.

Question: would it improve performance of the Squid box if I was
receiving requests (from the clients) on eth0 and caching content on
eth1? In other words, is there a benefit of using two NIC's vs. one?
This is a public IP/WAN Squid box. Both eth0 and eth1 would have a WAN
(public IP) address.


I'm on a 12Mb line.
   



Your limitation is your 12Mb line ... any decent hardware can 
handle that with no problem at all. ANY 100Mbit NIC, even a cheap/generic 
onboard one, can handle 12Mbit easily.


I really don't think adding another NIC will improve your 
performance, given your 12Mbit WAN limitation.



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Modifying and redistribution of Squid

2010-08-24 Thread Leonardo Rodrigues


Modifying configuration files only is not a 'derivative work'. 
A derivative work is when you modify the sources of the software.

I really don't think you need to notify anyone since, at least from 
what you told us, you did not modify Squid in any way.



Em 24/08/2010 18:24, Ryan escreveu:

Hi,
I have modified squid and want to redistribute it. All I did was
modify the configuration file, but I saw this in the license: "You
must notify IRTF-RD regarding your distribution of the derivative work."
How do I notify IRTF-RD?

   



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Re: Dynamic delay pool status

2010-08-20 Thread Leonardo Rodrigues



Thank you for your answer. I'm sorry if I misunderstood something, but
are you telling me, for example, to allow 1Mbit of youtube traffic between
1PM and 2PM?
   



Sure, you can do it!

acl 1PMto2PM time 13:00-14:00
acl youtube dstdomain .youtube.com

# let's create 2 delay pools
# one for youtube traffic between 1PM and 2PM
# the other for youtube traffic at any other time
delay_pools 2

# both delay pools are class 1
delay_class 1 1
delay_class 2 1

# let's define who goes to delay pool #1
# and who goes to delay pool #2
delay_access 1 allow youtube 1PMto2PM
delay_access 1 deny all

delay_access 2 allow youtube
delay_access 2 deny all

# now let's define the bandwidth
# 131072 = 1Mbit ... 131072 is the number of bytes per second
# delay_parameters requires the bandwidth in BYTES PER SECOND
delay_parameters 1 131072/131072
delay_parameters 2 -1/-1



With this configuration you would get youtube bandwidth limited to 1Mbit 
between 1PM and 2PM, and unlimited at other times.


That's just an example; adjust the settings to match your needs.

Check your squid.conf.default and look for the delay_* parameters for 
further explanations and options.
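
One usage note, assuming delay pools were compiled in (the 
--enable-delay-pools build option): after adding these lines, reload the 
configuration with

squid -k reconfigure

so the new pools are created and the limits start being applied to requests.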





Mmmm... Or are you telling me to allow 1Mbit of youtube traffic for a short
period of time, say 5 min?

Regards,

Jean-Baptiste


   



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





