Re: [squid-users] TCP_DENIED/403 errors when ads blocking is activated

2022-12-03 Thread Nicolas

You're right, Matus. That was the issue.
I replaced, as you suggested:
acl ads dstdom_regex "/etc/squid/ad_block.txt"
with
acl ads dstdomain "/etc/squid/ad_block.txt"
and now it works.

Thank you very much, Matus !

Have a nice day.

Nicolas.

On 03/12/2022 at 15:02, Matus UHLAR - fantomas wrote:

On 03.12.22 13:52, Nicolas wrote:

I installed squid on one of my servers, in order to block ads.

When I do not activate ads blocking, it works fine.
However, when I do activate ads blocking, some websites are not
accessible.
I can browse www.google.com for example, but I can't access
www.linuxhint.com and a LOT of other websites.

Here's what appears in access.log :
1670071413.742  0 192.168.228.145 TCP_DENIED/403 3985 CONNECT
linuxhint.com:443 - HIER_NONE/- text/html

Here's my squid.conf file :



acl ads dstdom_regex "/etc/squid/ad_block.txt"
http_access deny ads



curl -sS -L --compressed \
"http://pgl.yoyo.org/adservers/serverlist.php?hostformat=nohtml&showintro=0&mimetype=plaintext" \
> /etc/squid/ad_block.txt
which is on my server :
-rw-r--r-- 1 root root 60609  2 déc.  16:40 /etc/squid/ad_block.txt

I don't see anything special in that file, which contains for example :
1-1ads.com
101com.com
101order.com
123freeavatars.com
180hits.de
180searchassistant.com


the "t.co" matches.

there are no regexes in that file; you should probably use "dstdomain"
instead.

regexes match . as any character and match in the middle of strings.
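Matus's point can be demonstrated in a few lines. This is a sketch assuming the list contains an entry like "t.co" (as mentioned above), comparing dstdom_regex's unanchored-regex behaviour with a rough approximation of dstdomain's suffix matching:

```python
import re

entry = "t.co"             # one line from ad_block.txt (assumed for illustration)
host = "linuxhint.com"

# dstdom_regex semantics: each line is an unanchored regular expression,
# so '.' matches any character and the match may occur mid-string.
print(bool(re.search(entry, host)))              # True: "t.co" occurs inside "linuxhint.com"
print(bool(re.search(entry, "txco.example")))    # True: the '.' also matches 'x'

# Rough approximation of dstdomain semantics: exact match or subdomain suffix.
def dstdomain_match(entry: str, host: str) -> bool:
    return host == entry or host.endswith("." + entry)

print(dstdomain_match(entry, host))              # False: linuxhint.com is unrelated to t.co
```

With dstdomain, each line is treated as a literal domain, which is exactly the format of the yoyo.org list.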



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TCP_DENIED/403 errors when ads blocking is activated

2022-12-03 Thread Nicolas

Hello,

I installed squid on one of my servers, in order to block ads.

When I do not activate ads blocking, it works fine.
However, when I do activate ads blocking, some websites are not accessible.
I can browse www.google.com for example, but I can't access www.linuxhint.com 
and a LOT of other websites.

Here's what appears in access.log :
1670071413.742  0 192.168.228.145 TCP_DENIED/403 3985 CONNECT 
linuxhint.com:443 - HIER_NONE/- text/html

Here's my squid.conf file :
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10  # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12  # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl ads dstdom_regex "/etc/squid/ad_block.txt"
http_access deny ads
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
include /etc/squid/conf.d/*
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 8080
cache_dir ufs /cachesquid 600 16 256
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

Here's how I got the ad_block.txt file :
curl -sS -L --compressed \
"http://pgl.yoyo.org/adservers/serverlist.php?hostformat=nohtml&showintro=0&mimetype=plaintext" \
> /etc/squid/ad_block.txt
which is on my server :
-rw-r--r-- 1 root root 60609  2 déc.  16:40 /etc/squid/ad_block.txt

I don't see anything special in that file, which contains for example :
1-1ads.com
101com.com
101order.com
123freeavatars.com
180hits.de
180searchassistant.com

There's only one occurrence of linux in that file:
grep -i "linux" /etc/squid/ad_block.txt
banner.linux.se

Do you know why I got those 403 errors? I gave one example only 
(www.linuxhint.com) but a LOT of websites are not accessible anymore as soon as 
I add that line in the squid.conf file :
http_access deny ads

Thank you for your help.

Nicolas.



Re: [squid-users] How to catch a big spender ?

2019-03-25 Thread Nicolas Kovacs
On 25/03/2019 at 20:15, Heiler Bemerguy wrote:
> We've seen some high upload bandwidth usage on our router graphs and
> we'd like to know what was happening at that time...
> 
> Any tools or tricks to know that? I bet most of you have had this
> "curiosity" already too lol

Here's what I use to catch bandwidth hogs in our local network:

https://www.microlinux.fr/squidanalyzer-centos-7/

Cheers,

Niki

-- 
Microlinux - Solutions informatiques durables
7, place de l'église - 30730 Montpezat
Site : https://www.microlinux.fr
Mail : i...@microlinux.fr
Tél. : 04 66 63 10 32


[squid-users] Replace SquidGuard with ufdbguard : configuration examples ?

2019-03-18 Thread Nicolas Kovacs
Hi,

I've been running the Squid + SquidGuard combination for quite some time
in our local school. I'm also filtering HTTPS connections using the
Squid SSL Bump functionality.

I'd like to test ufdbguard, since SquidGuard doesn't seem to be
maintained anymore, and it's also quite RAM-consuming.

I've read the PDF manual of ufdbguard, but before going any further, I'd
like to ask. Do any of you guys here use the Squid + ufdbguard
combination ? And if this is the case, could you perhaps send me a few
working configuration files ? I'm currently fiddling with a local
sandbox installation, and I have some trouble putting the pieces together.

Cheers from the sunny South of France,

Niki Kovacs


Re: [squid-users] How to configure a "proxy home" page ?

2018-03-25 Thread Nicolas Kovacs
On 25/03/2018 at 13:08, Yuri wrote:
> The problem is not install proxy CA. The problem is identify client
> has no proxy CA and redirect, and do it only one time.

That is exactly the problem. And I have yet to find a solution for that.

The current method is to instruct everyone - with a printed paper in the office
- to connect to proxy.company-name.lan and then get further instructions
from the page. This works, but an automatic splash page would be more
elegant.

Niki



Re: [squid-users] How to configure a "proxy home" page ?

2018-03-16 Thread Nicolas Kovacs
On 16/03/2018 at 13:43, Yuri wrote:
> I guess better way to do this is create special ACL to catch exactly 
> certificate error and then redirect by 302 using deny_info to proxy
> page with explanation and certificate.

This sounds like the way to go.

I just removed the root certificate from one of the clients and then
tried to open a few HTTPS sites. Invariably, I get the following error
code:

SEC_ERROR_UNKNOWN_ISSUER

So how would I tell Squid in its own syntax to go to
http://nestor.microlinux.lan when it encounters such an error ? Is this
a trivial task, or more complicated to put in practice ?
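For reference, deny_info does accept a 3xx status prefix, so the redirect half of Yuri's suggestion would look roughly like the untested sketch below. The missing_ca ACL is purely hypothetical: identifying clients that lack the root CA is exactly the unsolved part, since Squid has no ready-made ACL for that condition.

```
# Hypothetical sketch -- the missing_ca ACL does not exist as such;
# defining it is the open problem discussed in this thread.
acl missing_ca ...
http_access deny missing_ca
deny_info 302:http://nestor.microlinux.lan/ missing_ca
```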

BTW, this would be the last piece in my puzzle, and my installation
would be perfect if I got this to work.

Cheers,

Niki



[squid-users] How to configure a "proxy home" page ?

2018-03-16 Thread Nicolas Kovacs
Hi,

I have Squid + SquidGuard + SquidAnalyzer running on my LAN server as a
transparent cache + filtering proxy, and it's working real nicely.

When a client in my company wants to connect to the wifi, all he or she
has to do is this:

1. Connect to http://nestor.microlinux.lan

2. Download the nestor.microlinux.lan.der certificate

3. Install the certificate in the web browser (Firefox does it
automatically)

4. Surf the web

Now I wonder if there is a way to configure this page as a "proxy home
page" of sorts. Users who don't have the certificate installed
normally get a big fat HTTPS error as soon as they connect to a secure
site. So what I'd like to do is redirect "new" traffic to
http://nestor.microlinux.lan, which also explains what is happening.

I don't really know how to go about that, or if it is even possible.
Maybe some basic form of authentication ?

Any suggestion ?

Cheers,

Niki


Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-15 Thread Nicolas Kovacs
On 14/03/2018 at 15:02, Yuri wrote:
> I can confirm - ufdbguard is up-to-date and very good customizable 
> replacement for SquidGuard. Using ufdbguard last three years gives 
> perfect results and bring functionality which is absent in
> SquidGuard.
> 
> ufdbguard has good support of https (including SSL Bump), incredible 
> fast (it is thread-aware) and has small memory footprint.

Thanks everybody for your numerous suggestions.

I fiddled around much more with Squid, and for the moment, I got my
existing SquidGuard configuration from Slackware working on CentOS.

https://blog.microlinux.fr/squidguard-centos/

As soon as I have a bit of time on my hands to experiment, I'll take a
look at ufdbguard.

For the moment, SquidGuard works perfectly here.

Cheers,

Niki



Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Nicolas Kovacs
On 14/03/2018 at 14:46, Marcus Kool wrote:
> ufdbGuard is the tool that you need.
> It is an old fork of squidGuard with many new features, very good
> performance and it has regular maintenance.
> If you have a question, you can ask the support desk at
> www.urlfilterdb.com.
> You will get an answer from me or a colleague.

Thanks for the heads-up.

On the school server running SquidGuard, I'm using the blacklist
collection of the University of Toulouse, which has several million (!)
URLs/domains in about a hundred different categories.

Will I be able to use these blacklists with ufdbGuard ?

Niki



Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Nicolas Kovacs
On 14/03/2018 at 14:06, Amos Jeffries wrote:
> Then the first thing you and your readers need to be clear on is that
> SquidGuard was end-of-life'd many years ago. It is long overdue for
> removal or replacement. This has impact such as the one you saw on HTTPS
> traffic support which was only added to Squid-3 after SG stopped being
> maintained.
> 
> The best thing to be doing these days is upgrading simple configs like
> the one you presented earlier to using modern Squid features directly in
> squid.conf - as I recommended earlier.
> 
> For very complex configurations (or emergency upgrades) the ufdbguard
> tool can be used as a drop-in replacement for squidGuard while the
> config migration is evaluated. It handles the HTTPS situation better
> than SG does, but for simple configs any helper is still very much
> overkill and a performance drag.

This is the configuration which is currently in use at our local school.
The server is running Squid + SquidGuard on Slackware 14.1. We're
planning to move to CentOS 7 in June 2018, so I'd like to use this
working configuration without having to jump through flaming hoops or
having to reinvent the wheel.

--8<---
# /etc/squidguard/squidguard.conf

dbhome /var/lib/squidguard/dest
logdir /var/log/squidguard

time couvrefeu {
  weekly mtwhf 00:00-07:00
  weekly smtwh 22:30-24:00
}

src direction {
  ip 192.168.10.2-192.168.10.49
  ip 192.168.10.246-192.168.10.249
}

src scholae {
  ip 192.168.10.50-192.168.10.210
}

# Adult sites
destination adult {
  domainlist adult/domains
  urllist adult/urls
  log adult
}

# Racist, antisemitic and hate-inciting sites
destination agressif {
  domainlist agressif/domains
  urllist agressif/urls
  log agressif
}

# Audio- and video-oriented sites
destination audio-video {
  domainlist audio-video/domains
  urllist audio-video/urls
  log audio-video
}

# Blogs
destination blog {
  domainlist blog/domains
  urllist blog/urls
  log blog
}

# Sites for cleaning up and updating computers
destination cleaning {
  domainlist cleaning/domains
  urllist cleaning/urls
  log cleaning
}

# Sites describing how to make bombs, poison, etc.
destination dangerous_material {
  domainlist dangerous_material/domains
  urllist dangerous_material/urls
  log dangerous_material
}

# Download sites
destination download {
  domainlist download/domains
  urllist download/urls
  log download
}

# Drugs
destination drogue {
  domainlist drogue/domains
  urllist drogue/urls
  log drogue
}

# Financial news
destination financial {
  domainlist financial/domains
  urllist financial/urls
  log financial
}

# Forums
destination forums {
  domainlist forums/domains
  urllist forums/urls
  log forums
}

# Online gambling, casinos
destination gambling {
  domainlist gambling/domains
  urllist gambling/urls
  log gambling
}

# Hacking and cyberattack sites
destination hacking {
  domainlist hacking/domains
  urllist hacking/urls
  log hacking
}

# Educational sites
destination liste_bu {
  domainlist liste_bu/domains
  urllist liste_bu/urls
  log liste_bu
}

# Mobile phone ringtones
destination mobile-phone {
  domainlist mobile-phone/domains
  urllist mobile-phone/urls
  log mobile-phone
}

# Phishing, banking scams, etc.
destination phishing {
  domainlist phishing/domains
  urllist phishing/urls
  log phishing
}

# Advertising
destination publicite {
  domainlist publicite/domains
  urllist publicite/urls
  log publicite
}

# Web radio
destination radio {
  domainlist radio/domains
  urllist radio/urls
  log radio
}

# Redirectors 1/3
destination redirector {
  domainlist redirector/domains
  urllist redirector/urls
  log redirector
}

# Redirectors 2/3
destination strict_redirector {
  domainlist strict_redirector/domains
  urllist strict_redirector/urls
  log strict_redirector
}

# Redirectors 3/3
destination strong_redirector {
  domainlist strong_redirector/domains
  urllist strong_redirector/urls
  log strong_redirector
}

# Sites explaining how to cheat on exams
destination tricheur {
  domainlist tricheur/domains
  urllist tricheur/urls
  log tricheur
}

# Warez
destination warez {
  domainlist warez/domains
  urllist warez/urls
  log warez
}

# Webmail
destination webmail {
  domainlist webmail/domains
  urllist webmail/urls
  log webmail
}

# Games
destination games {
  domainlist games/domains
  urllist games/urls
  log games
}

# Educational games
destination educational_games {
  domainlist educational_games/domains
  urllist educational_games/urls
  log educational_games
}

# Adult sites
destination mixed_adult {
  domainlist mixed_adult/domains
  urllist mixed_adult/urls
  log mixed_adult
}

# Download sites
destination filehosting {
  domainlist filehosting/domains
  urllist filehosting/urls
  log filehosting
}

# Change of ownership
destination reaffected {
  domainlist reaffected/domains
 

Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Nicolas Kovacs
On 14/03/2018 at 13:39, Nicolas Kovacs wrote:
> Yes, I do. Because this is part of a step-by-step course about
> SquidGuard, which worked perfectly under Slackware Linux. And my
> filtering rules are becoming increasingly complex.

FYI, this is the course. It's a HOWTO in simple text format.

I'm currently trying to adapt this to CentOS 7.

Niki


SquidGuard HOWTO (c) Nicolas Kovacs <i...@microlinux.fr>


Last revision: May 5, 2015

This HOWTO describes how to set up the SquidGuard redirector for a
Squid proxy server under Slackware.

  * Overview and prerequisites
  * Installation
  * The explanation page
  * A simple redirection
  * Fetching the blacklists and whitelists
  * A simple filter for problematic content
  * Automating the operations


Overview and prerequisites
--------------------------

SquidGuard is a plug-in for Squid, so a working installation of the
latter is required.


Installation
------------

Install the 'squidGuard' package from the MLES package repository.


The explanation page
--------------------

When SquidGuard denies access to a page, it is always a good idea to
explain the reasons for the refusal to the users. To begin with, we will
set up a warning page, hosted on the server itself.

The 'template/squidguard/html/' directory provides a sample explanation
page.

For setting up a local web page, see the Apache-HOWTO.


A simple redirection
--------------------

We do not yet have blacklists, whitelists or a database, but we can
already run a first redirection test:

  1. the machine 192.168.2.2 is not filtered

  2. all other machines on the local network are blocked

SquidGuard is configured through the configuration file
'/etc/squidguard/squidguard.conf'. Back up the original configuration
file:

  # cd /etc/squidguard
  # mv squidguard.conf squidguard.conf.orig

Create a minimal configuration file like this:

--8<-- /etc/squidguard/squidguard.conf ---
dbhome /var/lib/squidguard
logdir /var/log/squidguard

src admin {
  ip 192.168.2.2
}

acl {
  admin {
pass any
  }
  default {
pass none
redirect http://squidguard.nestor/avertissement.html
  }
}
--8<--

  > The 'dbhome' directive tells SquidGuard where to find the lists
database (which we do not have yet).

  > The 'logdir' directive specifies where we want to collect the logs.

  > The sources define groups of clients. Here, we define a single IP
address.

  > The 'acl' or "Access Control Lists" define which source may or may
not go to which destination(s).

  > When a destination is not allowed, the 'redirect' directive serves
an explanation page to the client.

Now, Squid must be configured to use SquidGuard. Edit the file
'/etc/squid/squid.conf' and add this stanza at the end of the file:

--8<-- /etc/squid/squid.conf -
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidguard.conf
url_rewrite_children 5
--8<--

Reload Squid's configuration:

  # /etc/rc.d/rc.squid reload

Check that the change has been taken into account:

  # ps aux | grep squid | grep -v grep
  root  5043  ...  /usr/sbin/squid -F
  nobody5045  ...  (squid) -F
  nobody5068  ...  (squidGuard) -c /etc/squidguard/squidguard.conf
  nobody5069  ...  (squidGuard) -c /etc/squidguard/squidguard.conf
  nobody5070  ...  (squidGuard) -c /etc/squidguard/squidguard.conf
  nobody5071  ...  (squidGuard) -c /etc/squidguard/squidguard.conf
  nobody5072  ...  (squidGuard) -c /etc/squidguard/squidguard.conf

Now, we can try to browse the Internet:

  1. from the machine 192.168.2.2

  2. from a machine whose IP address is not 192.168.2.2


Fetching the blacklists and whitelists
--------------------------------------

In the examples below, we will use the blacklists and whitelists
maintained by the Centre de Ressources Informatiques of the University
of Toulouse. These lists are not part of SquidGuard. They can be fetched
manually like this:

  # cd /var/lib/squidguard
  # wget -c ftp://ftp.ut-capitole.fr/blacklist/blacklists.tar.gz
  # tar xvzf blacklists.tar.gz
  # cd blacklists

Each of the directories corresponds to a category (or "destination

Re: [squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Nicolas Kovacs
On 14/03/2018 at 13:33, Amos Jeffries wrote:
> You do not need SG or any fancy redirector helpers at all for that.

Yes, I do. Because this is part of a step-by-step course about
SquidGuard, which worked perfectly under Slackware Linux. And my
filtering rules are becoming increasingly complex.

Niki




[squid-users] Squid + SquidGuard : static block page not working

2018-03-14 Thread Nicolas Kovacs
Hi,

I've been working with Squid + SquidGuard for a few years, though only
on Slackware. I'm currently transferring my proxy expertise to CentOS 7,
and right now I'm having a little problem with that.

Squid works perfectly so far as a transparent HTTP + HTTPS cache proxy.

The next step is to add SquidGuard, so I installed it and edited the
most basic /etc/squid/squidGuard.conf file possible.

In this setup, my workstation (192.168.2.2) is allowed to access
anything on the Web, and all other client machines on the network are
blocked and should be redirected to the avertissement.html block page
for every request.

--8<--
# /etc/squid/squidGuard.conf
dbhome /var/squidGuard
logdir /var/log/squidGuard

src admin {
  ip 192.168.2.2
}

acl {
  admin {
pass any
  }
  default {
pass none
redirect http://nestor.microlinux.lan/avertissement.html
  }
}
--8<--

I appended the following lines to /etc/squid/squid.conf:

--8<--
# SquidGuard
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 5
--8<--

Now this setup sort of works. My workstation can access anything, other
clients are blocked. Unfortunately, the block page avertissement.html is
not displayed. Instead, I get a Squid error page:

  The following error was encountered while trying to retrieve the URL:
  https://http/*

  Unable to determine IP address from host name "http".

Any idea why my static block page avertissement.html is not displayed?

Cheers,

Niki


[squid-users] Distribute root certificate to clients

2018-03-12 Thread Nicolas Kovacs
Hi,

I have a few prospective clients who want/need to log and monitor all
their web traffic and asked me to find a viable solution for this.

After a couple of weeks of fiddling, I decided to opt for the
Squid+SquidAnalyzer setup, which works quite well. I have a sandbox
installation here in my office that already works quite satisfyingly.

While working out the solution (thanks again to you guys, you know who
you are), I took some extensive notes on my technical blog:

  * https://blog.microlinux.fr/squid-centos/

  * https://blog.microlinux.fr/squid-https-centos/

  * https://blog.microlinux.fr/squidanalyzer-centos/

  * https://blog.microlinux.fr/squid-exceptions/

I have yet one problem to tackle, and I already have a solution in mind.
Though I thought I'd rather ask here first, since this is a bit new to
me, and you guys have much more experience.

Most of my clients are small businesses with up to a few dozen client
PCs, and also wireless access.

The problem I'm currently facing is: how to provide an easy installation
of Squid's root certificate? During my tests, I wrote some short
instructions for my Linux clients with Firefox, Chrome and Konqueror:

https://blog.microlinux.fr/squid-https-centos/#navigateurs

Here's what I intend to do. Configure a local web page
http://proxy.company.lan where clients can download the certificate file
proxy.company.lan.der. This page also contains quick & dirty
instructions on how to install the certificate on the most popular
browsers/platforms (Chrome, Firefox, Safari, Internet Explorer).

Each company will also have a printed document, explaining how to access
the Internet. Something like this:

  1. Open http://proxy.company.lan in your browser.

  2. Download the proxy.company.lan.der certificate file.

  3. Follow instructions to import this file into your browser.

  4. Browse the web normally.

Before doing that, I thought I'd inquire how you guys go about that. As
a long-time Slackware user I've always been a fan of the KISS principle
(Keep It Simple Stupid), so I try to have a no-nonsense approach.

Any suggestions?

Cheers from the sunny South of France,

Niki



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 19:44, Yuri wrote:
> It's trivial to implement. Here is my config snippet:
> 
> # SSL bump rules
> acl DiscoverSNIHost at_step SslBump1
> acl NoSSLIntercept ssl::server_name_regex
> "/usr/local/squid/etc/acl.url.nobump"
> ssl_bump peek DiscoverSNIHost
> ssl_bump splice NoSSLIntercept
> ssl_bump bump all
> 
> acl.url.nobump fragment:
> 
> # Adobe updates (web installation)
> # This requires to splice due to SSL-pinned web-downloader
> (get|platformdl|fpdownload|ardownload[0-9])\.adobe\.com

I gave this configuration a spin on my local proxy, and it works great,
without special firewall rules.

Thanks very much! You made my day!

Niki



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 16:48, Alex Crow wrote:
> 
> It would be a lot easier to just create exceptions on the squid device
> for sites where bumping doesn't work, which causes them to be tunnelled
> or spliced rather than bumped. You can then at least use dstdomain or
> ssl:servername rules. dstdomain will let you tunnel or splice, whereas
> ssl servername you will only be able to splice as an SSL connection must
> already have been started AFAIK. Your firewall will probably need
> restarting every time one of the IP addresses behind those hostnames
> changes. Squid will at least do a lookup every request for dstdomain
> (you need a good DNS server nearby or on the squid box).

What would this configuration look like? Do you have a working example?

Niki



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 12:31, Amos Jeffries wrote:
> The whois system can provide info on the IP ranges owned by the
> companies like Google which own their own ranges.
> 
> 
> The alternative for ssl-bump is the splice action. For that you only
> need to know the server names each company uses.

I'd say the problem is solved.

I wrote a little blog article to wrap it up.

https://blog.microlinux.fr/squid-exceptions/

Cheers !

Niki



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 12:31, Amos Jeffries wrote:
> The whois system can provide info on the IP ranges owned by the
> companies like Google which own their own ranges.
> 
> 
> The alternative for ssl-bump is the splice action. For that you only
> need to know the server names each company uses.

OK, I got something that's starting to work.

# Exceptions
EXCEPTIONS=$(egrep -v '(^\#)|(^\s+$)' /usr/local/sbin/no-proxy.txt)
for EXCEPTION in $EXCEPTIONS; do
  $IPT -A PREROUTING -t nat -i $IFACE_LAN -d $EXCEPTION -j ACCEPT
done

# Squid
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3128 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3128 -j ACCEPT
$IPT -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d $SERVER_IP \
  --dport 80 -j REDIRECT --to-port 3128
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3129 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3129 -j ACCEPT
$IPT -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d $SERVER_IP \
  --dport 443 -j REDIRECT --to-port 3129
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3130 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3130 -j ACCEPT


And here's what the no-proxy.txt file looks like:

# Do not use the proxy for the following domains
#
# Crédit Coopératif
www.credit-cooperatif.coop
# Github
github.com
# Microlinux
microlinux.fr
microlinux.eu
# Squid
squid-cache.org
# Thunderbird
start.thunderbird.net

So far, it works fine.
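As an aside, the egrep filter used in the firewall script above can be sanity-checked on its own. A sketch using a throw-away copy of the file under /tmp (the pattern mirrors the script's, rewritten with POSIX character classes):

```shell
# Build a small sample file in the same format as no-proxy.txt
cat > /tmp/no-proxy.txt <<'EOF'
# Github
github.com
# Squid
squid-cache.org
EOF

# Same idea as the firewall script's filter: drop comment and blank
# lines, keeping only the domains that will be fed to iptables
EXCEPTIONS=$(grep -E -v '(^#)|(^[[:space:]]*$)' /tmp/no-proxy.txt)
echo "$EXCEPTIONS"
```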

Any suggestions ?

Niki




Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 11:17, Amos Jeffries wrote:
> The process is not getting anywhere close to caching being relevant. The
> error you mentioned earlier is in the TLS handshake part of the process.

I've experimented some more, and I have had partial success. Here, I'm
redirecting all HTTPS traffic *except* the traffic that goes to my bank:

iptables -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d
www.credit-cooperatif.coop --dport 443 -j REDIRECT --to-port 3129

This works because my bank is hosted on a single IP. As soon as I
replace that with a domain that's hosted on multiple IPs, I get this:

iptables -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d www.google.com
--dport 443 -j REDIRECT --to-port 3129

# firewall.sh
iptables v1.4.21: ! not allowed with multiple source or destination IP
addresses

So my question is: how can I write an iptables rule (or series of rules)
that redirects all traffic to my proxy, *except* traffic going to a given
list of domains?
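One way around the `! -d` multi-address limitation, sketched under the assumption that per-address rules are acceptable: instead of negating a multi-address destination inside one rule, emit an ACCEPT rule per resolved address *before* the catch-all REDIRECT (ACCEPT in the nat PREROUTING chain ends NAT processing, so those connections are never redirected). This dry-run sketch prints the commands rather than running them, with placeholder addresses standing in for `dig +short A www.google.com` output:

```shell
# Dry-run sketch: one exception rule per resolved address, inserted before
# the catch-all REDIRECT. IPS is a hard-coded placeholder for what
# `dig +short A www.google.com` would return (resolved at load time only).
IFACE_LAN=virbr0
IPS="142.250.74.36 142.250.74.68"
OUT=""
for ip in $IPS; do
  # ACCEPT in nat PREROUTING ends NAT processing: no redirect for this IP.
  OUT="${OUT}iptables -t nat -A PREROUTING -i $IFACE_LAN -p tcp -d $ip --dport 443 -j ACCEPT
"
done
OUT="${OUT}iptables -t nat -A PREROUTING -i $IFACE_LAN -p tcp --dport 443 -j REDIRECT --to-port 3129
"
printf '%s' "$OUT"
```

The trade-off is that the exceptions only cover addresses known at rule-load time, so the list has to be refreshed as DNS changes.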

Cheers,

Niki



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 09:24, Amos Jeffries wrote:
> What you need to start with is switch your thinking from "domains" to
> considering things in terms of connections and individual servers. Since
> "domain" is a URL concept, and URLs are all hidden inside the encrypted
> part of the traffic there is no knowing what that really is until after
> decryption.
> 
> However when dealing with servers and connections, the connections TLS
> SNI can tell you which *server* a client is connecting to and you can
> decide to do the splice action based on which servers you are having
> trouble with (not domains).
> 
> Or better yet, decide even earlier in your NAT system not to send that
> traffic to the proxy at all.

I tried to formulate your suggestion in my own words and sent it to the
CentOS mailing list, where I'm a regular, since this seems more to be of
an iptables-related problem ("earlier in the NAT system").

Here's my message:

--8<-

Hi,

I'm currently facing a quite tricky problem. Here goes.

I have setup Squid as a transparent HTTP+HTTPS proxy in my local
network. All web traffic gets handed over to Squid by an iptables script
on the server. Here's the relevant section in /etc/squid/squid.conf:

--8<-
# Proxy ports
http_port 3130
http_port 3128 intercept
https_port 3129 intercept ssl-bump \
  cert=/etc/squid/ssl_cert/amandine.sandbox.lan.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
--8<-

And here's the corresponding section of my firewall script:

--8<-
# Commands
IPT=/usr/sbin/iptables
SYS=/usr/sbin/sysctl
SERVICE=/usr/sbin/service

# Internet
IFACE_INET=enp2s0

# Local network
IFACE_LAN=virbr0
IFACE_LAN_IP=192.168.2.0/24

# Server
SERVER_IP=192.168.2.1

...

# Squid
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3128 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3128 -j ACCEPT
$IPT -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d $SERVER_IP \
  --dport 80 -j REDIRECT --to-port 3128
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3129 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3129 -j ACCEPT
$IPT -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d $SERVER_IP \
  --dport 443 -j REDIRECT --to-port 3129
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3130 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3130 -j ACCEPT
--8<-

This setup works nicely for the vast majority of web sites.

BUT: a handful of sites has some trouble with my local certificate. For
example, I can't sync my local Github repo anymore. Or my local OwnCloud
client spews back a warning message on every startup.

I asked on the Squid mailing list if there was a possibility to create
an exception for a list of domains, so that these can simply bypass the
proxy. The problem is, according to one of the developers, I have to
tackle that problem earlier in the process, e.g. in the firewall setup.

So here's what I want to do, in plain words:

1. Redirect all HTTP traffic (port 80) to port 3128. So far so good.

2. Redirect all HTTPS traffic (port 443) to port 3129. Equally OK.

AND...

3. DO NOT REDIRECT traffic that goes to certain domains, like:

  github.com
  credit-cooperatif.coop
  cloud.microlinux.fr
  squid-cache.org
  etc.

Ideally, these domains should be read from a simple text file.

Any idea how I could do that? I don't even know if this is theoretically
possible.

Cheers,

Niki



Re: [squid-users] Introduction & Squid ports

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 10:17, Amos Jeffries wrote:
> In your config you changed your 3128 to receiving port-80 (origin-form)
> syntax with "intercept". So port 3130 was necessary to takeover
> receiving of the normal proxy traffic.
> 
> The TLS wrappers on HTTPS need special handling to decrypt so that needs
> another port setup to do that decryption first and HTTP message handling
> after. "https_port" directive sets up a port for that.
> 
> NP: the "ssl-bump" flag does not mean simply receiving HTTPS traffic, it
> means specifically decrypting HTTPS traffic destined *to another server*
> - ie MITM at the TLS level. Which can be done for port-443 traffic OR
> for CONNECT messages in the proxy (port-3128) syntax traffic. Thus it is
> applicable on both https_port and http_port traffic respectively.

Thanks very much for your detailed answer!

Cheers!

Niki



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
On 11/03/2018 at 09:24, Amos Jeffries wrote:
> What you need to start with is switch your thinking from "domains" to
> considering things in terms of connections and individual servers. Since
> "domain" is a URL concept, and URLs are all hidden inside the encrypted
> part of the traffic there is no knowing what that really is until after
> decryption.
> 
> However when dealing with servers and connections, the connections TLS
> SNI can tell you which *server* a client is connecting to and you can
> decide to do the splice action based on which servers you are having
> trouble with (not domains).
> 
> Or better yet, decide even earlier in your NAT system not to send that
> traffic to the proxy at all.

I'm sorry, but I don't understand what you're saying.

Here's what I want. It's very simple.

Create a text file that contains a list of domains. For example:

  google.com
  hotmail.com
  github.com
  credit-cooperatif.fr

And then all connections that go to any one of these domains are not
cached, but simply pass through Squid.
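If the requirement really is only "not cached" (rather than "not intercepted"), a hypothetical squid.conf fragment along these lines would do it; note it would not fix the certificate errors, which happen before caching is relevant:

```
# Assumed file: one domain per line, dstdomain syntax (a leading dot
# matches subdomains as well).
acl nocache_domains dstdomain "/etc/squid/bypass-these-domains.txt"
cache deny nocache_domains
```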

Thanks,

Niki



[squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Nicolas Kovacs
Hi,

I have Squid setup as a transparent HTTP+HTTPS proxy in my local
network, using SSL-Bump.

The configuration works quite nicely, according to
/var/log/squid/cache.log and /var/log/squid/access.log.

This being said, I am having trouble with a handful of domains like
Github, or my OwnCloud installation. I have an OwnCloud server installed
at https://cloud.microlinux.fr, and every time I fire up a client, I have
to confirm the use of an untrusted certificate. And on my workstation, I
can't connect to my Github repository anymore. Here's the error I get.

  # git pull
  fatal: unable to access 'https://github.com/kikinovak/centos-
  7-desktop-kde/': Peer's certificate issuer has been marked as not
  trusted by the user.

So I thought the best thing to do is to create an exception for this
handful of domains with issues.

Can I configure some domains to simply bypass the proxy in my current
(transparent) setup? Ideally, the configuration should be able to read a
simple text file containing said domains, something like
/etc/squid/bypass-these-domains.txt. And then these bypass the proxy and
get treated regularly, as if there was no proxy?
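For later readers: with the peek/splice syntax introduced in Squid 3.5, a hypothetical fragment like the following reads such a file and tunnels matching servers through untouched, so no forged certificate is ever presented to those clients (the file name is the one suggested above; the ACL and step names are otherwise assumptions):

```
# Match the TLS SNI against a list of server names, one per line.
acl bypass_sni ssl::server_name "/etc/squid/bypass-these-domains.txt"
acl step1 at_step SslBump1
ssl_bump peek step1          # read the SNI without decrypting
ssl_bump splice bypass_sni   # tunnel these servers as-is
ssl_bump bump all            # intercept everything else
```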

Cheers,

Niki


[squid-users] Introduction & Squid ports

2018-03-10 Thread Nicolas Kovacs
Hi,

I'm new to this list, so let me introduce myself. I'm a 50-year-old
Austrian living in Montpezat (South France), and I'm the manager of a
small IT company with a focus on Linux and free software.

I've been using Squid for a few years, but only as a transparent HTTP
proxy. Here's my blog article (in French) about that configuration on
CentOS 7:

https://blog.microlinux.fr/squid-centos/

These last two weeks I've been experimenting quite a lot with using
Squid as a transparent HTTP+HTTPS proxy. I've also written a blog
article about this setup:

https://blog.microlinux.fr/squid-https-centos/

This configuration is running quite nicely, though I still have to sand
down a few rough edges. I went through quite a lot of trial and error,
using the Squid wiki as well as a handful of tutorials I found on the
Internet.

Here's the section of my squid.conf file defining ports:

--8<-
# Proxy ports
http_port 3130
http_port 3128 intercept
https_port 3129 intercept ssl-bump \
  cert=/etc/squid/ssl_cert/amandine.sandbox.lan.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
--8<-

And here's the corresponding section of my firewall script:

--8<-
# Commands
IPT=/usr/sbin/iptables
SYS=/usr/sbin/sysctl
SERVICE=/usr/sbin/service

# Internet
IFACE_INET=enp2s0

# Local network
IFACE_LAN=virbr0
IFACE_LAN_IP=192.168.2.0/24

# Server
SERVER_IP=192.168.2.1

...

# Squid
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3128 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3128 -j ACCEPT
$IPT -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d $SERVER_IP \
  --dport 80 -j REDIRECT --to-port 3128
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3129 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3129 -j ACCEPT
$IPT -A PREROUTING -t nat -i $IFACE_LAN -p tcp ! -d $SERVER_IP \
  --dport 443 -j REDIRECT --to-port 3129
$IPT -A INPUT -p tcp -i $IFACE_LAN --dport 3130 -j ACCEPT
$IPT -A INPUT -p udp -i $IFACE_LAN --dport 3130 -j ACCEPT
--8<-

This configuration works perfectly and gives me no errors whatsoever,
though I don't quite understand why I need all these ports. When I used
only HTTP, I had this configuration:

http_port 3128 transparent

So I wonder why it wasn't possible to have something like this:

http_port 3128 transparent
https_port 3129 transparent ssl-bump

I'm not sure about how the "intercept" mode works. As far as I
understand, connections to port 80 get redirected to port 3128 by the
firewall, but what then? Does "http_port 3128 intercept" mean that Squid
redirects these again and sends them to its internal port 3130?

Similarly, connections to port 443 get redirected to port 3129 by the
firewall, so far so good. But I don't understand how to read "https_port
3129 intercept". Again, does this mean that Squid redirects these to its
internal port 3130, along with HTTP connections?

In short, my configuration works, but I'd like to get a better grasp on
*how* it works.

Cheers from the sunny South of France,

Niki Kovacs



Re: [squid-users] skype connection problem

2016-10-26 Thread Nicolas Valera

Well, this is really frustrating!
I'm trying with SOCKS5 and it doesn't work...
The behavior is the same as with the HTTPS proxy: it tries to connect to
the peer over UDP, not through the proxy.


I can't believe it!


On 10/25/2016 11:44 AM, Eliezer Croitoru wrote:

I am working on these but it involves a huge CDN and it might not work for 
everyone.

Later tonight I will try to see how it goes.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Andrea Venturoli
Sent: Tuesday, October 25, 2016 17:42
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] skype connection problem

On 10/25/16 16:26, Yuri Voinov wrote:


Your LAN settings are too restrictive. AFAIK you need to permit
traffic to the Skype servers directly from your clients, without a proxy.


Any hint on how to identify those servers?
Any IP list?

  bye & Thanks
av.


Re: [squid-users] skype connection problem

2016-10-25 Thread Nicolas Valera

Amos, thanks for the tips!
Any idea about my Skype problem?

regards

On 10/25/2016 08:13 AM, Amos Jeffries wrote:

On 25/10/2016 5:19 a.m., Nicolas Valera wrote:

Hi Yuri, thanks for the answer!

We don't have Squid in transparent mode on this network.
The Squid configuration is very basic. Here is the conf:

-
http_port 1280 connection-auth=off
forwarded_for delete
httpd_suppress_version_string on
client_persistent_connections off

cache_mem 16 GB
maximum_object_size_in_memory 8 MB

url_rewrite_program /usr/bin/squidGuard


These...


url_rewrite_children 10
url_rewrite_access allow all


... are redundant. Those are the default values for those directives.



acl numeric_IPs dstdom_regex
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443

acl Skype_UA browser ^skype

acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443 7443 50001
acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70 280 488
acl Safe_ports port 1025-65535  # unregistered ports

acl CONNECT method CONNECT
acl safe_method method GET
acl safe_method method PUT
acl safe_method method POST
acl safe_method method HEAD
acl safe_method method CONNECT
acl safe_method method OPTIONS
acl safe_method method PROPFIND
acl safe_method method REPORT
acl safe_method method MERGE
acl safe_method method MKACTIVITY
acl safe_method method CHECKOUT


What's the point of this ACL?




http_access deny !Safe_ports
http_access allow CONNECT localnet numeric_IPS Skype_UA
http_access deny CONNECT !SSL_ports
http_access deny !safe_method
http_access allow localnet
http_access allow localhost
http_access deny all

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern Packages\.tar$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Packages\.bz2$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Sources\.bz2$   0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Release\.gpg$   0       20%     4320 refresh-ims
refresh_pattern Release$        0       20%     4320 refresh-ims
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern .               0       20%     4320



All those "ignore-no-cache" are not useful. Run "squid -k parse" and it
should mention they are no longer supported.

Amos


Re: [squid-users] skype connection problem

2016-10-25 Thread Nicolas Valera

Hi Eliezer, thanks for the answer!

On 10/24/2016 02:03 PM, Eliezer Croitoru wrote:

Just to understand the scenario:
You have let say 1 client on network 192.168.0.0/24
You have a proxy at 192.168.0.200
The client doesn't have a gateway on the network, i.e. it cannot run DNS
queries or ping the Internet.
The client must define the proxy in order to access any Internet resources.
Right?


Yes, you're right!
So, in this scenario, Skype will never work?


The proxy has access to DNS and the IP stack, NATted or not.

I believe it would be pretty simple to reproduce, so the issue could be
verified by another party.

Let me know if I got the situation right.

Eliezer




From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of N V
Sent: Monday, October 24, 2016 01:11
To: squid-us...@squid-cache.org
Subject: [squid-users] skype connection problem

Hi there,
I've had problems with Windows Skype clients where the only Internet
connection is through Squid. The clients can log in successfully, but when
they make a call, it hangs after 12 seconds.

I checked the client connections and saw that it attempts to connect
directly even if the proxy is properly configured.

my squid version is 3.5.12
the Skype clients have the latest version available.
does anyone have the same issues?
any idea?

thanks in advance!
Nicolás.

PS: sorry about my English




Re: [squid-users] skype connection problem

2016-10-24 Thread Nicolas Valera



On 10/24/2016 01:21 PM, Yuri Voinov wrote:




On 24.10.2016 22:19, Nicolas Valera wrote:

Hi Yuri, thanks for the answer!

We don't have Squid in transparent mode on this network.

So, you route all traffic to the proxy box?

Yes, clients do not have direct Internet access



The Squid configuration is very basic. Here is the conf:

-
http_port 1280 connection-auth=off
forwarded_for delete
httpd_suppress_version_string on
client_persistent_connections off

cache_mem 16 GB
maximum_object_size_in_memory 8 MB

url_rewrite_program /usr/bin/squidGuard
url_rewrite_children 10
url_rewrite_access allow all

acl numeric_IPs dstdom_regex ^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443

acl Skype_UA browser ^skype

acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443 7443 50001
acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70 280 488
acl Safe_ports port 1025-65535  # unregistered ports

acl CONNECT method CONNECT
acl safe_method method GET
acl safe_method method PUT
acl safe_method method POST
acl safe_method method HEAD
acl safe_method method CONNECT
acl safe_method method OPTIONS
acl safe_method method PROPFIND
acl safe_method method REPORT
acl safe_method method MERGE
acl safe_method method MKACTIVITY
acl safe_method method CHECKOUT

http_access deny !Safe_ports
http_access allow CONNECT localnet numeric_IPS Skype_UA
http_access deny CONNECT !SSL_ports
http_access deny !safe_method
http_access allow localnet
http_access allow localhost
http_access deny all

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern Packages\.tar$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Packages\.bz2$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Sources\.bz2$   0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Release\.gpg$   0       20%     4320 refresh-ims
refresh_pattern Release$        0       20%     4320 refresh-ims
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern .               0       20%     4320

-

Please, can you send me your settings for ssl-bump?

Copy-and-pasting unknown configs is a very bad idea, Nicolas.


Sorry about that!
Is ssl-bump the only way to make Skype work through Squid?



thanks again!
nicolás.

On 10/23/2016 07:28 PM, Yuri Voinov wrote:





On 24.10.2016 4:11, N V wrote:
>>> Hi there,
>>> I've had problems with Windows Skype clients where the only Internet
>>> connection is through Squid. The clients can log in successfully, but
>>> when they make a call, it hangs after 12 seconds.
>>>
>>> I checked the client connections and saw that it attempts to connect
>>> directly even if the proxy is properly configured.
Exactly, Skype does not use HTTP for calls. So why do you expect its calls
to go via the proxy?
>>>
>>> my squid version is 3.5.12
>>> the Skype clients have the latest version available.
>>> does anyone have the same issues?
>>> any idea?
With a properly configured ssl-bump and transparent proxy we have no
problems with Skype. I don't know your details.
>>>
>>> thanks in advance!
>>> Nicolás.
>>>
>>> pd. sorry about my english
>>>
>>>
>>>







--
Cats - delicious. You just do not know how to cook them.

Re: [squid-users] skype connection problem

2016-10-24 Thread Nicolas Valera

Hi Yuri, thanks for the answer!

We don't have Squid in transparent mode on this network.
The Squid configuration is very basic. Here is the conf:

-
http_port 1280 connection-auth=off
forwarded_for delete
httpd_suppress_version_string on
client_persistent_connections off

cache_mem 16 GB
maximum_object_size_in_memory 8 MB

url_rewrite_program /usr/bin/squidGuard
url_rewrite_children 10
url_rewrite_access allow all

acl numeric_IPs dstdom_regex ^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9a-f]+)?:([0-9a-f:]+)?:([0-9a-f]+|0-9\.]+)?\])):443

acl Skype_UA browser ^skype

acl SSL_ports port 443 563 873 1445 2083 8000 8088 10017 8443 5443 7443 50001

acl Safe_ports port 80 82 88 182 210 554 591 777 873 1001 21 443 70 280 488
acl Safe_ports port 1025-65535  # unregistered ports

acl CONNECT method CONNECT
acl safe_method method GET
acl safe_method method PUT
acl safe_method method POST
acl safe_method method HEAD
acl safe_method method CONNECT
acl safe_method method OPTIONS
acl safe_method method PROPFIND
acl safe_method method REPORT
acl safe_method method MERGE
acl safe_method method MKACTIVITY
acl safe_method method CHECKOUT

http_access deny !Safe_ports
http_access allow CONNECT localnet numeric_IPS Skype_UA
http_access deny CONNECT !SSL_ports
http_access deny !safe_method
http_access allow localnet
http_access allow localhost
http_access deny all

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern Packages\.tar$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Packages\.bz2$  0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Sources\.bz2$   0       20%     4320 refresh-ims ignore-no-cache
refresh_pattern Release\.gpg$   0       20%     4320 refresh-ims
refresh_pattern Release$        0       20%     4320 refresh-ims
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i windowsupdate.com/.*\.(esd|cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern -i live.net/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims ignore-no-cache
refresh_pattern .               0       20%     4320

-

Please, can you send me your settings for ssl-bump?

thanks again!
nicolás.

On 10/23/2016 07:28 PM, Yuri Voinov wrote:





On 24.10.2016 4:11, N V wrote:

Hi there,
I've had problems with Windows Skype clients where the only Internet
connection is through Squid. The clients can log in successfully, but when
they make a call, it hangs after 12 seconds.


I checked the client connections and saw that it attempts to connect
directly even if the proxy is properly configured.

Exactly, Skype does not use HTTP for calls. So why do you expect its calls
to go via the proxy?


my squid version is 3.5.12
the Skype clients have the latest version available.
does anyone have the same issues?
any idea?

With a properly configured ssl-bump and transparent proxy we have no
problems with Skype. I don't know your details.


thanks in advance!
Nicolás.

PS: sorry about my English










[squid-users] squid 3.4.8 ssl-bump resolve ip in access.log

2015-12-01 Thread LANGLOIS Nicolas
Hi, I'm trying to set up Squid 3.4.8 on Debian; I want a fully transparent
proxy, with no configuration on the client side.
It is working, but I am asked to report website accesses, and for HTTPS all
I get is this kind of line in my access.log:

TCP_MISS/200 288 CONNECT 64.233.184.106:443 - ORIGINAL_DST/64.233.184.106

Is there a way to have DNS resolution and log the website visited for HTTPS?

Here is a part of my squid.conf :

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/ssl_cert/squid.pem
http_port 192.168.1.1:3129 intercept
https_port 192.168.1.1:3130 intercept ssl-bump  generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/ssl_cert/squid.pem

ssl_bump none all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
always_direct allow all

Or is there a magical solution for a transparent proxy, with no client-side
configuration (certs or proxy settings), that actually works with HTTPS?

Regards

Nicolas



Re: [squid-users] squid 3.4.8 ssl-bump resolve ip in access.log

2015-12-01 Thread LANGLOIS Nicolas
Thanks Amos for the quick reply.
I'm making a lot of mistakes around SSL with Squid; I'm following your
advice and will try to set it up with the latest Squid 3.5 version using
TPROXY. I will let you know.

Have a good day 

Nicolas

-----Original Message-----
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
behalf of Amos Jeffries
Sent: Tuesday, 1 December 2015 13:18
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 3.4.8 ssl-bump resolve ip in access.log

On 2/12/2015 12:40 a.m., LANGLOIS Nicolas wrote:
> Hi, I'm trying to set up Squid 3.4.8 on Debian; I want a fully transparent
> proxy, with no configuration on the client side.

That is not what "fully transparent" means.

The best form of transparent proxy is when clients are auto-configured with 
explicit-proxy settings.


Also be aware that the Debian versions which are shipping Squid-3.4.8 have some 
mystery issue in their kernels that nobody has yet been able to track down that 
prevents the TPROXY feature from operating properly.
You will have to stick with NAT or upgrade to one of the Debian versions 
shipping Squid-3.5, their kernels seem to work better.


> It is working, but I am asked to report website accesses, and for HTTPS
> all I get is this kind of line in my access.log:
> TCP_MISS/200 288 CONNECT 64.233.184.106:443 - ORIGINAL_DST/64.233.184.106
> 
> Is there a way to have DNS resolution and log the website visited for
> HTTPS?

What for? all it does is reduce accuracy of the log.
You can end up with situations where the log says:
 "CONNECT 64.233.184.106:443 - ORIGINAL_DST/example.com"

But when log analysis runs example.com has moved its IP, or just DNS has cycled 
on to another one of the set. So analysis then reports the client requested URL 
"64.233.184.106:443" when connecting to a server whose IP is now 192.0.2.1. 
Which is plain wrong.


> 
> Here is a part of my squid.conf :
> 
> http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
> dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/ssl_cert/squid.pem
> http_port 192.168.1.1:3129 intercept
> https_port 192.168.1.1:3130 intercept ssl-bump  
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB 
> cert=/etc/squid3/ssl_cert/squid.pem
> 
> ssl_bump none all
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
> always_direct allow all

So basically; Intercept TLS (supposed to be secure). Ignore all possible 
errors, malicious attacks, diversions or hijacking that might be done by anyone 
else on the way to the real server. BUT tell the client everything is safe to 
send or fetch that sensitive data they needed TLS for?

You are SO lucky you started that "ssl_bump none all" actually means dont 
perform SSL-bump interception. It is preventing a world of FAIL from landing on 
your head.

The other three lines should be erased immediately.

> 
> Or is there a magical solution for a transparent proxy, with no
> client-side configuration (certs or proxy settings), that actually works
> with HTTPS?

Firstly, Be aware that SSL-Bump is involved in an arms race. If you are doing 
bumping always use the latest Squid. The 3.4 series is outdated by a year. 
Things have already moved on well past its capabilities.


Secondly, after upgrading to Squid-3.5, use "splice all" where you have placed 
"none all" right now and what you request will 'just work'. You can then 
peek/stare at the unencrypted SNI and cert details to log where clients are 
going and/or block certain servers from being contacted.

The assumption with SSL-Bump is that you are doing so in order to actually bump 
at least some of the traffic. There is very little point in diverting port 443 
to Squid only to do nothing at all with it. All that does is slow the already 
heavyweight HTTPS protocol down. It is the bumping action that requires the 
client setup.
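A minimal sketch of the Squid-3.5 configuration described above: peek at the TLS ClientHello to see the SNI, then splice (pass through) without decrypting. The ACL name step1 is illustrative:

```
# look at the ClientHello (bump step 1) without decrypting anything
acl step1 at_step SslBump1
ssl_bump peek step1
# then pass every connection through untouched; the SNI is still logged
ssl_bump splice all
```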

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SQUID 3.1.20 TunnelStateData::Connection::error read/write failure: (32) Broken pipe

2014-12-04 Thread LANGLOIS Nicolas
Hi, I'm using Squid on a Debian server as a simple proxy/cache, and sometimes 
some clients get a connection error. I can't reproduce the problem and don't 
really know where it comes from.

Here is a Squid cache.log  output :

2014/12/04 14:46:14| TunnelStateData::Connection::error: FD 232: read/write 
failure: (32) Broken pipe
2014/12/04 15:09:13| TunnelStateData::Connection::error: FD 285: read/write 
failure: (32) Broken pipe

Does anyone have an idea, or at least know what it means?


Nicolas
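For what it's worth, error (32) is EPIPE: the remote end closed the TCP connection while Squid was still writing to the tunnel, which is common for CONNECT traffic and usually harmless. The errno mapping can be confirmed quickly (Python used purely for illustration):

```python
import errno
import os

# errno 32 is EPIPE: writing to a socket/pipe whose other end has closed
print(errno.EPIPE)               # 32
print(errno.errorcode[32])       # 'EPIPE'
print(os.strerror(errno.EPIPE))  # 'Broken pipe'
```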


Re: [squid-users] squid authentication failing

2014-08-12 Thread nicolas

On 2014-08-11 18:59, Sarah Baker wrote:

Background:
Squid: squid-3.1.23-2.el6.x86_64
OS: CentOS 6.5 - Linux 2.6.32-431.23.3.el6.x86_64 #1 SMP Thu Jul 31 17:20:51 
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Issue:
I have two boxes, same OS, same squid binary, same config file, same
squid-passwd file.
Configuration is set up for ncsa_auth.  Squid runs as user squid.

Both systems return OK when running ncsa_auth on the command line as the
squid user with the login and password from the squid-passwd file.

Using squid, however, via a curl through one of the proxy IPs/ports of the
system: one system gives 403 Forbidden, the other works just fine.

Tried removing authentication entirely, a fully open squid. It fails
with the same message.

Also looked at thusfar:
rpm -q query_options --requires squid-3.1.23-2.el6.x86_64
the same on both boxes.
Ran yum update on both to ensure everything was up to the latest - no 
change.


Any ideas what I should look for?
-
S. Baker
Manager of Technical Operations, BrightEdge


Maybe some SELinux/AppArmor/similar mechanism is blocking some context of 
Squid, and therefore a 403 code is thrown?


Re: [squid-users] How to forbid squid from caching some websites?

2014-08-08 Thread nicolas

On 2014-08-08 16:53, Mark Jensen wrote:

I want to forbid squid from caching a website:

www1.example.com/public

and

www1.example.com/books

so that every time I access those pages it brings them from the source,

but I want it to cache the website:

www1.example.com

and I want it to log everything in the access.log file (the cached ones
and the non-cached ones).


Seems that you're looking for this:

http://wiki.squid-cache.org/SquidFaq/OperatingSquid#How_can_I_make_Squid_NOT_cache_some_servers_or_URLs.3F

Not sure whether it's possible to log the non-cached ones in access.log, 
although I think they would appear with a MISS status.


Regards.
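The FAQ recipe boils down to an ACL matched against the request plus a cache deny rule; a sketch for the case above (the ACL names are arbitrary, and cache deny is the newer spelling of the older no_cache deny):

```
acl www1 dstdomain www1.example.com
acl uncached_paths urlpath_regex ^/public ^/books
# never serve these from cache; other www1.example.com URLs cache normally
cache deny www1 uncached_paths
```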


[squid-users] RV: Delay Pools not working

2013-10-10 Thread Nicolas Pagliaro

Hi, I am trying to get this to work but I can't.

Here is my configuration and tests:

I compiled squid like this:
./configure --enable-auth=basic,ntlm 
--enable-external-acl-helpers=wbinfo_group --enable-delay-pools

Then I use this squid conf:

cache_effective_user squid
cache_effective_group squid

cache_dir ufs /var/spool/squid/ 900 16 256

http_port 3128

coredump_dir /usr/local/squid/var/cache/squid

refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320


acl lan src  192.168.0.0/24

http_access allow  lan
visible_hostname SQUID-PROXY

delay_pools 1
delay_class 1 3

delay_access 1 allow lan
delay_access 1 deny all

delay_parameters 1 1250/1250 -1/-1 1250/1250



Then, I try to download a file from the Internet using this proxy and the speed 
doesn't change; I always download at max speed.

Any idea?

Really Thanks


[squid-users] squid.conf ssl-bump error

2012-08-08 Thread Nicolas Michels
I have squid installed with enable-ssl and enable-ssl-crtd
sbin/squid -v
Squid Cache: Version 3.0.STABLE26
configure options:  '--enable-ssl' '--enable-ssl-crtd'
But when I try to run squid I get this error:
cache_cf.cc(346) squid.conf:19 unrecognized: 'ssl_bump'
FATAL: Bungled squid.conf line 42: https_port
192.168.1.253:3129 transparent ssl-bump cert=/usr/local/squid/ssl.cert
key=/usr/local/squid/ssl.key
Squid Cache (Version 3.0.STABLE26): Terminated abnormally.
CPU Usage: 0.008 seconds = 0.003 user + 0.005 sys
Maximum Resident Size: 14416 KB
Page faults with physical i/o: 0

When I remove ssl-bump, squid is able to start, any help?
Thanks a lot.
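For what it's worth, the ssl-bump option was only introduced in Squid 3.1, so a 3.0 build rejects it even when compiled with --enable-ssl and --enable-ssl-crtd. On 3.1 the equivalent configuration would look roughly like this (a sketch, not tested against this setup):

```
# Squid 3.1+ only: 3.0 does not recognize ssl-bump at all
https_port 192.168.1.253:3129 transparent ssl-bump cert=/usr/local/squid/ssl.cert key=/usr/local/squid/ssl.key
ssl_bump allow all
```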


Re: [squid-users] FTP access for IPv6 clients

2012-06-07 Thread Nicolas C.

On 07/06/2012 05:09, Amos Jeffries wrote:


3.1.6 has quite a few issues with IPv4/IPv6 behaviour in FTP. Please try
upgrading to the 3.1.19 package in Debian Wheezy/Testing or Unstable.


I tried with Debian Wheezy; the behavior is the same. I will test with a 
compiled 3.2.x version...



As a workaround, to force FTP clients to connect to Squid using IPv4,
I created a proxy-ftp entry in our DNS pointing to the IPv4 address
of the proxy. If FileZilla is configured to use proxy-ftp, it's
working fine.

The problem is that sometimes the FTP server has IPv6 enabled and
then it's not working, the workstation is using IPv4 to reach Squid
which is using IPv6 to reach the FTP server. The FTP client is
immediately failing after a PASV command.


Squid is coded to try IPv6+IPv4 compatible commands (EPSV) first. If it
gets as far as trying IPv4-only PASV command it will not go backwards to
trying the IPv6+IPv4 EPSV command.
... ftp_epsv off is making Squid go straight to PASV and skip all the
non-IPv4 access methods.


When I force the FTP client to reach Squid over IPv4, the client will still 
try to perform PASV on the server even if Squid is connected to the FTP 
server over IPv6. I think this is the root of the problem.


CONNECT debian.mur.at:21 HTTP/1.1 200 521 TCP_MISS:DIRECT:2a02:3e0::14:80

On FileZilla : Enter passive mode (80,223,35) = failing


The third option is to upgrade your FTP server to one which supports
those extension commands (they are for optimising IPv4 as much as IPv6
support). Then you won't have to hack protocol translation workarounds
through Squid to access it from modern FTP clients.


The problem is happening on remote FTP servers I don't manage.

Is there a possibility to make Squid use its IPv4 address for all 
outgoing FTP? I tried with tcp_outgoing_address with no luck.


Regards,

Nicolas


[squid-users] FTP access for IPv6 clients

2012-06-06 Thread Nicolas C.

Hello,

I'm using Squid as an http/ftp proxy at a university; most of our 
workstations and servers have IPv6 activated.


I recently upgraded my Squid proxies to version 3.1.6 (Debian Squeeze) 
and the workstations are connecting to the proxy using IPv6 (or IPv4) 
with no problem.


A few computers need to access FTP servers on the Internet and there are 
some issues when accessing an IPv4 FTP server: the FTP client 
(FileZilla) uses IPv6 to connect to the proxy, and the proxy uses FTP 
commands unknown to the FTP server (EPSV for example); using the 
ftp_epsv off option in Squid has no effect.


As a workaround, to force FTP clients to connect to Squid using IPv4, I 
created a proxy-ftp entry in our DNS pointing to the IPv4 address of 
the proxy. If FileZilla is configured to use proxy-ftp, it's working fine.


The problem is that sometimes the FTP server has IPv6 enabled and then 
it's not working, the workstation is using IPv4 to reach Squid which is 
using IPv6 to reach the FTP server. The FTP client is immediately 
failing after a PASV command.


Is there a known solution to my issue? I have not made a network capture yet.

Regards,

Nicolas


[squid-users] Creating a config with 2 delay pools.

2011-08-02 Thread Nicolas Di Gregorio
Hello,

We have a squid proxy configured with 1 delay pool to limit the
bandwidth to 6M. I have to create a kind of exception for a specific
remote host for which we want to reserve 1M, which is not included
within the 6M.

Here is our actual configuration of the delay pools

acl all_network src 0.0.0.0/0.0.0.0
acl mydomain dst www.mydomain.com
delay_pools 2
delay_class 1 1
delay_access 1 allow !mydomain  all_network
delay_access 1 deny  all
#delay_parameters 1 393216/393216
delay_parameters 1 786432/786432
# 512 kbits == 64 kbytes per second


delay_class 2 1
delay_access 2 allow mydomain all_network
delay_access 2 deny  all
delay_parameters 2 131072/131072
# 512 kbits == 64 kbytes per second


Is this configuration correct? How can I tell that mydomain.com is going
into the second pool?



Thanks in advance
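One detail worth double-checking in configurations like this: delay_parameters takes bytes per second, not bits. The values above do line up with the intended 6M and 1M rates (taking 1 Mbit = 1024*1024 bits); a quick sanity check, with Python used only for the arithmetic:

```python
def mbit_to_delay_bytes(mbit: int) -> int:
    """Convert a rate in Mbit/s into the bytes-per-second unit
    that squid's delay_parameters directive expects."""
    return mbit * 1024 * 1024 // 8

print(mbit_to_delay_bytes(6))  # 786432 -> pool 1, the 6M aggregate
print(mbit_to_delay_bytes(1))  # 131072 -> pool 2, the reserved 1M
```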


[squid-users] Problem with transparent proxy using WCCP2 + GRE on Linux

2007-02-23 Thread Nicolas Limage
Hi squid-users,

I'm currently trying to replace an old netapp proxy with a squid+linux box.

I have some users behind a Cisco 7200 running IOS 12.4(12) using the proxy in 
transparent mode. The current proxy uses WCCP2+GRE to get the traffic from 
the router. The aim is to reproduce this behaviour with the squid box.

I've set up a box running Linux Debian, with kernel 2.6.18-3-k7 from debian 
and squid-2.6.STABLE8 compiled with the following options :

$ ./configure --prefix=/opt/package/squid-2.6.STABLE8
--enable-storeio=aufs,coss,diskd,null,ufs --enable-removal-policies=heap,lru
--enable-useragent-log --enable-referer-log --enable-wccp --enable-wccpv2
--enable-snmp --enable-linux-netfilter --enable-large-cache-files
--disable-ident-lookups --with-pthreads

my squid.conf file includes these lines :

http_port 3128 transparent
wccp2_router ip_of_the_cisco_router
wccp2_rebuild_wait on
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_assignment_method 1
wccp2_service standard 0

I have set up an unnumbered GRE tunnel between the box and the router :

# iptunnel del gre0
# iptunnel add gre0 mode gre remote ip_of_the_cisco_router local 
ip_of_the_linux_box dev eth0
# ifconfig gre0 up

I've added these commands to enable routing and disable spoof protection.

# echo 1 > /proc/sys/net/ipv4/ip_forward
# for file in /proc/sys/net/ipv4/conf/*/rp_filter; do
echo 0 > $file
done

To do the redirection, i'm using iptables, with all default policies set to 
ACCEPT, plus this rule :

# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j 
DNAT --to-destination ip_of_the_linux_box:3128

The Cisco router has been doing the job for years, so I doubt the problem 
comes from it. The squid proxy is running, with no error messages. I've 
tested it by explicitly declaring it in my browser, and it works perfectly.

The router can see the proxy (it is in its WCCP list) and it sends the packets 
to the linux box. I can see the encapsulated packets coming to the linux box, 
and I can see the packets coming out of the GRE tunnel (tcpdump -i gre0); they 
hit the iptables redirection rule (iptables -t nat -L -v shows the counter 
increasing), but afterwards they seem to disappear. No trace in the squid 
log. The TCP session is not established. I see no related traffic coming out 
of the box either.

Does anyone have an idea of what could be happening?

I'm also very interested in knowing how (in theory) the answer is supposed 
to return to the client.

Thanks
-- 
Nicolas L.


Re: [squid-users] Problem with transparent proxy using WCCP2 + GRE on Linux

2007-02-23 Thread Nicolas Limage
Bryan,

First, thanks a lot for your answer, as it permitted me to solve my problem, 
at least partially.

The problem came from the tunnel, which had no ip address.
Putting the primary ip address of the box on it was the solution.

(I also simplified my iptables rule, as both are roughly equivalent, but yours 
is less error-prone.)

Something remains strange: in my current (now working) configuration, if I 
try to replace gre0 with gre1, it stops working. Another interesting point 
is that I cannot delete gre0:

# iptunnel del gre0
ioctl: Operation not permitted

The problem is that I need to enable this proxy on another router as well, so 
another GRE tunnel is required. This may belong more on a kernel list, but 
maybe someone here has experienced the same thing.

I can see ICMP error packets from the squid box to the router :

18:58:12.635157 IP squidbox-ip > router-ip: ICMP squidbox-ip protocol 47 
port 34878 unreachable, length 88

I'll post again if I can find anything interesting on this.

Thanks,
Nicolas

On Friday 23 February 2007 at 13:50, Bryan Shoebottom wrote:
 Nicolas,

 Maybe the packets are getting dropped when they are trying to get back
 into your system on port 3128; try redirecting to the port only, using
 --to-ports instead of --to-destination.  I also use the REDIRECT
 target as opposed to DNAT.  Here is my rule:

 iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j
 REDIRECT --to-ports 3128

 Finally, I use the IP of my cache server with a /32 mask for the gre0
 interface.  Hope this helps.

 Thanks,
  Bryan

-- 
Nicolas L.


[squid-users] redirecting squid

2006-04-25 Thread frevol nicolas
hi,

I am trying to use Squid with an external program.
So I did:
redirect_program /home/mysession/rep/executable

In my Perl executable, I take the argv[0] value.

But what kind of value do I have to print out?
for example my perl script is:
-
#!/usr/bin/perl -w
use strict;

$| = 1;
while (<STDIN>) {
   my @X = split;
   my $url = $X[0];
   my $test = 0;   # just a test value for later
   if ($url =~ /^http:\/\/internal\.foo\.com/) {
      $url =~ s/^http/https/;
      $url =~ s/internal/secure/;
      if ($test == 0) {
         print "302:$url\n";
      } else {
         print "file:///home/template.html\n";
      }
   } else {
      if ($test == 0) {
         print "http://fr.yahoo.com\n";
      } else {
         print "file:///template.html\n";
      }
   }
}
-
The result is that I get the Yahoo site on my screen
without pictures.

And if I replace
print "http://fr.yahoo.com\n";
by
print "file:///template.html\n";

it doesn't work.


any ideas?

 






___ 
Make Yahoo! your home page on the web to go directly to your favourite 
services: check your new mail, launch your searches and follow the news 
in real time. 
Visit http://fr.yahoo.com/set


[squid-users] replace squidguard by a python script

2006-04-20 Thread frevol nicolas
hi,

I am trying to replace squidGuard with a Python script.
To do this, I have created an executable file and I wrote in
the squid.conf file:
redirect_program /home/mysession/rep/executable
The problem is that I don't know how to get the request
back from Squid in order to pass the URL to my script.

any ideas ?

thanks
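For background, the classic redirect_program interface is line-based: Squid writes one request per line on the helper's stdin ("URL client-ip/fqdn ident method"), and the helper answers on stdout with either a replacement URL (optionally prefixed with 302:) or an empty line meaning "leave it alone", flushing after every reply. A minimal Python sketch; the blocked domain and redirect target are purely illustrative:

```python
#!/usr/bin/env python3
import sys

def rewrite(line: str) -> str:
    """Given one redirector input line, return a replacement URL
    (here a 302 redirect) or "" to pass the request through unchanged."""
    url = line.split()[0]  # the URL is the first whitespace-separated field
    if "ads.example.com" in url:
        return "302:http://127.0.0.1/blocked.html"
    return ""

if __name__ == "__main__":
    for line in sys.stdin:
        print(rewrite(line), flush=True)  # replies must not be buffered
```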








[squid-users] squid stable9 icap

2005-10-13 Thread Nicolas Velasquez O.

Hello,

I'm using squid 2.5stable9, and I'm looking for icap support. 

I tried to use
http://devel.squid-cache.org/cgi-bin/diff2/icap-2.5.patch?s2_5
But ./configure won't give me the ICAP support option, and the patch fails 
in a lot of places.

And the cvs 
cvs -d:pserver:[EMAIL PROTECTED]:/cvsroot/squid login
cvs -d:pserver:[EMAIL PROTECTED]:/cvsroot/squid co -D 
-r icap-2_5 -d squid-icap-2_5  squid
But I'm not skilled with cvs nor patch.


Does anybody know of an ICAP patch for 2.5stable9, or a way to create one 
from the cvs?

By the way, when is ICAP support supposed to be in stable?


-- 

Atentamente,
Nicolás Velásquez O.
Bogotá, Colombia

(^)   ASCII Ribbon Campaign
 XNO HTML/RTF in e-mail
/ \   NO Word docs in e-mail




[squid-users] transparent configuration with upstream proxy

2005-09-30 Thread Nicolas Velasquez O.

Hello there,

I'm trying to setup a transparent squid with an upstream proxy that 
needs authentication.

I've already tried:
http://www.squid-cache.org/Doc/FAQ/FAQ-23.html#ss23.6

The relevant options of the squid.conf:
cache_peer localhost parent 8080 7 login=PASS proxy-only 
no-query allow-miss
acl all src 0/0
never_direct allow all
no_cache deny all
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on


I've attached some snippets of the access.log. When I try transparent 
mode, I only get TCP_MISS/407 and the browser doesn't ask for a 
user/password. But when I configure the proxy settings in the browser, 
it asks for a user/password and I can browse the web.


Any thoughts??

PS: By the way, I don't mind giving up the caching, but it would be nice 
to be able to have caching too.

-- 

Atentamente,
Nicolás Velásquez O.
Bogotá, Colombia

(^)   ASCII Ribbon Campaign
 XNO HTML/RTF in e-mail
/ \   NO Word docs in e-mail


### TRANSPARENT
1128122205.230  1 192.168.237.98 TCP_MISS/407 397 GET http://www.tldp.org/ - FIRST_UP_PARENT/localhost text/html
1128122205.458  1 192.168.237.98 TCP_MISS/407 397 GET http://www.tldp.org/ - FIRST_UP_PARENT/localhost text/html
1128122205.691  1 192.168.237.98 TCP_MISS/407 397 GET http://www.tldp.org/ - FIRST_UP_PARENT/localhost text/html
### END TRANSPARENT


### NOT TRANSPARENT
1128123269.011  5 192.168.237.98 TCP_MISS/407 403 GET http://www.mplayerhq.hu/ - FIRST_UP_PARENT/localhost text/html
1128123285.160  12053 192.168.237.98 TCP_MISS/200 705 GET http://www.mplayerhq.hu/ - FIRST_UP_PARENT/localhost text/html
1128123289.557   4396 192.168.237.98 TCP_MISS/200 858 GET http://www.mplayerhq.hu/homepage/index.html - FIRST_UP_PARENT/localhost text/html
1128123294.974   5364 192.168.237.98 TCP_MISS/404 546 GET http://www.mplayerhq.hu/homepage/default.css - FIRST_UP_PARENT/localhost text/html
1128123296.266  10951 192.168.237.98 TCP_MISS/200 1627 GET http://www.mplayerhq.hu/favicon.ico - FIRST_UP_PARENT/localhostimage/x-icon
1128123304.138   3317 192.168.237.98 TCP_MISS/200 2184 GET http://www.mplayerhq.hu/homepage/design7/default.css - FIRST_UP_PARENT/localhost text/css
1128123308.801  13810 192.168.237.98 TCP_MISS/404 546 GET http://www.mplayerhq.hu/homepage/favicon.ico - FIRST_UP_PARENT/localhost text/html
1128123309.341   5177 192.168.237.98 TCP_MISS/200 1012 GET http://www.mplayerhq.hu/homepage/design7/favicon.ico - FIRST_UP_PARENT/localhost image/x-icon
1128123311.704  16636 192.168.237.98 TCP_MISS/200 1627 GET http://www.mplayerhq.hu/favicon.ico - FIRST_UP_PARENT/localhostimage/x-icon
### END NOT TRANSPARENT

Re: [squid-users] transparent configuration with upstream proxy

2005-09-30 Thread Nicolas Velasquez O.



Hmmm, so is it impossible using 2 proxies?



On Fri, 30 Sep 2005 19:12, Chris Robertson wrote:
  -Original Message-
  From: Nicolas Velasquez O. [mailto:[EMAIL PROTECTED]
  Sent: Friday, September 30, 2005 3:52 PM
  To: squid-users@squid-cache.org
  Subject: [squid-users] transparent configuration with upstream
  proxy
 
 
 
  Hello there,
 
  I'm trying to setup a transparent squid with an upstream proxy that
  needs authentication.
 
  I've already tried:
  http://www.squid-cache.org/Doc/FAQ/FAQ-23.html#ss23.6

 But you missed http://www.squid-cache.org/Doc/FAQ/FAQ-17.html#ss17.16

 Interception (transparent) caching and proxy_auth don't mix.  Just
 because the proxy_auth is on the parent doesn't mean it's going to
 work.


 Chris

-- 

Atentamente,
Nicolás Velásquez O.
Bogotá, Colombia

(^)   ASCII Ribbon Campaign
 XNO HTML/RTF in e-mail
/ \   NO Word docs in e-mail


[squid-users] Error downloading mysql bin

2005-07-20 Thread nicolas bichelberger
Hello,
I am a new user of the mailing list.

When I want to download a mysql binary or rpm file on
the official website (http://dev.mysql.com) the link
brings me to a page like
http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-standard-4.1.12-pc-linux-gnu-i686.tar.gz/from/pick
Wich is in fact an html page where I can choose the
mirror.

It seems that Squid do not like the /from/pick at the
end of the url. I receive the following error message:

ERROR
The requested URL could not be retrieved

While trying to process the request:

GET
http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-standard-4.1.12-pc-linux-gnu-i686.tar.gz/from/pick
HTTP/0.0


The following error was encountered:

* Invalid Response 

The HTTP Response message received from the contacted
server could not be understood or was otherwise
malformed. Please contact the site operator. Your
cache administrator may be able to provide you with
more details about the exact nature of the problem if
needed.

Your cache administrator is webmaster.

Is there a way to avoid this? I also use DansGuardian
and a Trend Micro antivirus before going out to the web:
User -> Squid -> DansGuardian -> Trend Micro ->
Firewall -> Web.

Thank you for your help.

Nicolas. 






___ 
FREE audio calls all over the world with the new Yahoo! Messenger. 
Download this version at http://fr.messenger.yahoo.com


Re: [squid-users] WPAD and Squid

2004-02-18 Thread Nicolas Kreft
Clemson, Chris wrote:

The script works fine.  Our IE implementation is set to auto-detect the
proxy.  Some of my users pick up the wpad.dat script and use the proxy
properly.  Some of the users do not pick up the script, so they do not
use the proxy.  It seems that even though the users have auto-detect set
in their browsers, they are not auto-detecting.  I would say about 2 out
of 10 people are hitting the proxy.

Is the DNS suffix set correctly on the client machines?
Try "ping wpad"; if that doesn't work, try "ping wpad.yourdomain.com".
If the latter works, your DNS suffix is not properly set.
HTH

Nicolas



[squid-users] Squid and FTP

2003-08-08 Thread Nicolas Ross
I know that Squid is not an FTP proxy, but in our situation it is configured
as a transparent HTTP proxy, and it's working correctly.

Somehow, IE and other programs are using Squid as an FTP proxy to fetch
ftp:// URLs... But they get an "Access Denied". Why?

My ACLs are as follows:

acl all src 10.0.0.0/255.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl BadWords url_regex sexe hack

acl bad_hosts url_regex caramail rocketmail

http_access deny bad_hosts

http_access allow all
http_access allow manager localhost

http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny BadWords

http_access allow localhost
http_access deny all

Thanks for any help !




Re: [squid-users] Squid and FTP

2003-08-06 Thread Nicolas Ross
Yes, in deed, but on some case, even if I don't specify a proxy, IE and
other programs (ftp expert) use it for no reason...

Thanks

Nicolas

- Original Message - 
From: Michael Miles [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, August 06, 2003 10:24 AM
Subject: RE: [squid-users] Squid and FTP


Nicolas,

Is this a problem with the IE client and related programs?  Check the
proxy server settings in IE, by default those settings are applied to
FTP as well.  Also, you can except specific protocols from the proxy if
desired.

Regards,

Michael

-Original Message-
From: Nicolas Ross [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 06, 2003 9:19 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] Squid and FTP








[squid-users] squid log and ip source

2003-07-11 Thread Nicolas Scheffer
Hi,

We want to put in an appliance to accelerate and compress content for HTTP 
traffic.
The appliance sits in front of a squid server, and we have a problem with 
the logs on the squid.
The appliance allows keeping the source IP in the logs of the proxy 
server/web server: it injects a new field into the HTTP header (the value 
of this field contains the source IP), and we just need to change the log 
format for Apache (%h -> %{name_of_the_field}i), IIS (there is a DLL), 
NetApp, etc.

How do we do it for Squid? Is it possible?
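In Squid 2.6 and later this can be done with a custom logformat that substitutes a request-header field for the connection's source address; a sketch, assuming the appliance injects an X-Forwarded-For-style header (substitute whatever header name the appliance actually uses):

```
# the default native format, but with %>a replaced by the injected header field
logformat appliance %ts.%03tu %6tr %{X-Forwarded-For}>h %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
access_log /var/log/squid/access.log appliance
```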

Thanks

Regards

Nicolas Scheffer



RE: [squid-users] Performance and stupid questions

2003-06-11 Thread Chaillot Nicolas
I tried squid on another box: IBM xSeries 232, 1.13 GHz, 768 MB RAM,
1 hard drive (RAID0) for the system,
3 hard drives (RAID0) for the squid cache. noatime, ReiserFS.
Red Hat Linux 9 out-of-the-box, no firewall, kernel 2.4.20, squid built by me
with these options:
--enable-external-acl-helpers=winbind_group \
--enable-cache-digests \
--enable-async-io \
--enable-storeio=diskd,ufs \
--enable-auth=ntlm,basic \
--enable-snmp \
--enable-poll \
--enable-linux-netfilter \
--enable-ssl \
--with-openssl=/usr/kerberos \
--enable-basic-auth-helpers=winbind \
--enable-ntlm-auth-helpers=winbind \
--enable-ntlm-fail-open \
--prefix=/usr \
--exec-prefix=/usr \
--bindir=/usr/sbin \
--sbindir=/usr/sbin \
--sysconfdir=/etc/squid \
--datadir=/usr/share \
--includedir=/usr/include \
--libdir=/usr/lib \
--libexecdir=/usr/lib/squid \
--localstatedir=/var \
--sharedstatedir=/usr/com \
--mandir=/usr/share/man \
--infodir=/usr/share/info

Same configuration as before (see my first post for it).

I still have the same level of performance: it plateaus at about 200 req/sec.
I/O is not a problem (monitoring this with sar shows me that everything is
normal).
The CPU is 100% busy during the tests, mainly used by the squid process.
Should I consider this the normal level of performance for this
processor?
I should be able to do some tests on a 2.4 GHz Xeon processor next week.

Thank you very much, and once again sorry for this kind of questions.

Nicolas Chaillot

-Original Message-
From: Ralf Hildebrandt [mailto:[EMAIL PROTECTED]
Sent: Friday, 6 June 2003 21:57
To: Chaillot Nicolas
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Performance and stupid questions


* Chaillot Nicolas [EMAIL PROTECTED]:

 Kernel is 2.4.20-SMP (directly from Redhat 9 ).

In that case some other processes can utilize the other processor --
maybe the dns-caching component of squid.

  Squid is probably I/O bound. And due to its architecture it cannot
  take advantage of another processor.

 I didn't know that.

It has to fetch and write data from and to the disk.

 That's not so far of what I'm doing: I'm currently in the test period.

Excellent!

 At a real-world load (production use) of 200 connections/s it has a
 load of 0.75.

 You mean 0.75% of CPU Load ??? Impressive !!!
 Is 200 connections/s = 200 requests/sec?

Yes. We use 3 proxies here: 2 of the type I mentioned and one humble
old Sun box with two processors. We split the load by giving one box
all .de domains; the other box does all of .com, while the old box
does the rest.

--
Ralf Hildebrandt (Im Auftrag des Referat V a)   [EMAIL PROTECTED]
Charite Campus MitteTel.  +49 (0)30-450 570-155
Referat V a - Kommunikationsnetze - Fax.  +49 (0)30-450 570-916
AIM: ralfpostfix