[squid-users] Squid3 authentication proxy and method CONNECT SSL

2015-07-01 Thread Alexandre Magnat

Hello,

I use Squid3 (3.1.20) with SquidGuard filtering, tied to user
authentication against OpenLDAP.
But Firefox, Thunderbird, and Chrome (and certainly IE) recurrently
re-ask for the login and password in a popup.


It seems the authentication popup appears when the browser tries a
request with the CONNECT method, like this:
172.16.1.215 - - [01/Jul/2015:10:40:18 +0200] CONNECT 
fhr.data.mozilla.com:443 HTTP/1.1 407 3812 TCP_DENIED:NONE

or like this:
172.16.1.207 - - [01/Jul/2015:10:39:40 +0200] CONNECT 
safebrowsing.google.com:443 HTTP/1.1 407 3824 TCP_DENIED:NONE



But I think I have configured Squid3 correctly to accept this kind of
request:


acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports


It's an annoying problem for my users to get this kind of popup 4 or 5
times per day :-(

Does anybody have an idea to help me resolve this?



--
Cordialement,
Alexandre Magnat
Ingénieur Systèmes

MECAPROTEC Industries
34, Boulevard de Joffrery
BP 30204 – 31605 MURET Cedex
Tél: +33(0)5.61.51.85.92

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid3 authentication proxy and method CONNECT SSL

2015-07-01 Thread Alexandre Magnat

Hi Amos;

Thank you for this complete response.
You're right, I have to upgrade my Debian soon :)

For the conf, I have put these values in my squid.conf:

 auth_param ntlm keep_alive off
 auth_param negotiate keep_alive off

but it seems it's not working (one user has called me). To be sure,
I'm waiting for more user feedback.


Alex



On 01/07/2015 13:58, Amos Jeffries wrote:

On 1/07/2015 8:55 p.m., Alexandre Magnat wrote:

Hello,

I use Squid3 (3.1.20)

Please upgrade.


with SquidGuard filtering, tied to user
authentication against OpenLDAP.
But Firefox, Thunderbird, and Chrome (and certainly IE) recurrently
re-ask for the login and password in a popup.

It seems the authentication popup appears when the browser tries a
request with the CONNECT method, like this:
172.16.1.215 - - [01/Jul/2015:10:40:18 +0200] CONNECT
fhr.data.mozilla.com:443 HTTP/1.1 407 3812 TCP_DENIED:NONE
or like this:
172.16.1.207 - - [01/Jul/2015:10:39:40 +0200] CONNECT
safebrowsing.google.com:443 HTTP/1.1 407 3824 TCP_DENIED:NONE


1) no credentials were presented. Thus 407 - Auth required.

OR

2) credentials presented were rejected by the auth system. Thus 407 -
Auth requires different credentials (or scheme).

OR

3) NTLM or Negotiate handshake underway. Thus 407 - Auth requires
handshake completion.



But I think I have configured Squid3 correctly to accept this kind of
request:

acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports


Those lines have nothing to do with auth. They are for rejecting
connection attempts to ports other than 443.
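
For comparison, here is a minimal sketch of a ruleset in which CONNECT
requests actually reach an authentication rule. The helper path, LDAP
parameters, and ACL names below are placeholders, not taken from your
configuration:

```
# Hypothetical basic-auth setup against LDAP; adjust helper path and options
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "dc=example,dc=com" ldap.example.com
auth_param basic children 5
acl authenticated proxy_auth REQUIRED

acl SSL_ports port 443
acl CONNECT method CONNECT

http_access deny CONNECT !SSL_ports    # reject CONNECT to non-443 ports
http_access allow authenticated        # everything else needs credentials
http_access deny all
```

The point is the ordering: the deny rule only filters ports, and a later
allow rule is what actually challenges for credentials.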


It's an annoying problem for my users to get this kind of popup 4 or 5
times per day :-(
Does anybody have an idea to help me resolve this?


For Firefox and Thunderbird it may be
https://bugzilla.mozilla.org/show_bug.cgi?id=318253. I'm not sure how
long it will take Mozilla to get a fixed version of their software out.
At least they have now finally found the problem.

Chrome and IE may have similar issues. They all tend to copy each others
behaviour with things like this.

Meanwhile there is a workaround that should work; add whichever is
relevant to your config:
  auth_param ntlm keep_alive off
  auth_param negotiate keep_alive off

Amos





Re: [squid-users] Blocking specific URL

2014-07-10 Thread Alexandre
I imagine it is not cached because you either don't have caching enabled
or the size of the video is larger than the maximum cached object size.
This is defined by maximum_object_size (default is 4 MB). Increasing
this for everything will obviously have some impact.

I don't know if you can force squid to cache a particular piece of content (?)

Concerning blocking the specific URL: someone correct me if I am wrong,
but I don't believe you can do this with squid alone.
The squid ACL system can apparently block per domain:
http://wiki.squid-cache.org/SquidFaq/SquidAcl

What I recommend is to look into a URL rewriter (i.e. a filter).
squidGuard is the one I use, and it is quite popular.
Essentially you install squidGuard and set up its config file to filter
according to your blacklist/whitelist.

* http://www.squidguard.org/

Then you need to define squidGuard in your squid config as the URL rewriter:

url_rewrite_program /usr/bin/squidGuard


Obviously this is a bit of work for just one URL, but if you think you
will need to block more URLs in the future, it is the way to go IMO.
squidGuard has some performance overhead, but I believe it is small even
with fairly large lists.
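
For illustration, a minimal squidGuard.conf sketch; the paths and list
names are placeholders:

```
# where the compiled blacklist databases and logs live (placeholder paths)
dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

dest blacklist {
    domainlist blacklist/domains
    urllist    blacklist/urls
}

acl {
    default {
        # let everything through except entries on the blacklist
        pass !blacklist all
        redirect http://localhost/blocked.html
    }
}
```

After editing the config, the lists are compiled with `squidGuard -C all`
and squid is reconfigured to pick up the rewriter.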

Alexandre


On 10/07/14 09:27, Eliezer Croitoru wrote:
 Why don't you cache it?
 Take a look at:
 https://redbot.org/?uri=http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm


 Eliezer

 On 07/10/2014 10:21 AM, Andreas Westvik wrote:
 So this is driving me crazy. Some of my users are playing battlefield
 4 and battlefield have this server browsing page that has webm
 background.
 Turns out this video downloads every few seconds, and that adds up to
 about 8 GB every day.
 Here is the URL:
 http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

 Now, I don't want to block http://eaassets-a.akamaihd.net/ since
 updates and such come from this CDN, and I don't want to block the
 webm file type.
 And I can't for the life of me figure out how to block this specific URL?
 Google gives me only what I don't want to do.

 Any pointers?

 -Andreas




Re: [squid-users] Blocking specific URL

2014-07-10 Thread Alexandre
My bad. I need to check squid ACLs in more detail.

I guess squidGuard's main advantage is speed when dealing with large
lists of URLs, then.

Alexandre

On 10/07/14 14:31, Leonardo Rodrigues wrote:
 On 10/07/14 09:04, Alexandre wrote:
 Concerning blocking the specific URL: someone correct me if I am wrong,
 but I don't believe you can do this with squid alone.
 The squid ACL system can apparently block per domain:
 http://wiki.squid-cache.org/SquidFaq/SquidAcl


 Of course you can block specific URLs using only squid ACL options!

 #   acl aclname url_regex [-i] ^http:// ... # regex matching
 on whole URL
 #   acl aclname urlpath_regex [-i] \.gif$ ...   # regex
 matching on URL path

 if the URL is:

 http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

 then something like:

 acl blockedurl url_regex -i akamaihd\.net\/battlelog\/background-videos\/
 http_access deny blockedurl

 That should do it! And I did not even include the filename, which, I
 imagine, can change between different stages.
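
A variation on the same idea keeps the regex cheap by testing the
destination domain first, so the path regex only runs for requests to
that CDN (the ACL names here are illustrative):

```
acl cdn dstdomain .akamaihd.net
acl bgvideos urlpath_regex -i ^/battlelog/background-videos/
http_access deny cdn bgvideos
```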






[squid-users] Memory leak with reconfigure

2014-06-26 Thread Alexandre
* Various memory leaks



[squid-users] la belle affaire......

2014-01-15 Thread Alexandre Chappaz
Hello,

that's it, I'm selling my car; here is the ad:

http://www.leboncoin.fr/voitures/603661288.htm

Lots of people are replying wanting to ship the car off to Africa, but if
I can make someone in France happy and close the deal without any
hassle, that suits me! Please pass it along.


Regards


Re: [squid-users] squid 3.4.0.2 + smp + rock storage error

2013-11-22 Thread Alexandre Chappaz
Hi,

I added a loop waiting for the end of all squid processes (with a
30-second limit; I don't want to loop forever) and it did the trick:

for i in {1..30}
do
sleep 1
pidof squid > /dev/null
pssquid=$?
if [ $pssquid -eq 0 ]; then
echo "waiting for squid -z processes to finish" >> /var/tmp/demarrage-squid.txt
else
echo "all squid processes have finished" >> /var/tmp/demarrage-squid.txt
break
fi
done
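
The same wait-with-timeout pattern can be factored into a small reusable
function; this is a sketch, with the process name and timeout as
parameters:

```shell
#!/bin/sh
# wait_for_exit NAME TIMEOUT_SECONDS
# Poll once per second until no process called NAME remains.
# Returns 0 when the process is gone, 1 if the timeout expires first.
wait_for_exit() {
    name="$1"; timeout="$2"; i=0
    while [ "$i" -lt "$timeout" ]; do
        if ! pidof "$name" > /dev/null 2>&1; then
            return 0            # process has exited
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1                    # still running after the timeout
}

# Example: a name that matches no running process returns immediately
if wait_for_exit "no-such-process-name" 3; then
    echo "process has exited"
fi
```

An init script can then call `wait_for_exit squid 30` between the
`squid -z` step and the real daemon start.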


Thanks for your input.

Maybe this info has its place here:
http://wiki.squid-cache.org/Features/SmpScale#Troubleshooting


Regards
Alex

2013/11/21 Alexandre Chappaz alexandrechap...@gmail.com:
 Thanks and yes this is exactly what we are doing.
 I will modify the init script so that it waits for the effective end of
 the squid -z run before starting the daemon.

 On 20 Nov 2013 18:16, Alex Rousskov rouss...@measurement-factory.com
 wrote:

 On 11/20/2013 02:19 AM, Alexandre Chappaz wrote:

  I have the same kind of error, but what bugs me is that I cannot
  reproduce it systematically. I am really wondering if this is a
  permission problem on the shm mount point and/or the /var/run/squid
  permissions:
 
  sometimes the service starts normally (worker kids stay up) and
  sometimes some or all of the worker kids die with this error:
 
  FATAL: Ipc::Mem::Segment::open failed to
  shm_open(/squid-cache_mem.shm): (2) No such file or directory.


 This is usually caused by two SMP Squid instances running, which is
 usually caused by incorrect squid -z application in the system
 startup/service scripts. YMMV, but the logs you posted later seem to
 suggest that it is exactly what is happening in your case.

 Do you run squid -z from the system startup/service script? If yes, does
 the script assume that squid -z ends when the squid -z command returns?
 If yes, the script should be modified to avoid that assumption because,
 in recent Squid releases, the squid -z instance continues to run (in the
 background) and clashes with the regular squid instance started by the
 same script a moment later.

 There was a recent squid-dev discussion about fixing squid -z. I am not
 sure there was a strong consensus regarding the best solution, but I
 hope that squid -z will start doing nothing (Squid will just exit with a
 warning message about the deprecated option) in the foreseeable future,
 while Squid instances will be capable of creating missing directories at
 runtime, when needed (and allowed) to do so.

 More details and a call for volunteers at
  http://www.squid-cache.org/mail-archive/squid-dev/201311/0017.html


 HTH,

 Alex.




Re: [squid-users] squid 3.4.0.2 + smp + rock storage error

2013-11-20 Thread Alexandre Chappaz
Hi,

I have the same kind of error, but what bugs me is that I cannot
reproduce it systematically. I am really wondering if this is a
permission problem on the shm mount point and/or the /var/run/squid
permissions:

sometimes the service starts normally (worker kids stay up) and
sometimes some or all of the worker kids die with this error:

FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-cache_mem.shm): (2) No such file or directory.



attached is the cache.log; the squid.conf is below.

Best regards


# for debugging (do not set more than 2)
#debug_options ALL,2

# users
cache_effective_user nobody
cache_effective_group nobody


# access.log format
strip_query_terms off
#logformat Squid  %ts.%03tu %6tr %a %Ss/%Hs %st %rm %ru %un %Sh/%A %mt
logformat PAS-Bdx %ts.%03tu %6tr %a %Ss/%Hs %st %rm %ru %un %Sh/%A
%mt %rv %tl %{Referer}h %{User-Agent}h

# paths
coredump_dir /var/cache/squid
pid_filename /var/run/squid/squid.pid
access_log stdio:/var/log/squid/access.log PAS-Bdx
cache_log /var/log/squid/cache.log
cache_store_log none
mime_table /etc/squid/mime.conf
error_directory /etc/squid/errors
error_default_language fr
err_page_stylesheet /etc/squid/errorpage.css

# hosts file
hosts_file /etc/hosts

# SNMP
acl snmpcommunity snmp_community read_only_user
snmp_access allow snmpcommunity
snmp_port 3401

###
# PROXY BEHAVIOR #
###

#SMP
workers 4

# listening ports
http_port 3128

# localhost may use the cache manager
http_access allow localhost manager
http_access deny manager

# localhost may purge the cache
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

# intranet requests are returned as errors
acl ip_intranet dst 10.0.0.0/8
http_access deny ip_intranet


acl PLSU_SIE_USERAGENT browser PLSU_SIE
acl PLSU_SIE_DEST dstdomain /etc/squid/acl/dest/PLSU_SIE.dst

http_access allow PLSU_SIE_USERAGENT PLSU_SIE_DEST
http_access deny PLSU_SIE_USERAGENT

# definition of the parent squids' VIP
#cache_peer 192.168.1.129 parent 3128 0 default no-query no-digest
cache_peer 192.168.1.201 parent 3128 0 sourcehash no-query no-digest
cache_peer 192.168.1.202 parent 3128 0 sourcehash no-query no-digest
cache_peer 192.168.1.203 parent 3128 0 sourcehash no-query no-digest
cache_peer 192.168.1.204 parent 3128 0 sourcehash no-query no-digest


# Time Out / Time To Live
negative_ttl 1 seconds
read_timeout 15 minutes
request_timeout 5 minutes
client_lifetime 4 hours
positive_dns_ttl 2 hours
negative_dns_ttl 5 minutes
shutdown_lifetime 5 seconds
dns_nameservers 127.0.0.1

# misc
ftp_passive on
ftp_epsv off
logfile_rotate 2
request_header_access Via deny all
request_header_access X-Forwarded-For allow all
refresh_all_ims on

###
# CACHE BEHAVIOR #
###

# cache refresh
memory_cache_shared on
cache_mem 2 GB
max_filedesc 65535
maximum_object_size 512 MB
maximum_object_size_in_memory 2048 KB
ipcache_size 8192
fqdncache_size 8192

# cache definition
# 8 GB of shared rock cache, for objects up to 32 KB
cache_dir rock /var/cache/squid/mem/ 8192 max-size=32768

if ${process_number} =1
# filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256
min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif
if ${process_number} =2
# filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256
min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif
if ${process_number} =3
# filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256
min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif
if ${process_number} =4
# filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256
min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif

# dynamic pages are not cached
acl QUERY urlpath_regex cgi-bin \? \.fcgi \.cgi \.pl \.php3 \.asp \.php \.do
no_cache deny QUERY

# override cache-management rules for certain high-traffic domains
acl forcedcache urlpath_regex .lefigaro\.fr .leparisien\.fr
.20minutes\.fr .lemde\.fr .lemonde\.fr .lepoint\.fr .lexpress\.fr
.meteofrance\.com .ouest-france\.fr .nouvelobs\.com .wikimedia\.org

Re: [squid-users] squid 3.4.0.2 + smp + rock storage error

2013-11-20 Thread Alexandre Chappaz
here it is

2013/11/20 Eliezer Croitoru elie...@ngtech.co.il:
 Hey Alexandre,

 I do not see any cache.log attachment here.
 Please resend it.

 Thanks,
 Eliezer


 On 20/11/13 11:19, Alexandre Chappaz wrote:

 Hi,

  I have the same kind of error, but what bugs me is that I cannot
  reproduce it systematically. I am really wondering if this is a
  permission problem on the shm mount point and/or the /var/run/squid
  permissions:

  sometimes the service starts normally (worker kids stay up) and
  sometimes some or all of the worker kids die with this error:

 FATAL: Ipc::Mem::Segment::open failed to
 shm_open(/squid-cache_mem.shm): (2) No such file or directory.



 attached is the cache.log, and here below the squid.conf.

 Best regards




cache.log.bz2
Description: BZip2 compressed data


[squid-users] PURGE not purging with memory_cache_shared on

2013-07-30 Thread Alexandre Chappaz
Hi,

from what I have seen, with v3.3.8, the PURGE method does not purge if
memory_cache_shared is on.

posting to pastebin two logs of the same requests:

debug log with 1 worker /  memory_cache_shared on :

cache.log is here :
http://pastebin.archlinux.fr/467269


corresponding to these requests :
1375190125.156 12 ::1 TCP_MISS/200 1295 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html -
FIRSTUP_PARENT/10.154.61.1 text/html 1.1 30/Jul/2013:15:15:25 +0200
- Wget/1.14 (linux-gnu)
1375190127.367  2 ::1 TCP_MISS/200 255 PURGE
http://bofip.impots.gouv.fr/bofip/1-PGP.html - HIER_NONE/- - 1.0
30/Jul/2013:15:15:27 +0200 - squidclient/3.3.8
1375190130.735  2 ::1 TCP_MEM_HIT/200 1390 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html - HIER_NONE/- text/html
1.1 30/Jul/2013:15:15:30 +0200 - Wget/1.14 (linux-gnu)


this is wrong: the PURGE request should have cleared the object from
the cache, so the last GET should be a MISS, but it is a HIT.

Now the same with memory_cache_shared off :

cache.log
http://pastebin.archlinux.fr/467270

corresponding to these requests :
1375190414.749 14 ::1 TCP_MISS/200 1295 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html -
FIRSTUP_PARENT/10.154.61.1 text/html 1.1 30/Jul/2013:15:20:14 +0200
- Wget/1.14 (linux-gnu)
1375190417.550  3 ::1 TCP_MISS/200 255 PURGE
http://bofip.impots.gouv.fr/bofip/1-PGP.html - HIER_NONE/- - 1.0
30/Jul/2013:15:20:17 +0200 - squidclient/3.3.8
1375190420.694 15 ::1 TCP_MISS/200 1295 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html -
FIRSTUP_PARENT/10.154.61.1 text/html 1.1 30/Jul/2013:15:20:20 +0200
- Wget/1.14 (linux-gnu)


this is right: the PURGE request cleared the object from the cache, so
the last GET is a MISS.
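
For reference, the raw request that a purge tool sends can be built by
hand. This sketch only constructs the request text; `build_purge_request`
is a hypothetical helper, and actually sending it (for example by piping
to `nc localhost 3128`) assumes a proxy that permits the PURGE method
from your host:

```shell
#!/bin/sh
# build_purge_request URL: print a raw HTTP/1.0 PURGE request for URL
build_purge_request() {
    printf 'PURGE %s HTTP/1.0\r\nAccept: */*\r\n\r\n' "$1"
}

build_purge_request "http://bofip.impots.gouv.fr/bofip/1-PGP.html"
```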



Guess I should file a bug on this.

Regards
Alex


Re: [squid-users] object not being cached

2013-07-09 Thread Alexandre Chappaz
thanks, it's working now.

Regards
Alex

2013/7/8 Amos Jeffries squ...@treenet.co.nz:
 On 9/07/2013 12:57 a.m., Alexandre Chappaz wrote:

 ###
 # CACHE BEHAVIOR #
 ###

 # cache definition
 # 8 GB of shared rock cache, for objects up to 32 KB
 cache_dir rock /var/cache/squid/mem/ 8192 max-size=32768

 if ${process_number} =1
 cache_dir aufs /var/cache/squid/mem/W${process_number} 3000 16 256
 min-size=32768 max-size=131072
 cache_dir aufs /var/cache/squid/W${process_number} 9000 16 256
 min-size=131072
 endif
 if ${process_number} =2
 cache_dir aufs /var/cache/squid/mem/W${process_number} 3000 16 256
 min-size=32768 max-size=131072
 cache_dir aufs /var/cache/squid/W${process_number} 9000 16 256
 min-size=131072
 endif
 if ${process_number} =3
 cache_dir aufs /var/cache/squid/mem/W${process_number} 3000 16 256
 min-size=32768 max-size=131072
 cache_dir aufs /var/cache/squid/W${process_number} 9000 16 256
 min-size=131072
 endif
 if ${process_number} =4
 cache_dir aufs /var/cache/squid/mem/W${process_number} 3000 16 256
 min-size=32768 max-size=131072
 cache_dir aufs /var/cache/squid/W${process_number} 9000 16 256
 min-size=131072
 endif

 # dynamic pages are not cached
 acl QUERY urlpath_regex cgi-bin \? \.fcgi \.cgi \.pl \.php3 \.asp \.php
 \.do
 no_cache deny QUERY

 # cache refresh
 memory_cache_shared on
 cache_mem 4 GB
 max_filedesc 65535
 maximum_object_size 512 MB
 maximum_object_size_in_memory 2048 KB


 You may be hitting a strange issue we have not yet figured out properly.
 In recent Squid these limits need to be placed *above* the cache_dir
 lines, or the maximum object size the cache_dir will store defaults to 4 MB.
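
Applied to the configuration quoted above, that ordering would look like
this sketch (sizes copied from the quoted config, cache_dir lines
abbreviated):

```
# declare the object-size limits first ...
maximum_object_size 512 MB
maximum_object_size_in_memory 2048 KB

# ... then the cache_dir lines, so they pick up the intended maximum
cache_dir rock /var/cache/squid/mem/ 8192 max-size=32768
cache_dir aufs /var/cache/squid/W1 9000 16 256 min-size=131072
```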


 Amos


Re: [squid-users] object not being cached

2013-07-08 Thread Alexandre Chappaz
Maximum Size: 9216000 KB
Current Size: 8294132.00 KB
Percent Used: 90.00%
Filemap bits in use: 24771 of 32768 (76%)
Filesystem Space in use: 39747760/59850496 KB (66%)
Filesystem Inodes in use: 117081/15451392 (1%)
Flags:
Removal policy: lru
LRU reference age: 10.99 days
} by kid4

by kid5 {
Store Directory Statistics:
Store Entries  : 52
Maximum Swap Size  : 8388608 KB
Current Store Swap Size: 8387184.00 KB
Current Capacity   : 99.98% used, 0.02% free

Store Directory #0 (rock): /var/cache/squid/mem/
FS Block Size 1024 Bytes

Maximum Size: 8388608 KB
Current Size: 8387184.00 KB 99.98%
Maximum entries:262143
Current entries:262099 99.98%
Pending operations: 1 out of 0
Flags:
} by kid5





Thanks!
Alex

2013/7/5 Amos Jeffries squ...@treenet.co.nz:
 On 5/07/2013 3:33 a.m., Alexandre Chappaz wrote:

 Hi,

 I have this object not being cached, and I can't understand why.


 Let's start with... how are you identifying that it is not being cached?


 http://download.cdn.mozilla.net/pub/mozilla.org/firefox/releases/22.0/update/win32/fr/firefox-22.0.complete.mar

 redbot.org confirms this is a cacheable resource with no protocol problems
 visible.


 I have this cache_dir set for objects of size > 128 KB:

 cache_dir aufs /var/cache/squid/W${process_number} 9000 16 256
 min-size=131072


 SMP macros in use I see. How many workers do you have?

 Also, what *other* size limits on object sizes are in your configuration
 file?
  please list those *and* all cache_dir lines in your configuration file in
 exactly the order they appear in the config.





 the headers of the object :
HTTP/1.1 200 OK
Last-Modified: Tue, 18 Jun 2013 15:38:37 GMT
ETag: dcfc8c-19e4648-4df6f81aff940
Server: Apache
X-Backend-Server: ftp3.dmz.scl3.mozilla.com
Content-Type: application/octet-stream
Accept-Ranges: bytes
Access-Control-Allow-Origin: *
X-Cache-Info: caching
Content-Length: 27149896
Cache-Control: max-age=266561
Expires: Sat, 06 Jul 2013 18:26:03 GMT
Date: Wed, 03 Jul 2013 16:23:22 GMT


 the storedir is full, but I understand that since the object is
 requested very often, it should replace another one in the cache_dir
 (cache_replacement_policy heap LFUDA).


 You say *the storedir*... but there are multiple store directories, yes?
 One for each Squid worker process.



 Do you have any hint on how to make sure the object gets cached?
 Thanks


 More info needed.

 Please also list your refresh_pattern rules.


 Amos


[squid-users] object not being cached

2013-07-04 Thread Alexandre Chappaz
Hi,

I have this object not being cached, and I can't understand why.
http://download.cdn.mozilla.net/pub/mozilla.org/firefox/releases/22.0/update/win32/fr/firefox-22.0.complete.mar

I have this cache_dir set for objects of size > 128 KB:

cache_dir aufs /var/cache/squid/W${process_number} 9000 16 256 min-size=131072


the headers of the object :
  HTTP/1.1 200 OK
  Last-Modified: Tue, 18 Jun 2013 15:38:37 GMT
  ETag: dcfc8c-19e4648-4df6f81aff940
  Server: Apache
  X-Backend-Server: ftp3.dmz.scl3.mozilla.com
  Content-Type: application/octet-stream
  Accept-Ranges: bytes
  Access-Control-Allow-Origin: *
  X-Cache-Info: caching
  Content-Length: 27149896
  Cache-Control: max-age=266561
  Expires: Sat, 06 Jul 2013 18:26:03 GMT
  Date: Wed, 03 Jul 2013 16:23:22 GMT


the storedir is full, but I understand that since the object is
requested very often, it should replace another one in the cache_dir
(cache_replacement_policy heap LFUDA).

Do you have any hint on how to make sure the object gets cached?
Thanks


[squid-users] sorry for having spammed you

2013-06-03 Thread Alexandre Chappaz
sorry for spamming you...

here is the video that should explain the origin of the word SPAM; to
console you, the word apparently comes from here (thanks Luc):

http://www.youtube.com/watch?v=anwy2MPT5RE&feature=youtube_gdata_player

Alex


Re: [squid-users] Denied pages for HTTPS requests

2013-05-14 Thread Alexandre Chappaz
Hi,

browsers do not react the way you expect to a redirection after an
HTTPS request (a request with the CONNECT method); they do not follow
the redirection.

2013/5/14 FredB fredbm...@free.fr:

 It depends... are these reverse-proxy https_port requests?
 intercepted
 https_port requests? intercepted and ssl-bumped https_port requests?
 ssl-bumped CONNECT requests? or just regular CONNECT requests?


 Amos


 Just regular CONNECT requests and basic ACLs, nothing more

 Thanks


[squid-users] Re: assertion failed

2013-04-22 Thread Alexandre Chappaz
Hi,

can anyone explain to me the meaning/reason of this assertion, at
src/fs/ufs/ufscommon.cc line 706:
 ..
 assert(sde);
 ..

I got the backtrace out:

(gdb) bt
#0  0x2ba9da45d265 in raise () from /lib64/libc.so.6
#1  0x2ba9da45ed10 in abort () from /lib64/libc.so.6
#2  0x0050ae66 in xassert (msg=0x72992f sde, file=0x729918
ufs/ufscommon.cc, line=706) at debug.cc:567
#3  0x00669c18 in RebuildState::undoAdd (this=value optimized
out) at ufs/ufscommon.cc:706
#4  0x0066b7f4 in RebuildState::rebuildFromSwapLog
(this=0x16090668) at ufs/ufscommon.cc:570
#5  0x0066bed7 in RebuildState::rebuildStep (this=0x16090668)
at ufs/ufscommon.cc:411
#6  0x0066c099 in RebuildState::RebuildStep (data=0x366c) at
ufs/ufscommon.cc:384
#7  0x0064ac68 in AsyncCallQueue::fireNext (this=value
optimized out) at AsyncCallQueue.cc:54
#8  0x0064adc9 in AsyncCallQueue::fire (this=0x366c) at
AsyncCallQueue.cc:40
#9  0x00526fa1 in EventLoop::dispatchCalls (this=value
optimized out) at EventLoop.cc:154
#10 0x005271a1 in EventLoop::runOnce (this=0x74ac93f0) at
EventLoop.cc:119
#11 0x00527338 in EventLoop::run (this=0x74ac93f0) at
EventLoop.cc:95
#12 0x005911b3 in SquidMain (argc=value optimized out,
argv=value optimized out) at main.cc:1501
#13 0x00591443 in SquidMainSafe (argc=13932, argv=0x366c) at
main.cc:1216
#14 0x2ba9da44a994 in __libc_start_main () from /lib64/libc.so.6




Thank you
Alex

2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 sorry, I meant One kid fails to start giving the following assertion

 2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 Hi,

 In our SMP-enabled environment, I have one kid to start giving the
 following assertion:
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde

 I guess it is something related to the store / store rebuilding.
 Maybe a malformed object in the cache store?
 Here is the relevant part of the log:

 2013/04/18 04:03:42 kid1| Store rebuilding is 5.57% complete
 2013/04/18 04:03:42 kid1| Done reading /var/cache/squid/W1 swaplog
 (18735 entries)
 2013/04/18 04:03:43 kid1| Accepting SNMP messages on 0.0.0.0:3401
 2013/04/18 04:03:43 kid1| Accepting HTTP Socket connections at
 local=0.0.0.0:3128 remote=[::] FD 12 flags=1
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde


 a core file is generated, but it seems to be invalid; gdb says:

 gdb /usr/local/squid/sbin/squid 004/core.758
 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-32.el5_6.2)
 Copyright (C) 2009 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show copying
 and show warranty for details.
 This GDB was configured as x86_64-redhat-linux-gnu.
 For bug reporting instructions, please see:
 http://www.gnu.org/software/gdb/bugs/...
 Reading symbols from /usr/local/squid/sbin/squid...done.
 Attaching to program: /usr/local/squid/sbin/squid, process 4
 ptrace: Opération non permise.
 BFD: Warning: /root/004/core.758 is truncated: expected core file size
= 58822656, found: 20480.
 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging
 symbols found)...done.
 Loaded symbols for /lib64/ld-linux-x86-64.so.2
 Failed to read a valid object file image from memory.
 Core was generated by `(squid-1) -S -f /etc/squid/squid.conf'.
 Program terminated with signal 6, Aborted.
 #0  0x2b6babe7a265 in ?? ()
 (gdb) bt
 Cannot access memory at address 0x7fff38b1cf98
 (gdb) quit






 Any clue on how to get a usable core file and/or on the meaning of the
 assertion ?


 Thanks
 Alex


[squid-users] assertion failed

2013-04-18 Thread Alexandre Chappaz
Hi,

In our SMP-enabled environment, I have one kid to start giving the
following assertion:
2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde

I guess it is something related to the store / store rebuilding.
Maybe a malformed object in the cache store?
Here is the relevant part of the log:

2013/04/18 04:03:42 kid1| Store rebuilding is 5.57% complete
2013/04/18 04:03:42 kid1| Done reading /var/cache/squid/W1 swaplog
(18735 entries)
2013/04/18 04:03:43 kid1| Accepting SNMP messages on 0.0.0.0:3401
2013/04/18 04:03:43 kid1| Accepting HTTP Socket connections at
local=0.0.0.0:3128 remote=[::] FD 12 flags=1
2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde


a core file is generated, but it seems to be invalid; gdb says:

gdb /usr/local/squid/sbin/squid 004/core.758
GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-32.el5_6.2)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-redhat-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/local/squid/sbin/squid...done.
Attaching to program: /usr/local/squid/sbin/squid, process 4
ptrace: Opération non permise.
BFD: Warning: /root/004/core.758 is truncated: expected core file size
= 58822656, found: 20480.
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging
symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Failed to read a valid object file image from memory.
Core was generated by `(squid-1) -S -f /etc/squid/squid.conf'.
Program terminated with signal 6, Aborted.
#0  0x2b6babe7a265 in ?? ()
(gdb) bt
Cannot access memory at address 0x7fff38b1cf98
(gdb) quit






Any clue on how to get a usable core file and/or on the meaning of the
assertion ?
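
One common cause of a truncated core (the "expected core file size ...
found: 20480" warning) is the core-size rlimit of the shell that started
squid. A sketch of checking and raising it before launching squid;
whether "unlimited" is accepted depends on the system's hard limit:

```shell
#!/bin/sh
# Show the current core-size limit: 0 means no core files at all,
# and a small value truncates dumps exactly as gdb reported.
echo "core limit before: $(ulimit -c)"

# Try to lift the limit for this shell and its children (may be refused)
ulimit -c unlimited 2>/dev/null || echo "could not raise core limit"

echo "core limit after: $(ulimit -c)"
# start squid from this same shell so it inherits the new limit
```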


Thanks
Alex


[squid-users] Re: assertion failed

2013-04-18 Thread Alexandre Chappaz
sorry, I meant One kid fails to start giving the following assertion

2013/4/18 Alexandre Chappaz alexandrechap...@gmail.com:
 Hi,

 In our SMP-enabled environment, I have one kid to start giving the
 following assertion:
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde

 I guess it is something related to the store / store rebuilding.
 Maybe a malformed object in the cache store?
 Here is the relevant part of the log:

 2013/04/18 04:03:42 kid1| Store rebuilding is 5.57% complete
 2013/04/18 04:03:42 kid1| Done reading /var/cache/squid/W1 swaplog
 (18735 entries)
 2013/04/18 04:03:43 kid1| Accepting SNMP messages on 0.0.0.0:3401
 2013/04/18 04:03:43 kid1| Accepting HTTP Socket connections at
 local=0.0.0.0:3128 remote=[::] FD 12 flags=1
 2013/04/18 04:03:43 kid1| assertion failed: ufs/ufscommon.cc:706: sde


 A core file is generated, but it seems to be invalid; gdb says:

 gdb /usr/local/squid/sbin/squid 004/core.758
 GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-32.el5_6.2)
 Copyright (C) 2009 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type show copying
 and show warranty for details.
 This GDB was configured as x86_64-redhat-linux-gnu.
 For bug reporting instructions, please see:
 http://www.gnu.org/software/gdb/bugs/...
 Reading symbols from /usr/local/squid/sbin/squid...done.
 Attaching to program: /usr/local/squid/sbin/squid, process 4
 ptrace: Operation not permitted.
 BFD: Warning: /root/004/core.758 is truncated: expected core file size
= 58822656, found: 20480.
 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging
 symbols found)...done.
 Loaded symbols for /lib64/ld-linux-x86-64.so.2
 Failed to read a valid object file image from memory.
 Core was generated by `(squid-1) -S -f /etc/squid/squid.conf'.
 Program terminated with signal 6, Aborted.
 #0  0x2b6babe7a265 in ?? ()
 (gdb) bt
 Cannot access memory at address 0x7fff38b1cf98
 (gdb) quit






 Any clue on how to get a usable core file, and/or on the meaning of the
 assertion?


 Thanks
 Alex


Re: [squid-users] high traffic with google

2013-04-16 Thread Alexandre Chappaz
Thanks,

what do you mean by adding some headers?

Regards
Alex

2013/4/12 Eliezer Croitoru elie...@ngtech.co.il:
 I suggest you contact squid support; adding some headers might also help in
 this case.

 Regards,
 Eliezer

 - Original Message -
 From: Alexandre Chappaz alexandrechap...@gmail.com
 To: squid-users@squid-cache.org
 Sent: Thursday, April 11, 2013 6:38:04 PM
 Subject: [squid-users] high traffic with google

 Hi,

 we are handling a rather large network (~140K users) and we use one
 unique public IP address for internet traffic. This leads Google to get
 suspicious of us (a captcha with each search).

 Do you know if Google can whitelist us in some way? Where should we contact
 them? Is there any way to smartly bypass this behavior?


 Thanks
 Alex


[squid-users] high traffic with google

2013-04-11 Thread Alexandre Chappaz
Hi,

we are handling a rather large network (~140K users) and we use one
unique public IP address for internet traffic. This leads Google to get
suspicious of us (a captcha with each search).

Do you know if Google can whitelist us in some way? Where should we contact
them? Is there any way to smartly bypass this behavior?


Thanks
Alex


Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Alexandre Chappaz
Hi,

You can activate full debugging by launching

squid -k debug

with the service running, and checking what comes into cache.log.

squid -k parse will audit your config file. Look for WARNING in the
output of this command.

The cache manager can be useful to see the actual activity of your squid:

squidclient localhost mgr:5min

gives you the last 5 minutes of stats (check whether the number of req/s is
coherent with what you expect).
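For scripted monitoring, the same rate can be pulled out of the report
programmatically. A rough sketch in Python, assuming the report contains a
counter line of the form "client_http.requests = N/sec" (exact counter names
depend on your Squid version):

```python
import re

def request_rate(report: str) -> float:
    """Pull the client HTTP request rate (req/sec) out of a
    'squidclient mgr:5min' report; returns 0.0 if the counter is absent."""
    m = re.search(r"client_http\.requests\s*=\s*([0-9.]+)/sec", report)
    return float(m.group(1)) if m else 0.0

# mgr:5min output contains counter lines of roughly this shape:
sample = "client_http.requests = 12.433333/sec"
rate = request_rate(sample)
```

Feeding it the captured output of `squidclient localhost mgr:5min` lets a cron
job alert when the observed rate diverges from what you expect.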


Bonne chance
Alex




2013/3/26 Youssef Ghorbal d...@pasteur.fr:
 Hello,

 We have a Squid 3.1.23 running on a FreeBSD 8.3 (amd64)
 The proxy is used to handle web access for ~2500 workstations, in
 pure proxy/filter (squidGuard) mode with no cache (all disk caching is
 disabled).
 It's not a transparent/intercepting proxy, just a plain explicit proxy
 mode.

 What we see, is that the squid process is using 100% of CPU (userland 
 CPU usage, not kernel) all the time. Even in late night when the whole 
 traffic is very minimalistic.

 What I'm looking for is some advice on how to track down what is 
 causing this CPU misbehaviour. Maybe it's some stupid config option not 
 suitable for this kind of setup, maybe a bug etc.
 What would be the tools/methodology that I can use to profile the
 running process?

 Any help/suggestion would be really appreciated.

 Youssef Ghorbal

 squid -v
 Squid Cache: Version 3.1.23
 configure options:  '--with-default-user=squid' '--bindir=/usr/local/sbin' 
 '--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' 
 '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var/squid' 
 '--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' 
 '--with-pidfile=/var/run/squid/squid.pid' '--enable-removal-policies=lru 
 heap' '--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-epoll' 
 '--disable-translation' '--disable-ecap' '--disable-loadable-modules' 
 '--enable-auth=basic digest negotiate ntlm' '--enable-basic-auth-helpers=DB 
 NCSA PAM MSNT SMB squid_radius_auth LDAP YP' 
 '--enable-digest-auth-helpers=password ldap' 
 '--enable-external-acl-helpers=ip_user session unix_group wbinfo_group 
 ldap_group' '--enable-ntlm-auth-helpers=smb_lm' '--enable-storeio=ufs diskd 
 aufs' '--enable-disk-io=AIO Blocking DiskDaemon DiskThreads' 
 '--enable-delay-pools' '--enable-icap-client' '--enable-kqueue' 
 '--with-large-files' '--enable-stacktraces' '--disable-optimizations' 
 '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' 
 '--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 
 'CC=cc' 'CFLAGS=-pipe -I/usr/local/include -g -g -DLDAP_DEPRECATED' 'LDFLAGS= 
 -L/usr/local/lib' 'CPPFLAGS=' 'CXX=c++' 'CXXFLAGS=-pipe -I/usr/local/include 
 -g -g -DLDAP_DEPRECATED' 'CPP=cpp' 
 --with-squid=/wrkdirs/usr/ports/www/squid31/work/squid-3.1.23 
 --enable-ltdl-convenience



Re: [squid-users] rock squid -k reconfigure

2013-03-22 Thread Alexandre Chappaz
Hi,

Investigating this issue, it appears that the problem comes from
the disker ID in the SwapDir object.
I added these debug lines in SwapDir::active():

...
    // we are inside a disker dedicated to this disk
    debugs(3, 1, "SwapDir::active :: KidIdentifier = " << KidIdentifier <<
           " disker = " << disker << ".");
    if (KidIdentifier == disker)
        return true;



It appears that the disker value is wrong after running squid -k
reconfigure, and hence the active() function returns false.


with a fresh start :

2013/03/22 11:30:29 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:29 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:29 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.
2013/03/22 11:30:29 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.
2013/03/22 11:30:30 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:30 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:30 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.
2013/03/22 11:30:30 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.

with the following processes:

[root@tv alex]# ps aux|grep squid

root 20320  0.0  0.0 1182724 2096 ?Ss   11:30   0:00 squid
-f /etc/squid/squid.conf
proxy20322  0.1  0.2 1183196 11220 ?   S11:30   0:00
(squid-coord-3) -f /etc/squid/squid.conf
proxy20323  0.0  0.2 1184180 11640 ?   S11:30   0:00
(squid-disk-2) -f /etc/squid/squid.conf
proxy20324  0.0  0.2 1188276 11484 ?   S11:30   0:00
(squid-1) -f /etc/squid/squid.conf




After issuing squid -k reconfigure, the ps output is
identical, but the disker ID is set to 3.

2013/03/22 11:31:46 kid3| SwapDir::active :: KidIdentifier = 3disker =  3.
2013/03/22 11:31:46 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.
2013/03/22 11:31:46 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.
2013/03/22 11:31:47 kid3| SwapDir::active :: KidIdentifier = 3disker =  3.
2013/03/22 11:31:47 kid3| SwapDir::active :: KidIdentifier = 3disker =  3.
2013/03/22 11:31:47 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.
2013/03/22 11:31:47 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.


Trying to trace the problem further: the disker ID seems to be set
in the constructor of SwapDir:

SwapDir::SwapDir(char const *aType): theType(aType),
        max_size(0), min_objsize(0), max_objsize(-1),
        path(NULL), index(-1), disker(-1),
        repl(NULL), removals(0), scanned(0),
        cleanLog(NULL)
{
    fs.blksize = 1024;
}



Any hints on where the right call is for creation (or re-creation)
of this object after a reconfigure?


Thanks
Alex

2013/3/18 Alex Rousskov rouss...@measurement-factory.com:
 On 03/18/2013 09:18 AM, Alexandre Chappaz wrote:
 Hi,

 I am using squid 3.2.8-20130304-r11795 with SMP and a rock dir configured.
 After a fresh start, cachemanager:storedir reports :

 by kid5 {
 Store Directory Statistics:
 Store Entries  : 52
 Maximum Swap Size  : 8388608 KB
 Current Store Swap Size: 28176.00 KB
 Current Capacity   : 0.34% used, 99.66% free

 Store Directory #0 (rock): /var/cache/squid/mem/
 FS Block Size 1024 Bytes

 Maximum Size: 8388608 KB
 Current Size: 28176.00 KB 0.34%
 Maximum entries:262143
 Current entries:   880 0.34%
 Pending operations: 1 out of 0
 Flags:
 } by kid5



 for the rock cache_dir.


 After a squid -k reconfigure, without any change in the squid.conf,
 the cachemanager is reporting this :

 by kid5 {
 Store Directory Statistics:
 Store Entries  : 52
 Maximum Swap Size  : 0 KB
 Current Store Swap Size: 0.00 KB
 Current Capacity   : 0.00% used, 0.00% free

 } by kid5



 Is this only a problem with the reporting? Is the rock cachedir still
 in use after the reconfigure / is there a way to check if it is still
 in use?


 Please see Bug 3774. It may be related to your problem.

http://bugs.squid-cache.org/show_bug.cgi?id=3774

 Alex.



Re: [squid-users] squid/SMP

2013-03-21 Thread Alexandre Chappaz
Hi,

There is also one important directory where squid stores the .pid file
and one .ipc file for each process, typically /var/run/squid/.
Make sure this directory is writable by the user squid runs as.
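A quick scripted sanity check of that precondition (a minimal sketch; the
/var/run/squid path is just the typical default mentioned above):

```python
import os

def ipc_dir_ok(path: str) -> bool:
    """True if `path` exists, is a directory, and is writable by the
    current user -- the conditions Squid needs in order to drop its
    .pid and per-process .ipc files there."""
    return os.path.isdir(path) and os.access(path, os.W_OK)
```

For example, ipc_dir_ok("/var/run/squid") should return True (when run as the
squid user) before starting Squid with workers enabled.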

Regards
Alex



2013/3/21 Adam W. Dace colonelforbi...@gmail.com:
 I had this exact problem on a different platform, Mac OS X.

 You probably want to use sysctl to increase the OS-default limits on
 Unix Domain Sockets.
 They're mentioned at the bottom of the squid Wiki page here:
 http://wiki.squid-cache.org/Features/SmpScale

 Please mail the list, if you don't mind, once you try that; I then ran
 into a different problem, but most likely FreeBSD isn't affected.

 Regards,

 On Thu, Mar 21, 2013 at 5:52 AM, Eugene M. Zheganin e...@norma.perm.ru 
 wrote:
 Hi.

 I'm using squid 3.2.6 on a FreeBSD and today I tried to use it's SMP
 feature. I've added 'workers 2' in it's configuration file, checked the
 permission on localstatedir and ran it.
 I got on start

 FATAL: kid2 registration timed out

 and then looks like coordinator started to try to restart kids, but
 unsuccessfully. Obviously, no client requests were served at that time.
 I checked the localstatedir and saw 3 sockets, one from coordinator and two
 from kids - so I'm sure permissions are ok.

 What can I do to debug this feature ?
 I understand 3.2.x is no longer supported and I need to use 3.3.x, but right
 now I'm stuck with FreeBSD ports on my production systems, and there's no
 3.3.x in them yet; I will try to build a 3.3.x release on a test machine.

 Thanks.
 Eugene.



 --
 
 Adam W. Dace colonelforbi...@gmail.com

 Phone: (815) 355-5848
 Instant Messenger: AIM  Yahoo! IM - colonelforbin74 | ICQ - #39374451
 Microsoft Messenger - colonelforbi...@live.com

 Google Profile: http://www.google.com/profiles/ColonelForbin74


[squid-users] rock squid -k reconfigure

2013-03-18 Thread Alexandre Chappaz
Hi,

I am using squid 3.2.8-20130304-r11795 with SMP and a rock dir configured.
After a fresh start, cachemanager:storedir reports :

by kid5 {
Store Directory Statistics:
Store Entries  : 52
Maximum Swap Size  : 8388608 KB
Current Store Swap Size: 28176.00 KB
Current Capacity   : 0.34% used, 99.66% free

Store Directory #0 (rock): /var/cache/squid/mem/
FS Block Size 1024 Bytes

Maximum Size: 8388608 KB
Current Size: 28176.00 KB 0.34%
Maximum entries:262143
Current entries:   880 0.34%
Pending operations: 1 out of 0
Flags:
} by kid5



for the rock cache_dir.


After a squid -k reconfigure, without any change in the squid.conf,
the cachemanager is reporting this :

by kid5 {
Store Directory Statistics:
Store Entries  : 52
Maximum Swap Size  : 0 KB
Current Store Swap Size: 0.00 KB
Current Capacity   : 0.00% used, 0.00% free

} by kid5



Is this only a problem with the reporting? Is the rock cachedir still
in use after the reconfigure / is there a way to check if it is still
in use?

Thank you
Alex


Re: [squid-users] Re: Squid 3.3.2 SMP Problem

2013-03-15 Thread Alexandre Chappaz
Hi,

Which OS do you recommend for running an SMP-enabled version of squid?
Is there a particular kernel version to use or to avoid?
Do you have more specific tuning settings for shm, IPC and UDS
sockets, other than the hints from
http://wiki.squid-cache.org/Features/SmpScale#Troubleshooting

Regards
Alex

2013/3/15 Ahmad ahmed.za...@netstream.ps:
 Hi all,
 I've added the SMP config to my squid.conf;
 I'm using squid 3.3.3.

 But my question is: how do I monitor the performance and make sure my other,
 idle cores are being used?

 here is what i modified :
 smp options###
 # Custom options
 memory_cache_shared off
 #workers 2
 #
 workers 4
 cache_dir rock /squid-cache/rock-1 3000 max-size=31000 max-swap-rate=250
 swap-timeout=350
 cache_dir rock /squid-cache/rock-2 3000 max-size=31000 max-swap-rate=250
 swap-timeout=350
 =
 I also want to ask about another issue:
 can using SMP delay squid's startup? I mean, does the rebuilding
 process take more time when using SMP?
 I wish for some help.

 regards



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-2-SMP-Problem-tp4658906p4659000.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] sharepoint pinning issue?

2013-02-26 Thread Alexandre Chappaz
Hi,

I have found some time to go further in the investigation, and here is
the status right now:
the behavior is the same with only 1 squid (no upstream server), and
is also the same if I use squid as a reverse proxy for the Sharepoint
server.

As read in some threads about this subject, the behaviour differs
depending on the browser / OS.
IE on XP works perfectly, whereas FF on XP or Linux asks for
authentication in a loop whenever trying to upload a relatively big
file to the server.
Activating the debug level and trying to analyse the headers of the
packets gave me quite a headache! What we can see is that the HTTP
headers and techniques used for POSTing files are totally different
between browsers.
I would love some help analysing these logs.
Attached are 2 files:
capture_IE_XP: upload of a file works
capture_FF_XP: upload of a file does not

I can provide access to the Sharepoint server and reverse proxy if
someone has time to jump in.

Best regards
Alex

2013/2/13 Amos Jeffries squ...@treenet.co.nz:
 On 13/02/2013 3:49 a.m., Alexandre Chappaz wrote:

 Hi,

 I know this is a subject that has been put on the table many times,
 but I wanted to share with you my experience with squid + sharepoint.

 Squid Cache: Version 3.2.7-20130211-r11781

I am having an issue with authentication:
when accessing the sharepoint server, I do get a login/pw popup, I can
log in and see some of the pages behind, but when doing some operation,
even though I am supposed to be logged in, the authentication popup
comes back.
Here is what I find in the access log:
 1360679927.561 43 X.X.X.X TCP_MISS/200 652 GET
 http://saralex.hd.free.fr/_layouts/images/selbg.png -
 FIRSTUP_PARENT/192.168.100.XX image/png


 URL #1. No authentication required. non-pinned connection used.


 1360679928.543 37 X.X.X.X TCP_MISS/401 542 GET
 http://saralex.hd.free.fr/_layouts/listform.aspx? -
 PINNED/192.168.100.XX -


 URL #2. Sent to upstream on already authenticated+PINNED connection.
 Upstream server requires further authentication details.
  -- authentication challenge?


 1360679928.665 58 X.X.X.X TCP_MISS/401 795 GET
 http://saralex.hd.free.fr/_layouts/listform.aspx? -
 PINNED/192.168.100.XX -


 URL #2 repeated. Sent to upstream on already authenticated+PINNED
 connection. Upstream server requires further authentication details.
  -- possibly authentication handshake request?


 1360679928.753229 X.X.X.X TCP_MISS/200 20625 GET
 http://saralex.hd.free.fr/_layouts/images/fgimg.png -
 FIRSTUP_PARENT/192.168.100.XX image/png


 URL #3. No authentication required. non-pinned connection used.


 1360679928.788 68 X.X.X.X TCP_MISS/302 891 GET
 http://saralex.hd.free.fr/_layouts/listform.aspx? -
 PINNED/192.168.100.XX text/html


 URL #2 repeated. Sent to upstream on already authenticated+PINNED
 connection. Upstream server redirectes the client to another URL.
  -- authentication credentials accepted.


 1360679928.921 45 X.X.X.X TCP_MISS/401 542 GET
 http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? -
 PINNED/192.168.100.XX -


 URL #4. Sent to upstream on already authenticated+PINNED connection.
 Upstream server requires further authentication details.
  -- authentication challenge?


 1360679929.019 47 X.X.X.X TCP_MISS/401 795 GET
 http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? -
 PINNED/192.168.100.XX -


 URL #4 repeated. Sent to upstream on already authenticated+PINNED
 connection. Upstream server requires further authentication details.
  -- possibly authentication handshake request?


 1360679929.656 81 X.X.X.X TCP_MISS/200 1986 GET
 http://saralex.hd.free.fr/_layouts/images/loadingcirclests16.gif -
 FIRSTUP_PARENT/192.168.100.XX image/gif


 URL #5. no authentication required. non-pinned connection used.


 1360679930.417   1322 X.X.X.X TCP_MISS/200 130496 GET
 http://saralex.hd.free.fr/Lists/Tasks/NewForm.aspx? -
 PINNED/192.168.100.XX text/html


 URL #4 repeated. Sent to upstream on already authenticated+PINNED
 connection. Upstream server provides the display response.
  -- authentication credentials accepted.


 1360679934.618 53 X.X.X.X TCP_MISS/401 542 GET
 http://saralex.hd.free.fr/_layouts/iframe.aspx? -
 PINNED/192.168.100.XX -
 1360679934.729 51 X.X.X.X TCP_MISS/401 795 GET
 http://saralex.hd.free.fr/_layouts/iframe.aspx? -
 PINNED/192.168.100.XX -

 could this be a pinning issue?

 Is V2.7 STABLE managing these things in a nicer way?


 Unknown. But I doubt it. This is Squid using a PINNED connection to relay
 traffic to an upstream server. That upstream server is rejecting the client's
 delivered credentials after each object. There is no sign of proxy
 authentication taking place, this re-challenge business is all between
 client and upstream server.

 You need to look at whether these connections are being pinned then closed,
 and why that is happening. Squid-3.2 offers debug level 11,2 which will give
 you a trace of the HTTP headers to see if the close is a normal

[squid-users] logformat emulate_httpd_log

2012-11-14 Thread Alexandre Chappaz
Hi,

I am looking for some precision on the %<st tag in the logformat directive.
with squid 2.6, I used to set

emulate_httpd_log on

In order to have the user-agent and referer I commented this line out
and instead used the combined format as defined here:

logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st
"%{Referer}h" "%{User-Agent}h" %Ss:%Sh


This is messing up the scripts I have for parsing the logfiles, in
the sense that %>Hs and %<st now report a '-' character where I was
expecting a '0' with emulate_httpd_log on.

Is there a way to turn these 2 fields back to the emulate_httpd_log behavior?
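I am not aware of a logformat option that changes the '-' placeholder, so one
workaround is to normalize those two columns in a post-processing step before
the legacy parsers see the file. A hedged sketch; the field indices assume the
combined format quoted above:

```python
def normalize(line: str, fields=(8, 9)) -> str:
    """Rewrite '-' to '0' in the status (%>Hs) and size (%<st) columns
    of a line logged with the combined format above.  Indices 8 and 9
    assume single-space separation and that the [%tl] timestamp splits
    into two tokens; adjust them if your format differs."""
    parts = line.split(" ")
    for i in fields:
        if i < len(parts) and parts[i] == "-":
            parts[i] = "0"
    return " ".join(parts)
```

Piping access.log through this (or an equivalent awk one-liner) restores the
emulate_httpd_log-style '0' without touching the Squid configuration.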


Regards
Alex


[squid-users] assertion failed

2012-10-24 Thread Alexandre Chappaz
Hi,

My squid 3.2.2 - R is failing after a reconfigure.
Not using SMP.

here is what I find in the logs :


2012/10/24 14:54:42 kid1| assertion failed: comm.cc:163:
!fd_table[conn-fd].closing()


2012/10/24 14:58:29 kid1| StoreEntry-ping_status: 2
2012/10/24 14:58:29 kid1| StoreEntry-store_status: 1
2012/10/24 14:58:29 kid1| StoreEntry-swap_status: 0
2012/10/24 14:58:29 kid1| assertion failed: store.cc:1854: isEmpty()
2012/10/24 14:58:33 kid1| Starting Squid Cache version
3.2.2-20121011-r11676 for x86_64-unknown-linux-gnu...
2012/10/24 14:58:33 kid1| Process ID 569
2012/10/24 14:58:33 kid1| Process Roles: worker
2012/10/24 14:58:33 kid1| With 65535 file descriptors available
2012/10/24 14:58:33 kid1| Initializing IP Cache...
2012/10/24 14:58:33 kid1| DNS Socket created at 0.0.0.0, FD 11
2012/10/24 14:58:33 kid1| Adding nameserver 194.2.0.20 from /etc/resolv.conf
2012/10/24 14:58:33 kid1| Adding nameserver 194.2.0.50 from /etc/resolv.conf
2012/10/24 14:58:33 kid1| Adding domain d097.cp from /etc/resolv.conf
2012/10/24 14:58:33 kid1| helperOpenServers: Starting 10/1000
'squidGuard' processes
2012/10/24 14:58:33 kid1| Logfile: opening log stdio:/var/log/squid/access.log
2012/10/24 14:58:33 kid1| Logfile: opening log
stdio:/var/log/squid/pirates_XFF.log
2012/10/24 14:58:33 kid1| Logfile: opening log
stdio:/var/log/squid/pirates_VIA.log
2012/10/24 14:58:33 kid1| Store logging disabled
2012/10/24 14:58:33 kid1| Swap maxSize 23019520 + 2097152 KB,
estimated 1932051 objects
2012/10/24 14:58:33 kid1| Target number of buckets: 96602
2012/10/24 14:58:33 kid1| Using 131072 Store buckets
2012/10/24 14:58:33 kid1| Max Mem  size: 2097152 KB
2012/10/24 14:58:33 kid1| Max Swap size: 23019520 KB
2012/10/24 14:58:33 kid1| Rebuilding storage in /var/cache/squid-mem (dirty log)
2012/10/24 14:58:33 kid1| Rebuilding storage in /var/cache/squid (dirty log)
2012/10/24 14:58:33 kid1| Using Least Load store dir selection
2012/10/24 14:58:33 kid1| Set Current Directory to /var/cache/squid
2012/10/24 14:58:33 kid1| Loaded Icons.
2012/10/24 14:58:33 kid1| HTCP Disabled.
2012/10/24 14:58:33 kid1| Sending SNMP messages from 0.0.0.0:3401
2012/10/24 14:58:33 kid1| Configuring Parent proxyav.bercy.cp/3128/0
2012/10/24 14:58:33 kid1| Squid plugin modules loaded: 0
2012/10/24 14:58:33 kid1| Accepting HTTP Socket connections at
local=0.0.0.0:3128 remote=[::] FD 39 flags=9
2012/10/24 14:58:33 kid1| Accepting ICP messages on 0.0.0.0:3130
2012/10/24 14:58:33 kid1| Sending ICP messages from 0.0.0.0:3130
2012/10/24 14:58:33 kid1| Accepting SNMP messages on 0.0.0.0:3401
2012/10/24 14:58:33 kid1| Store rebuilding is 0.62% complete
2012/10/24 14:58:33 kid1| Done reading /var/cache/squid swaplog (57155 entries)
2012/10/24 14:58:34 kid1| WARNING: HTTP header contains NULL
characters {Accept: */*^M
Content-Type: application/x-www-form-urlencoded}
NULL
{Accept: */*^M
Content-Type: application/x-www-form-urlencoded
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:34 kid1| Starting new redirector helpers...
2012/10/24 14:58:34 kid1| helperOpenServers: Starting 5/1000
'squidGuard' processes
2012/10/24 14:58:37 kid1| Done reading /var/cache/squid-mem swaplog
(648235 entries)
2012/10/24 14:58:37 kid1| Finished rebuilding storage from disk.
2012/10/24 14:58:37 kid1|435593 Entries scanned
2012/10/24 14:58:37 kid1| 1 Invalid entries.
2012/10/24 14:58:37 kid1| 0 With invalid flags.
2012/10/24 14:58:37 kid1|165731 Objects loaded.
2012/10/24 14:58:37 kid1| 0 Objects expired.
2012/10/24 14:58:37 kid1|268805 Objects cancelled.
2012/10/24 14:58:37 kid1|   912 Duplicate URLs purged.
2012/10/24 14:58:37 kid1|   145 Swapfile clashes avoided.
2012/10/24 14:58:37 kid1|   Took 4.08 seconds (40609.22 objects/sec).
2012/10/24 14:58:37 kid1| Beginning Validation Procedure
2012/10/24 14:58:37 kid1|   

[squid-users] Squid 3.2 problem to log mac address

2012-10-18 Thread Alexandre Rubert

Hello,
I have a chillispot server and a squid 3.2 server on my machine. I'd
like to log all traffic from my hotspot clients (visited website, time,
MAC address). I compiled squid with --enable-eui. When I look at the log
file, the MAC address is 00:00:00:00:00:00.


My squid.conf looks like :
http_port  intercept
http_access allow all
eui_lookup on
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A
%mt %>eui

access_log stdio:/usr/local/squid/var/logs/squid/access.log squid
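To spot entries where the EUI lookup failed (the all-zero address above), the
last column of the custom format can be checked in a post-processing script.
A sketch assuming the EUI tag is the last field on each line, as in the
logformat directive above:

```python
def eui_of(line: str) -> str:
    """Return the last whitespace-separated field, where the logformat
    above places the EUI (MAC) value."""
    return line.rsplit(None, 1)[-1]

def eui_lookup_failed(line: str) -> bool:
    """True when Squid logged the placeholder all-zero MAC address."""
    return eui_of(line) == "00:00:00:00:00:00"
```

Note that Squid can only learn a client's EUI/MAC when the client is on the
same Ethernet segment as the proxy; traffic arriving through a layer-3 tunnel
such as chillispot's tun0 carries no Ethernet source address, which would
explain the all-zero values.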

My iptables :

IPTABLES=/sbin/iptables
EXTIF=eth0
INTIF=eth1
$IPTABLES -P INPUT DROP
$IPTABLES -P FORWARD ACCEPT
$IPTABLES -P OUTPUT ACCEPT
$IPTABLES -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
#Allow related, established and ssh on $EXTIF. Reject everything else.
$IPTABLES -A INPUT -i $EXTIF -p tcp -m tcp --dport 22 --syn -j ACCEPT
#$IPTABLES -A INPUT -i $EXTIF -j REJECT
#SQUID
$IPTABLES -A INPUT -p tcp -m tcp --dport  --syn -j ACCEPT
$IPTABLES -t nat -A PREROUTING -i tun0 -p tcp -m tcp --dport  --syn 
-j DROP
$IPTABLES -t nat -A PREROUTING -i tun0 -p tcp -m tcp --dport 80 -j 
REDIRECT --to-ports 

#Allow related and established from $INTIF. Drop everything else.
#Allow http and https on other interfaces (input)
$IPTABLES -A INPUT -p tcp -m tcp --dport 80 --syn -j ACCEPT
$IPTABLES -A INPUT -p tcp -m tcp --dport 443 --syn -j ACCEPT
#Allow 3990 on other interfaces (input).
$IPTABLES -A INPUT -p tcp -m tcp --dport 3990 --syn -j ACCEPT
#Allow everything on loopback interface.
$IPTABLES -A INPUT -i lo -j ACCEPT
$IPTABLES -A FORWARD -o $INTIF -j DROP
#Enable NAT on output device
$IPTABLES -t nat -A POSTROUTING -o $EXTIF -j MASQUERADE





[squid-users] NTLM passthu

2012-10-11 Thread Alexandre Chappaz
Hi,

Since upgrading from 3.1.20 to 3.2.1, we are facing a problem regarding
access to an IIS server with authentication:

the popup asking for credentials keeps popping up and makes
browsing impossible.
I observed the same behavior with latest 3.2.2 version (r11676 ).

On the contrary, using 3.1.20 and same config, everything is fine.


Is this a regression? Should I file a bug?

Thanks
Alex


Re: [squid-users] NTLM passthu

2012-10-11 Thread Alexandre Chappaz
Yes, I have seen this bug and applied the patch just now.

With the patch applied, the behavior is a bit different:

after asking for credentials, I get a connection reset.

and from access log :

1349962782.169  9 10.XXX.XXX.XXX TCP_MISS/401 436 GET
http://www.si-diamant.fr/ - HIER_DIRECT/94.124.232.64 -




2012/10/11 Wolfgang Breyha wbre...@gmx.net:
 Alexandre Chappaz wrote, on 11.10.2012 14:45:
 Is this a regression? Should I file a bug?

 There already is a bug and a proposed fix
 http://bugs.squid-cache.org/show_bug.cgi?id=3655

 Greetings, Wolfgang
 --
 Wolfgang Breyha wbre...@gmx.net | http://www.blafasel.at/
 Vienna University Computer Center | Austria



Re: [squid-users] NTLM passthu

2012-10-11 Thread Alexandre Chappaz
Applied the patch on both 3.2.1 and 3.2.2. Same result.
I'll post on your bug report.

In the meantime, is there some additional info that could help to debug?


2012/10/11 Wolfgang Breyha wbre...@gmx.net:
 Alexandre Chappaz wrote, on 11.10.2012 15:42:
 Yes I have seen this bug and applied the patch right now.

 with patch applied, behavior is a bit different :

 after asking for credentials, I get a connection reset.

 Did you use 3.2.2 or 3.2.1? My patch is for 3.2.1. Don't know if it still
 works on 3.2.2.

 If it doesn't work on 3.2.1 either it's bad because this is not trivial to 
 debug.

 Maybe you want to comment on my bugreport that my patch doesn't fix it for 
 you.

 Greetings, Wolfgang
 --
 Wolfgang Breyha wbre...@gmx.net | http://www.blafasel.at/
 Vienna University Computer Center | Austria



Re: [squid-users] NTLM passthu

2012-10-11 Thread Alexandre Chappaz
Hi,

In fact I made a mistake while applying the patch.
When applied correctly, the provided patch does fix the pinning
problem and authentication to IIS works.

thanks
Alex

2012/10/11 Wolfgang Breyha wbre...@gmx.net:
 Alexandre Chappaz wrote, on 11.10.2012 15:57:
 Applied the patch on both 3.2.1 and 3.2.2 . Same result.
 I'll post on your bug report.

 In the meantime, is there some additional info that could help to debug?

 At least I can't help in this matter because my knowledge about squid source
 code is still very limited. I thought I understood all the stuff about pinning
 and ntlm/negotiate passthrough. It was enough to fix our troubles, but
 obviously not to fix yours;-) I currently do not have the spare time to debug
 any further. Sorry.

 Greetings, Wolfgang
 --
 Wolfgang Breyha wbre...@gmx.net | http://www.blafasel.at/
 Vienna University Computer Center | Austria



[squid-users] squid 3.2.1 : assertion failed on reconfigure

2012-10-09 Thread Alexandre Chappaz
Hi,

I am having a frequent issue since upgrading to squid 3.2.1-20120817-r11648,
with only one worker configured.
Squid frequently dies with the following message in cache.log.
It says to report this, so reporting I am!

2012/10/08 14:02:01 kid1| Could not parse headers from on disk object
2012/10/08 14:02:01 kid1| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this:
2012/10/08 14:02:01 kid1| StoreEntry-key: DAAE3F7903C505691D3DBAF0C7764E53
2012/10/08 14:02:01 kid1| StoreEntry-next: 0x1d74118
2012/10/08 14:02:01 kid1| StoreEntry-mem_obj: 0x4c9eba0
2012/10/08 14:02:01 kid1| StoreEntry->timestamp: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastref: 1349697721
2012/10/08 14:02:01 kid1| StoreEntry->expires: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastmod: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_file_sz: 0
2012/10/08 14:02:01 kid1| StoreEntry->refcount: 1
2012/10/08 14:02:01 kid1| StoreEntry->flags:
CACHABLE,DISPATCHED,PRIVATE,FWD_HDR_WAIT,VALIDATED
2012/10/08 14:02:01 kid1| StoreEntry->swap_dirn: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_filen: -1
2012/10/08 14:02:01 kid1| StoreEntry->lock_count: 3
2012/10/08 14:02:01 kid1| StoreEntry->mem_status: 0
2012/10/08 14:02:01 kid1| StoreEntry->ping_status: 2
2012/10/08 14:02:01 kid1| StoreEntry->store_status: 1
2012/10/08 14:02:01 kid1| StoreEntry->swap_status: 0
2012/10/08 14:02:01 kid1| Could not parse headers from on disk object
2012/10/08 14:02:01 kid1| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this:
2012/10/08 14:02:01 kid1| StoreEntry->key: 222DCAAD7BE504DF43338A35A6F3D170
2012/10/08 14:02:01 kid1| StoreEntry->next: 0x2b32698
2012/10/08 14:02:01 kid1| StoreEntry->mem_obj: 0x534b6d0
2012/10/08 14:02:01 kid1| StoreEntry->timestamp: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastref: 1349697721
2012/10/08 14:02:01 kid1| StoreEntry->expires: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastmod: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_file_sz: 0
2012/10/08 14:02:01 kid1| StoreEntry->refcount: 1
2012/10/08 14:02:01 kid1| StoreEntry->flags:
CACHABLE,DISPATCHED,PRIVATE,FWD_HDR_WAIT,VALIDATED
2012/10/08 14:02:01 kid1| StoreEntry->swap_dirn: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_filen: -1
2012/10/08 14:02:01 kid1| StoreEntry->lock_count: 3
2012/10/08 14:02:01 kid1| StoreEntry->mem_status: 0
2012/10/08 14:02:01 kid1| StoreEntry->ping_status: 2
2012/10/08 14:02:01 kid1| StoreEntry->store_status: 1
2012/10/08 14:02:01 kid1| StoreEntry->swap_status: 0
2012/10/08 14:02:01 kid1| Could not parse headers from on disk object
2012/10/08 14:02:01 kid1| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this:
2012/10/08 14:02:01 kid1| StoreEntry->key: EF553578592F558D4F29A99A2B954197
2012/10/08 14:02:01 kid1| StoreEntry->next: 0x204c718
2012/10/08 14:02:01 kid1| StoreEntry->mem_obj: 0x5296840
2012/10/08 14:02:01 kid1| StoreEntry->timestamp: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastref: 1349697721
2012/10/08 14:02:01 kid1| StoreEntry->expires: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastmod: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_file_sz: 0
2012/10/08 14:02:01 kid1| StoreEntry->refcount: 1
2012/10/08 14:02:01 kid1| StoreEntry->flags:
CACHABLE,DISPATCHED,PRIVATE,FWD_HDR_WAIT,VALIDATED
2012/10/08 14:02:01 kid1| StoreEntry->swap_dirn: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_filen: -1
2012/10/08 14:02:01 kid1| StoreEntry->lock_count: 3
2012/10/08 14:02:01 kid1| StoreEntry->mem_status: 0
2012/10/08 14:02:01 kid1| StoreEntry->ping_status: 2
2012/10/08 14:02:01 kid1| StoreEntry->store_status: 1
2012/10/08 14:02:01 kid1| StoreEntry->swap_status: 0
2012/10/08 14:02:01 kid1| WARNING: 10 swapin MD5 mismatches
2012/10/08 14:02:01 kid1| Could not parse headers from on disk object
2012/10/08 14:02:01 kid1| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this:
2012/10/08 14:02:01 kid1| StoreEntry->key: A27A9B4B747D4D9B36BACDDFAAD6A266
2012/10/08 14:02:01 kid1| StoreEntry->next: 0x2ae43f8
2012/10/08 14:02:01 kid1| StoreEntry->mem_obj: 0x51523a0
2012/10/08 14:02:01 kid1| StoreEntry->timestamp: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastref: 1349697721
2012/10/08 14:02:01 kid1| StoreEntry->expires: -1
2012/10/08 14:02:01 kid1| StoreEntry->lastmod: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_file_sz: 0
2012/10/08 14:02:01 kid1| StoreEntry->refcount: 1
2012/10/08 14:02:01 kid1| StoreEntry->flags:
CACHABLE,DISPATCHED,PRIVATE,FWD_HDR_WAIT,VALIDATED
2012/10/08 14:02:01 kid1| StoreEntry->swap_dirn: -1
2012/10/08 14:02:01 kid1| StoreEntry->swap_filen: -1
2012/10/08 14:02:01 kid1| StoreEntry->lock_count: 3
2012/10/08 14:02:01 kid1| StoreEntry->mem_status: 0
2012/10/08 14:02:01 kid1| StoreEntry->ping_status: 2
2012/10/08 14:02:01 kid1| StoreEntry->store_status: 1
2012/10/08 14:02:01 kid1| StoreEntry->swap_status: 0
2012/10/08 14:02:01 kid1| Could not parse headers from on disk object
2012/10/08 14:02:01 kid1| WARNING: An error inside Squid has caused an
HTTP reply without Date:. Please report this:

[squid-users] 3.2.1 : assertion failed

2012-09-20 Thread Alexandre Chappaz
Hi,

I am using squid 3.2.1 in a heavy-load environment.
Since upgrading from 3.1 to 3.2.1, we have been having stability issues,
with the only worker restarting frequently with errors:

assertion failed: comm.cc:163: !fd_table[conn->fd].closing()
and then the parent squid process tries to restart kid1, and:
2012/09/19 15:55:25 kid1| assertion failed: store.cc:1854: isEmpty()

until the parent process gives up.
Sep 19 15:55:49 xxx squid[2991]: Squid Parent: (squid-1) process
7274 will not be restarted due to repeated, frequent failures
Sep 19 15:55:49 xxx squid[2991]: Exiting due to repeated, frequent failures


I have only 1 worker.
I have 2 cache_dir :

one is a ramdisk and one is a standard HD folder
cache_dir aufs /var/cache/squid-mem 2000 16 256 max-size=2048
cache_dir aufs /var/cache/squid 20480 16 256
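With two cache_dir lines like these, the max-size option steers objects by size: the ramdisk dir only accepts objects up to 2048 bytes, everything else falls through to the disk dir. A toy Python sketch of that selection idea (a simplification, not Squid's actual store-selection code, which also weighs free space and load):

```python
# Toy model of size-based cache_dir selection (simplified).
# max_size=None means "no explicit limit" for that dir.
def pick_cache_dir(object_size, cache_dirs):
    for path, max_size in cache_dirs:
        if max_size is None or object_size <= max_size:
            return path
    return None  # nothing can store the object

dirs = [("/var/cache/squid-mem", 2048),  # ramdisk, small objects only
        ("/var/cache/squid", None)]      # disk, everything else

print(pick_cache_dir(1024, dirs))        # small object -> ramdisk dir
print(pick_cache_dir(1024 * 1024, dirs)) # large object -> disk dir
```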


Anyone experiencing the same behaviour?

thanks
Alex


[squid-users] Problem with FTP

2011-12-12 Thread alexandre . alain
Hello All,

I use squid 3.1.13 + dansguardian 2.10.1.1 on CentOS 5.7, and I have run into a
problem I have never seen before.
When I connect to an FTP site anonymously through a web browser, NO PROBLEM, all
is good.
When I connect to an FTP site with ftp://login:pass@ftp-site through a web
browser, it is converted into an anonymous connection. I should say the FTP site
works with both auth methods,
but on different directory trees.
There is no authentication required on the proxy.

What is wrong with my configuration file?

regards
Alain


[squid-users] wccp2+tproxy4+squid 3.1 last

2009-08-15 Thread Alexandre Correa
Hello,
I'm trying to set up tproxy4 + wccp2 + squid 3.1, but I don't know what is happening..

squid + wccp2 works fine..

my network scheme: http://img269.imageshack.us/img269/2286/19551413.jpg

my squid.conf:
http_port 3129 tproxy transparent

..
wccp2_router 66.0.0.1
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0
wccp2_address 66.0.0.3

wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=dst_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=src_ip_hash,ports_source
priority=240 ports=80
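The dst_ip_hash / src_ip_hash flags above tell the router which address to hash when assigning each flow to one of WCCP's 256 hash buckets; with a single cache, every bucket maps to it. A toy Python illustration of the bucket idea (hypothetical addresses, and not the real WCCP hash function):

```python
# Toy illustration of WCCP2 hash assignment: the router hashes the chosen
# address (dst IP for service 80, src IP for service 90) into one of 256
# buckets, each owned by a cache.  NOT the actual WCCP algorithm.
import hashlib

def bucket_for(ip: str) -> int:
    return hashlib.md5(ip.encode()).digest()[0]  # deterministic 0..255

def cache_for(ip, buckets):
    return buckets[bucket_for(ip)]

# One cache engine owns the whole allotment (256 buckets, 100.00%).
buckets = ["66.0.0.3"] * 256
print(cache_for("189.10.20.30", buckets))  # always the single cache
```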


iptables:

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -i wccp0 -p tcp --dport 80 -j
TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100


cisco:

!
ip wccp 80
ip wccp 90
!
interface FastEthernet 0/0
   ip wccp 80 redirect in
   ip wccp 90 redirect out
!


I can see packets coming from the router on the wccp0 interface on Linux,
but I can't access the web.

Where is the problem?!

thanks !!

-- 
Sds.

Alexandre Jeronimo Correa

Onda Internet
www.onda.net.br

IPV6 Ready !
www.ipv6.onda.net.br


RE: [squid-users] CentOS/Squid/Tproxy but no transfer

2009-07-13 Thread Alexandre DeAraujo
I am experiencing the same issue. Traffic is received and acknowledged by the 
webserver, but the connection always times out. I had
someone else take a look at my squid setup to see if it was something I was 
doing wrong, but it was suggested that it was a bug with
wccp. I see you guys are running the newest IOS code on your router, and as
the issue appears to be a WCCP bug (the captures
we did last night show duplicate SYN/ACK packets), I would suggest opening a
case with Cisco to see what they can find.

I am in the process of contacting Cisco about this so that they can take a 
look. I am using c7200-js-mz.124-25.bin on this router
and am about to try the c7200-is-mz.124-25.bin (Non-enterprise) to see if it 
will make a difference.

Alex

 -Original Message-
 From: Behnam B.Marandi [mailto:blix...@gmail.com]
 Sent: Sunday, July 12, 2009 10:10 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] CentOS/Squid/Tproxy but no transfer
 
 I Checked the packages using tcpdump and it seems that the router and
 cache machine have no problem communicating via WCCP:
 8.061995   xx.xx.241.40   xx.xx.241.39   WCCP   2.0   Here I am
 8.062036   xx.xx.241.40   xx.xx.241.39   WCCP   2.0   Here I am
 8.065416   xx.xx.241.39   xx.xx.241.40   WCCP   2.0   I see you
 8.066978   xx.xx.241.39   xx.xx.241.40   WCCP   2.0   I see you
 
 So there must be something wrong with GRE connection or Inbound/Outbound
 routing.
 
 Step 35 and related squid.conf's configuration in step 33 seems kinda
 tricky; Based on service identifier's config in squid.conf (step 33)
 and the Note following step 35 (ip wccp 80 redirect-list 122) I
 concluded that service identifier 80 is the service identifier of
 packets which are incoming from client to the router and therefore
 service identifier 90 is for packets which are supposed to return to the client.
 
 Configuration in this message confirms that;
 http://www.mail-archive.com/squid-...@squid-cache.org/msg04302.html
 Even though destination and source flags inversed in the configuration
 above (and it got three interfaces that I'm not sure about necessity of
 them), dedication of service identifiers changed as well; service
 identifier 80 changed to the gateway to Internet and service
 identifier 90 did set as client gateway.
 
 I did test all of these (with two interfaces but no traffic coming back
 to the client). Dead end!
 Any suggestion?
 
 ROM: System Bootstrap, Version 12.1(3r)T2, RELEASE SOFTWARE (fc1)
 ROM: C2600 Software (C2600-IS-M), Version 12.2(11)T8,  RELEASE SOFTWARE
 (fc1)
 
 xx10.6 uptime is 1 day, 2 hours, 52 minutes
 System returned to ROM by power-on
 System image file is tftp://xx.xx.241.121/c2600-ipbasek9-mz.124-17.bin;
 
 Behnam.
 
 
 Ritter, Nicholas wrote:
  Behnam-
 
  The router is either not seeing the WCCP registration from the squid
  box, or the squid box is not seeing the ack from the router. Tom's
  suggestion of debug ip wccp is a good start.
 
  The IOS version makes a huge difference. Between revisions of IOS, WCCP
  works and/or breaks, so it is something you have to play with to know
  which IOS works. The specific 12.4 releases I have used work...but on a
  26xx series router you may not have enough flash and/or RAM for 12.4.
 
  Nick
 
 



RE: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Alexandre DeAraujo
I am giving this one more try, but have been unsuccessful. Any help is always 
greatly appreciated.

Here is the setup:
Router:
Cisco 7200 IOS 12.4(25)
ip wccp web-cache redirect-list 11
access-list 11 permits only selective ip addresses to use wccp

Wan interface (Serial)
ip wccp web-cache redirect out

Global WCCP information:
Router information:
Router Identifier:  192.168.20.1
Protocol Version:   2.0

Service Identifier: web-cache
Number of Service Group Clients:1
Number of Service Group Routers:1
Total Packets s/w Redirected:   8797
Process:4723
Fast:   0
CEF:4074
Redirect access-list:   11
Total Packets Denied Redirect:  124925546
Total Packets Unassigned:   924514
Group access-list:  -none-
Total Messages Denied to Group: 0
Total Authentication failures:  0
Total Bypassed Packets Received:0

WCCP Client information:
WCCP Client ID: 192.168.20.2
Protocol Version:   2.0
State:  Usable
Initial Hash Info:  

Assigned Hash Info: 

Hash Allotment: 256 (100.00%)
Packets s/w Redirected: 306
Connect Time:   00:21:33
Bypassed Packets
Process:0
Fast:   0
CEF:0
Errors: 0

Clients are on FEthernet0/1
Squid server is the only device on FEthernet0/3

Squid Server:
eth0  Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D  
  inet addr:192.168.20.2  Bcast:192.168.20.7  Mask:255.255.255.248
  inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)

gre0  Link encap:UNSPEC  HWaddr 
00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00  
  inet addr:192.168.20.2  Mask:255.255.255.248
  UP RUNNING NOARP  MTU:1476  Metric:1
  RX packets:400 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)

/etc/rc.d/rc.local file:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
modprobe ip_gre
ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

/etc/sysconfig/iptables file:
# Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
*mangle
:PREROUTING ACCEPT [166:11172]
:INPUT ACCEPT [164:8718]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [130:12272]
:POSTROUTING ACCEPT [130:12272]
:DIVERT - [0:0]
-A DIVERT -j MARK --set-xmark 0x1/0xffffffff 
-A DIVERT -j ACCEPT 
-A PREROUTING -p tcp -m socket -j DIVERT 
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip 
192.168.20.2 --tproxy-mark 0x1/0x1 
COMMIT
# Completed on Wed Jul  1 03:32:55 2009
# Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [160:15168]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -i gre0 -j ACCEPT 
-A INPUT -p gre -j ACCEPT 
-A INPUT -i eth0 -p gre -j ACCEPT 
-A INPUT -j RH-Firewall-1-INPUT 
-A FORWARD -j RH-Firewall-1-INPUT 
-A RH-Firewall-1-INPUT -s 192.168.20.1/32 -p udp -m udp --dport 2048 -j ACCEPT 
-A RH-Firewall-1-INPUT -i lo -j ACCEPT 
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT 
-A RH-Firewall-1-INPUT -p esp -j ACCEPT 
-A RH-Firewall-1-INPUT -p ah -j ACCEPT 
-A RH-Firewall-1-INPUT -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT 
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT 
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT 
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT 
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT 
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited 
COMMIT
# Completed on Wed Jul  1 03:32:55 2009
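The mangle rules above make a two-step decision per packet: traffic matching an existing local socket is DIVERTed (marked 0x1, then routed to lo via the fwmark-100 rule), and new port-80 flows are handed to the TPROXY target on port 3128 with the same mark. A toy Python model of that per-packet decision (a sketch of the intent, not full netfilter semantics):

```python
# Toy model of the PREROUTING mangle decisions (simplified).
def classify(dport, has_local_socket):
    """Return (action, fwmark) for an incoming TCP packet."""
    if has_local_socket:              # -m socket -j DIVERT
        return ("DIVERT", 0x1)        # marked, routed to lo via table 100
    if dport == 80:                   # --dport 80 -j TPROXY --on-port 3128
        return ("TPROXY:3128", 0x1)   # intercepted by the tproxy listener
    return ("ACCEPT", 0)              # everything else passes untouched

print(classify(80, False))   # new web flow -> TPROXY
print(classify(80, True))    # packet for an existing proxied flow -> DIVERT
print(classify(443, False))  # not intercepted
```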

-squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl testing src 10.10.10.0/24
acl SSL_ports port 

RE: [squid-users] TPROXY and wiki article working on CentOS 5.3

2009-06-24 Thread Alexandre DeAraujo
It would be really great if you could do that.

Thank you,

Alex



 -Original Message-
 From: Ritter, Nicholas [mailto:nicholas.rit...@americantv.com]
 Sent: Tuesday, June 23, 2009 8:25 PM
 To: Alexandre DeAraujo
 Cc: squid-users
 Subject: RE: [squid-users] TPROXY and wiki article working on CentOS 5.3
 
 I had two separate problems with the setup that were both due to the
 ordering of rules in iptables. I am still testing one issue, which I
 just recently solved, and was not a squid/tproxy problem.
 
 And I am considering the task and need of upgrading the other components
 of iptables, such as conntrack-tools, etc.
 
 I can post the exact steps I used.
 
 Nick
 
 -Original Message-
 From: Alexandre DeAraujo [mailto:al...@cal.net]
 Sent: Tuesday, June 23, 2009 4:32 PM
 To: 'Ritter, Nicholas'
 Subject: RE: [squid-users] TPROXY and wiki article working on CentOS 5.3
 
 Nicholas,
 
 I have been trying the exact same setup for quite some time now and am
 having nothing but troubles. If possible, could you give me
 the link to the exact wiki you used? Do you also have any pointers as to
 what I should watch out for? I really appreciate any
 help/pointers you can give.
 
 Thank you,
 
 Alex DeAraujo
 
 
 
  -Original Message-
  From: Ritter, Nicholas [mailto:nicholas.rit...@americantv.com]
  Sent: Tuesday, June 23, 2009 2:21 PM
  To: squid-users
  Subject: [squid-users] TPROXY and wiki article working on CentOS 5.3
 
  I just started a task to upgrade our CentOS v5-based squid3/tproxy
 boxes
  utilizing the Wiki article that Amos wrote. Everything is working
 great
  and it was actually far easier to setup then it used to be. Amos,
  Laszlo, and Krisztian...you are amazing, and I wish to offer my
 sincere
  thanks to you guys for the work and talent that you give to the open
  source community.
 
  I am using the following software pieces to accomplish a
 WCCP-redirected
  TPROXY/transparent squid service in combination with Cisco routers:
 
  CentOS 5.3 x86_64
  Squid 3.1.0.8
  Iptables 1.4.3.2
  Kernel 2.6.30
 
  IOS Advanced Security 12.4(15)T8 on a 2811 (as the testbed router/ios
  combination)
 
 
 
  Amos-
 
  I can either create a new set of steps, this time more detailed and
  better tested, for TPROXY/SQUID on CentOS 5.3 to replace the current
 one
  that has my name on it, and/or add some details to the article you
  wrote.
 
  Nick




RE: FW: [squid-users] Tproxy Help // Transparent works fine

2009-06-17 Thread Alexandre DeAraujo
 Does access.log say anything is arriving at Squid?
 Are you able to track the packets anywhere else?
 
 Amos

Once the client tries to browse, the connection times out after 100-150 seconds 
and displays the error page:
The following error was encountered while trying to retrieve the URL: 
http://www.msn.com/
Connection to 207.68.172.246 failed.
The system returned: (110) Connection timed out
The remote host or network may be down. Please try the request again.

...and the following messages show up in access.log (at the same time as the
timeout page is shown in the browser):
1245254249.779 179970 192.168.10.3 TCP_MISS/504 4533 GET http://www.msn.com/ - 
DIRECT/207.68.173.76 text/html
1245254249.779 179970 192.168.10.3 TCP_MISS/504 4533 GET http://www.msn.com/ - 
DIRECT/207.68.173.76 text/html
Nothing else will show in the access.log from the moment that the client tries 
to browse.
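The access.log lines above are in Squid's native format: timestamp, elapsed milliseconds, client, result-code/status, bytes, method, URL, user, hierarchy/peer, and content type. A small Python sketch that splits one such line into fields (assumes no embedded spaces in the URL; real parsers must be more careful):

```python
# Parse one Squid native-format access.log line into a dict.
def parse_native(line):
    ts, dur, client, result, size, method, url, user, hier, mime = line.split()
    code, status = result.split("/")
    return {"time": float(ts), "ms": int(dur), "client": client,
            "code": code, "status": int(status), "bytes": int(size),
            "method": method, "url": url, "user": user,
            "hierarchy": hier, "mime": mime}

line = ("1245254249.779 179970 192.168.10.3 TCP_MISS/504 4533 "
        "GET http://www.msn.com/ - DIRECT/207.68.173.76 text/html")
rec = parse_native(line)
print(rec["status"], rec["ms"])  # 504 179970
```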

The following is the output of 'iptables -I INPUT -p tcp -j LOG'. Here is 
everything from the time the client tries to browse to when the connection 
times out
client ip = 192.168.10.3
squid ip = 192.168.20.10
msn.com ip = 207.68.172.246

Jun 17 10:09:20 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=192.168.20.10 
LEN=48 TOS=0x00 PREC=0x00 TTL=127 ID=4652 DF PROTO=TCP SPT=3920 DPT=3128 
WINDOW=65535 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:09:20 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=192.168.20.10 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4653 DF PROTO=TCP SPT=3920 DPT=3128 
WINDOW=65535 RES=0x00 ACK URGP=0 MARK=0x1 
Jun 17 10:09:20 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=192.168.20.10 
LEN=968 TOS=0x00 PREC=0x00 TTL=127 ID=4654 DF PROTO=TCP SPT=3920 DPT=3128 
WINDOW=65535 RES=0x00 ACK PSH URGP=0 MARK=0x1 
Jun 17 10:09:20 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=46343 DF PROTO=TCP SPT=34661 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:09:20 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4655 PROTO=TCP SPT=34661 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:09:23 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=46344 DF PROTO=TCP SPT=34661 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:09:23 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4656 PROTO=TCP SPT=34661 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:09:29 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=46345 DF PROTO=TCP SPT=34661 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:09:29 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4660 PROTO=TCP SPT=34661 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:09:41 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=46346 DF PROTO=TCP SPT=34661 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:09:41 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4664 PROTO=TCP SPT=34661 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:10:05 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=46347 DF PROTO=TCP SPT=34661 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:10:05 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4673 PROTO=TCP SPT=34661 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:10:30 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=32546 DF PROTO=TCP SPT=54114 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:10:30 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4683 PROTO=TCP SPT=54114 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:10:33 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=32547 DF PROTO=TCP SPT=54114 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:10:33 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4684 PROTO=TCP SPT=54114 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:10:39 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=32548 DF PROTO=TCP SPT=54114 DPT=80 
WINDOW=5840 RES=0x00 SYN URGP=0 MARK=0x1 
Jun 17 10:10:39 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=40 TOS=0x00 PREC=0x00 TTL=127 ID=4688 PROTO=TCP SPT=54114 DPT=80 WINDOW=0 
RES=0x00 RST URGP=0 MARK=0x1 
Jun 17 10:10:51 kernel: IN=wccp2 OUT= MAC= SRC=192.168.10.3 DST=207.68.172.246 
LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=32549 DF PROTO=TCP SPT=54114 DPT=80 
WINDOW=5840 RES=0x00 SYN 

FW: [squid-users] Tproxy Help // Transparent works fine

2009-06-16 Thread Alexandre DeAraujo
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Monday, June 15, 2009 9:21 PM
To: Alexandre DeAraujo
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Tproxy Help // Transparent works fine

Should just be an upgrade of Squid to the 3.1 release, then follow the instructions at:
http://wiki.squid-cache.org/Features/Tproxy4
Amos

I downloaded and installed squid-3.1.0.8.tar.gz with the configure build option 
'--enable-linux-netfilter'. 
I made sure squid.conf was configured with:
http_port 3128
http_port 3129 tproxy

The following modules are enabled on the kernel config file:
NF_CONNTRACK
NETFILTER_TPROXY
NETFILTER_XT_MATCH_SOCKET
NETFILTER_XT_TARGET_TPROXY

After typing the following lines:
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 
0x1/0x1 --on-port 3129

my iptables-save output:
# Generated by iptables-save v1.4.3.2 on Tue Jun 16 16:16:27 2009
*nat
:PREROUTING ACCEPT [33:2501]
:POSTROUTING ACCEPT [1:76]
:OUTPUT ACCEPT [1:76]
-A PREROUTING -i wccp2 -p tcp -j REDIRECT --to-ports 3128 
COMMIT
# Completed on Tue Jun 16 16:16:27 2009
# Generated by iptables-save v1.4.3.2 on Tue Jun 16 16:16:27 2009
*mangle
:PREROUTING ACCEPT [35:2653]
:INPUT ACCEPT [158:8713]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [123:11772]
:POSTROUTING ACCEPT [123:11772]
:DIVERT - [0:0]
-A PREROUTING -p tcp -m socket -j DIVERT 
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3129 --on-ip 0.0.0.0 
--tproxy-mark 0x1/0x1 
-A DIVERT -j MARK --set-xmark 0x1/0xffffffff 
-A DIVERT -j ACCEPT 
COMMIT
# Completed on Tue Jun 16 16:16:27 2009

Then I entered the following lines:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
echo 1 > /proc/sys/net/ipv4/ip_forward

The client could not browse after that. I see the connections coming in with
tcpdump, but all connections just time out.

PS: after compiling squid-3.1.0.8, I searched for 'tproxy' in the console
output and found this line:
checking for linux/netfilter_ipv4/ip_tproxy.h... no
I don't know whether this has anything to do with it..

Thanks,

Alex



[squid-users] Tproxy Help // Transparent works fine

2009-06-15 Thread Alexandre DeAraujo
I have a transparent proxy setup currently working, with no problems seen
while browsing. I am trying to set up Squid to show the
client's IP instead of the proxy server's IP.
How do I go from this setup to implementing tproxy? Any pointers will be highly 
appreciated. 

CentOS release 5.3 (Final)
iptables v1.4.3.2
Squid Cache: Version 3.0.STABLE16
Linux 2.6.29.4-tproxy2 (custom kernel for tproxy)
Cisco 7206VXR WCCPv2

// start of squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl SSL_ports port 8443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 8443# Plesk
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
#http_access deny all
http_access allow all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
hosts_file /etc/hosts
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
coredump_dir /var/spool/squid

http_port 3129

logformat squid %ts.%03tu %6tr %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt
#emulate_httpd_log on
access_log /var/log/squid/access.log squid
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
debug_options ALL,3

no_cache allow our_networks
cache_dir ufs /var/spool/squid 20 256 256
cache_effective_user squid
cache_swap_high 100%
cache_swap_low 80%
cache_mem 2 GB
maximum_object_size  8192 KB
half_closed_clients on
client_db off

wccp2_router <router primary IP on GEthernet>
wccp2_rebuild_wait on
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_assignment_method 1
wccp2_service standard 0

forwarded_for on
// end of squid.conf
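Squid evaluates http_access rules top to bottom and stops at the first line whose ACLs all match; note that with `http_access allow all` at the end, the commented-out `deny all` can never take effect. A toy Python model of that first-match evaluation (a deliberate simplification of Squid's ACL engine, with the `!Safe_ports` and `CONNECT !SSL_ports` negations inlined):

```python
# Toy first-match model of Squid's http_access evaluation.
SAFE_PORTS = ({80, 21, 443, 70, 210, 280, 488, 591, 777, 8443}
              | set(range(1025, 65536)))
SSL_PORTS = {443, 8443}

def http_access(rules, request):
    for verdict, pred in rules:
        if pred(request):          # first matching rule wins
            return verdict
    # No rule matched: Squid applies the opposite of the last rule's action.
    return "deny" if rules and rules[-1][0] == "allow" else "allow"

rules = [
    ("deny",  lambda r: r["port"] not in SAFE_PORTS),   # deny !Safe_ports
    ("deny",  lambda r: r["method"] == "CONNECT"
                        and r["port"] not in SSL_PORTS),# deny CONNECT !SSL_ports
    ("allow", lambda r: r["client"].startswith("192.168.")),  # allow localnet
    ("allow", lambda r: True),                                # allow all
]

print(http_access(rules, {"port": 80, "method": "GET", "client": "1.2.3.4"}))
print(http_access(rules, {"port": 8080, "method": "CONNECT",
                          "client": "192.168.10.3"}))
```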

// start of /etc/rc.d/rc.local
modprobe ip_gre
iptunnel add wccp2 mode gre remote <router WCCP ID IP address> local <eth0 IP
address> dev eth0
ifconfig wccp2 <eth0 IP address> netmask 255.255.255.255 up
echo 0 > /proc/sys/net/ipv4/conf/wccp2/rp_filter
# these are the ONLY iptables rules on the system at the moment(to avoid 
issues).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 
3128 
iptables -t nat -A PREROUTING -i wccp2 -p tcp -j REDIRECT --to-port 3128
// end of rc.local

Thanks,

Alex DeAraujo




Re: [squid-users] antivirus for squid proxy

2009-02-12 Thread Alexandre Gonçalves Jacarandá
2009/2/12 Ralf Hildebrandt ralf.hildebra...@charite.de:
 * sameer shinde s9sam...@gmail.com:
 Hi All,

 Which is the best antivirus on Ubuntu Linux? I'm running the Squid
 proxy server on my Ubuntu server.
 I want all web requests to be scanned by an antivirus for any
 virus/malware infection and then
 passed on to the user.
 Which antivirus package can help me with this?

 I'm using dansguardian with clamd

Dansguardian with clamd is a very good choice, but if you want only an
antivirus with Squid, take a look at http://www.server-side.de/
It is a solution with squid+clamav+havp.

Good luck, Alexandre.
 --
 Ralf Hildebrandtralf.hildebra...@charite.de
 Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
 Geschäftsbereich IT | Abt. Netzwerk Fax.  +49 (0)30-450 570-962
 Hindenburgdamm 30 | 12200 Berlin



[squid-users] No scanning at upload (squid+c-icap+clamav) ?

2008-12-23 Thread Alexandre Fouché

Hi,

Is it normal that squid+c-icap+clamav does not trigger an alert when
I upload a virus?
I am using Squid as a reverse proxy to accelerate a dynamic,
user-generated-content website, and so it does not prevent
uploading malware to the website, only downloading it.




[squid-users] squid3 + latest c-icap + clamav unreliable ?

2008-12-23 Thread Alexandre Fouché


Hi,

After setting up squid3+c-icap+clamav in a reverse proxy
configuration, I found it quite unreliable from a user-experience
point of view: about one page in ten would seem unreachable and
report an error, while Squid alone works great without c-icap+clamav.


Does anyone have such an experience? This is sad, as I would rather
use c-icap than HAVP.


For information, I am running OpenSUSE 11 64-bit with squid 3.0 and
clamav 0.94.2 from the official binary repositories, and the latest
c_icap-060708rc1 compiled on the same machine.






[squid-users] last try with Squid + Ldap + Kerberos

2008-10-04 Thread Alexandre augusto
Hi all,

After some months looking for help to build Squid + AD authentication
using Kerberos in transparent mode (without a popup asking for the password), I
haven't succeeded.

The latest try was using this doc/method:
http://klaubert.wordpress.com/2008/01/09/squid-kerberos-authentication-and-ldap-authorization-in-active-directory/

There is no way to use Squid + NTLM at my company due to internal policy, so I
already contacted some Squid developers about doing it, but they told me I have a big
problem: even if it can be done, there is no promise that things will keep
working after some Microsoft update or service pack, and as this would be the
first time someone develops it, it is possible that it won't work...

If anyone knows how to do it, please contact me in private to establish a
contract and be paid for the job.

But if it is impossible, unfortunately I will stop using Squid,
after more than 10 years working with this powerful product, because of these
compatibility problems.

Thanks a lot

Alexandre


  Novos endereços, o Yahoo! que você conhece. Crie um email novo com a sua 
cara @ymail.com ou @rocketmail.com.
http://br.new.mail.yahoo.com/addresses


[squid-users] FTP over HTTP problem

2008-09-05 Thread Alexandre augusto
Hi all,

I'm making some FTP connections using IE7 or Firefox that cannot view
folders when the user uses the Squid proxy.

When I try
ftp://user:[EMAIL PROTECTED]

I get

CWD user
user: Access is denied.

The log messages only show me a 401 HTTP auth error:

[05/Sep/2008:16:25:00 -0300] GET ftp://192.55.140.6/ HTTP/1.1 401
1804 TCP_MISS:DIRECT

Without the proxy, using ftp from the command line and connecting directly to the
Internet, everything is OK.

Can anyone help me to fix that?

(browser configuration, squid options, etc.)

thank you in advance

Alexandre





Re: [squid-users] Can i loadbalnce 2 web servers and 1 squid on the same machine ?

2008-08-24 Thread Alexandre Correa
yes, you can..

but Apache has to listen on different ports, or on the same port on different
IPs (IP aliases), and Squid listens on 80!



On Sun, Aug 24, 2008 at 3:23 AM, Meir Yanovich [EMAIL PROTECTED] wrote:
 Hello all
 i like to install 2 web servers and load balance between them with squid
 but i like to install all the software on one linux machine ( this is
 what i have now ) .
 i was reading the PAQ and i know it can be done with one web server.
 can it be done with 2?
 thanks




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net
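A hedged sketch of what the reply describes: one Squid accelerator in front of two local Apache backends on different ports, balanced round-robin. Hostname, ports, and ACL names below are illustrative assumptions, not a tested configuration (syntax as in Squid 2.6+):

```
# Hypothetical sketch only: Squid on port 80, two Apache backends
# on local ports 8081/8082, round-robin between them.
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8081 0 no-query originserver round-robin name=web1
cache_peer 127.0.0.1 parent 8082 0 no-query originserver round-robin name=web2
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access web1 allow our_site
cache_peer_access web2 allow our_site
```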


Re: [squid-users] Log analyzer

2008-08-18 Thread Alexandre Gonçalves Jacarandá
You can try SARG written by Pedro Lineu Orso.

2008/8/17 Kinkie [EMAIL PROTECTED]:
 On Sun, Aug 17, 2008 at 7:30 PM, Ralf Hildebrandt
 [EMAIL PROTECTED] wrote:
 * Kinkie [EMAIL PROTECTED]:
 2008/8/17 Ralf Hildebrandt [EMAIL PROTECTED]:
  * Paras Fadte [EMAIL PROTECTED]:
  Hi,
 
  Can anyone suggest which is the best log analyzer for squid ?
  Calamaris?

 It depends; if the focus of the analysis is to understand squid's
 efficiency at caching, it probably is. If it is to analyze users'
 behaviour, something else (webalizer or similar) is probably
 better.

 Well, I can see which domains they surf to. Counts as behaviour, if
 you ask me.

 I'm not saying it can't; my point is, there is no better thing, it
 depends on the specific needs.
 The website and/or the wiki have a related software section which is
 a good starting point, but a bit of search is in order anyways.


 --
  /kinkie



[squid-users] Squid + Kerberos Auth

2008-07-22 Thread Alexandre augusto
Hi all,

about environment:
Squid version = squid-3.0.STABLE8
Active Directory Windows 2003 
Linux Redhat EL 5

I'm trying to use Kerberos auth with squid_kerb_auth, following

http://klaubert.wordpress.com/2008/01/09/squid-kerberos-authentication-and-ldap-authorization-in-active-directory/

(has anyone used it?)

I have used this options to create keytab file on AD server as domain admin:

C:\ktpass -princ HTTP/[EMAIL PROTECTED] -mapuser domain_rj\squid_user -crypto
DES-CBC-CRC -ptype KRB5_NT_PRINCIPAL -pass *password* -out
squid.DOMAIN.COM.BR.keytab

After exporting it to the Linux box and trying the initial test, I got this:

[EMAIL PROTECTED] etc]# kinit -V -k -t squid.domain.COM.BR.keytab domain.COM.BR
kinit(v5): Client not found in Kerberos database while getting initial 
credentials

googling this error I found a possible solution:

kinit: Client not found in Kerberos database while getting initial credentials.

Meaning: The principal whose credentials are being requested does not exist in 
the Kerberos database.

Reason or corrective action:
Verify there is a principal entry available for the client in the Kerberos
database; if not, create one.
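
One detail worth checking (my assumption, not confirmed in this thread): the
kinit test above was run with the realm name (domain.COM.BR) as the principal
instead of the HTTP service principal stored in the keytab. A sketch of the
usual test, with a hypothetical proxy hostname:

```
kinit -V -k -t squid.DOMAIN.COM.BR.keytab HTTP/proxy.domain.com.br@DOMAIN.COM.BR
```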

Question:

So, what is wrong, and how can I get it working?

Thanks in advance

Alexandre


  New addresses, the Yahoo! you know. Create a new email address with your
style: @ymail.com or @rocketmail.com.
http://br.new.mail.yahoo.com/addresses


[squid-users] squid 2.6 with wccpv2 error ... router id !

2008-07-17 Thread Alexandre Correa
Hello,

I'm having problems setting up WCCP with Squid and FreeBSD.

my setup:

router:
!
!
ip wccp web-cache
interface Loopback0
 ip address 10.254.254.2 255.255.255.255
!
interface FastEthernet0/0/0
  description *** lan to clients ***
  ip address 189.x.x.1 255.255.255.0
  ip wccp web-cache redirect in
..
..


squid.conf
http_port 3128 transparent

wccp2_router 10.254.254.2
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0


freebsd:
bge0: 189.x.x.3
ifconfig gre0 create inet 189.x.x.3 10.254.254.1 netmask
255.255.255.255 link2 tunnel 189.x.x.3 10.254.254.2 up

ipfw list:
01000 fwd 127.0.0.1,3128 tcp from any to any dst-port 80 recv gre0
65535 allow ip from any to any


#sh ip wccp
Global WCCP information:
Router information:
Router Identifier:   10.254.254.2
Protocol Version:2.0

Service Identifier: web-cache
Number of Cache Engines: 0
Number of routers:   0
Total Packets Redirected:0
Redirect access-list:-none-
Total Packets Denied Redirect:   0
Total Packets Unassigned:0
Group access-list:   -none-
Total Messages Denied to Group:  0
Total Authentication failures:   0


#sh ip wccp web-cache detail
WCCP Cache-Engine information:
Web Cache ID:  10.254.254.1
Protocol Version:  2.0
State: NOT Usable
Initial Hash Info: 
   
Assigned Hash Info:
   
Hash Allotment:0 (0.00%)
Packets Redirected:0
Connect Time:  00:00:08


ifconfig gre0
gre0: flags=d051<UP,POINTOPOINT,RUNNING,LINK0,LINK2,MULTICAST> mtu 1476
tunnel inet 189.x.x.3 -- 10.254.254.2
inet 189.x.x.3 -- 10.254.254.1 netmask 0x



Can someone say where I'm going wrong?

thanks !!!

regards,


Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] Squid deny access to some part of website (SOLVED)

2008-07-08 Thread Alexandre augusto
Hi all,

I was having trouble accessing some sites through Squid. After posting a
message here and getting help, I became certain that my proxy was working
properly.

My problem was related to a DNS failure to resolve some domains.

Part of the website was hosted on another domain, and a problem with the DNS
server I was using didn't let Squid find the servers hosting the JPG and
Flash files.

Thank you for the help Leonardo

Best regards

Alexandre




Re: [squid-users] transparent intercepting proxy

2008-07-07 Thread Alexandre Correa
No, it's not possible without DNS; the browser needs to resolve the address
to an IP to start connections.

On Mon, Jul 7, 2008 at 6:19 AM, Indunil Jayasooriya [EMAIL PROTECTED] wrote:
 Hi,

 I have set up a transparent intercepting proxy (squid 2.6 branch) on
 RedHat EL5. It has 2 NICs. One is connected to the router; the other is
 connected to the LAN. The clients' gateway is the LAN IP address of the
 proxy server. Clients have 2 DNS entries. It works fine. If I remove the
 DNS entries from the client PCs, it will NOT work.

 Is it normal?

 Without DNS entries on the client PCs, is it possible for it to work?

 Hope to hear from you.



 --
 Thank you
 Indunil Jayasooriya




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] Squid deny access to some part of website

2008-07-07 Thread Alexandre augusto
Hi guys,

In access.log, Squid shows TCP_DENIED entries for some parts of a website.

I'm authenticating my users using NTLM, and the access.log entries that show
DENIED for part of the site do not show the standard domain\username,
only - -...

For example:

192.168.15.13 - contac\xtz0001 [07/Jul/2008:17:42:07 -0300] GET 
http://c.extra.com.br/content/consul.swf HTTP/1.1 503 1924 TCP_MISS:DIRECT

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] GET 
http://c.extra.com.br/content/brastemp.swf HTTP/1.1 407 2183 TCP_DENIED:NONE

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] GET 
http://c.extra.com.br/content/brastemp.swf HTTP/1.1 407 2257 TCP_DENIED:NONE

192.168.15.13 - contac\xtz0001 [07/Jul/2008:17:42:07 -0300] GET 
http://c.extra.com.br/content/brastemp.swf HTTP/1.1 503 1928 TCP_MISS:DIRECT

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] GET 
http://c.extra.com.br/content/lg.swf HTTP/1.1 407 2165 TCP_DENIED:NONE

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] GET 
http://c.extra.com.br/content/lg.swf HTTP/1.1 407 2239 TCP_DENIED:NONE

Looking through HTTP references, I found that 407 is a Proxy Authentication
Required response.

Is it possible that I have problems authenticating all requests?

I'm also using SquidGuard, but I have tried disabling it in squid.conf
without success.

Why does Squid deny some parts of a site and allow others?

thanks in Advance

Alexandre


  Novos endereços, o Yahoo! que você conhece. Crie um email novo com a sua 
cara @ymail.com ou @rocketmail.com.
http://br.new.mail.yahoo.com/addresses


Re: [squid-users] Squid deny access to some part of website

2008-07-07 Thread Alexandre augusto
Hi Leonardo,

The problem is that the website shows me only part of its content.
The pictures (in most cases Flash) are denied.

Do you have any idea ?

Thanks you

Alexandre

--- On Mon, 7/7/08, Leonardo Rodrigues Magalhães [EMAIL PROTECTED] wrote:

 From: Leonardo Rodrigues Magalhães [EMAIL PROTECTED]
 Subject: Re: [squid-users] Squid deny access to some part of website
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Monday, 7 July 2008, 18:01
 Alexandre augusto wrote:
  Hi guys,
 
  On the access.log the Squid show TCP_DENIED entry to
 some part of website
 
  I´m authenticating my users using NTLM and all entry
 on access.log that DENIED part of site do not show the
 standard domain\username on log.
   only - -...

 
 This is the EXPECTED behavior for NTLM authentication:
 2 (two) denied/407 log entries and then the 'allow' hit,
 some MISS or HIT.

 This is expected and should be FAQed somewhere.
 
 -- 
 
 
   Atenciosamente / Sincerily,
   Leonardo Rodrigues
   Solutti Tecnologia
   http://www.solutti.com.br
 
   My SPAM trap, do NOT send email to it
   [EMAIL PROTECTED]
   My SPAMTRAP, do not email it




[squid-users] Build ACL with AD group and website list

2008-06-30 Thread Alexandre augusto
Hi all,

I'm trying to use an ACL to allow/deny webmail access for a single group in
my Windows 2003 domain.

Everyone will be able to access all sites (if SquidGuard permits), but no
webmail sites (unless the user is a Webmail group member).

A word list with the most common webmail sites exists (webmail_list.txt).

My question is:

Where do I put webmail_list.txt so it works together with the Webmail group?
Can anyone give some help?

This is my squid.conf entry:

acl ntlm proxy_auth REQUIRED

external_acl_type nt_group ttl=0 %LOGIN /usr/lib64/squid/wbinfo_group.pl

#my Webmail_Group have only 10 users that will be able to access this 
sites/service

acl webmail_users external nt_group Webmail_Group dstdomain -i 
/etc/squid/webmail_list.txt

#Internal ACLs
http_access deny !Safe_ports
http_access allow ntlm webmail_users
http_access deny limite_max
http_access allow localhost
http_access deny all


It is not working:
all users can access all sites, including those in webmail_list.

any idea ?

thanks in advance

Alexandre
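
For what it's worth, one likely cause (an assumption on my part, not
confirmed in this thread) is that a single `acl` line cannot mix the
`external` and `dstdomain` types; each `acl` takes exactly one type. A sketch
of the split form, reusing the names from the message above:

```
acl webmail_group external nt_group Webmail_Group
acl webmail_sites dstdomain -i "/etc/squid/webmail_list.txt"

http_access deny webmail_sites !webmail_group
http_access allow ntlm
http_access deny all
```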




Re: [squid-users] Detail about if an object is cached or not

2008-06-24 Thread Alexandre Correa
at headers:

X-Cache: MISS - not cached
X-Cache: HIT - cached
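
A tiny helper to classify that header programmatically (a sketch; the parsing
logic and example header value are mine, not from the thread):

```python
def cache_status(headers):
    """Classify a response via Squid's X-Cache header: HIT, MISS, or UNKNOWN."""
    value = headers.get("X-Cache", "").upper()
    if value.startswith("HIT"):
        return "HIT"     # object served from the local cache
    if value.startswith("MISS"):
        return "MISS"    # object fetched from the origin server
    return "UNKNOWN"     # header absent or unrecognized

print(cache_status({"X-Cache": "HIT from proxy.example.com"}))  # prints HIT
```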


On Tue, Jun 24, 2008 at 3:34 AM, WestWind [EMAIL PROTECTED] wrote:
 Hi,

  I want to know how squid determine an HTTP request is cached or not,
 for example: an request(GET a.html) sent by browser 2 times with
 different cookie.
 for the 2nd time, will squid return the cached object create by 1st request?
 Different 'referer', 'user-agent' or anything else affect  it ?
 Can I do something to make squid return the cached object or just
 forward to backend server?

 Is there any docment talking about this detail?

 Tks.




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] Squid + AD (LDAP)

2008-06-14 Thread Alexandre augusto
Hi Henrik,

You are correct.
my search base is DC=abc,DC=com,DC=br


I have nothing related to LDAP in cache.log.

Looking through documentation, I found many people using Squid + Samba
(winbind) with libnss_winbind.so and libnss_winbind.so.2 authenticating
against AD (Win 2003).

Is that the way to go?

thank you

Alexandre

--- On Sat, 14/6/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 From: Henrik Nordstrom [EMAIL PROTECTED]
 Subject: Re: [squid-users] Squid + AD (LDAP)
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Saturday, 14 June 2008, 6:21
 On fre, 2008-06-13 at 18:09 -0700, Alexandre augusto wrote:
  Hi All,
  
  I was wrong when said that my authentication was
 working in last email...
  
  I´m trying work Squid with MS AD
  
  So this is my squid.conf entry about LDAP auth:
  
  auth_param basic program
 /usr/local/squid/libexec/squid_ldap_auth -R -b
 CN=user_admin,OU=ABC,DC=abc,DC=com,DC=br -D
 CN=user_admin,OU=ABC,DC=abc,DC=com,DC=br -w
 /usr/local/squid/etc/file -f
 (objectclass=*) -h ldap_server_ip:port
  
  Using this configuration with Ldapbrowser tool
 (Softerra), I can search my entire LDAP tree without
 problems.
  
  my search base is:
  
  CN=user_admin,OU=Usuarios,OU=ABC,DC=abc,DC=com,DC=br
 
 Are you really really sure? That looks very much like the
 user_admin
 object, not the OU (or any upper level) where all your
 users are found..
 
  user_admin is Domain Admin of AD ( maybe
 necessary to bind on it ???)
 
 That's what -D does.
 
  But Squid just give me an old TCP_DENIED entry on log
 files:
  
  1213403347.792 15 192.168.10.1 TCP_DENIED/407 2706
 GET http://www.gm.com/ user_admin NONE/- text/html  
  
  1213405393.479 15 192.168.10.1 TCP_DENIED/407 2706
 GET http://www.squid-cache.org/ user_admin NONE/- text/html 
 
 Anything in cache.log?
 
 You might need TLS/SSL for this to work. AD is often
 configured in such
 manner that plaintext authentication (simple bind without
 encryption) is
 not allowed.
 
 Regards
 Henrik


  Open your account at Yahoo! Mail, the only one with no limit on storage
space!
http://br.mail.yahoo.com/


[squid-users] Squid + SAMBA + AD

2008-06-14 Thread Alexandre augusto
Hi all,

I'm still having trouble getting it working...

The status is that Squid just gives me this error in cache.log after trying
to authenticate:

[2008/06/14 17:06:41, 0] utils/ntlm_auth.c:get_winbind_netbios_name(172)
  could not obtain winbind netbios name!

About access.log:
1213471888.513  0 192.168.15.21 TCP_DENIED/407 3056 GET 
http://www.google.com/favicon.ico - NONE/- text/html

I´m using:
Squid 3.0.STABLE6
Samba Version 3.2.0rc2
Redhat 5
Active directory on WInd 2003

After building Samba + winbind and Squid + NTLM,
and configuring Samba, winbind, Kerberos, and Squid:

my  wbinfo -t
checking the trust secret via RPC calls succeeded

my wbinfo -u
list all users of my domain

my wbinfo -g 
list all groups on my domain

my squid.conf regarding authentication is:

#Autenticacao de usuarios
auth_param ntlm program /usr/bin/ntlm_auth domain/AD_server_name
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10

anyone have any idea ?

thank you

Alexandre







Re: [squid-users] Squid + SAMBA + AD

2008-06-14 Thread Alexandre augusto

Hi All,

I really don't know what was wrong with the other installation (from
source), but I forgot about it and started a new installation using the
Squid RPM from Red Hat.
(A friend told me to do it because all the necessary modules are tested and
included in the package.) So I replaced the config file with my customized
one, and it is working now.

thanks a lot

Alexandre




[squid-users] Squid + AD Auth - popup

2008-06-13 Thread Alexandre augusto
Hi all

I will migrate my proxy infrastructure to use Squid.
I'm doing LDAP (MS AD) authentication without problems, but I'm having
trouble authenticating my users against MS AD without the web popup (asking
for user and password).

I need to do the authentication in a transparent mode.

Could you please give me some help?

thanks in advance

Alexandre






[squid-users] Squid + AD (LDAP)

2008-06-13 Thread Alexandre augusto
Hi All,

I was wrong when said that my authentication was working in last email...

I´m trying work Squid with MS AD

So this is my squid.conf entry about LDAP auth:

auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R -b 
CN=user_admin,OU=ABC,DC=abc,DC=com,DC=br -D 
CN=user_admin,OU=ABC,DC=abc,DC=com,DC=br -w /usr/local/squid/etc/file -f 
(objectclass=*) -h ldap_server_ip:port

Using this configuration with Ldapbrowser tool (Softerra), I can search my 
entire LDAP tree without problems.

my search base is:

CN=user_admin,OU=Usuarios,OU=ABC,DC=abc,DC=com,DC=br

user_admin is a Domain Admin in AD (maybe it is necessary to bind as it?)

But Squid just give me an old TCP_DENIED entry on log files:

1213403347.792 15 192.168.10.1 TCP_DENIED/407 2706 GET http://www.gm.com/ 
user_admin NONE/- text/html  

1213405393.479 15 192.168.10.1 TCP_DENIED/407 2706 GET 
http://www.squid-cache.org/ user_admin NONE/- text/html 

Anyone can help me ?

Thanks in advance

Alexandre




[squid-users] wccp transparent ??? (like tproxy)

2008-06-11 Thread Alexandre Correa
Hello..

Is WCCP (v2) transparent like TPROXY? With WCCP running, does the
destination web server see the IP of the client or of the proxy? (The
client has a routable/valid IP address.)


regards !

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] not redirect some ips to proxy via wccp

2008-06-11 Thread Alexandre Correa
Hello !!

I'm playing with WCCP v2 and it's working fine, but I need WCCP to
not redirect some IP blocks to the proxy (ignore them) and permit them to
go direct.

The proxy runs on FreeBSD, and the proxy server isn't the gateway; the
gateway is another server!


ty.

regards !!
-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] not redirect some ips to proxy via wccp

2008-06-11 Thread Alexandre Correa
:)

I forgot that Cisco access-lists are parsed top-down; I have to add the
destination hosts to bypass first, and after them the IPs to redirect to
the proxy. :)


thanks !!!

regards !
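
A sketch of that ordering on the router (the list name and IP ranges here
are hypothetical, not from this thread):

```
ip access-list extended WCCP-WEB
 remark bypass destinations must come first (top-down matching)
 deny   ip any 203.0.113.0 0.0.0.255
 permit tcp any any eq www
!
ip wccp web-cache redirect-list WCCP-WEB
```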

On Wed, Jun 11, 2008 at 12:22 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On ons, 2008-06-11 at 09:04 -0300, Alexandre Correa wrote:

 I´m playing with wccp v2 and it´s working fine.. but i need to wccp
 not redirect some ip-blocks to proxy (ignore them) and permit go
 direct 

 This is done by acl lists in the router.

 Depending on your setup it might also be possible by adjusting the
 firewall rules on your Squid server to allow direct forwarding of the
 traffic in question.

 proxy runs on freebsd..and proxy server isn´t gateway .. gateway is
 other server !!

 When using WCCP the boundaries are a bit diffuse, as the router delegates
 traffic to the WCCP members...

 Regards
 Henrik




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] bypass urls - wccp

2008-02-27 Thread Alexandre Correa
Hello,

How do I tell WCCP not to redirect some URLs to the proxy?

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] Fatal: Bungled squid.conf

2008-02-07 Thread Alexandre Correa
FATAL: Bungled squid.conf line 4: cache_dir /usr/local/squid/var/cache
100 16 256

You didn't specify the type of the cache_dir: coss, aufs, diskd, or ufs!

Syntax:

cache_dir type directory size L1 L2 (Q1) (Q2)
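
So a corrected line would add the type, for example with ufs (the path and
sizes are the ones from the original message; ufs is just one valid choice):

```
cache_dir ufs /usr/local/squid/var/cache 100 16 256
```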



On Feb 7, 2008 8:16 AM, Dave Coventry [EMAIL PROTECTED] wrote:
 Hi,

 I'm having a problem with my squid.conf.

 I have specified
 cache_dir /usr/local/squid/var/cache 100 16 256
 as per the directions under QUICKSTART.

 However Squid produces the following error. Is it possible that I need
 to change the permissions of /usr/local/squid/var/?

 # /usr/local/squid/sbin/squid -z

 FATAL: Bungled squid.conf line 4: cache_dir /usr/local/squid/var/cache
 100 16 256
 Squid Cache (Version 3.0.STABLE1): Terminated abnormally.
 CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
 Maximum Resident Size: 0 KB
 Page faults with physical i/o: 0




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] The requested URL could not be retrieved: invalid url

2008-02-07 Thread Alexandre Correa
Your squid.conf has two http_port lines using the same port.

Try removing one http_port 3128 line, or change the port number on the last http_port.

On Feb 7, 2008 10:50 AM, Dave Coventry [EMAIL PROTECTED] wrote:
 Almost there!

 Squid3.0.STABLE1 squid.conf:

 
 visible_hostname iqBase
 http_port 3128 transparent
 acl iqnet src 192.168.60.0/255.255.255.0
 cache_dir ufs /usr/local/squid/var/cache 100 16 256
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl CONNECT method CONNECT
 http_access allow iqnet
 http_access allow manager localhost
 http_access deny manager
 http_access deny CONNECT !SSL_ports
 icp_access allow iqnet
 http_access deny all
 icp_access allow iqnet
 icp_access deny all
 htcp_access allow iqnet
 htcp_access deny all
 http_port 3128
 hierarchy_stoplist cgi-bin ?
 access_log /usr/local/squid/var/logs/access.log squid
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 cache_effective_user nobody
 refresh_pattern ^ftp:   144020% 10080
 refresh_pattern ^gopher:14400%  1440
 refresh_pattern .   0   20% 4320
 icp_port 3130
 coredump_dir /usr/local/squid/var/cache
 ===

 Contents of the access.log:

 ===
 $ cat /usr/local/squid/var/logs/access.log
 1202382009.744  0 192.168.60.199 NONE/400 1856 GET
 /firefox?client=firefox-a&rls=org.mozilla:en-GB:official - NONE/-
 text/html
 1202382009.907  0 192.168.60.199 NONE/400 1760 GET /favicon.ico -
 NONE/- text/html
 1202382028.023  0 192.168.60.199 NONE/400 1738 GET / - NONE/- text/html
 1202382046.868  0 192.168.60.199 NONE/400 1738 GET / - NONE/- text/html
 1202382046.970  0 192.168.60.199 NONE/400 1760 GET /favicon.ico -
 NONE/- text/html
 1202387676.866  0 192.168.60.199 NONE/400 1856 GET
 /firefox?client=firefox-a&rls=org.mozilla:en-GB:official - NONE/-
 text/html
 1202387677.121  0 192.168.60.199 NONE/400 1760 GET /favicon.ico -
 NONE/- text/html
 1202387691.774  0 192.168.60.199 NONE/400 2082 GET
 /safebrowsing/update?client=navclient-auto-ffox&appver=2.0.0.11&version=goog-white-domain:1:23,goog-white-url:1:371,goog-black-url:1:18337,goog-black-enchash:1:44096
 - NONE/- text/html
 1202387700.522  0 192.168.60.199 NONE/400 1856 GET
 /firefox?client=firefox-a&rls=org.mozilla:en-GB:official - NONE/-
 text/html
 ===

 Browser Display:

 ===
 ERROR
 The requested URL could not be retrieved

 While trying to retrieve the URL:
 /firefox?client=firefox-a&rls=org.mozilla:en-GB:official

 The following error was encountered:

 * Invalid URL

 Some aspect of the requested URL is incorrect. Possible problems:

 * Missing or incorrect access protocol (should be `http://'' or similar)
 * Missing hostname
 * Illegal double-escape in the URL-Path
 * Illegal character in hostname; underscores are not allowed

 Your cache administrator is webmaster.
 Generated Thu, 07 Feb 2008 12:35:00 GMT by iqBase (squid/3.0.STABLE1)
 ===

 Can anyone see what's wrong?




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid trying access PF devices (freebsd)

2008-01-18 Thread Alexandre Correa
maresia# ls -l /dev/pf
crw---  1 root  wheel0,  74 Jan 10 11:18 /dev/pf


I will recompile Squid without PF support; I don't need it on the
proxies, because the gateways redirect to the proxies. :)

thanks !!!

regards !

On Jan 18, 2008 10:45 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
 On Fri, Jan 18, 2008, Alexandre Correa wrote:
 Yes, the gateway redirects packets going to tcp/80 to the squid servers!

 Then ls -l /dev/pf, look at the ownership/permissions, make sure you at least
 start squid as root?




 Adrian


 
  On Jan 18, 2008 3:14 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
   Are you running Squid-2.6 as a transparent proxy?
  
  
   On Fri, Jan 18, 2008, Alexandre Correa wrote:
Hello !!
   
one of my proxies, running squid 2.6S17 on freebsd 6.2 is trying
access PF device.. in cache.log shows this error message:
   
   
2008/01/18 14:51:13| clientNatLookup: PF open failed: (13) Permission 
denied
   
on every request !!
   
how i can disable this ?!?! removing this configure option
--enable-pf-transparent can solve ?
   
regards
   
--
   
Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net
  
   --
   - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid 
   Support -
   - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
  
 
 
 
  --
 
  Sds.
  Alexandre J. Correa
  Onda Internet / OPinguim.net
  http://www.ondainternet.com.br
  http://www.opinguim.net

 --

 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
 -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] What is TCP/MISS, TCP/HIT ?

2008-01-15 Thread Alexandre Correa
TCP_MISS - fetched from the Internet
TCP_HIT - fetched from the local cache
TCP_MEM_* - fetched from the local memory (RAM) cache

:)
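
As a quick illustration, a sketch that counts those result codes in a
native-format access.log (the field positions are my assumption from the
default log format, not from this thread):

```python
def hit_ratio(lines):
    """Count TCP_HIT-style vs TCP_MISS-style results in Squid native log lines.

    In the native format the result code is the part before the '/' in the
    'TCP_XXX/status' field (4th whitespace-separated column).
    """
    hits = misses = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip malformed/blank lines
        code = fields[3].split("/")[0]
        if code in ("TCP_HIT", "TCP_MEM_HIT", "TCP_IMS_HIT"):
            hits += 1
        elif code.startswith("TCP_MISS"):
            misses += 1
    total = hits + misses
    return hits / total if total else 0.0

log = [
    "1202382009.744 0 192.168.60.1 TCP_HIT/200 1856 GET http://example.com/ - NONE/- text/html",
    "1202382010.100 5 192.168.60.1 TCP_MISS/200 900 GET http://example.com/a - DIRECT/1.2.3.4 text/html",
]
print(hit_ratio(log))  # prints 0.5
```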

On Jan 15, 2008 11:20 AM, Murat Ugur EMINOGLU [EMAIL PROTECTED] wrote:
 Dear List Users, what are TCP_MISS, TCP_HIT, TCP_MEM and the others? I don't
 understand. Could you send documents and links?

 Thanks, Best Regars.

 Murat.

 --
 #!/bin/bash
 
 Murat Ugur EMINOGLU
 
 www.fedoraturkiye.com
 www.murat.ws
 liste[at]fedoraturkiye.com
 




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] how to increase memory hit ratio

2008-01-07 Thread Alexandre Correa
Hello !!

squidclient mgr:info shows :

Request Memory Hit Ratios:  5min: 4.5%, 60min: 6.2%
Request Disk Hit Ratios:5min: 44.9%, 60min: 48.8%

my maximum_object_size_in_memory is 64 KB...

this server is dedicated to Squid (quad Opteron with 4 GB RAM,
cache_mem 128 MB)..

can I increase cache_mem to 256 MB and the maximum object in memory to 128 KB?

which are the best values?

thanks !!

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net
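
The change being asked about would look like this in squid.conf (the values
are the ones proposed in the question above, not a recommendation):

```
cache_mem 256 MB
maximum_object_size_in_memory 128 KB
```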


Re: [squid-users] how to increase memory hit ratio

2008-01-07 Thread Alexandre Correa
I'm using GDSF for memory_replacement and LFUDA for disk!

I will increase cache_mem to 512 MB and maximum_object_size_in_memory
to 512 KB, and see what happens..

.. the load on these servers is very low...

thanks for answers !!

:)

On Jan 8, 2008 1:29 AM, Gino Rivera [EMAIL PROTECTED] wrote:
 Hi there;

 I recommend you to increase the following:
  * maximum_object_size_in_memory = this will allow Squid to retain more,
 bigger objects in memory. How about 128 KB? There is a calculation for this.
  * cache_mem: leave it where it is; that's fine for the hardware you are
 using.

 But the following options need to be changed:
 * memory_replacement_policy heap LFUDA
 * cache_replacement_policy heap LFUDA

 Please refer to the Squid manual to learn what LFUDA is.

 When asking on the mailing list, say what version of Squid you
 have.

 Remember: the version is important to know what features you have available
 and what not.


 - Original Message -
 From: Alexandre Correa [EMAIL PROTECTED]
 To: squid-users squid-users@squid-cache.org
 Sent: Monday, January 07, 2008 8:05 PM
 Subject: [squid-users] how to increase memory hit ratio


  Hello !!
 
  squidclient mgr:info shows :
 
  Request Memory Hit Ratios:  5min: 4.5%, 60min: 6.2%
  Request Disk Hit Ratios:5min: 44.9%, 60min: 48.8%
 
  my maximum_object_size_in_memory are 64kb ...
 
  this servers is dedicated to squid (quad opteron with 4gb ram,
  cache_mem 128MB) ..
 
  can i increat cache_mem to 256mb and maximum obj in memory to 128kb ?
 
  which are the best values  ?
 
  thanks !!
 
  --
 
  Sds.
  Alexandre J. Correa
  Onda Internet / OPinguim.net
  http://www.ondainternet.com.br
  http://www.opinguim.net
 





-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] how to increase memory hit ratio

2008-01-07 Thread Alexandre Correa
cache_dir aufs /var/spool/squid/cache 11 32 256

/var/spool/squid/cache is in one dedicated HD, SAS 15.000rpm

.. i will read about GDSF and LFUDA ..

thanks again :)

On Jan 8, 2008 4:37 AM, Manoj_Rajkarnikar [EMAIL PROTECTED] wrote:
 On Tue, 8 Jan 2008, Alexandre Correa wrote:

  i´m using GDSF for memory_replacement and LFUDA for disk !!
 
  i will increase cache_mem to 512 MB and maximum_object_site_in_memory
  to 512 KB .. and see what´s happens..

 What's the total size of your cache_dir? If it's under 200 GB you can safely
 increase cache_mem to 1 GB.


 Manoj

 
  .. load of this servers is very low...
 
  thanks for answers !!
 
  :)
 
 --



-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] 2.7 vs 3.0

2007-12-21 Thread Alexandre Correa
ChangeLog may help you !!

On Dec 21, 2007 10:43 PM, Adrian Chadd [EMAIL PROTECTED] wrote:

 On Sat, Dec 22, 2007, Count Of Dracula wrote:
You want 2.6STABLE17 right now, and 2.7 when it is released. =]  3.0
isn't really ready for a production environment yet.
 
  Can you please explain what is a difference between Squid 2.6,2.7 and
  3.0 ? Why there is a Squid 2.7 branch?

 Because there are users who aren't ready to move to Squid-3.0 for
 various reasons, and there's life left in that branch.



 Adrian

 --
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
 -




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] squid freebsd aufs + coss - same hd

2007-12-06 Thread Alexandre Correa
Can 2 cache_dirs on the same HD (dedicated to Squid) cause a performance impact?

I'm using cache_dir aufs ... and cache_dir coss (objects smaller than 1000k)

cache_dir aufs /var/spool/squid/cache 11 32 256
cache_dir coss /var/spool/squid/onda_coss01 6000 max-size=100
maxfullbufs=4 membufs=20 block-size=4096


thanks !!

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid freebsd aufs + coss - same hd

2007-12-06 Thread Alexandre Correa
The rebuild of COSS takes a longer time... about 30 minutes to rebuild 6 GB
(with the configs posted previously)...

Correct?

Will I lose my cached files (aufs) if cache_swap_log is set to the
same location as the old swap file?


thanks !

regards

Alexandre

On Dec 6, 2007 8:50 AM, Adrian Chadd [EMAIL PROTECTED] wrote:
 cache_dir's are 'checked' in order; I suggest putting the coss directories
 first.



 adrian


 On Fri, Dec 07, 2007, Amos Jeffries wrote:
  Alexandre Correa wrote:
  can 2 cache_dir on the same hd (dedicated for squid) cause performance
  impact ?
  
  i?m using cache_dir aufs ... and cache_dir coss (objetcs smaller than
  1000k)
  
  cache_dir aufs /var/spool/squid/cache 11 32 256
  cache_dir coss /var/spool/squid/onda_coss01 6000 max-size=100
  maxfullbufs=4 membufs=20 block-size=4096
  
  
  thanks !!
  
 
  Looks good. Just some hints though...
 
  You'd do well to set min-size on the AUFS dir, to push the smaller
  objects into COSS. At present small objects can go to either, and large
  only in AUFS.
 
  A 4KB block-size may cause a lot of waste if you get a large number of
  small objects such as optimized web pages or spacer images. The default
  512 is sufficient for COSS dirs up to 8GB
Not worth changing it now though if any important data has already
gone to COSS. Would require a destroy and rebuild to fix that.
 
  Two on same HDD might drag each other down a little, but no more than a
  single large cachedir doing the same throughput. Unfortunately squid is
  not head-optimised for disk usage yet. COSS is more in-memory than the
  others so it should still be a net gain over a single pure aufs.
 
  Amos

 --
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
 -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid freebsd aufs + coss - same hd

2007-12-06 Thread Alexandre Correa
If cache_swap_log/cache_swap_state is not set, Squid doesn't start!

cache_swap_log/cache_swap_state is where Squid saves the default
'swap.state', right?

When adding a coss cache_dir, does Squid recreate swap.state?

thanks!

regards !

On Dec 6, 2007 8:03 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
 and losing the cache swap log just triggers a rebuild. You won't lose
 the cache, just time.

 Yes you have to set the swap state directories when using COSS.




 Adrian

 On Fri, Dec 07, 2007, Amos Jeffries wrote:
   rebuild of coss take longer time... about 30 mins to rebuild 6gb (with
   configs posted previously)...
  
   correct ?
  
   i will loose my cached files (aufs) if cache_swap_log is set to the
   same location of old swap file ?!
 
  1) cache_swap_log is obsolete. Its now cache_swap_state to better reflect
  the file to which it applies, and that the file is NOT a normal log file.
 
  2) Better leave the cache_swap_log/cache_swap_state at defaults (missing
  from squid.conf) for squid to handle it safely.
 
  3) What Adrian meant was the order of the lines in squid.conf. Not an
  actual re-ordering of cache-dir. This is a major reason for #2, so a
  re-ordering in the config does not screw up the custom state file
  numbering.
 
 
  Amos
 
  
   thanks !
  
   regards
  
   Alexandre
  
   On Dec 6, 2007 8:50 AM, Adrian Chadd [EMAIL PROTECTED] wrote:
   cache_dir's are 'checked' in order; I suggest putting the coss
   directories
   first.
  
  
  
   adrian
  
  
   On Fri, Dec 07, 2007, Amos Jeffries wrote:
Alexandre Correa wrote:
can 2 cache_dir on the same hd (dedicated for squid) cause
   performance
impact ?

i'm using cache_dir aufs ... and cache_dir coss (objects smaller than
1000k)

cache_dir aufs /var/spool/squid/cache 11 32 256
cache_dir coss /var/spool/squid/onda_coss01 6000 max-size=100
maxfullbufs=4 membufs=20 block-size=4096


thanks !!

   
Looks good. Just some hints though...
   
You'd do well to set min-size on the AUFS dir, to push the smaller
objects into COSS. At present small objects can go to either, and
   large
only in AUFS.
   
A 4KB block-size may cause a lot of waste if you get a large number of
small objects such as optimized web pages or spacer images. The
   default
512 is sufficient for COSS dir up to 8GB large
  Not worth changing it now though if any important data has
   already
  gone to COSS. Would require a destroy and rebuild to fix that.
   
Two on same HDD might drag each other down a little, but no more than
   a
single large cachedir doing the same throughput. Unfortunately squid
   is
not head-optimised for disk usage yet. COSS is more in-memory than the
others so it should still be a net gain over a single pure aufs.
   
Amos
  
   --
   - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
   Support -
   - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
  
  
  
  
   --
  
   Sds.
   Alexandre J. Correa
   Onda Internet / OPinguim.net
   http://www.ondainternet.com.br
   http://www.opinguim.net
  
 

 --

 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
 -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] combine two connection

2007-12-01 Thread Alexandre Correa
you can do this with BGP on your router !!



On Dec 1, 2007 10:51 AM, apizz [EMAIL PROTECTED] wrote:
 is it possible for me to combine two connection then become one?

 i got two proxy..  i want to combine it to increase my bandwith..

 how can i do that..

 thanks..

 --
 Mohamed Afif Che Mohamed Rus
 Electric and Electronic Engineering Student
 Universiti Teknologi PETRONAS




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] combine two connection

2007-12-01 Thread Alexandre Correa
No !! you can do custom tcp_outgoing acls .. using source address..

but not join links !!
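
Per-link steering with tcp_outgoing_address, as suggested above, looks roughly like this in squid.conf (the subnets and outgoing addresses are made-up illustrations):

```
# route each client subnet out through a different upstream address
acl lab_a src 10.0.1.0/24
acl lab_b src 10.0.2.0/24
tcp_outgoing_address 192.0.2.10 lab_a
tcp_outgoing_address 198.51.100.10 lab_b
```

This balances clients across the two links; it does not merge them into one faster connection.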

On Dec 1, 2007 11:24 AM, apizz [EMAIL PROTECTED] wrote:
 i dont have any router..

 im studying at university.. i dont have any access to the router..

 im also behind firewall..

 is it possible to joint the connection using some config at squid.conf?


 On Dec 1, 2007 5:56 AM, Alexandre Correa [EMAIL PROTECTED] wrote:
  you can do this with BGP on your router !!
 
 
 
 
  On Dec 1, 2007 10:51 AM, apizz [EMAIL PROTECTED] wrote:
   is it possible for me to combine two connection then become one?
  
   i got two proxy..  i want to combine it to increase my bandwith..
  
   how can i do that..
  
   thanks..
  
   --
   Mohamed Afif Che Mohamed Rus
   Electric and Electronic Engineering Student
   Univerisit Teknologi PETRONAS
  
 
 
 
  --
 
  Sds.
  Alexandre J. Correa
  Onda Internet / OPinguim.net
  http://www.ondainternet.com.br
  http://www.opinguim.net
 



 --
 Mohamed Afif Che Mohamed Rus
 Electric and Electronic Engineering Student
 Universiti Teknologi PETRONAS




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] Squid Proxy Vulnerability.

2007-11-29 Thread Alexandre Correa
 - DIRECT/24.71.223.11 -
 1196170421.858645 64.237.46.55 TCP_MISS/200 328 CONNECT
 69.20.116.136:25 - DIRECT/69.20.116.136 -
 1196170421.957 59 64.237.46.55 TCP_MISS/200 0 CONNECT
 64.18.5.10:25 - DIRECT/64.18.5.10 -
 1196170422.101  30770 64.237.46.132 TCP_MISS/200 0 CONNECT
 203.135.130.131:25 - DIRECT/203.135.130.131 -
 1196170422.298  17913 64.237.46.55 TCP_MISS/200 577 CONNECT
 168.95.5.19:25 - DIRECT/168.95.5.19 -
 1196170422.551   1265 208.167.225.68 TCP_MISS/200 268 CONNECT
 218.5.77.18:25 - DIRECT/218.5.77.18 -
 1196170422.723 37 64.237.46.55 TCP_MISS/503 0 CONNECT
 208.4.52.29:25 - DIRECT/208.4.52.29 -
 1196170423.019197 208.167.225.68 TCP_MISS/200 334 CONNECT
 65.54.244.8:25 - DIRECT/65.54.244.8 -

 

 As you can see that's nearly 100 hits in 8 seconds.  I know this is
 some kind of tool, but I cannot figure out a way to reproduce these
 results.

 Our conf for this server is as follows (Host Names and IP addresses
 have been changed in case our other servers are also vulnerable):

 

 http_port 3128
 icp_port 3130


 cache_dir ufs /var/spool/squid3 100 16 256
 debug_options ALL,9

 cache_peer xxx.xxx.xxx.xxx parent 443 7 ssl sslflags=DONT_VERIFY_PEER
 no-query no-digest login=PASS originserver name=bradbury_ats_za
 cache_peer xxx.xxx.xxx.xxx parent 443 7 ssl sslflags=DONT_VERIFY_PEER
 no-query no-digest login=PASS originserver name=weasel_ats
 cache_peer xxx.xxx.xxx.xxx parent 443 7 ssl sslflags=DONT_VERIFY_PEER
 no-query no-digest login=PASS originserver name=reynolds_ats
 cache_peer xxx.xxx.xxx.xxx parent 443 7 ssl sslflags=DONT_VERIFY_PEER
 no-query no-digest login=PASS originserver name=asimov_ats


 acl sites_on_bradbury_ats_za dstdomain stuff.us.com
 acl sites_on_weasel_ats dstdomain stuff2.us.com
 acl sites_on_reynolds_ats dstdomain stuff3.us.com
 acl sites_on_reynolds_ats dstdomain stuff4.us.com
 acl sites_on_asimov_ats dstdomain stuff5.us.com
 acl sites_on_asimov_ats dstdomain stuff6.us.com
 acl sites_on_asimov_ats dstdomain stuff7.us.com

 cache_peer_access bradbury_ats_za allow sites_on_bradbury_ats_za
 cache_peer_access weasel_ats allow sites_on_weasel_ats
 cache_peer_access reynolds_ats allow sites_on_reynolds_ats
 cache_peer_access asimov_ats allow sites_on_asimov_ats


 acl all src 0.0.0.0/0.0.0.0
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 443 # https
 acl CONNECT method CONNECT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

 http_access allow all
 icp_access allow all


 coredump_dir /var/spool/squid3


 httpd_suppress_version_string on
 visible_hostname proxy-us.hrsmart.com

 

 Anyone have any idea how this was being done?  If so please respond to
 the list.  If you know how to do this, I would appreciate a way to
 reproduce this for my superiors.
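
For what it's worth, the posted rules `http_access deny !Safe_ports` and `http_access deny CONNECT !SSL_ports` should already refuse CONNECT to port 25, so the running config at the time of the abuse likely differed from the paste. Independent of that, ending an accelerator config with `http_access allow all` leaves it an open proxy on ports 80/443. A hedged sketch of a default-deny tail (domain names reuse the placeholders from the post):

```
# accelerator: only serve the hosted sites, deny everything else
acl hosted_sites dstdomain stuff.us.com stuff2.us.com stuff3.us.com
http_access allow hosted_sites
http_access deny all
icp_access deny all
```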




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] question about filesystems and directories for cache.

2007-11-24 Thread Alexandre Correa
reiserfs 4 is much better than ext3 ...

On Nov 24, 2007 9:55 PM, Tony Dodd [EMAIL PROTECTED] wrote:
 Matias Lopez Bergero wrote:
  Hello,
 
 snip
 
  I'm being reading the wiki and the mailing list to know, which is the
  best filesystem to use, for now I have chose ext3 based on comments on
  the list, also, I have passed the nodev,nosuid,noexec,noatime flags to
  fstab in order to get a security and faster performance.
 
 snip

 Hi Matias,

 I'd personally recommend against ext3, and point you towards reiserfs.
 ext3 is horribly slow for many small files being read/written at the
 same time.  I'd also recommend maximizing your disk throughput, by
 splitting the raid, and having a cache-dir on each disk; though of
 course, you'll lose redundancy in the event of a disk failure.

 I wrote a howto that revolves around maximizing squid performance, take
 a look at it, you may find it helpful:
 http://blog.last.fm/2007/08/30/squid-optimization-guide

 --
 Tony Dodd, Systems Administrator

 Last.fm | http://www.last.fm
 Karen House 1-11 Baches Street
 London N1 6DL

 check out my music taste at:
 http://www.last.fm/user/hawkeviper




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] my squid used by someone's proxy server.

2007-11-08 Thread Alexandre Correa
block access via http_access to only the domains that you host.. example:

i host domains:
www.domain1.com
www.domain2.com
www.domainN.com

acl mydomains dstdomain .domain1.com .domain2.com .domainN.com

http_access allow mydomains
http_access deny all

or maybe you can try this other way:

acl myserver dst 200.200.200.200 200.200.200.1
http_access allow myserver
http_access deny all


i think this can solve your problem...

regards !!

AlexandrE

On Nov 8, 2007 4:16 AM, Seonkyu Park [EMAIL PROTECTED] wrote:
 Hello Squid users.

 I am using squid for server accelerator.

 But my squid server is being used by someone's transparent proxy.
 (My server IP address listed by 
 http://www.proxy-list.net/transparent-proxy-lists.shtml )
 Also listed by google link (PROXY LISTS - Free Anonymous Proxies and Proxy 
 Tools)

 Is It possible?
 Plz help on my squid.conf ( for reverse proxy)
 
 http_port 80 vhost
 icp_port 0
 cache_peer  111.111.111.1 parent 80 0  no-query originserver no-digest
 cache_peer_domain  111.111.111.1  www.abc.com
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl purge method PURGE
 acl CONNECT method CONNECT
 acl port80 port 80

 http_access allow port80
 http_access allow manager localhost
 http_access deny manager
 http_access allow purge localhost
 http_access deny purge
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access deny all
 http_reply_access allow all
 

 And I checked  my squid logs.
 (cd /var/log/squid ; grep -v abc.com access.log* | grep -v 503 | grep -v 
 TCP_DENIED)

 I found that my squid server (server accelerator) is being used by someone's proxy
 server.

 (219.136.189.213 - - [08/Nov/2007:15:30:35 +0900] GET http://www.baidu.com/ 
 HTTP/1.0 200 4082 - - TCP_REFRESH)

 How can I block it ?


 Plz help.






-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] Log analyzer

2007-11-07 Thread Alexandre Mackow

Hi all,
just a question around squid, what do you use for analysing your logs?

I tried to use sarg, but when the report is sent by mail, the HTML
page report is not generated (no error even with the -z option).


So, do you have a preferred log analyser that produces HTML pages ?

Thanks a lot .

Best regard.

++
begin:vcard
fn:Alexandre Mackow
n:Mackow;Alexandre
org:Groupe Millet
adr;dom:;;Bretignolles;Bressuire;;79300
email;internet:[EMAIL PROTECTED]
title:Service OSI
tel;work:05 49 74 55 67
x-mozilla-html:FALSE
version:2.1
end:vcard



Re: [squid-users] Log analyzer -- Webalizer

2007-11-07 Thread Alexandre Mackow

Keshava M P wrote:

Hi,

Webalizer is a good tool.

Cheers!


On Nov 7, 2007 2:43 PM, Alexandre Mackow [EMAIL PROTECTED] wrote:
  

Hi all,
just a question around squid, what do you use for analysing your logs?

I tried to use sarg, but when the report is sent by mail, the HTML
page report is not generated (no error even with the -z option).

So do you have a log analyser preference with html page ?

Thanks a lot .

Best regard.

++






  

Thanks for your answer, Webalizer is installed ;-)
Good results so far ...

Regard

++ 



[squid-users] FreeBSD, enable or not memory_pools

2007-11-05 Thread Alexandre Correa
Hello !!

Which is better for FreeBSD, enabling or disabling memory_pools ?

freebsd 6.2 amd64
regards !!

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] FreeBSD, enable or not memory_pools

2007-11-05 Thread Alexandre Correa
i'm using

memory_pools on
memory_pools_limit 16 MB

working fine..

:)


On Nov 6, 2007 3:31 AM, Tek Bahadur Limbu [EMAIL PROTECTED] wrote:
 Hi Alexandre,

 Alexandre Correa wrote:
  Hello !!
 
  Which is better for FreeBSD, enabling or disabling memory_pools ?
 
  freebsd 6.2 amd64

 The default value seems to work fine for me.
 But you are free to experiment with it and report back your results!


  regards !!
 


 --

 With best regards and good wishes,

 Yours sincerely,

 Tek Bahadur Limbu

 System Administrator

 (TAG/TDG Group)
 Jwl Systems Department

 Worldlink Communications Pvt. Ltd.

 Jawalakhel, Nepal

 http://www.wlink.com.np

 http://teklimbu.wordpress.com




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid proccess freeze

2007-11-03 Thread Alexandre Correa
 That's a 110 GB cache which is big. So that is absolutely nothing there
 in /var/log/squid/cache.log before your Squid process goes Zombie?


Nothing is shown in cache.log ..

 When your Squid process freezes or goes zombie, what errors does
 your web browser give you? I guess that machine does not lose network
 connectivity?

Connection timed out ... and i can still access the server; i try to kill the
squid process without success.. i can't terminate it .. only if i reboot the
server.. ;/


 When your Squid process goes to a zombie state again, check the number
 of mbufs used at that very moment.

 Maybe, just maybe, it could be related to your mbufs running out.

okey.. i will check and post it !!!

about ICP traffic, some client servers are parent of my squid proxy...



thanks !!

regards

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid proccess freeze

2007-11-03 Thread Alexandre Correa
maresia# netstat -m
32674/618/33292 mbufs in use (current/cache/total)
32665/103/32768/32768 mbuf clusters in use (current/cache/total/max)
32665/103 mbuf+clusters out of packet secondary zone in use (current/cache)
0/0/0/0 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/0 9k jumbo clusters in use (current/cache/total/max)
0/0/0/0 16k jumbo clusters in use (current/cache/total/max)
73498K/360K/73859K bytes allocated to network (current/cache/total)
0/6582/3291 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines


mbuf limit reached !!! now i set it to 264144 ...

thanks .. i think it's solved now :)
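
For reference, on FreeBSD the mbuf-cluster ceiling reported by `netstat -m` is set at boot via a loader tunable; a sketch of the change described above (the exact value mirrors the post and is not a tuning recommendation):

```
# /boot/loader.conf -- raise the mbuf cluster limit at boot
kern.ipc.nmbclusters="264144"
```

After a reboot, `netstat -m` should show the new total/max figure.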


On Nov 3, 2007 2:42 PM, Alexandre Correa [EMAIL PROTECTED] wrote:
  That's a 110 GB cache which is big. So that is absolutely nothing there
  in /var/log/squid/cache.log before your Squid process goes Zombie?
 

 Nothing are show in cache.log ..

  When your Squid process gets freeze or goes zombie, what errors does
  your web browser gives you? I guess that machine does not loose network
  connectivity?

 Connection Time out ... and i can access the server, i try to kill
 squid process without success.. i can't terminate .. only if i reboot
 server.. ;/


  When your Squid process goes to a zombie state again, check the number
  of mbufs used at that very moment.
 
  Maybe, just maybe, it could be related to your mbufs running out.

 okey.. i will check and post it !!!

 about ICP traffic, some client servers are parent of my squid proxy...



 thanks !!

 regards


 Sds.
 Alexandre J. Correa
 Onda Internet / OPinguim.net
 http://www.ondainternet.com.br
 http://www.opinguim.net




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] squid proccess freeze

2007-11-02 Thread Alexandre Correa
 How many users is your Squid box serving? It's strange that there is no
 errors. Where have you defined the cache_log directive in your squid.conf?

+- 300~400 users simultaneously

cache_access_log none
cache_log /var/log/squid/cache.log
cache_store_log none

cache_dir aufs /var/spool/squid 11 72 256
cache_mem 256 MB


 When you say after some time running .. squid proccess refusing
 connections, is it a few minutes or hours or even days?
an accurate time does not exist, after some hours, when the load increases ..

 Can you post the output of squidclient mgr:info?

 Posting your squid.conf may help too.

maresia# squidclient mgr:info
Squid Object Cache: Version 2.6.STABLE16
Start Time: Fri, 02 Nov 2007 04:43:44 GMT
Current Time:   Fri, 02 Nov 2007 07:24:54 GMT
Connection information for squid:
Number of clients accessing cache:  0
Number of HTTP requests received:   28162
Number of ICP messages received:21
Number of ICP messages sent:21
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   100.2
Average ICP messages per minute since start:0.1
Select loop called: 795855 times, 21.196 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 48.1%, 60min: 43.4%
Byte Hit Ratios:5min: 4.5%, 60min: 6.6%
Request Memory Hit Ratios:  5min: 2.2%, 60min: 1.2%
Request Disk Hit Ratios:5min: 50.0%, 60min: 26.6%
Storage Swap size:  2980474 KB
Storage Mem size:   50900 KB
Mean Object Size:   17.57 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.08265  0.09736
Cache Misses:  0.39928  0.37825
Cache Hits:0.00286  0.00463
Near Hits: 0.09219  0.09219
Not-Modified Replies:  0.00286  0.0
DNS Lookups:   0.10428  0.17826
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:16869.319 seconds
CPU Time:   85.383 seconds
CPU Usage:  0.51%
CPU Usage, 5 minute avg:0.38%
CPU Usage, 60 minute avg:   0.28%
Process Data Segment Size via sbrk(): 154172 KB
Maximum Resident Size: 159328 KB
Page faults with physical i/o: 1
Memory accounted for:
Total accounted:75685 KB
memPoolAlloc calls: 4466729
memPoolFree calls: 4009299
File descriptor usage for squid:
Maximum number of file descriptors:   4096
Largest file desc currently in use: 26
Number of file desc currently in use:   21
Files queued for open:   0
Available number of file descriptors: 4075
Reserved number of file descriptors:   100
Store Disk files open:   0
IO loop method: kqueue
Internal Data Structures:
169850 StoreEntries
  5569 StoreEntries with MemObjects
  5564 Hot Object Cache Items
169667 on-disk objects


at this moment i have less traffic/users .. i will run it later when the load
increases.. (if squid doesn't freeze before).. it's very strange..


kernel compiled with options: (6.2-RELEASE-p8 FreeBSD)


options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores

options MSGMNB=16384
options MSGMNI=41
options MSGSEG=2049
options MSGSSZ=64
options MSGTQL=512
options SHMSEG=16
options SHMMNI=32
options SHMMAX=2097152
options SHMALL=3096


thanks !!


regards,

AlexandrE





 
  # ps auwx | grep squid
 
  USERPID %CPU %MEM   VSZ   RSS  TT  STAT STARTED  TIME COMMAND
  squid   807  0.0 16.1 679548 671268  ??  T10:52PM  10:50.02 (squid)
  -D -s (squid)
  squid   864  0.0  0.0  2472   752  ??  Is   10:52PM   0:00.00 (unlinkd)
  (unlinkd)
  root  36341  0.0  0.0  5852  1212  p0  R+   10:59AM   0:00.00 grep squi=
  d
 
  using AUFS !!
 
  squid 2.6.STABLE16
  configure options:
 
   '--program-prefix=3D' '--prefix=3D/usr' '--exec-prefix=3D/usr'
  '--bindir=3D/usr/bin' '--sbindir=3D/usr/sbin' '--sysconfdir=3D/etc'
  '--includedir=3D/usr/include' '--libdir=3D/usr/lib' '--libexecdir=3D/usr/li=
  bexec'
  '--sharedstatedir=3D/usr/com' '--mandir=3D/usr/share/man'
  '--infodir=3D/usr/share/info' '--exec_prefix=3D/usr' '--bindir=3D/usr/sbin'
  '--libexecdir=3D/usr/lib/squid' '--localstatedir=3D/var'
  '--sysconfdir=3D/etc/squid' '--disable-useragent-log' '--disable-referer-lo=
  g'
  '--enable-kqueue' '--enable-snmp' '--enable-removal-policies=3Dheap,lru'
  '--enable-storeio=3Daufs,coss,diskd,ufs' '--enable-ssl'
  '--enable-ipf-transparent' '--enable-linux-netfilter' '--with-pthreads'
  '--disable-dependency-tracking

[squid-users] squid proccess freeze

2007-11-01 Thread Alexandre Correa
Hello !!

I'm testing squid on freebsd 6.2 amd64+SMP; the server has 2 dual-core
opteron procs and 4gb ram ...

after some time running .. the squid process starts refusing connections; if i
try to kill it, the process doesn't stop.. no errors are shown.. no core
dumps...

# ps auwx | grep squid

USER    PID %CPU %MEM   VSZ   RSS  TT  STAT STARTED  TIME COMMAND
squid   807  0.0 16.1 679548 671268  ??  T10:52PM  10:50.02 (squid)
-D -s (squid)
squid   864  0.0  0.0  2472   752  ??  Is   10:52PM   0:00.00 (unlinkd)
(unlinkd)
root  36341  0.0  0.0  5852  1212  p0  R+   10:59AM   0:00.00 grep squid

using AUFS !!

squid 2.6.STABLE16
configure options:

 '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec'
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--exec_prefix=/usr' '--bindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--localstatedir=/var'
'--sysconfdir=/etc/squid' '--disable-useragent-log' '--disable-referer-log'
'--enable-kqueue' '--enable-snmp' '--enable-removal-policies=heap,lru'
'--enable-storeio=aufs,coss,diskd,ufs' '--enable-ssl'
'--enable-ipf-transparent' '--enable-linux-netfilter' '--with-pthreads'
'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost'
'--disable-ident-lookups' '--enable-underscores' '--datadir=/usr/share'
'--with-maxfd=4096' '--enable-async-io' '--disable-dlmalloc' '--with-aio'

does somebody know what's happening ?!


thanks..

regards,

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] Ntlm and url_regex

2007-10-22 Thread Alexandre Mackow



Sujet: Ntlm and url_regex
Date: Mon, 22 Oct 2007 11:42:05 +0200
De: Alexandre Mackow [EMAIL PROTECTED]
Pour: squid-users@squid-cache.org

Hi all,
Squid is running and works perfectly with authentication based on
AD (NTLM) ..
So for my users who are not fully authorized, i create an acl
acl sites_ok url_regex "/etc/squid/sitesok.list"
http_access allow sites_ok

With 3 sites for everybody
The problem is that when a user who is not authorized with ntlm goes to a
page allowed by url_regex, and a link is present on the page (I
think), an authentication window opens ...and the user has to click
to dismiss the message.

So my question: is it possible to make some sites fully accessible for
everybody without an authentication window ?

Thanks a lot for your help.
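
A hedged sketch of the usual rule ordering for this situation: the whitelist allow must appear before any http_access rule that requires credentials, so requests matching the open sites never receive a 407 challenge (the `authed` acl name is an assumption, not taken from the poster's config):

```
acl sites_ok url_regex "/etc/squid/sitesok.list"
acl authed proxy_auth REQUIRED
# open sites first: these requests pass without credentials
http_access allow sites_ok
# everything else must authenticate
http_access allow authed
http_access deny all
```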
++



[Fwd: Re: [squid-users] Ntlm and url_regex]

2007-10-22 Thread Alexandre Mackow




Michael Alger wrote:
 On Mon, Oct 22, 2007 at 11:44:17AM +0200, Alexandre Mackow wrote:
 Squid is running and perfectly works with an authentification
 based on AD (Ntlm) ..
 So for my users who are not fully authorized, i create an acl
 acl sites_ok url_regex /etc/squid/sitesok.list
 http_access allow sites_ok

 With 3 sites for evrybody
 The probleme that when a user is not autorized with ntlm and go to
 a page authorized with url_regex, when a link is present on the
 page (I think), an authentification windows open ...and the user
 have to click to pass the message.
 
 When a browser accesses a site, it will download all resources
 required to display it. The main ones to look for are style sheets,
 scripts, and embedded images and other types of media. You might
 find the Firebug extension for Firefox is useful for identifying
 all the things your browser is accessing in order to render a page.
 
 You will need to permit unauthenticated access to every resource on
 the page(s) you want to allow access to in order for a user to be
 able to browse it without being prompted to authenticate.
 
 Note that it's perfectly legitimate for some of the resources used
 by a page to be located on a different server, and even a completely
 unrelated domain. A good example is advertising scripts, which
 typically live on an adhost's servers (e.g. doubleclick.net).
 
 It's also possible that the browser is pre-fetching pages linked
 to by the site, by following normal hyperlinks. Most browsers don't
 do this out of the box though, only with the help of internet
 accelerator type software. So while this is possible, the most
 likely cause of the authentication popup is that the sites you're
 allowing access to include references to media or scripts located on
 other servers which you aren't allowing access to.
 
 AFAIK, there's no way in squid to tell it to allow a site and
 everything on it. If working out what external resources the site
 requires and permitting access to them is not an option (e.g. it's
 outside of your control or changes frequently), you might be able to use
 the Referer header from the client's request in an ACL -- but if you
 can, you make it possible for anyone who's clever to access any site
 without authenticating (the client can send whatever Referer header it
 wants), which may be unacceptable.
 
 A completely different option could be to use a tool to create a
 local mirror of the site(s) you want to allow access to. Such a
 tool would pull in all resources required to render the page and
 store them on a local server. It would also rewrite the original
 page to reference the local copies. Then you just need to permit
 unauthenticated users access to your local mirror.
 

Ok thanks for your help

Regards.
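
The Referer-based compromise mentioned in the reply could be sketched like this in Squid 2.x (hedged: the acl type name `referer_regex` and the domain are given for illustration, and as the reply notes the header is trivially forged, so this weakens the policy):

```
# allow dependent resources when the request claims to come from the open site
acl from_open_site referer_regex ^https?://www\.allowedsite\.example/
http_access allow from_open_site
```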






Re: [squid-users] Squid eating too much memory

2007-10-21 Thread Alexandre Correa
  200125 Hot Object Cache Items
  1646504 on-disk objects


 
  Thanking you...
 
 Thank you too!



-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] Cache dir problem with LVM

2007-10-17 Thread Alexandre Correa
try this,
with the lvm mounted on /var/spool/squid

chown squid:squid /var/spool/squid
chown squid:squid -R /var/spool/squid/*

chmod 744 /var/spool/squid
chmod 744 -R /var/spool/squid/*

maybe this can work :)

regards,

AlexandrE

On 10/17/07, Frenette, Jean-Sébastien [EMAIL PROTECTED] wrote:
 Hi everyone,

 I have a little problem. For my squid cache folder, I've set a Raid (LVM) 
 volume name « VolGroup00-LogVolSquidCache1 » that I mount to 
 /var/spool/squid/ (this is where my cache folder point to).

 Now, when I start squid, I get:
 FATAL: cache_dir /var/spool/squid/1/: (13) Permission denied Squid Cache 
 (Version 2.6.STABLE13): Terminated abnormally.
 CPU Usage: 0.012 seconds = 0.008 user + 0.004 sys Maximum Resident Size: 0 KB 
 Page faults with physical i/o: 0

 I've changed the own to squid.squid so everything in /var/spool/squid is 
 chown -R squid.squid

 Samething for the logs.

 I had the same problem with the swap drive until I ran squid -z, which 
 created all the folders.

 Wherever I mount my LVM volume and then point my cache there, it fails. If I
 point it anywhere else, it works.

 Anybody have an answer?

 Thanks

 JSF


-- 
Sds.

Alexandre Jeronimo Correa

Onda Internet - http://www.ondainternet.com.br
OPinguim Hosting - http://www.opinguim.net

Linux User ID #142329

UNOTEL S/A - http://www.unotel.com.br


Re: [squid-users] PHP-Problem

2007-09-23 Thread Alexandre Correa
Hello,


configure:2305: error: C compiler cannot create executables

this message suggests that your system doesn't have a fully working compiler !!

try: yum install gcc
or
up2date -d -i -u gcc


and rebuild squid src-rpm  !


regards !

On 9/24/07, Andreas Meyer [EMAIL PROTECTED] wrote:
 Amos Jeffries [EMAIL PROTECTED] wrote:

   What could be the problem with the Squid? If I circumvent the Squid, the
   shops
   are displayed without adding an index.php to the URL.
  
 
  PLEASE upgrade. There are quite a few serious problems known about
  versions of squid as old as that one. Not to mention the speed and
  configurability increases made across the 2.6 versions.

 I took a squid-2.6.STABLE5-7.src.rpm and tried to rebuild it but it
 didn't work out:

 Thread model: posix
 gcc version 3.3 20030226 (prerelease) (SuSE Linux)
 configure:2231: $? = 0
 configure:2233: gcc -V </dev/null >&5
 gcc: `-V' option must have argument
 configure:2236: $? = 1
 configure:2260: checking for C compiler default output
 configure:2263: gcc -O2 -march=i486 -mcpu=i686 -fPIE -DLDAP_DEPRECATED
 -fno-strict-aliasing  -pie conftest.c >&5
 gcc: unrecognized option `-pie'
 cc1: error: unrecognized option `-fPIE'
 configure:2266: $? = 1
 configure: failed program was:
 | #line 2239 configure
 | /* confdefs.h.  */
 |
 | #define PACKAGE_NAME Squid Web Proxy
 | #define PACKAGE_TARNAME squid
 | #define PACKAGE_VERSION 2.6.STABLE5
 | #define PACKAGE_STRING Squid Web Proxy 2.6.STABLE5
 | #define PACKAGE_BUGREPORT http://www.squid-cache.org/bugs/;
 | #define PACKAGE squid
 | #define VERSION 2.6.STABLE5
 | /* end confdefs.h.  */
 |
 | int
 | main ()
 | {
 |
 |   ;
 |   return 0;
 | }
 configure:2305: error: C compiler cannot create executables
 See `config.log' for more details.

 Don't know how to continue.

 BTW, the problem I mentioned first with the index.php is suddenly
 gone, I don't know what happened. It took me three days of confusion
 and suddenly it's gone.

 Regards
 --
Andreas Meyer

 Internet-Tel.: 06341620317
 Mein öffentlicher GPG-Schlüssel unter:
 http://gpg-keyserver.de/pks/lookup?search=anmeyer&fingerprint=on&op=index



-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net

