Re: [squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-30 Thread guest01
Hi,

Any news on this topic? Unfortunately, RAM is running full within days
and at the moment, our workaround is to do a reboot ... We would
appreciate any other solution!

thanks,
peter


On Wed, May 11, 2011 at 3:47 PM, guest01 gues...@gmail.com wrote:
 On Wed, May 11, 2011 at 10:47 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/05/11 19:19, guest01 wrote:

 Hi,

 I am currently using squid 3.1.12 as forward-proxy without
 harddisk-caching (only RAM is used for caching). Each server is
 running on RHEL5.5 and is pretty strong (16 CPUs, 28GB RAM), but each
 servers starts swapping a few days after start. The workaround at the
 moment is to reboot the server once a week, which I don't really like.
 But swapping leads to serious side effects, e.g. performance troubles,
 ...

 way too much swapping:
 http://imageshack.us/m/52/6149/memoryday.png

 I already read a lot of posts and mails for similar problems, but
 unfortunately, I was not able to solve this problem. I added following
 infos to my squid.conf-file:
 # cache specific settings
 cache_replacement_policy heap LFUDA
 cache_mem 1600 MB
 memory_replacement_policy heap LFUDA
 maximum_object_size_in_memory 2048 KB
 memory_pools off
 cache_swap_low 85
 cache_swap_high 90

 (There are four squid instances per server, which means that 1600*4 =
 6400MB RAM used for caching, which is not even 1/4 of the total
 available amount of RAM. Plenty enough, don't you think?)

 Note that that 1600 MB is for HTTP object *caching* only. In-transit objects
 and non-HTTP caches (IP cache, domain name cache, persistent connections
 cache, client database, via/fwd database, network performance cache, auth
 caches, external ACL caches) and the indexes for all those caches use other
 memory.

 Then again they should all be using no more than a few GB combined. So you
 may have hit a new leak (all the known ones are resolved before 3.1.12).

 Ok, very strange. But at least it is reproducible, it takes about a
 week until squid is starting to swap ...
 http://img191.imageshack.us/img191/9615/memorymonth.png


 Very strange are the negative values ("Memory usage for squid via
 mallinfo():") in the output below. Maybe that is a reason for running
 out of RAM?

 mallinfo() sucks badly when going above 2GB of RAM. It can be ignored.

 The section underneath it, "Memory accounted for:", is Squid's own accounting
 and is more of a worry. It should not have had negatives since before 3.1.10.


 HTTP/1.0 200 OK
 Server: squid/3.1.12
 Mime-Version: 1.0
 Date: Wed, 11 May 2011 07:06:10 GMT
 Content-Type: text/plain
 Expires: Wed, 11 May 2011 07:06:10 GMT
 Last-Modified: Wed, 11 May 2011 07:06:10 GMT
 X-Cache: MISS from xlsqip03_1
 Via: 1.0 xlsqip03_1 (squid/3.1.12)
 Connection: close

 Squid Object Cache: Version 3.1.12
 Start Time:     Wed, 27 Apr 2011 11:01:13 GMT
 Current Time:   Wed, 11 May 2011 07:06:10 GMT
 Connection information for squid:
         Number of clients accessing cache:      1671
         Number of HTTP requests received:       16144359
         Number of ICP messages received:        0
         Number of ICP messages sent:    0
         Number of queued ICP replies:   0
         Number of HTCP messages received:       0
         Number of HTCP messages sent:   0
         Request failure ratio:   0.00
         Average HTTP requests per minute since start:   810.3
         Average ICP messages per minute since start:    0.0
         Select loop called: 656944758 times, 1.820 ms avg
 Cache information for squid:
         Hits as % of all requests:      5min: 17.4%, 60min: 18.2%
         Hits as % of bytes sent:        5min: 45.6%, 60min: 39.9%
         Memory hits as % of hit requests:       5min: 86.1%, 60min: 88.9%
         Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
         Storage Swap size:      0 KB
         Storage Swap capacity:   0.0% used,  0.0% free
         Storage Mem size:       1622584 KB
         Storage Mem capacity:   100.0% used,  0.0% free

 Okay 1.6 GB of RAM used for caching HTTP objects. Fully used.

         Mean Object Size:       0.00 KB

 Problem #1. It *may* be Squid not accounting for the memory objects in the
 mean.

         Requests given to unlinkd:      0
 Median Service Times (seconds)  5 min    60 min:
         HTTP Requests (All):   0.01648  0.01235
         Cache Misses:          0.05046  0.04277
         Cache Hits:            0.00091  0.00091
         Near Hits:             0.01469  0.01745
         Not-Modified Replies:  0.0  0.00091
         DNS Lookups:           0.00190  0.00190
         ICP Queries:           0.0  0.0
 Resource usage for squid:
         UP Time:        1195497.286 seconds
         CPU Time:       22472.507 seconds
         CPU Usage:      1.88%
         CPU Usage, 5 minute avg:        5.38%
         CPU Usage, 60 minute avg:       5.44%
         Process Data Segment Size via sbrk(): 3145032 KB
         Maximum Resident Size: 0 KB
         Page faults with physical 

Re: [squid-users] delay_access url_regex acl

2011-05-30 Thread Marc Nil
 Definitely the regex bits then.

 If you post the whitelist.no_limit we are able to see if there is
 room for improvement.
 Usually there is.

FYI he did. It was two domain names :(

Ok here is what I did after considering your replies:

acl whitelist.no_limit dstdomain /etc/squid3/etc/whitelist.no_limit
#cat /etc/squid3/etc/whitelist.no_limit
#www.microsoft.com
#cdimage.debian.org

delay_pools 1
delay_class 1 2
delay_parameters 1 3145728/3145728 51200/51200
delay_access 1 allow !whitelist.no_limit
delay_access 1 deny all

I removed the line working on an authentication group to limit the risk of
potential errors.

I replaced the regex ACL with a dstdomain one (now there is no more ambiguity
concerning whether or not the regex works).

With the above configuration, the 50 KB/s limitation per user is still applied even on
www.microsoft.com and cdimage.debian.org.

Thanks in advance for your help,
Best Regards,
Marc.



[squid-users] How to download specified squid version.

2011-05-30 Thread Paweł Mojski

Hi Guys;

Regarding my problem with squid and SSL compilation, I'd like to check
the version specified by the author of the patch, I mean this one:
Squid v3.1 (r9820).
How can I download it? I couldn't find any way to do it, no CVS (auth
required) or SVN.


Regards;

--
Paweł Mojski



Re: [squid-users] Squid 3.1.12 times out when trying to access MSDN

2011-05-30 Thread Pandu Poluan
On Mon, May 30, 2011 at 17:25, Pandu Poluan pa...@poluan.info wrote:
 On Fri, May 27, 2011 at 17:47, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/05/11 19:42, Pandu Poluan wrote:

 Hello list,

 I've been experiencing a perplexing problem.

 Squid 3.1.12 often times out when trying to access certain sites, most
 notably MSDN. But it's still very fast when accessing other
 non-problematic sites.

 For instance, trying to access the following URL *always* result in
 timeout:

 http://msdn.microsoft.com/en-us/library/aa302323.aspx

 Trying to get the above URL using wget: No problem.


-- 8< -- 8< -- 8< -- 8< -- 8< --


 msdn.microsoft.com's DNS response to an AAAA lookup is a successful CNAME, but
 with no IP addresses to connect to.

 The behaviour you describe can appear if you have turned dns_v4_fallback
 OFF, which disables the A lookup (IPv4 connectivity) if there is any kind of
 successful AAAA answer, even a useless empty one like MSDN produces.
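(For reference, a quick way to see what the resolver actually returns for each record type, assuming dig is available on the Squid box:)

    dig +short A    msdn.microsoft.com
    dig +short AAAA msdn.microsoft.com    # may come back as only a CNAME with no addresses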


 Ah, I see... I'll try it out today.


No joy :-(

I've specified "dns_v4_fallback on" explicitly (it was not specified
previously) and even replaced the miss_access lines with "miss_access
allow all".

Still failing on those problematic pages.

-- 
Pandu E Poluan
~ IT Optimizer ~
Visit my Blog: http://pepoluan.posterous.com
Google Talk:    pepoluan
Y! messenger: pepoluan
MSN / Live:  pepol...@hotmail.com (do not send email here)
Skype:    pepoluan
More on me:  My LinkedIn Account  My Facebook Account


[squid-users] squid owa Exchange 2010 / slow load

2011-05-30 Thread Koopmann, Jan-Peter
Hi,

this topic came up here quite a while ago however without really finding a
solution. We configured a squid reverse proxy for Exchange 2010 (owa,
active-sync etc.). All is working quite well with a small exception: The
first load of OWA takes 2-3 minutes. According to Firebug the time is
spent in uglobal.js (> 2 min). Once all is loaded things seem to work just
fine. This happens with every browser I tested (IE, Firefox, Safari,
Chrome) at least once during the initial load of the page. If you kill the
browser and restart OWA things are ok. This does NOT happen if I address
the Exchange server OWA in question directly, at least I was not able to
reproduce it.



Any idea where/how to look? cache.log does not say anything regarding this.


Kind regards,
   JP



--
Seceidos GmbH & Co. KG | Tel: +49 (6151) 66843-43
Pfarrer-Staiger-Str. 39 | Fax: +49 (6151) 66843-52
55299 Nackenheim| Mobil:
http://www.seceidos.de/ |
Skype: jpkmobil
E-Mail: jan-peter.koopm...@seceidos.de
HRA 40961, Amtsgericht Mainz

persönlich haftende Gesellschafterin: Seceidos Verwaltungs GmbH, Nackenheim
HRB 42292, Amtsgericht Mainz
Geschäftsführer: Jan-Peter Koopmann





[squid-users] Configuration Problems

2011-05-30 Thread patric . glazar
Hello!

We are using squid to handle bandwidth regulation for our patch management.

What is our purpose:
- we have a limited bandwidth to our central PM-Server, so each squid has a
connection bandwidth of max 20 KB/s!
  therefore we are using a class 1 delay pool:
 delay_pools 1
 delay_class 1 1
 delay_parameters 1 2/2 2/2
 delay_access 1 allow localnet

- the clients in the internal LAN should get the download from squid as
fast as possible,
  the same request should be handled as one - first fill the cache and then
serve the download to the clients
  the cache should be cleaned after 1 year!

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      525600
refresh_pattern .               0       20%     4320
 
 range_offset_limit -1
 collapsed_forwarding on 

Right now it looks like all clients are sharing the 20 KB/s and therefore
no one is getting the update.
- the cache is staying empty, I don't know why
- a trace on the central PM-Server shows that the squid servers are
downloading a huge amount of data


Mit freundlichen Grüßen / Best regards
Patric Glazar 

Allgemeines Rechenzentrum GmbH 
PC Infrastructure Management 
Team Security 

A-6020 Innsbruck, Anton Melzerstr.11 
Tel.: +43 / (0)504009-1309 
Fax: +43 / (0)504009-71309 
E-Mail: patric.gla...@arz.at 
http://www.arz.co.at 
DVR: 0419427

Disclaimer:
Diese Nachricht dient ausschließlich zu Informationszwecken und ist nur 
für den Gebrauch des angesprochenen Adressaten bestimmt.

This message is only for informational purposes and is intended solely for 
the use of the addressee.





Re: [squid-users] Custom pages with translation

2011-05-30 Thread E.S. Rosenberg
2011/5/30 Amos Jeffries squ...@treenet.co.nz:
 On Sun, 29 May 2011 22:46:10 +0300, E.S. Rosenberg wrote:

 2011/5/29 Amos Jeffries squ...@treenet.co.nz:

 On 29/05/11 23:29, E.S. Rosenberg wrote:

 Hi all,
 We would like to create a few custom errors and have them exist in the
 most common languages on our campus, I added the custom page to the
 templates folder, and I added a custom page to one of the languages,
 however when I browse with auto negotiate set to the language I still
 just get the English message instead of the language's message (and
 the localized version is 100% different since we translated it
 completely).
 Is there some other step that I missed here?

  * Squid version must be 3.1 or later.

 Check: 3.1.6

  * error_directory directive MUST be absent from squid.conf.

 Check

  * your new template(s) must be readable by the squid user account. Check
 permissions match the files around them.

 Check

  * browser must be advertising the test language first in the
 Accept-Language header. (Squid processes it left-to-right looking for the
 first available template)

 Afaik it is, with other (builtin like ERR_ACCESS_DENIED) pages I get
 the translated page just not with the custom page.

 You can see what Squid is doing with debug_options 4,6

 Ok, it showed me the following:
 errorpage.cc(1044) BuildContent: No existing error page language
 negotiated for ERR_WORKTIME_ACCESS_DENIED. Using default error file .

 Which suggests that somehow the browser is not sending the
 language header with this block page

 Or that Squid was not built with --enable-auto-locale. That should be
 enabled by default if not disabled manually, though.
Well, for 'builtin' error pages (like ERR_ACCESS_DENIED) it does give
me the localized page, so that seems to be a pointer that it should
work, right?

Thanks,
Eliyahu - אליהו

 The debug output when negotiation is enabled should look something like:

 ... Testing Header: he-IL,he;q=0.9,en;q=0.8
 ... Found language 'he-il', testing for available template in:
 '/usr/share/squid/errors/he-il'

 OR, if the aliases are not set up, a warning about he-il and then:

 ... Found language 'he', testing for available template in:
 '/usr/share/squid/errors/he'



 BTW, the 'float: right' makes the Hebrew error messages very
 unreadable; 'direction: rtl;' seems to be sufficient (only tested on
 IE8-9, FF3.6 and FF4)

 Ah, thank you. Fixed in the langpack for tomorrow. It just missed out on
 today's releases :(. Should be in next month's.

 Amos



[squid-users] stablish local web page as home page

2011-05-30 Thread Camilo Cadena
 hi,
we have a wi-fi network using Ubuntu Server 10.04 and squid 2.7. All seems to
work fine.
Now, we want that every time a user of my local network opens his browser he
must see a .php web page that is inside my server, but we don't know how to do
this.

If we write the IP address of this file on my server, as 192.168.1.123/prova,
it works: I can see it and I can continue navigating after that. But what
we want is to make this automatic, so that the user doesn't have to type
anything because the browser does it by itself.

We would appreciate some help here.

thank you

Re: [squid-users] Configuration Problems

2011-05-30 Thread Amos Jeffries

On 30/05/11 23:38, patric.gla...@arz.at wrote:

Hello!

We are using squid to handle bandwidth regulation for our patch management.

What is our purpose:
- we have a limited bandwidth to our central PM-Server, so each squid has a
connection bandwidth of max 20 KB/s!
   therefore we are using a class 1 delay pool:
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 2/2 2/2
  delay_access 1 allow localnet

- the clients in the internal LAN should get the download from squid as
fast as possible,
   the same request should be handled as one - first fill the cache and then
serve the download to the clients
   the cache should be cleaned after 1 year!

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      525600
refresh_pattern .               0       20%     4320

  range_offset_limit -1
  collapsed_forwarding on

Right now it looks like all clients are sharing the 20 KB/s and therefore
no one is getting the update.


Yes. Assuming this is 2.7 (where collapsed forwarding works) one client 
will be downloading and others waiting for the same URL will be sharing 
the trickle as it arrives.



- the cache is staing empty I don´t know why


If these are publicly accessible URLs you can plug one of them into 
redbot.org. It will tell you if there are any caching problems that will 
hinder Squid.


Otherwise you will have to locate and figure out the headers (both 
request and reply) manually.
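(One possible way to capture both sets of headers from the Squid box; the proxy host/port and the URL below are placeholders:)

    curl -sv -o /dev/null -x http://localhost:3128 http://pm-server.example/path/to/patch.cab
    # -v prints the request (>) and reply (<) headers; the body is discarded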


Not being able to cache these objects could be part of the below problem...


- a trace on the central PM Server shows that the squid Servers are
donwloading a huge amount of data


Check what that is. You can expect all clients to download many objects 
in parallel. But they should still be bandwidth-limited by Squid within the 
delay pool limits.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] Custom pages with translation

2011-05-30 Thread Amos Jeffries

On 30/05/11 23:50, E.S. Rosenberg wrote:

2011/5/30 Amos Jeffriessqu...@treenet.co.nz:

On Sun, 29 May 2011 22:46:10 +0300, E.S. Rosenberg wrote:


2011/5/29 Amos Jeffriessqu...@treenet.co.nz:


On 29/05/11 23:29, E.S. Rosenberg wrote:


Hi all,
We would like to create a few custom errors and have them exist in the
most common languages on our campus, I added the custom page to the
templates folder, and I added a custom page to one of the languages,
however when I browse with auto negotiate set to the language I still
just get the English message instead of the language's message (and
the localized version is 100% different since we translated it
completely).
Is there some other step that I missed here?


  * Squid version must be 3.1 or later.


Check: 3.1.6


  * error_directory directive MUST be absent from squid.conf.


Check


  * your new template(s) must be readable by the squid user account. Check
permissions match the files around them.


Check


  * browser must be advertising the test language first in the
Accept-Language header. (Squid processes it left-to-right looking for the
first available template)


Afaik it is, with other (builtin like ERR_ACCESS_DENIED) pages I get
the translated page just not with the custom page.


You can see what Squid is doing with debug_options 4,6


Ok, it showed me the following:
errorpage.cc(1044) BuildContent: No existing error page language
negotiated for ERR_WORKTIME_ACCESS_DENIED. Using default error file .

Which suggests that the somehow the browser is not sending the
language header with this block page


Or that Squid was not built with --enable-auto-locale. That should be
enabled by default if not disabled manually, though.

Well for 'builtin' error pages (like ERR_ACCESS_DENIED) it does give
me the localized page so that seems to be a pointer that it should
work right?


Should be yes.
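(For reference, a sketch of the layout being negotiated against, assuming the default error directory; the custom template needs a copy in each language directory you want served, with templates/ holding the fallback:)

  /usr/share/squid/errors/templates/ERR_WORKTIME_ACCESS_DENIED   # fallback template
  /usr/share/squid/errors/he/ERR_WORKTIME_ACCESS_DENIED          # Hebrew translation
  /usr/share/squid/errors/he-il/ERR_WORKTIME_ACCESS_DENIED       # optional, if the he-il alias is not set up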

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] squid owa Exchange 2010 / slow load

2011-05-30 Thread Amos Jeffries

On 30/05/11 23:04, Koopmann, Jan-Peter wrote:

Hi,

this topic came up here quite a while ago however without really finding a
solution. We configured a squid reverse proxy for Exchange 2010 (owa,
active-sync etc.). All is working quite well with a small exception: The
first load of OWA takes 2-3 minutes. According to Firebug the time is
spent in uglobal.js (> 2 min). Once all is loaded things seem to work just
fine. This happens with every browser I tested (IE, Firefox, Safari,
Chrome) at least once during the initial load of the page. If you kill the
browser and restart OWA things are ok. This does NOT happen if I address
the Exchange server OWA in question directly, at least I was not able to
reproduce it.



Any idea where/how to look? cache.log does not say anything regarding this.


HTTP headers between squid and the server?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-30 Thread Amos Jeffries

On 30/05/11 20:07, guest01 wrote:

Hi,

Any news on this topic? Unfortunately, RAM is running full within days
and at the moment, our workaround is to do a reboot ... We would
appreciate any other solution!

thanks,
peter



:) I was just looking at this bug again today.

:( still no seriously good ideas.

Is there any way at all you can run one of these Squids under valgrind 
and get a report of what its memory is doing?
 Even if valgrind is just built in to squid, the info report gains a 
lot of extra memory stats and a leak report without having to stop squid 
running.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


[squid-users] CLOSE_WAIT connections with ICAP

2011-05-30 Thread Daniel Beschorner
  Connections with FIN_WAIT1 state on ICAP server side seem ESTABLISHED at 
 squid.
 
 ICAP-closed connection.
 
   Idle pconn in Squid have readers set listening for FIN to arrive and 
 close the FD. This is strange but not conclusive.
 
   Looks a bit like the FIN never arrived.
 
  Squid connections in CLOSE_WAIT state are no longer visible at ICAP server 
 side.
 
 Squid-closed connection.
 
 FIN packet sent by both sides. FIN-ACK packet from ICAP server not 
 arriving at Squid box. This confirms the FIN are not flowing right.
 
 
 Both cases are pointing to packets containing FIN not flowing from the 
 ICAP server to Squid. Though strangely seem fine going in the other 
 direction.

In both cases Squid has a large Recv-Q. Does Squid no longer empty the queue 
and therefore miss the FIN, or should the FIN work out-of-band?

Daniel



Re: [squid-users] intercept and change a css file in reverse proxy

2011-05-30 Thread Amos Jeffries

On 30/05/11 15:37, Evuraan wrote:

I've a reverse proxy, and I've to fix somebody else's mess(!)

I am looking to intercept and change a specific css file's content from:

 src: url(/online/wrong.eot);

to:

 src: url(/online/correct.eot);

Is this possible? Any pointers will be much appreciated!


Not with Squid sorry.

Squid is designed not to play with the body content. If you are lucky you 
could find some content filtering software to do it. The only real solution 
though is to change the files or code on the web server.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] CLOSE_WAIT connections with ICAP

2011-05-30 Thread Amos Jeffries

On 31/05/11 00:17, Daniel Beschorner wrote:

Connections with FIN_WAIT1 state on ICAP server side seem ESTABLISHED at

squid.

ICAP-closed connection.

   Idle pconn in Squid have readers set listening for FIN to arrive and
close the FD. This is strange but not conclusive.

   Looks a bit like the FIN never arrived.


Squid connections in CLOSE_WAIT state are no longer visible at ICAP server

side.

Squid-closed connection.

FIN packet sent by both sides. FIN-ACK packet from ICAP server not
arriving at Squid box. This confirms the FIN are not flowing right.


Both cases are pointing to packets containing FIN not flowing from the
ICAP server to Squid. Though strangely seem fine going in the other
direction.


In both cases Squid has a large Recv-Q. Does Squid no longer empty the queue 
and therefore misses the FIN or should the FIN work out-of-band?

Daniel



No. Squid should be draining its queue, even if leaving the connections 
idle. Being in the idle pool sets a read to abort/close the socket on 
FIN or on excess data.


This does sound like those ICAP incomplete reply problems. Though how 
it's getting into Squid's idle pool without the act of insertion killing 
the socket with an excess data read is baffling me.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] stablish local web page as home page

2011-05-30 Thread Amos Jeffries

On 31/05/11 00:00, Camilo Cadena wrote:

hi, we have a wi-fi network using ubuntu server 10.04 and squid 2.7.
All seems to work fine. Now, we want that everytime a user of my
local network opens his browser he must see a web page .php the is
inside my server, but we don't know how to do this.

if we wrote the ip address of this file inside my server as
192.168.1.123/prova it works, i can see it and i can continue the
navigation after that, but what we want is to make this in automatic,
this way the user don't have to write anything because the browser
make it itself.

we will appreciate some help here.

thank you


"Home page" is a concept that exists only within the user's browser. 
There is no way to affect that from outside the user's PC.


Portals like wi-fi access proxies have a concept popularly known as 
"splash pages". The details of setting one up in Squid are at:

 http://wiki.squid-cache.org/ConfigExamples/Portal/Splash
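(A rough sketch along the lines of that wiki page; the helper path, timeouts and the session db location are assumptions to adapt to your install:)

  external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/lib/squid/squid_session -t 7200 -b /var/lib/squid/session.db
  acl existing_session external session
  http_access deny !existing_session
  # a deny_info URL sends a 302 redirect to the splash page for new sessions
  deny_info http://192.168.1.123/prova existing_session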

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] SQUID store_url_rewrite

2011-05-30 Thread Amos Jeffries

On 30/05/11 00:22, Ghassan Gharabli wrote:

Hello,

I was trying to cache this website :

http://down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.nogomi.com/M15/Alaa_Zalzaly/Atrak/Nogomi.com_Alaa_Zalzaly-3ali_Tar.mp3

How do you cache or rewrite its URL to a static domain:
down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.nogomi.com

Does that URL match this REGEX EXAMPLE, or who can help me match this
Nogomi.com CDN?

 #generic "http://variable.domain.com/path/filename.ex", ext or exte


The line above describes what the 'm/' pattern produces for the $y array.
Well, kind of...

$1 is anything, utter garbage. It could be a full URL's worth of loose bits:
  "http://evil.example.com/cache-poison?url=http://"

$2 appears to be a two-part domain name (ie example.com as opposed to 
a three-part www.example.com)


$3 is the file or script name.
$4 is the file extension type.



 #http://cdn1-28.projectplaylist.com
 #http://s1sdlod041.bcst.cdn.s1s.yimg.com
} elsif 
(m/^http:\/\/(.*?)(\.[^\.\-]*?\..*?)\/([^\?\\=]*)\.([\w\d]{2,4})\??.*$/)
{
@y = ($1,$2,$3,$4);
$y[0] =~
s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-A]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;


I assume you are trying to compress 
down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp down to 
cdn without allowing any non-FQDN garbage to compress?


I would use:  s/[a-z0-9A-Z\.\-]+/cdn/
and add a fixed portion to ensure that $y[1] is one of the base domains 
in the CDN. Just in case some other site uses the same host naming scheme.



print $x . "storeurl://" . $y[0] . $y[1] . "/" . $y[2] . "." .
$y[3] . "\n";

I also tried to study more about REGULAR EXPRESSIONS but their
examples are only for simple URLs .. I really need to study more about
complex URLs.



Relax. You do not have to combine them all into one regex.

You can make it simple and efficient to start with and improve as your 
knowledge does. If in doubt play it safe, storeurl_rewriting has at its 
core the risk of XSS attack on your own clients (in the example above 
$y[0] comes very close).


The hardest part is knowing for certain what all the parts of the URL 
mean to the designers of that website. So that you only erase the 
useless trackers and routing tags, while keeping everything important.
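To make those pieces concrete, here is a minimal standalone helper sketch, assuming a single base domain and that an empty output line means "leave the URL unchanged" (as with url_rewrite helpers):

  #!/usr/bin/perl
  use strict;
  use warnings;
  $| = 1;                                  # helpers must not buffer their output
  while (my $line = <STDIN>) {
      chomp $line;
      my ($url) = split /\s+/, $line;      # first field is the URL, the rest is request details
      if ($url =~ m/^http:\/\/[a-z0-9\.\-]+\.(nogomi\.com)(\/.*)$/i) {
          # collapse all the varying download hosts down to one constant store key
          print "http://cdn.$1$2\n";
      } else {
          print "\n";                      # leave every other URL alone
      }
  }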


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


[squid-users] Re: url_rewrite_access-directive bypasses everything

2011-05-30 Thread Andreas Moroder

Am 25.06.2010 07:22, schrieb Tom Tux:

This seems not to work.

I have the following directive:
acl ALLOWED_HOSTS src /etc/squid/Allowed_hosts
url_rewrite_access deny ALLOWED_HOSTS
url_rewrite_access allow all


In the file /etc/squid/Allowed_hosts I have the following entry:
10.xx.xx.xx/32

But in the redirector logfile I can see that websites called from
the host listed in the file Allowed_hosts are blocked. So this host
isn't bypassing the redirector.
Thanks.
Tom


Hello Tom,

I have the same problem. Have you resolved this ?

Bye
Andreas



Re: [squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-30 Thread guest01
ok, I can at least try to start it under valgrind, which I have never
heard of before. Do I just start squid under valgrind and send you the
logfile? Do you need any special options?


On Mon, May 30, 2011 at 2:15 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/05/11 20:07, guest01 wrote:

 Hi,

 Any news on this topic? Unfortunately, RAM is running full within days
 and at the moment, our workaround is to do a reboot ... We would
 appreciate any other solution!

 thanks,
 peter


 :) I was just looking at this bug again today.

 :( still no seriously good ideas.

 Is there any way at all you can run one of these Squid under valgrind and
 get a report of what its memory is doing?
  Even if valgrind is just built in to squid. The info report gains a lot of
 extra memory stats and leak report without having to stop squid running.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1



Re: [squid-users] Sharing ACL Lists between different squids

2011-05-30 Thread Stefan Jensen
Hi,...

Am Freitag, den 27.05.2011, 20:10 +1200 schrieb Amos Jeffries:

   * It is NOT possible to load whole lines of config from any source 
 other than a config file on disk.

I decided to use a special local folder for all my changing ACLs and
to sync it every hour from the main squid box via rsync. It seems the
easiest and most stable way.

Is there any caveat doing a squid -k reconfigure via cron every hour?

Can squid do some sanity check on the ACL files? E.g. not load them if
the files are corrupt (if rsync fails)?

best regards

Stefan
-- 



[squid-users] RE: squid owa Exchange 2010 / slow load

2011-05-30 Thread Paul Freeman
Jan-Peter
I came across the same behaviour late last year when implementing Squid 
v3.0stable19 (the version which was available as part of Ubuntu 10.04LTS) as a 
reverse proxy for Exchange 2010 OWA and ActiveSync.

I found the browser would pause early on in the connection to OWA for approx 2 
min but once this passed I could restart the browser and the pause would no 
longer occur.  If the browser cache was cleared and the browser restarted, the 
pause returned.

What version of Squid are you using?

With the assistance of Amos we found the pause was due to issues with chunked 
Transfer-Encoding header compatibility.  Amos suggested I try one of the Squid 
3.1.x series due to the improved handling of this.

I changed to Squid 3.1.8 and this resolved the problem.

Regards

Paul
 
 -Original Message-
 From: Koopmann, Jan-Peter [mailto:jan-peter.koopm...@seceidos.de]
 Sent: Monday, 30 May 2011 9:05 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] squid owa Exchange 2010 / slow load
 
 Hi,
 
 this topic came up here quite a while ago however without really finding a
 solution. We configured a squid reverse proxy for Exchange 2010 (owa,
 active-sync etc.). All is working quite well with a small exception: The
 first load of OWA takes 2-3 minutes. According to Firebug the time is
 spent in uglobal.js (> 2 min). Once all is loaded things seem to work just
 fine. This happens with every browser I tested (IE, Firefox, Safari,
 Chrome) at least once during the initial load of the page. If you kill the
 browser and restart OWA things are ok. This does NOT happen if I address
 the Exchange server OWA in question directly, at least I was not able to
 reproduce it.
 
 
 
 Any idea where/how to look? cache.log does not say anything regarding
 this.
 
 
 Kind regards,
JP
 
 
 
 --
 Seceidos GmbHCo. KG| Tel: +49 (6151) 66843-43
 Pfarrer-Staiger-Str. 39 | Fax: +49 (6151) 66843-52
 55299 Nackenheim| Mobil:
 http://www.seceidos.de/ |
 Skype: jpkmobil
 E-Mail: jan-peter.koopm...@seceidos.de
 HRA 40961, Amtsgericht Mainz
 
 persönlich haftende Gesellschafterin: Seceidos Verwaltungs GmbH,
 Nackenheim
 HRB 42292, Amtsgericht Mainz
 Geschäftsführer: Jan-Peter Koopmann
 
 



Re: [squid-users] Re: problems squid_kerb_auth

2011-05-30 Thread spiderslack

Hi,

From the log I can not see any connection against the Active Directory on 
port 88 (Kerberos, right?). Attached is the .pcap. I did the 
configuration of Firefox as below:


firefox set variables as follows:

network.negotiate-auth.delegation-uris=vialactea.corp
network.negotiate-auth.trusted-uris= vialactea.corp

where vialactea.corp is the domain of the Active Directory. I tried in 
IE but it keeps asking for login and password infinitely.


Regards

On 05/29/2011 09:39 AM, Markus Moeller wrote:

Hi,

 The squid log file says that the client could not use Kerberos and  
fell back to NTLM.


 Can you capture the traffic from the client to the proxy and to your 
Kerberos servers (e.g. active directory) with wireshark  and send me 
the cap file (if not too big) ?


Markus




log_squid3.pcap
Description: application/cap


[squid-users] Re: Re: problems squid_kerb_auth

2011-05-30 Thread Markus Moeller
Looking at the capture it seems the client (Firefox) does not react to the 
Negotiate response.  I think you need to use *.vialactea.corp to fix this.


Regards
Markus

spiderslack spidersl...@yahoo.com.br wrote in message 
news:4de41183.6080...@yahoo.com.br...

Hi,

For the log can not see any connection against the Active Directory on
port 88 (kerberos, right). Attached is the. pcap. I did the
configuration of firefox as below

firefox set variables as follows:

network.negotiate-auth.delegation-uris=vialactea.corp
network.negotiate-auth.trusted-uris= vialactea.corp

where vialactea.corp is the domain of the Active Directory. I tried in
IE but he keeps asking for login and password infinitely

Regards

On 05/29/2011 09:39 AM, Markus Moeller wrote:

Hi,

 The squid log file says that the client could not use Kerberos and
fell back to NTLM.

 Can you capture the traffic from the client to the proxy and to your
Kerberos servers (e.g. active directory) with wireshark  and send me
the cap file (if not too big) ?

Markus








[squid-users] Re: Re: problems squid_kerb_auth

2011-05-30 Thread Markus Moeller

Hi,

I tested with Internet Explorer and obtained this error:

2011/05/30 22:06:36| squid_kerb_auth: gss_acquire_cred() failed:
Unspecified GSS failure.  Minor code may provide more information. Key
table entry not found



That looks better, but not quite right.  What does "klist -ekt <squid-keytab>" 
(for MIT) or "ktutil -k <squid-keytab> list" (for Heimdal) give?
Also can you do a "kinit <user>" and then a "kvno HTTP/<squid-fqdn>" (I assume 
MIT here)?


klist -ekt /etc/squid/squid.keytab
Keytab name: WRFILE:/etc/squid/squid.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------
  41 05/28/11 14:40:42 HTTP/w2k3r2.win2003r2.h...@win2003r2.home  (ArcFour 
with HMAC/md5)



#  kinit m...@win2003r2.home
Password for m...@win2003r2.home:
# kvno  HTTP/w2k3r2.win2003r2.h...@win2003r2.home
HTTP/w2k3r2.win2003r2.h...@win2003r2.home: kvno = 41

The kvno must be the same (in my case here 41) !

Also can you lock/unlock your desktop to get new credentials and run 
wireshark again when you use IE ?


You should see a TGS-REQ and TGS-REP and the TGS-REP looks like:

No.  Time             Source         Destination    Protocol  Info
 8    23:51:18.941121  192.168.1.12   192.168.1.27   KRB5      TGS-REP


Frame 8 (1300 bytes on wire, 1300 bytes captured)
Ethernet II, Src: Vmware_d0:e5:e9 (00:0c:29:d0:e5:e9), Dst: Vmware_8e:33:fe 
(00:0c:29:8e:33:fe)
Internet Protocol, Src: 192.168.1.12 (192.168.1.12), Dst: 192.168.1.27 
(192.168.1.27)

User Datagram Protocol, Src Port: kerberos (88), Dst Port: 43611 (43611)
Kerberos TGS-REP
   Pvno: 5
   MSG Type: TGS-REP (13)
   Client Realm: WIN2003R2.HOME
   Client Name (Principal): mm
   Name-type: Principal (1)
   Name: mm
   Ticket
   Tkt-vno: 5
   Realm: WIN2003R2.HOME
   Server Name (Principal): HTTP/w2k3r2.win2003r2.home
   Name-type: Principal (1)
   Name: HTTP
   Name: w2k3r2.win2003r2.home
   enc-part rc4-hmac
   Encryption type: rc4-hmac (23)
   Kvno: 41
   enc-part: 7435AE25CA1CA6B2BA3E2C29D62A7F80D38B3A96E1528168...
   enc-part rc4-hmac
   Encryption type: rc4-hmac (23)
   enc-part: BA59EF1595A8CDAEE212C41EBE29C68E9D427D49995919D8...


Can you check that the keytab details (name, encryption type and kvno) match 
with what you see in the TGS-REP ?



Regards

On 05/30/2011 05:52 PM, spiderslack wrote:

Hi,

For the log can not see any connection against the Active Directory on
port 88 (kerberos, right). Attached is the. pcap. I did the
configuration of firefox as below

firefox set variables as follows:

network.negotiate-auth.delegation-uris=vialactea.corp
network.negotiate-auth.trusted-uris= vialactea.corp

where vialactea.corp is the domain of the Active Directory. I tried in
IE but he keeps asking for login and password infinitely

Regards

On 05/29/2011 09:39 AM, Markus Moeller wrote:

Hi,

 The squid log file says that the client could not use Kerberos and
fell back to NTLM.

 Can you capture the traffic from the client to the proxy and to your
Kerberos servers (e.g. active directory) with wireshark  and send me
the cap file (if not too big) ?

Markus






Regards
Markus






Re: [squid-users] SQUID store_url_rewrite

2011-05-30 Thread Ghassan Gharabli
Hello again,

#generic http://variable.domain.com/path/filename.ex;, ext or exte
#http://cdn1-28.projectplaylist.com
#http://s1sdlod041.bcst.cdn.s1s.yimg.com
#} elsif 
(m/^http:\/\/(.*?)(\.[^\.\-]*?\..*?)\/([^\?\\=]*)\.([\w\d]{2,4})\??.*$/)
{
#@y = ($1,$2,$3,$4);
#$y[0] =~
s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-A]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;
#print $x . storeurl:// . $y[0] . $y[1] . / . $y[2] . .
. $y[3] . \n;


Why did we have to use arrays in this example?
I understood that m/ indicates a regex match operation, \n breaks
the line, and we assigned @y as an array which has
4 values that we use to call each one; for example we call $1, the first
capture, $y[0] .. till now it's fine for me,
and we assign a value to $y[0] with $y[0] =~
s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-A]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;
...

Please correct me if I'm wrong here. I'm still confused about those
values $1 , $2 , $3 ..
How does the program know where to locate $1 or $2, as there are no
values or $strings anywhere?
I have noticed that $1 means an element; for example
http://cdn1-28.projectplaylist.com can be grouped into elements .. Hope
I'm correct on this one:
http://(cdn1-28) . (projectplaylist) . (com) should be http:// $1 . $2 . $3

Then let me see if I can solve this one to match this URL
http://down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.nogomi.com/M15/Alaa_Zalzaly/Atrak/Nogomi.com_Alaa_Zalzaly-3ali_Tar.mp3

so I should work around the FQDN and leave the rest as is; please, if
you find anything wrong then correct it for me.
#does that match
http://down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.nogomi.com/M15/Alaa_Zalzaly/Atrak/Nogomi.com_Alaa_Zalzaly-3ali_Tar.mp3
  ??
elsif (m/^http:\/\/(.*?)(\.[^\.\-]*?\..*?)\/([^\?\\=]*)\.([\w\d]{2,4})\??.*$/)
{
  @y = ($1,$2,$3,$4);
  $y[0] =~ s/[a-z0-9A-Z\.\-]+/cdn/;
  print $x . "storeurl://" . $y[0] . $y[1] . "/" . $y[2] . "." .
$y[3] . "\n";


Does this example match the Nogomi.com domain correctly?

And why did you use s/[a-z0-9A-Z\.\-]+/cdn/ ?

I only understood that you are making sure to find small letters,
capital letters and numbers, but I believe \. is to search
for one dot only .. what about if there are 2 dots, or more than 3 dots
in this case? .. another one: you are finding a dash ..

The only thing I'm confused about is why we have added /cdn/ since the
URL doesn't have the word cdn in it?

Why have we used storeurl:// when I can see some of the examples are
print $x . "http://" . $y[0] . $y[1] . "/" . $y[2] . "." . $y[3] . "\n";

Can you give me an example of how to add the $y[1] portion please..

Which one would interest you more: writing a script to match the most
similar examples in one rule, or writing a separate script for each FQDN?

for example sometimes we see
http://down2.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.example.com/folder/filename.ext
or
http://cdn.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.example2.com/xn55571528exgem0o65xymsgtmjiy75924mjqqybp/folder/filename.ext

Really that is interesting to me; that is why I would love to match
this too as well, but the thing is, if I knew all of these things ..
everything would be fine for me.

Again I want to thank you for answering my questions as I felt like I'm
writing a magazine heheheh


Regards,
Ghassan


Re: [squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-30 Thread Amos Jeffries

On 31/05/11 03:20, guest01 wrote:

ok, I can at least try to start it under valgrind, which I have never
heard before. Do I just start squid under valgrind and send you the
logfile? Do you need any special options?


Squid needs valgrind support built in when this is done.

Apart from that, yes, it is just a matter of running Squid under valgrind like 
you would a debugger. Once it gets to some unusual amount of RAM usage, shut down 
Squid and send in the resulting valgrind log.
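(A rough sketch of the sort of invocation meant here; the configure option and log path are assumptions to check against your build:)

  ./configure --with-valgrind-debug ...          # rebuild Squid with the valgrind hooks compiled in
  valgrind --leak-check=full --log-file=/tmp/squid-valgrind.log \
      /usr/sbin/squid -N -d1                     # -N keeps Squid in the foreground under valgrind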


We shall see if anything interesting pops up there.
Thank you

Amos



On Mon, May 30, 2011 at 2:15 PM, Amos Jeffriessqu...@treenet.co.nz  wrote:

On 30/05/11 20:07, guest01 wrote:


Hi,

Any news on this topic? Unfortunately, RAM is running full within days
and at the moment, our workaround is to do a reboot ... We would
appreciate any other solution!

thanks,
peter



:) I was just looking at this bug again today.

:( still no seriously good ideas.

Is there any way at all you can run one of these Squid under valgrind and
get a report of what its memory is doing?
  Even if valgrind is just built in to squid. The info report gains a lot of
extra memory stats and leak report without having to stop squid running.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1




--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] Sharing ACL Lists between different squids

2011-05-30 Thread Amos Jeffries

On 31/05/11 05:33, Stefan Jensen wrote:

Hi,...

Am Freitag, den 27.05.2011, 20:10 +1200 schrieb Amos Jeffries:


   * It is NOT possible to load whole lines of config from any source
other than a config file on disk.


I decided to use a special local folder for all my changing ACL's and
syncing it every hour from the main squid box via rsync. Seems the
easiest and most stable way.

Is there any caveat doing a squid -k reconfigure via cron every hour?


Some paused downtime while the config is re-processed. Usually it is very 
fast, but if you have a lot of config lines needing DNS 
resolution (domain names in dst ACLs etc.) it can take some seconds.




Can squid do some sanity check on the ACL-files? e.g. don't load, if
the files are corrupt? (if rsync fails)


Run "squid -k parse" before the reconfigure. That will display detected 
problems needing resolving. After fixing fatal problems it's worth running it 
again to ensure nothing got lost after the fatal issue.


If you just do a full reconfigure Squid will stop running on any serious 
brokenness.
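(A minimal sketch of that, with the source host and paths assumed; it relies on "squid -k parse" exiting non-zero on fatal config errors:)

  # run hourly from cron; only reconfigure when the freshly synced files parse cleanly
  rsync -a mainsquid:/etc/squid/acl/ /etc/squid/acl/ \
    && squid -k parse && squid -k reconfigure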


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


[squid-users] per-user max connection limit?

2011-05-30 Thread errno
I'm still getting my bearings w/ squid, so apologies if I'm asking
particularly stupid questions.

Is it possible to configure a scenario that facilitates per-user
max connection limits?  For instance, user1 might have a
max connection limit of 10, while user3 has a max connection
limit of 50; etc., etc.


Thankyou!


Re: [squid-users] SQUID store_url_rewrite

2011-05-30 Thread Amos Jeffries

On 31/05/11 11:54, Ghassan Gharabli wrote:

Hello again,

 #generic http://variable.domain.com/path/filename.ex;, ext or exte
 #http://cdn1-28.projectplaylist.com
 #http://s1sdlod041.bcst.cdn.s1s.yimg.com
#} elsif 
(m/^http:\/\/(.*?)(\.[^\.\-]*?\..*?)\/([^\?\\=]*)\.([\w\d]{2,4})\??.*$/)
{
#@y = ($1,$2,$3,$4);
#$y[0] =~
s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-A]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;
#print $x . storeurl:// . $y[0] . $y[1] . / . $y[2] . .
. $y[3] . \n;


Why we had to use arrays in this example.
I understood that m/ indicates a regex match operation , \n to break
the line and we assined @y as an array which has
4 values we used to call each one for example we call $1 the first
record as y[0] ..till now its fine for me
and we assign a value to y[0] =~ $y[0] =~
s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-A]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;
...

Please correct me if im wrong here.Im still confused about those
values $1 , $2 , $3 ..
how does the program know where to locate $1 or $2 as there is no
values or $strings anyway
as I have noticed that $1 means an element for example
http://cdn1-28.projectplaylist.com can be grouped as elements .. Hope
Im correct on this one
http://(cdn1-28) . (projectplaylist) . (com) should be http:// $1 . $2 . $3



m//  produces $1, $2, ... $9  for each () element in the pattern.

s// will produce different $1, $2, ... etc. You have to save the ones 
from m// somewhere if you want to use them after s//. The person who 
wrote that saves them in the array @y.
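(A tiny illustration of that, with a made-up URL; the s// would clobber $1, so the capture is copied into @y first:)

  if ("http://cdn1-28.example.com/a.mp3" =~ m/^http:\/\/(.*?)\//) {
      my @y = ($1);                  # save the m// capture: "cdn1-28.example.com"
      $y[0] =~ s/cdn[-\d]+/cdn/;     # this s// resets $1, but @y keeps the old value
      print "$y[0]\n";               # prints "cdn.example.com"
  }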




Then let me see if I can solve this one to match this URL
http://down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.nogomi.com/M15/Alaa_Zalzaly/Atrak/Nogomi.com_Alaa_Zalzaly-3ali_Tar.mp3

so I should work around the FQDN and leave the rest as is, please if
you found any wrong this then correct it for me
#does that match
http://down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.nogomi.com/M15/Alaa_Zalzaly/Atrak/Nogomi.com_Alaa_Zalzaly-3ali_Tar.mp3
   ??
elsif (m/^http:\/\/(.*?)(\.[^\.\-]*?\..*?)\/([^\?\\=]*)\.([\w\d]{2,4})\??.*$/)
{
   @y = ($1,$2,$3,$4);
   $y[0] =~ s/[a-z0-9A-Z\.\-]+/cdn/
   print $x . storeurl:// . $y[0] . $y[1] . / . $y[2] . . .
$y[3] . \n;


does this example matches Nogomi.com domain correctly ?

and why u used s/[a-z0-9A-Z\.\-]+/cdn/

I only understood that you are mnaking sure to find small letters ,
cap letters , numbers but I believe \. is to search
for one dot only .. how about if there is 2 dots or more that 3 dots
in this case! .. another one u r finding dash ..


That pattern ends with "+", to search for one or more of the listed 
safe domain letters.


It matches all of the $y[0] content:
down2.nogomi.com.xn55571528exgem0o65xymsgtmjiy75924mjqqybp.

While also not matching bad things like
  "http://evil.com?url=http://nogomi..."

(the one I gave will convert "http://evil.com?url=http://nogomi..." -->
"cdn://evil.com?url=http://nogomi...")




The only thing im confused about is why we have added /cdn/ since the
url doesnt has a word cdn?


This is a s// operation ('s' meaning 'substitute'). *IF* the $y[0] value 
matches the pattern for a domain, s// will put "cdn" in place of that 
matched piece.


So what this does is change *.nogomi.com --> cdn.nogomi.com

If there is any bad stuff like my evil.com example going on it will 
screw with those URLs as well. BUT the bits there will not map to 
cdn.nogomi.com so will not corrupt the actual CDN content.


Thinking about it a bit more I should have been more careful and told you:
  s/^[a-z0-9A-Z\.\-]+$/cdn/

which will ONLY match if the $y[0] as a whole is a valid host name text.



Why we have used storeurl:// because I can see some of examples are
print $x . http://; . $y[0] . $y[1] . / . $y[2] . . . $y[3] . \n;

can you give me an example to add the portion of $y[1] please..


 elsif
(m/^http:\/\/(.*?)(\.[^\.\-]*?\..*?)\/([^\?\\=]*)\.([\w\d]{2,4})\??.*$/)

 {
   @y = ($1,$2,$3,$4);

   if ($y[1] =~ m/nogomi\.com/) {
     $y[0] =~ s/^[a-z0-9A-Z\.\-]+$/cdn/;
   } else {
     $y[0] =~
s/([a-z][0-9][a-z]dlod[\d]{3})|((cache|cdn)[-\d]*)|([a-zA-A]+-?[0-9]+(-[a-zA-Z]*)?)/cdn/;
   }

   print $x . "storeurl://" . $y[0] . $y[1] . "/" . $y[2] . "." . $y[3]
. "\n";

 }



Which one have your interests , writing a script to match the most
similar examples in one rule or writing each script for each FQDN?


The example you started with had some complex details built into its s// 
matching. So that particular CDN syntax would be detected and replaced.
  This is useful if the CDN is only some sub-domains of the main site. 
And there are other non-CDN subdomains to be avoided. The nasty CDN.



The one I've just put above is for use when the site just uses all its 
subdomains as a CDN for the same content. These are the semi-friendly CDNs.
 You can extend that for other CDNs by adding their base domains to the 
m// test, i.e. if ($y[1] =~ m/(nogomi|example)\.com|example\.net/) ...


 

Re: [squid-users] per-user max connection limit?

2011-05-30 Thread Amos Jeffries

On 31/05/11 15:15, errno wrote:

I'm still getting my bearings w/ squid, so apologies if I'm asking
particularly stupid questions.

Is it possible to configure a scenario that facilitates per-user
max connection limits?  For instance, user1 might have a
max connection limit of 10, while user3 has a max connection
limit of 50; etc., etc.


Yes, with multiple ACLs.

http://wiki.squid-cache.org/SquidFaq/SquidAcls#Common_Mistakes

If you get a lot of ACLs doing this you seriously need to reconsider 
your design though. Complexity is slow. Simple is better.
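(A sketch only, with hypothetical user names; note that maxconn counts connections per client IP address, so this only approximates a per-user limit when each login comes from a single machine:)

  acl user1 proxy_auth user1
  acl user3 proxy_auth user3
  acl over10 maxconn 10
  acl over50 maxconn 50
  http_access deny user1 over10
  http_access deny user3 over50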


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] stablish local web page as home page

2011-05-30 Thread Amos Jeffries

On 31/05/11 10:36, k.caden...@gmail.com wrote:

Hi,
I'm having problems finding the file session.db on my server with squid 
installed. I'm trying to make the portal splash pages work, but I think it is 
because I don't have this file (session.db) that it doesn't work; I'm 
missing something.

The server is working fine, but it doesn't send the users to the home page I 
want, just free internet navigation. Any help will be appreciated.

Thank you


As far as I know the session helper creates it. You just configure where 
it is to go.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] Squid 3.1.12 times out when trying to access MSDN

2011-05-30 Thread Pandu Poluan
On Mon, May 30, 2011 at 17:32, Pandu Poluan pa...@poluan.info wrote:
 On Mon, May 30, 2011 at 17:25, Pandu Poluan pa...@poluan.info wrote:
 On Fri, May 27, 2011 at 17:47, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/05/11 19:42, Pandu Poluan wrote:

 Hello list,

 I've been experiencing a perplexing problem.

 Squid 3.1.12 often times out when trying to access certain sites, most
 notably MSDN. But it's still very fast when accessing other
 non-problematic sites.

 For instance, trying to access the following URL *always* result in
 timeout:

 http://msdn.microsoft.com/en-us/library/aa302323.aspx

 Trying to get the above URL using wget: No problem.



 -- 8< -- 8< -- 8< -- 8< -- 8< --


 I've specified dns_v4_fallback on explicitly (it was not specified
 previously) and even replaced the miss_access lines with miss_access
 allow all.

 Still failing on those problematic pages.


No other bright ideas? :-(

-- 
Pandu E Poluan
~ IT Optimizer ~
Visit my Blog: http://pepoluan.posterous.com