RE: [squid-users] Utorrrent through squid

2010-09-23 Thread GIGO .

Could you please help me understand why such errors are happening?

1285227117.990  0 10.1.97.27 TCP_DENIED/403 1480 GET 
http://tracker.thepiratebay.org/announce? - NONE/- text/html [Host: 
tracker.thepiratebay.org\r\nUser-Agent: 
uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 403 
Forbidden\r\nServer: squid\r\nDate: Thu, 23 Sep 2010 07:31:57 
GMT\r\nContent-Type: text/html\r\nContent-Length: 1129\r\nX-Squid-Error: 
ERR_ACCESS_DENIED 0\r\nX-Cache: MISS from squid.local\r\nX-Cache-Lookup: NONE 
from squid.local:8080\r\nVia: 1.0 squid.local:8080 (squid)\r\nConnection: 
close\r\n\r]
 
 
I am still not able to use torrents. Is it related to the CONNECT method, which is 
currently allowed only for SSL (443)? If HTTP-capable torrent clients can work by 
making an HTTP tunnel, which ports would need to be opened?
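
For reference, the 403 above is an http_access denial of a plain-HTTP tracker request, 
while tunnelling clients use the CONNECT method, which a stock configuration only 
permits to port 443. A minimal sketch of the stock-style ACLs that control this 
(ACL names as in the default squid.conf; whether an additional rule such as a torrent 
blacklist produced this particular denial is an assumption):

acl SSL_ports port 443
acl Safe_ports port 80 21 443 1025-65535   # abbreviated default list
acl CONNECT method CONNECT
http_access deny !Safe_ports               # non-standard destination ports
http_access deny CONNECT !SSL_ports        # tunnels to anything but 443
# the tracker GET on port 80 passes both of these, so it must be hitting a
# later deny rule; an HTTP-tunnelling client would need its target port
# added to SSL_ports before CONNECT to it would be allowed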
 
 
 
regards,
 
Bilal
 
 



> Date: Thu, 23 Sep 2010 03:49:06 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Utorrrent through squid
>
> On 22/09/10 22:43, GIGO . wrote:
>>
>> So Amos does this means that downloading of torrents with earlier version of 
>> squid is not possible at all?
>
> No, its perfectly possible with IPv4 trackers.
>
> His specific problem was with IPv6-only trackers.
>
>> 
>>> Date: Wed, 22 Sep 2010 20:27:29 +1200
>>> Subject: Re: [squid-users] Utorrrent through squid
>>>
>>> On 22/09/10 19:56, GIGO . wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I am unable to run utorrent software through squid proxy due to ipv6 
>>>> tracker failure.I am unable to connect to an ipv 6 tracker.
>>>>
>>>> 1285141356.609 152 10.1.97.27 TCP_MISS/504 1587 GET 
>>>> http://ipv6.torrent.ubuntu.com:6969/announce? - 
>>>> DIRECT/ipv6.torrent.ubuntu.com text/html [Host: 
>>>> ipv6.torrent.ubuntu.com:6969\r\nUser-Agent: 
>>>> uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 504 Gateway 
>>>> Time-out\r\nServer: squid\r\nDate: Wed, 22 Sep 2010 07:42:36 
>>>> GMT\r\nContent-Type: text/html\r\nContent-Length: 1234\r\nX-Squid-Error: 
>>>> ERR_DNS_FAIL 0\r\nX-Cache: MISS from xyz.com\r\nX-Cache-Lookup: MISS from 
>>>> xyz.com:8080\r\nVia: 1.0 xyz.com:8080 (squid)\r\nConnection: close\r\n\r]
>>>>
>>>> I am using squid 2.7 Stable 9 release.
>>>>
>>>
>>> Squid-3.1 is required for IPv4/IPv6 gateway.
>>>
>>>>
>>>> For doing this is there a special configuration required on the Operating 
>>>> system(RHEL 5 ) or squid itself. Please guide.
>>>>
>>>
>>> http://wiki.squid-cache.org/KnowledgeBase/RedHat
>>>
>
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.8
> Beta testers wanted for 3.2.0.2 

RE: [squid-users] Utorrrent through squid

2010-09-22 Thread GIGO .

So Amos, does this mean that downloading torrents with an earlier version of 
Squid is not possible at all?
 
 
regards,
Bilal 



> Date: Wed, 22 Sep 2010 20:27:29 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Utorrrent through squid
>
> On 22/09/10 19:56, GIGO . wrote:
>>
>> Hi all,
>>
>> I am unable to run utorrent software through squid proxy due to ipv6 tracker 
>> failure.I am unable to connect to an ipv 6 tracker.
>>
>> 1285141356.609 152 10.1.97.27 TCP_MISS/504 1587 GET 
>> http://ipv6.torrent.ubuntu.com:6969/announce? - 
>> DIRECT/ipv6.torrent.ubuntu.com text/html [Host: 
>> ipv6.torrent.ubuntu.com:6969\r\nUser-Agent: 
>> uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 504 Gateway 
>> Time-out\r\nServer: squid\r\nDate: Wed, 22 Sep 2010 07:42:36 
>> GMT\r\nContent-Type: text/html\r\nContent-Length: 1234\r\nX-Squid-Error: 
>> ERR_DNS_FAIL 0\r\nX-Cache: MISS from xyz.com\r\nX-Cache-Lookup: MISS from 
>> xyz.com:8080\r\nVia: 1.0 xyz.com:8080 (squid)\r\nConnection: close\r\n\r]
>>
>> I am using squid 2.7 Stable 9 release.
>>
>
> Squid-3.1 is required for IPv4/IPv6 gateway.
>
>>
>> For doing this is there a special configuration required on the Operating 
>> system(RHEL 5 ) or squid itself. Please guide.
>>
>
> http://wiki.squid-cache.org/KnowledgeBase/RedHat
>
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.8
> Beta testers wanted for 3.2.0.2 

[squid-users] Utorrrent through squid

2010-09-22 Thread GIGO .

Hi all,
 
I am unable to run the uTorrent software through the Squid proxy due to an IPv6 
tracker failure: I am unable to connect to an IPv6 tracker.
 
1285141356.609 152 10.1.97.27 TCP_MISS/504 1587 GET 
http://ipv6.torrent.ubuntu.com:6969/announce? - DIRECT/ipv6.torrent.ubuntu.com 
text/html [Host: ipv6.torrent.ubuntu.com:6969\r\nUser-Agent: 
uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 504 Gateway 
Time-out\r\nServer: squid\r\nDate: Wed, 22 Sep 2010 07:42:36 
GMT\r\nContent-Type: text/html\r\nContent-Length: 1234\r\nX-Squid-Error: 
ERR_DNS_FAIL 0\r\nX-Cache: MISS from xyz.com\r\nX-Cache-Lookup: MISS from 
xyz.com:8080\r\nVia: 1.0 xyz.com:8080 (squid)\r\nConnection: close\r\n\r]

I am using squid 2.7 Stable 9 release.
 
 
Is there any special configuration required on the operating system (RHEL 5) or in 
Squid itself to make this work? Please guide.
 
regards,
 
Bilal Aslam   

RE: [squid-users] Alerting when cache Peer is used.

2010-09-20 Thread GIGO .

2010/09/20 12:40:56| WARNING: Forwarding loop detected for:
Client: 10.25.88.175 http_port: 10.1.82.175:8080

As far as alerts are concerned, I got your point, thanks!
 
I am getting these kinds of messages in my cache.log. Can I ignore these warnings 
given my requirements (the proxies acting as backup internet paths for each other), 
or do I need to make some configuration changes? Please guide.
 
thanking you &
 
regards,
 
Bilal Aslam


> Date: Fri, 17 Sep 2010 23:31:55 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Alerting when cache Peer is used.
>
> On 17/09/10 23:14, GIGO . wrote:
>>
>> I have configured my proxy servers in two regions for backup internet path 
>> of each other by declaring the following directives.
>>
>> Directives on Proxy A:
>>
>> cache_peer A parent 8080 0 proxy-only
>> prefer_direct on
>> nonhierarchical_direct off
>> cache_peer_access A allow all
>>
>>
>> Directives on Proxy B:
>>
>> cache_peer B parent 8080 0 proxy-only
>> prefer_direct on
>> nonhierarchical_direct off
>> cache_peer_access B allow all
>>
>>
>> Is there a way that whenever a peer cache is used an email alert is 
>> generated to the admins.
>>
>
> Not from Squid. That is a job for network availability software.
>
> You could hack up a script to scan squid access.log for the peer
> hierarchy codes (DIRECT or FIRST_UP_PARENT etc) being used.
>
>
> Note that the setting is only "prefer" _direct. It can go to the peer
> with perfectly working network access if the origin web server simply
> takes too long to reply to a connect attempt.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.8
> Beta testers wanted for 3.2.0.2 
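
For reference, a minimal sketch of the kind of access.log watcher Amos suggests; the 
log path, the admin address, and the use of field 9 (the hierarchy/peer field of the 
native log format) are assumptions of this sketch, not of this setup:

#!/bin/sh
# mail the admins whenever a request is forwarded to the cache_peer
# rather than going DIRECT (locally denied requests are logged as NONE/-)
LOG=/var/log/squid/access.log
ADMIN=admin@example.com
tail -F "$LOG" | awk '$9 !~ /^(DIRECT|NONE)\//' | \
while read line; do
    echo "Request served via cache_peer: $line" | \
        mail -s "Squid peer path in use" "$ADMIN"
done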

[squid-users] Alerting when cache Peer is used.

2010-09-17 Thread GIGO .

I have configured my proxy servers in two regions as backup internet paths for 
each other by declaring the following directives.
 
Directives on Proxy A:
 
cache_peer A parent 8080 0 proxy-only
prefer_direct on
nonhierarchical_direct off
cache_peer_access A allow all
 
 
Directives on Proxy B:
 
cache_peer B parent 8080 0 proxy-only
prefer_direct on
nonhierarchical_direct off
cache_peer_access B allow all
 
 
Is there a way to generate an email alert to the admins whenever a cache peer is 
used?
 
thanking you &
 
Best Regards,
 
Bilal
  

[squid-users] Facebook issue despite server_http11 on

2010-08-11 Thread GIGO .

Dear All,
 
I am using the squid 2.7 STABLE9 version. Facebook was working fine until 
yesterday, when suddenly the issue appeared that a blank page comes up whenever I 
try to access Facebook. I have tried the recommended directive
 
server_http11 on
But the problem remains unresolved. Please help.
 
 
regards,
Bilal 

RE: [squid-users] Delay Pool Configuration Confirmation.

2010-07-24 Thread GIGO .

Well, I have tried the class 2 settings, but they do not seem to be working 
properly. I have the quick_abort_min -1 setting (for YouTube, Windows Update, etc.). 
Could that be a problem? If so, what maximum quick_abort size would be possible 
to fix it?
 
 
regards,
 
Bilal 


> Date: Sat, 24 Jul 2010 14:06:37 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Delay Pool Configuration Confirmation.
>
> GIGO . wrote:
>> Right Amos i think what i want was the class 2 so i will configure as you 
>> suggest and it will encompass the authenticated users as well?
>>
>> regards,
>> Bilal
>>
>>
>
> Each pool encompasses whatever requests your delay_pool_access matches.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.5 
>   

[squid-users] Cache Peer Setup Mutual Parents

2010-07-24 Thread GIGO .

Dear All,
 
Is it possible to use Squid as a backup path if going direct fails? I am using 
squid 2.7 STABLE9 with cache digests enabled. Following are the directives I think 
will do it; please check whether they are correct.
 
Setting of cache at North Region:
 
   cache_peer SOUTH parent 8080 0  no-query proxy-only default
   prefer_direct on
   nonhierarchical_direct off
   cache_peer_access SOUTH allow onlybrowsing
 
Setting of Cache at South Region:
   
   cache_peer NORTH parent 8080 0 no-query proxy-only default
   prefer_direct on
   nonhierarchical_direct off
   cache_peer_access NORTH allow onlybrowsing
 
 
I don't think ICP queries or cache digests will serve any purpose for this, because 
they only matter in a sibling-type relationship. Is that so?
 
 
Is this a practical setup, or could it pose any problems? Please add any 
recommendations.
 
 
 
Thanking you &
 
Best Regards,
 
Bilal
 
 
 
 
 
 
 
 
  

RE: [squid-users] Delay Pool Configuration Confirmation.

2010-07-22 Thread GIGO .

Right Amos, I think what I want is class 2, so I will configure it as you 
suggest. Will it encompass the authenticated users as well?
 
regards,
Bilal



> Date: Thu, 22 Jul 2010 23:56:21 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Delay Pool Configuration Confirmation.
>
> GIGO . wrote:
>> Dear all,
>>
>>
>> I am using squid 2.7 stable 9. I want to restrict downloads for every one 
>> both authenticated and IP based clients to 128KB at the day time and with 
>> full capacity at night. I have done the following configurations however 
>> they dont seem to work for me. Can you confirm that if they are correct.
>>
>>
>>
>> i am using squid_kerb_ldap & squid_kerb_auth and 50% users are based on 
>> this. 50% users are IP based 10.x.x.x (/24).
>> #Definition of working hours---
>> acl wh time MTWHF 09:00-21:00
>> #--Delay Pools Settings---
>> delay_pools 1
>> delay_class 1 1
>> delay_parameters 1 128000/128000
>> delay_access 1 allow downloads wh
>
> class 1 is a aggregate limit. Meaning that config caps the whole network
> at 125KB combined. Divide that by the number of users on the network
> using the proxy at any time.
>
> If you want each user to have 128KB but no more, use a class 2 pool.
> With parameters of -1/-1 131072/131072 (no aggregate limit, 128KB
> individual caps).
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.5 
>   
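
A sketch of the class 2 pool Amos describes, combined with the "wh" working-hours ACL 
from the original post (reusing the existing "downloads" ACL in delay_access is an 
assumption about how it would be wired up):

delay_pools 1
delay_class 1 2
# no aggregate cap; 131072 bytes/s (128 KB/s) per individual client
delay_parameters 1 -1/-1 131072/131072
delay_access 1 allow downloads wh
delay_access 1 deny all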

[squid-users] Delay Pool Configuration Confirmation.

2010-07-22 Thread GIGO .

Dear all,
 
 
I am using squid 2.7 STABLE9. I want to restrict downloads for everyone, both 
authenticated and IP-based clients, to 128 KB/s during the day and allow full 
capacity at night. I have done the following configuration, however it does not 
seem to work for me. Can you confirm whether it is correct?

 
 
I am using squid_kerb_ldap & squid_kerb_auth, and 50% of users are based on this; 
the other 50% of users are IP-based, 10.x.x.x (/24). 
#Definition of working hours---
acl wh time MTWHF 09:00-21:00
#--Delay Pools Settings---
delay_pools 1
delay_class 1 1
delay_parameters 1 128000/128000
delay_access 1 allow downloads wh 

[squid-users] clientcahehit: request has store_url http://www.xyx.com/whaever/abc.xyz ; mem object in hit has mis matchedurl

2010-07-20 Thread GIGO .

Dear All,
 
 
I am having a lot of such errors; please, your help and guidance is required.

2010/07/20 17:26:34| clientCacheHit: request has store_url 
'http://www.cricinfo.com/navigation/cricinfo/ci/scorecard.css'; mem object in 
hit has mis-matched url 
'http://www.cricinfo.com/navigation/cricinfo/ci/scorecard.css?1274977955'!

 
regards,
 
Bilal 

[squid-users] squid stable 2.7 stable 9 store url errors in cache.log

2010-07-19 Thread GIGO .

Dear All,
 
 
I am seeing the following in my cache.log. Does this point to some 
misconfiguration or other issue?
 

2010/07/19 17:58:20| clientCacheHit: request has store_url 
'http://cdn.nytimes.com/images/apps/timespeople/none.png'; mem object in hit 
has mis-matched url 
'http://graphics8.nytimes.com/images/apps/timespeople/none.png'!
2010/07/19 17:58:31| storeLocateVary: Not our vary marker object, 
853708066C81CBC307A860FBABB2E9DE = 
'http://www.cricinfo.com/navigation/cricinfo/ci/scorecard.css?1274977955', 
'accept-encoding'/'-'
2010/07/19 17:58:31| clientCacheHit: request has store_url 
'http://www.cricinfo.com/navigation/cricinfo/ci/scorecard.css'; mem object in 
hit has mis-matched url 
'http://www.cricinfo.com/navigation/cricinfo/ci/scorecard.css?1274977955'!
2010/07/19 17:58:44| clientCacheHit: request has store_url 
'http://cdn.linkedin.com/mpr/mpr/shrink_40_40/p/2/000/065/250/1cdc957.jpg'; mem 
object in hit has mis-matched url 
'http://media02.linkedin.com/mpr/mpr/shrink_40_40/p/2/000/065/250/1cdc957.jpg'!
2010/07/19 17:58:44| clientCacheHit: request has store_url 
'http://cdn.linkedin.com/scds/common/u/img/bg/bg_border_3x1.png'; mem object in 
hit has mis-matched url 
'http://static02.linkedin.com/scds/common/u/img/bg/bg_border_3x1.png'!

&&
2010/07/19 17:58:31| storeLocateVary: Not our vary marker object, 
853708066C81CBC307A860FBABB2E9DE = 
'http://www.cricinfo.com/navigation/cricinfo/ci/scorecard.css?1274977955', 
'accept-encoding'/'-'

 
 
 
2. With 8 GB of memory and a 50 GB cache directory, would there be any performance 
gain in declaring 2 GB of cache_mem when Squid is only being used as a forward 
proxy? Is there any relation between the memory settings + maximum object size in 
memory and this error (WARNING: swapfile header too small)?
 
 
 
 
Thanking you &
 
Best regards,
 
Bilal 
 
 
 
 
 
 
 
 
 
 
 
  

RE: [squid-users] swapfile header too small

2010-07-16 Thread GIGO .

Amos,
 
Thank you. I will do as per your advice.
 
regards,
 
Bilal


> Date: Fri, 16 Jul 2010 13:43:17 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] swapfile header too small
>
> GIGO . wrote:
>> Dear All,
>>
>>
>> I am finding this in my cache.log file.
>>
>> 2010/07/15 19:12:14| WARNING: swapfile header too small
>> 2010/07/15 19:12:14| WARNING: swapfile header too small
>> 2010/07/15 19:28:30| WARNING: swapfile header too small
>> squid 2.7 stable 9 installed on RHEL
>>
>>
>> What is the reason of these errors and how to resolve it.
>>
>
> Each message is a file failing validity checks Squid does to prevent
> cache corruption. It's a strong sign of disk failure or manual tampering
> with the cached files.
>
> It's normal to see some of them and other similar during a "DIRTY"
> rebuild of the cache following a crash.
>
> If they are occurring during normal operation, or even a lot, I recommend
> running a disk scan, and possibly erasing the cache and rebuilding it
> clean with squid -z.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.5 
>   
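
For reference, a sketch of the "erase and rebuild clean" procedure Amos recommends, 
assuming a single cache_dir under /cachedisk1/var/spool/squid and the default 
squid.conf location (adjust the paths, and add -f for a non-default config file):

squid -k shutdown                      # stop Squid cleanly
# run a filesystem check on the cache disk (e.g. fsck while it is unmounted)
rm -rf /cachedisk1/var/spool/squid/*   # discard the possibly corrupt cache
squid -z                               # recreate empty swap directories
squid -D -s                            # start Squid again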

[squid-users] swapfile header too small

2010-07-15 Thread GIGO .

Dear All,
 
 
I am finding this in my cache.log file. 
 
2010/07/15 19:12:14| WARNING: swapfile header too small
2010/07/15 19:12:14| WARNING: swapfile header too small
2010/07/15 19:28:30| WARNING: swapfile header too small
squid 2.7 stable 9 installed on RHEL
 
 
What is the reason for these errors, and how can I resolve them?
 
 
Thanking you
 
&
 
regards,
 
Bilal Aslam
 
  

[squid-users] Download manager

2010-07-09 Thread GIGO .


I am using ISA as a parent peer, and Squid has no direct connection to the 
internet.
 
I am unable to use IDM with the Squid proxy. I have tried both with an authenticated 
and with an IP-based client, but it is not successful. Another download manager 
(FDM) works fine with the same setup.
 

Error is "connection closed by server".
 
please help.
 
 
regards,
 
Bilal 

RE: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)

2010-07-05 Thread GIGO .

Hi,
 
Please, some more guidance is required. Can squid_kerb_ldap be used alone, 
independently of calling squid_kerb_auth or any other helper?
 
If it is a must to use both squid_kerb_auth & squid_kerb_ldap, then is it correct 
that we are not using the following directives?
 
acl auth proxy_auth REQUIRED #used
#http_access deny !auth # Not used
#http_access allow auth #not used
 
and that instead LDAP-based directives of the following form are used...
 
external_acl_type squid_kerb_ldap ttl=3600  negative_ttl=3600  %LOGIN 
/usr/sbin/squid_kerb_ldap -g GROUP@
acl ldap_group_check external squid_kerb_ldap
http_access allow ldap_group_check

 
thanking you
& 
regards,
 
Bilal 
 
 
 
 
 
 
 


> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Thu, 1 Jul 2010 21:31:13 +0100
> Subject: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit failed 
> with rc=102)
>
> Hi
>
> 1) 1.2.1a is just a minor patch version to 1.2.1.
> 2) This happens only when you use the -d debug option
> 3) You can use the options -u BIND_DN -p BIND_PW -b BIND_PATH -l LDAP_URL
> 4) If they have different access needs then that is the only way. If they
> have the same access right you can use -g
> inetgrl...@mailserver.v.local:inetgrl...@mailserver.v.local:inetgrl...@mailserver.v.local
>
> Regards
> Markus
>
> - Original Message -
> From: "GIGO ." 
> To: "squidsuperuser2" ; "SquidHelp"
> 
> Sent: Thursday, July 01, 2010 11:31 AM
> Subject: RE: [squid-users] Re: Re: Re: squid_kerb_auth (parseNegTokenInit
> failed with rc=102)
>
>
>
> Dear Markus,
>
> Thank you so much for your help as i diagnosed the problem back to
> KRB5_KTNAME not exported properly through my startup script. For the
> completion sake and your analysis i have appended the cache.log at the
> bottom.
>
> Please i have few queries:
>
>
> 1. I am using squid_kerb_ldap version 1.2.1a as per your recommendation and
> which is the latest but is the "a" in 1.2.1(a) means alpha. Can i use this
> latest version in the production or i should switch back to 1.2.1.
>
>
>
>
> 2. i have just figured out that squid_kerb_ldap gets all the groups for a
> user in question even if the first group it find matches. Is this the normal
> behaviour?
>
>
> 3. Is there a way to bind to a specific or multiple(chosen) ldap servers
> rather than using DNS. (what is the syntax and how)
>
>
> 4. As i have different categories of users so i had defined the following
> directives. Is it ok to do this way as it does not look very neet to me and
> looks like squid_kerb_ldap being called redundantly.
>
>
> -Portion of
> squid.conf-
> auth_param negotiate program
> /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
> auth_param negotiate children 10
> auth_param negotiate keep_alive on
> # basic auth ACL controls to make use of it are.(if and only if
> squid_kerb_ldap(authorization) is not used)
> #acl auth proxy_auth REQUIRED
> #http_access deny !auth
> #http_access allow auth
>
> #Groups fom Mailserver Domain:
> external_acl_type squid_kerb_ldap_msgroup1 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
> external_acl_type squid_kerb_ldap_msgroup2 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
> external_acl_type squid_kerb_ldap_msgroup3 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g inetgrl...@mailserver.v.local
>
> acl msgroup1 external squid_kerb_ldap_msgroup1
> acl msgroup2 external squid_kerb_ldap_msgroup2
> acl msgroup3 external squid_kerb_ldap_msgroup3
> http_access deny msgroup2 msn
> http_access deny msgroup3 msn
> http_access deny msgroup2 ym
> http_access deny msgroup3 ym
> ###Most Restricted settings Exclusive for Normal users..###
> http_access deny msgroup3 Movies
> http_access deny msgroup3 downloads
> http_access deny msgroup3 torrentSeeds
> http_access deny all
>
> 

RE: [squid-users] Startup/shutdown script which was working perfactly alright for squid 3.0stable25 is not working for squid 2.7 stable9.0

2010-06-30 Thread GIGO .

Hi Amos,
 
I just found that running it from rc.local works, but is it OK to run it that way 
on CentOS?

squidautostart.sh-
 
#!/bin/sh
KRB5_KTNAME=/etc/squid/HTTP.keytab
export KRB5_KTNAME
KRB5RCACHETYPE=none
export KRB5RCACHETYPE
echo -n $"Starting squid instance2: "
/usr/sbin/squid -D -s -f /etc/squid/inst2squid.conf
echo -n $"Starting squid instance1: "
/usr/sbin/squid -D -s -f /etc/squid/inst1squid.conf
 
 
Are the variables exported in the script available to the running Squid instances 
when it is started through rc.local or not (for as long as the program is running)?
 
 
I also think that, to have these variables exported for all users when running 
Squid manually, I would have to define them in /etc/profile. Am I right?
 
 
please guide.
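
For what it is worth, a minimal illustration of the general shell behaviour in 
question (generic POSIX shell, not specific to CentOS or to this setup): a variable 
exported in a script is inherited by every process that the script starts, for as 
long as those processes run, but it is not visible to other shells or logins, which 
is why /etc/profile would be needed for interactive use.

#!/bin/sh
# a child process started after the export inherits the variable
KRB5_KTNAME=/etc/squid/HTTP.keytab
export KRB5_KTNAME
sh -c 'echo "child sees: $KRB5_KTNAME"'   # prints the keytab path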
 
 
thanking you
 
&
 
regards,
 
Bilal
 
 
 
 
 
 
 
 
 

 
 
 
 
 

> Date: Mon, 24 May 2010 00:52:39 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Startup/shutdown script which was working 
> perfactly alright for squid 3.0stable25 is not working for squid 2.7 stable9.0
> 
> GIGO . wrote:
>> Hi all,
>> 
>> I am able to run squid manually however whenever i try to run it through the 
>> startup/shutdown script it fails. This is the same script working for squid 
>> 3.0 stable 25 however i am not being able to figure out that why its failing 
>> on squid 2.7 stable 9? Neither of the instance starts with system startup.
>> 
>> 
>> Please guide me i be thankful. My startup script and tail of cache.log for 
>> both instances is below.
>> 
>> 
>> #!/bin/sh
>> #
>> #my script
>> case "$1" in
>> start)
>> /usr/sbin/squid -D -s -f /etc/squid/squidcache.conf
>> /usr/sbin/squid -D -s -f /etc/squid/squid.conf
>> #The below line is to automatically start apache with system startup
>> /usr/sbin/httpd -k start
>> #KRB5_KTNAME=/etc/squid/HTTP.keytab
>> #export KRB5_KTNAME
>> #KRB5RCACHETYPE=none
>> #export KRB5RCACHETYPE
>> ;;
>> stop)
>> /usr/sbin/squid -k shutdown -f /etc/squid/squidcache.conf
>> echo "Shutting down squid secondary process"
>> /usr/sbin/squid -k shutdown -f /etc/squid/squid.conf
>> echo "Shutting down squid main process"
>> # The below line is to automatically stop apache at system shutdown
>> /usr/sbin/httpd -k stop
>> ;;
>> esac
> 
> 
> The script looks right to me.
> 
>> 
>> tail> instance 2 cache file:
>> 
>> 2010/05/22 06:05:18| Beginning Validation Procedure
>> 2010/05/22 06:05:18| Completed Validation Procedure
>> 2010/05/22 06:05:18| Validated 0 Entries
>> 2010/05/22 06:05:18| store_swap_size = 0k
>> 2010/05/22 06:05:18| storeLateRelease: released 0 objects
>> 2010/05/22 06:09:28| Preparing for shutdown after 62 requests
> 
> This message means the Squid instance has received the shutdown signal 
> from some external process. Either kill or squid -k shutdown.
> 
>> 2010/05/22 06:09:28| Waiting 30 seconds for active connections to finish
>> 2010/05/22 06:09:28| FD 16 Closing HTTP connection
>> 2010/05/22 06:09:28| WARNING: store_rewriter #1 (FD 7) exited
>> 2010/05/22 06:09:28| Too few store_rewriter processes are running
>> 2010/05/22 06:09:28| Starting new helpers
>> 2010/05/22 06:09:28| helperOpenServers: Starting 1 'storeurl.pl' processes
> 
> That may be a bug, restarting helpers on shutdown looks wrong.
> 
> Amos
> -- 
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.3 
>   

RE: [squid-users] Re: Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)

2010-06-30 Thread GIGO .
Valid starting     Expires            Service principal
06/30/10 15:25:06  07/01/10 01:24:49  
krbtgt/mailserver.v.lo...@mailserver.v.local
renew until 07/01/10 15:25:06
06/30/10 15:25:49  07/01/10 01:24:49  ldap/ldc-ms-dc2.mailserver.v.local@
renew until 07/01/10 15:25:06
06/30/10 15:25:49  06/30/10 15:27:49  kadmin/chang...@mailserver.v.local
renew until 06/30/10 15:27:49

Kerberos 4 ticket cache: /tmp/tkt0
klist: You have no tickets cached
 
Keytab name: FILE:/etc/squid/HTTP.keytab
KVNO Principal
 --
   2 HTTP/squidlhr1.mailserver.v.lo...@mailserver.v.local (DES cbc mode with 
CRC-32)
   2 HTTP/squidlhr1.mailserver.v.lo...@mailserver.v.local (DES cbc mode with 
RSA-MD5)
   2 HTTP/squidlhr1.mailserver.v.lo...@mailserver.v.local (ArcFour with 
HMAC/md5)
 
 
10.-msktutil--
msktutil -c -b "OU=UNIXOU" -s HTTP/squidlhr1.mailserver.mcb.com.pk -h 
squidlhr1.v.local -k /etc/squid/HTTP.keytab --computer-name squidlhr-http --upn 
HTTP/squidlhr1.mailserver.v.local --server ldc-ms-dc2.v.local --verbose


 
 
 
Please help me out, as I have tried but not yet got a clue about it. I will be thankful.
regards,
Bilal









> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Tue, 29 Jun 2010 23:38:54 +0100
> Subject: [squid-users] Re: Re: squid_kerb_auth (parseNegTokenInit failed with 
> rc=102)
>
> Can you add the options -d -i to squid_kerb_auth and squid_kerb_ldap to
> create more debug output, and send the cache.log extract.
>
> Regards
> Markus
>
>
> "GIGO ." wrote in message
> news:snt134-w34626d5c8ec65f9d8495b1b9...@phx.gbl...
>
> Hi Henrik/Markus/All
>
> Every setting(keeping in view your recommendation) was correct i many a
> times confirmed that.Even i tried re-creating the SPN but in vain. However i
> just realized that most of the users were required to logoff and login to
> get authenticated through squid. I wonder why a user even with a valid TGT
> was require to do that as he should be able to get the TGS for every new
> kerberized service???
>
> Anyways of the few users i tried only one was able to access it without
> re-login. Bottom line is that its working.
>
>
> Now the authorization portion is not seems like behaving properly can you
> please check the syntax for correctness before i probe further. I have
> appended at the bottom my squid.conf portion relevant to this.
>
> e.g. After the authorization few of the clients were showing this wheter in
> the group or not:
> --
> Internet explorer cannot display the webpage
> what you can try:
> Diagnose connection problems
> More Info
> --
>
> Further i think IE7(and latest) and FireFox 3.6.x above are supportive for
> kerberos. Am i right? is there any special configuration required on the
> client side(other than the proxy settings).??
>
>
>
> #After allowing IP based clients and the access controls related to them.
> http_access allow ipbc
> # Part 2 Authentication/Authorization
> auth_param negotiate program
> /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
> auth_param negotiate children 10
> auth_param negotiate keep_alive on
> # basic auth ACL controls to make use of it are.(if and only if
> squid_kerb_ldap(authorization) is not used)
> #acl auth proxy_auth REQUIRED
> #http_access deny !auth
> #http_access allow auth
> #Groups fom Mailserver Domain:
> external_acl_type squid_kerb_ldap_ms_group1 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g
> inetgrl...@mailserver.v.local
> external_acl_type squid_kerb_ldap_ms_group2 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g
> inetgrl...@mailserver.v.local
> external_acl_type squid_kerb_ldap_ms_group3 ttl=3600 negative_ttl=3600
> %LOGIN /usr/libexec/squid/squid_kerb_ldap -g
> inetgrl...@mailserver.v.local
> acl ms_group1 external squid_kerb_ldap_ms_group1
> acl ms_group2 external squid_kerb_ldap_ms_group2
> acl ms_group3 external squid_kerb_ldap_ms_group3
> http_access deny ms_group2 msnd
> http_access deny ms_group3 msnd
> http_access deny ms_group2 msn
> http_access deny ms_group3 msn
> http_access deny ms_group2 msn1
> http_access deny ms_group3 msn1
> http_access deny ms_group2 numeric_IPs
> http_access deny ms_group3 numeric_IPs
> http_access deny ms_group2 Skype_UA
> http_access deny ms_group3 Skype_UA
> http_access deny ms_group2 ym
> http_access deny ms_group3 ym
> http_access deny ms_group2 ymregex
> http_access deny ms_group3 ymregex
> ##

RE: [squid-users] Re: squid_kerb_auth (parseNegTokenInit failed with rc=102)

2010-06-29 Thread GIGO .

Hi Henrik/Markus/All
 
Every setting (keeping your recommendations in view) was correct; I confirmed that 
many times. I even tried re-creating the SPN, but in vain. However, I just realized 
that most of the users were required to log off and log in again to get 
authenticated through Squid. I wonder why a user with a valid TGT was required to 
do that, as he should be able to get a TGS for every new Kerberized service.
 
Anyway, of the few users I tried, only one was able to access it without 
re-login. The bottom line is that it is working.
 

Now the authorization portion does not seem to be behaving properly. Can you 
please check the syntax for correctness before I probe further? I have appended 
the relevant portion of my squid.conf at the bottom.

e.g. after authorization, a few of the clients were showing this, whether they were 
in the group or not: 
--
   Internet explorer cannot display the webpage
   what you can try:
   Diagnose connection problems
   More Info
--
 
Further, I think IE7 (and later) and Firefox 3.6.x and above support Kerberos. Am I 
right? Is there any special configuration required on the client side (other than 
the proxy settings)?
 
 
 
#After allowing IP based clients and the access controls related to them.
http_access allow ipbc
# Part 2 Authentication/Authorization
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
# basic auth ACL controls to make use of it are.(if and only if 
squid_kerb_ldap(authorization) is not used)
#acl auth proxy_auth REQUIRED
#http_access deny !auth
#http_access allow auth
#Groups fom Mailserver Domain:
external_acl_type squid_kerb_ldap_ms_group1 ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g 
inetgrl...@mailserver.v.local
external_acl_type squid_kerb_ldap_ms_group2 ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g 
inetgrl...@mailserver.v.local
external_acl_type squid_kerb_ldap_ms_group3 ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g 
inetgrl...@mailserver.v.local
acl ms_group1 external squid_kerb_ldap_ms_group1
acl ms_group2 external squid_kerb_ldap_ms_group2
acl ms_group3 external squid_kerb_ldap_ms_group3
http_access deny  ms_group2 msnd
http_access deny  ms_group3 msnd
http_access deny  ms_group2 msn
http_access deny  ms_group3 msn
http_access deny  ms_group2 msn1
http_access deny  ms_group3 msn1
http_access deny  ms_group2 numeric_IPs
http_access deny  ms_group3 numeric_IPs
http_access deny  ms_group2 Skype_UA
http_access deny  ms_group3 Skype_UA
http_access deny  ms_group2 ym
http_access deny  ms_group3 ym
http_access deny  ms_group2 ymregex
http_access deny  ms_group3 ymregex
###Most Restricted settings Exclusive for Normal users..###
http_access deny  ms_group3 Movies
http_access deny  ms_group3 MP3s
http_access deny  ms_group3 FTP
http_access deny  ms_group3 MP3url
http_reply_access deny ms_group3 deny_rep_mime_flashvideo 
http_access deny  ms_group3 youtube_domains
http_access deny  ms_group3 facebook_sites
http_access deny  ms_group3 BIP
http_access deny  ms_group3 downloads
http_access deny  ms_group3 torrentSeeds
http_access deny  ms_group3 dlSites
##- Time based ACLs
http_access deny  ms_group2 youtube_domains wh
http_access deny  ms_group2 BIP wh
http_access deny  ms_group2 facebook_sites wh
http_access allow ms_group1
http_access allow ms_group2
http_access allow ms_group3

 
http_access deny all

 
Squid version: squid 2.7 stable 9 on CENTOS 5.4 64 bit.
 
 

 
 
 
 
 
> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Mon, 28 Jun 2010 23:56:51 +0100
> Subject: [squid-users] Re: squid_kerb_auth (parseNegTokenInit failed with 
> rc=102)
> 
> Make sure the squid servers hostname matches squidhr1.v.local. If not use -s 
> HTTP/squidhr1.v.local as an option to squid_kerb_auth.
> 
> Regards
> Markus
> 
> "GIGO ."  wrote in message 
> news:snt134-w64257c53609757cd3cf006b9...@phx.gbl...
> 
> Hi all,
> 
> I am unable to do kerberos authentication in my live enviroment as appose to 
> the test enviroment where it was successful. My environment is Active 
> Direcory Single Forest Multidomain with each domain having multiple domain 
> controllers.
> 
> SPN was created through:
> 
> msktutil -c -b "OU=UNIXOU" -s HTTP/squidlhr1.v.local -h squidlhr1.v.local -k 
> /etc/squid/HTTP.keytab --computer-name squid-http --upn 
> HTTP/squidlhr1.v.local --server ldc-ms-dc2.v.local --verbose
> 
> 
> Through ADSIEDIT & setspn tools SPN is confirmed in the Active Directory.
> 
> My kerb5.conf Settings:
> [libdefaults]
> default_realm = MAILSERVER.V.LOCAL
>

RE: [squid-users] squid_kerb_auth (parseNegTokenInit failed with rc=102)

2010-06-28 Thread GIGO .

I have read the thread you advised; however, I don't think it is related to my 
environment (Active Directory with parent & child domains having full two-way trust 
between them, and a single Squid server rather than a cluster).
 
So I think that registering the SPN in a single domain should work. And, as I said, 
I have previously tested it many times in my test environments and it works.
 
If you could explain in detail, then I may get a better idea of what you mean.
 
 
regards,
 
Bilal


> Date: Mon, 28 Jun 2010 10:34:07 +0200
> From: e.leso...@crbn.fr
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] squid_kerb_auth (parseNegTokenInit failed with 
> rc=102)
>
> Hi,
>
> I think you might be interested by this thread :
>
> http://www.squid-cache.org/mail-archive/squid-users/201006/0128.html
>
> Le Mon, 28 Jun 2010 07:57:38 +,
> "GIGO ." a écrit :
>
>>
>> Hi all,
>>
>> I am unable to do kerberos authentication in my live enviroment as
>> appose to the test enviroment where it was successful. My environment
>> is Active Direcory Single Forest Multidomain with each domain having
>> multiple domain controllers.
>>
>
> --
> Emmanuel Lesouef

[squid-users] squid_kerb_auth (parseNegTokenInit failed with rc=102)

2010-06-28 Thread GIGO .

Hi all,

I am unable to do Kerberos authentication in my live environment, as opposed to the 
test environment where it was successful. My environment is an Active Directory 
single forest with multiple domains, each domain having multiple domain controllers.

SPN was created through:

msktutil -c -b "OU=UNIXOU" -s HTTP/squidlhr1.v.local -h squidlhr1.v.local -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/squidlhr1.v.local 
--server ldc-ms-dc2.v.local --verbose


Through ADSIEDIT & setspn tools SPN is confirmed in the Active Directory.

My krb5.conf settings:
[libdefaults]
default_realm = MAILSERVER.V.LOCAL
dns_lookup_realm = false
dns_lookup_kdc = false
default_keytab_name = /etc/krb5.keytab
; for windows 2003 encryption type configuration.
default_tgs_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
default_tkt_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
permitted_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
[realms]
V.LOCAL = {
kdc = ldc-v-dc2.v.local
admin_server = ldc-v-dc2.v.local
}
MAILSERVER.V.LOCAL = {
kdc = ldc-ms-dc2.mailserver.v.local
admin_server = ldc-ms-dc2.mailserver.v.local
}
# BT.V.LOCAL = {
# kdc = dc.bt.v.local
# admin_server = dc.bt.v.local
#}
[domain_realm]
.linux.home = MAILSERVER.V.LOCAL
.v.local = V.LOCAL
v.local = V.LOCAL
.mailserver.v.local = MAILSERVER.V.LOCAL
mailserver.v.local = MAILSERVER.V.LOCAL
#.bt.v.local= BT.V.LOCAL
#bt.v.local = BT.V.LOCAL
[logging]
kdc = FILE:/var/log/kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/kdc.log







I have tried this on multiple client computers, but it does not seem to be working.
Below are the files for your reference.


Dump through wire shark :
-

Hypertext Transfer Protocol
GET http://www.google.com/ HTTP/1.1\r\n
Accept: */*\r\n
Accept-Language: en-us\r\n
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; 
.NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR
3.5.30729; InfoPath.2; AskTB5.5)\r\n
Accept-Encoding: gzip, deflate\r\n
Proxy-Connection: Keep-Alive\r\n
[truncated] Cookie: 
PREF=ID=dfcab88fe782b2f3:U=8cc1a776c84c55e1:TM=1273578259:LM=1273579194:S=ec2wG6BXReYHZvWe;
NID=36=iQ9ZARYGAQQvkpoAjK1OHFtg7BF7IE9hh-E__mxd9S8cV8EcNVq_M_9qMHZPatpJiifFPpdWYqJMmTtBxuCdoQMknggCTHJKkJkNigy5I6kewAQTepVnZ0Pb
[truncated] Proxy-Authorization: Negotiate
YIIFTwYGKwYBBQUCoIIFQzCCBT+gJDAiBgkqhkiC9xIBAgIGCSqGSIb3EgECAgYKKwYBBAGCNwICCqKCBRUEggURYIIFDQYJKoZIhvcSAQICAQBuggT8MIIE+KADAgEFoQMCAQ6iBwMFACCjggQVYYIEE
TCCBA2gAwIBBaEXGxVNQUlMU0VSVkVSLk1DQi5D
GSS-API Generic Security Service Application Program Interface
OID: 1.3.6.1.5.5.2 (SPNEGO - Simple Protected Negotiation)
SPNEGO
negTokenInit
mechTypes: 3 items
MechType: 1.2.840.48018.1.2.2 (MS KRB5 - Microsoft Kerberos 5)
MechType: 1.2.840.113554.1.2.2 (KRB5 - Kerberos 5)
MechType: 1.3.6.1.4.1.311.2.2.10 (NTLMSSP - Microsoft NTLM Security Support 
Provider)
mechToken: 6082050D06092A864886F71201020201006E8204FC308204...
krb5_blob: 6082050D06092A864886F71201020201006E8204FC308204...
KRB5 OID: 1.2.840.113554.1.2.2 (KRB5 - Kerberos 5)
krb5_tok_id: KRB5_AP_REQ (0x0001)
Kerberos AP-REQ
Pvno: 5
MSG Type: AP-REQ (14)
Padding: 0
APOptions: 2000 (Mutual required)
.0..        = Use Session Key: Do NOT use the 
session key to encrypt the ticket
..1.        = Mutual required: MUTUAL 
authentication is REQUIRED
Ticket
Tkt-vno: 5
Realm: MAILSERVER.V.LOCAL
Server Name (Service and Instance): HTTP/squidlhr1.v.local
Name-type: Service and Instance (2)
Name: HTTP
Name: squidlhr1.v.local
enc-part rc4-hmac
Encryption type: rc4-hmac (23)
Kvno: 2
enc-part: 60082AD63370B0B25657BB713A74B080C21E261079263809...
Authenticator rc4-hmac
Encryption type: rc4-hmac (23)
Authenticator data: A7B9567AB0F52FD022CD130905ACD67DA268C8222AC6ED97...
Host: www.google.com\r\n
\r\n

Hypertext Transfer Protocol
HTTP/1.0 407 Proxy Authentication Required\r\n
Server: squid\r\n
Date: Fri, 25 Jun 2010 15:00:57 GMT\r\n
Content-Type: text/html\r\n
Content-Length: 1295\r\n
Content length: 1295
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0\r\n
Proxy-Authenticate: Negotiate\r\n
Proxy-Authenticate: Negotiate gss_acquire_cred()\r\n
GSS-API Generic Security Service Application Program Interface
[Malformed Packet: GSS-API]
Expert Info (Error/Malformed): Malformed Packet (Exception occurred)
Message: Malformed Packet (Exception occurred)
Severity level: Error
Group: Malformed
X-Cache: MISS from squidlhr1\r\n
X-Cache-Lookup: NONE from squidlhr1:8080\r\n
Via: 1.0 squidlhr1main:8080 (squid)\r\n
Connection: close\r\n
\r\n

squid_kerb_auth -d output:
---

2010/06/28 10:03:24| squid_kerb_auth: Got 'YR 
YIIFTgYGKwYBBQUCoIIFQjCCBT6gJDAiBgkqhkiC9xIBAgIGCSqGSIb3EgECAgYKKwYBBAGCNwICCqKCBRQEggUQYIIFDAYJKoZIhvcSAQICAQBuggT7MIIE96ADAgEFoQMCAQ6iBwMFACCjggQVYYIEETCCBA2gAwIBBaEXGxVNQUlMU0VSVkVSLk1DQi5DT00uUEuiMjAwoAMCAQKhKTAnGwRIVFRQGx9zcXVpZGxocjEubWFpbHNlcnZlci5tY2IuY29tLnBro4IDtzCCA7OgAwIBF6EDAgECooIDpQSCA6HbS

[squid-users] DNS server setup for squid/kerberos

2010-06-23 Thread GIGO .

Dear All,
 
 
Your help is required.

 
Problem: Setting up squid in an Active Directory environment. (where Active 
Directory domain controllers, Windows clients, UNIX clients, and application 
servers must all have a shared understanding of the correct host names and IP 
addresses for each computer within the environment.)
 
 
I have thought about the following options, but I am not sure which one is 
better.
 
 
1. Using a local Active Directory-integrated DNS server configured to 
forward internet queries to the ISP DNS (allowed through the firewall).
 
 
 
2. Using two NICs, one for LAN traffic configured with the local AD-integrated 
DNS, and the second for internet traffic pointing to the ISP DNS. Would there be any 
special requirements on the Squid or Linux side to set up Squid with multiple 
NICs? Is there a KB article available for that?
 
 
3. Are there any material gains from installing BIND DNS on the Squid server?
 
 
 
 
 
regards,
 
Bilal 
  

RE: [squid-users] Confusion regarding regex

2010-06-21 Thread GIGO .

Henrik Thank you so much.
 
regards,
 
Bilal


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Mon, 21 Jun 2010 10:59:45 +0200
> Subject: RE: [squid-users] Confusion regarding regex
>
> Mon 2010-06-21 at 06:25 +, GIGO . wrote:
>> Hi Amos,
>>
>> There is still some confusion regarding regex and any help will be great 
>> please.
>>
>>
>> you told that squid uses posix regex but is it BRE or ERE???
>
> Extended.
>
>> as for ERE according to my best understanding special characters are
>> not required to be escaped and if escaped then will lose there special
>> meaning and on the contrary in BRE some special characters like ( )
>> must be escaped otherwise they will be treated as literals.
>
> Correct.
>
>> If the regex processor is built on the squid itself or it is using the os 
>> default regex parser?
>
> Generally the OS default regex implementation. Squid also ships with a
> copy of GNU Regex in case the OS regex implementation is too broken.
>
> Regards
> Henrik
>
>
> 

RE: [squid-users] Confusion regarding regex

2010-06-20 Thread GIGO .

Hi Amos,
 
There is still some confusion regarding regex, and any help will be greatly appreciated.
 
 
You said that Squid uses POSIX regex, but is it BRE or ERE?
 
As for ERE, to my best understanding special characters are not required to be 
escaped (and if escaped they lose their special meaning), while on the contrary in 
BRE some special characters like ( ) must be escaped, otherwise they will be 
treated as literals.
 
 
Is the regex processor built into Squid itself, or does it use the OS default 
regex parser?
 
 
thanking you &
 
 
regards,
 
Bilal
 



> Date: Wed, 16 Jun 2010 23:11:08 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Confusion regarding regex
>
> GIGO . wrote:
>> Hi,
>>
>> Please need your guidance regarding the regex used by squid. Is it bre ere 
>> or perl? I assume that squid using a gnurep compatible version? Am i right?
>>
>
> POSIX regular expressions.
>
>>
>> In grep to use some metacharacter we have to encode it which are ‘\?’, ‘\+’, 
>> ‘\{’, ‘\|’, ‘\(’, and ‘\)’ does this hold true to write regex for squid as 
>> well?
>>
>
> Yes. I know for at least these: \. \? \+ \( \)
> Not sure about the others.
>
>> acl MP3url urlpath_regex \.mp3(\?.*)?$ isnt this expression should be 
>> written as \.mp3'\(''\?'.*'\)''\?'$
>
> No. It means the text ".mp3" ending the path (aka the MP3 file
> extension), with optional query string parameters following.
>
> Which matches URI standard syntax:
> protocol ':' '/' '/' domain '/' path '?' parameters
>
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.4 
>   

RE: [squid-users] Confusion regarding regex

2010-06-16 Thread GIGO .

OK, what I understand is that in the POSIX regular expressions Squid uses, you 
escape the special characters . ? + ( ) with a backslash only, and there is no need 
for the (single quote + backslash) combination that is required in grep.
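
A quick way to sanity-check that understanding is to test the same ERE with grep -E 
against a few sample paths (illustration only; the sample paths are made up):

# the ( ) ? metacharacters work unescaped; \. and \? match a literal . and ?
printf '%s\n' /music/track.mp3 '/music/track.mp3?id=42' /music/track.mp3.torrent \
    | grep -E '\.mp3(\?.*)?$'
# prints only the first two paths, i.e. the ones the MP3url acl would match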
 
 
regards,
 
Bilal Aslam
 
 



> Date: Wed, 16 Jun 2010 23:11:08 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Confusion regarding regex
>
> GIGO . wrote:
>> Hi,
>>
>> Please need your guidance regarding the regex used by squid. Is it bre ere 
>> or perl? I assume that squid using a gnurep compatible version? Am i right?
>>
>
> POSIX regular expressions.
>
>>
>> In grep to use some metacharacter we have to encode it which are ‘\?’, ‘\+’, 
>> ‘\{’, ‘\|’, ‘\(’, and ‘\)’ does this hold true to write regex for squid as 
>> well?
>>
>
> Yes. I know for at least these: \. \? \+ \( \)
> Not sure about the others.
>
>> acl MP3url urlpath_regex \.mp3(\?.*)?$ isnt this expression should be 
>> written as \.mp3'\(''\?'.*'\)''\?'$
>
> No. It means the text ".mp3" ending the path (aka the MP3 file
> extension), with optional query string parameters following.
>
> Which matches URI standard syntax:
> protocol ':' '/' '/' domain '/' path '?' parameters
>
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.4 
>   

[squid-users] Confusion regarding regex

2010-06-16 Thread GIGO .

Hi,
 
I need your guidance regarding the regex used by Squid. Is it BRE, ERE, or Perl? I 
assume that Squid is using a GNU-regex-compatible version; am I right?
 
 
In grep, to use certain metacharacters we have to escape them, namely ‘\?’, ‘\+’, 
‘\{’, ‘\|’, ‘\(’, and ‘\)’. Does this hold true when writing regex for Squid as well?
 
Take acl MP3url urlpath_regex \.mp3(\?.*)?$ : shouldn't this expression instead be 
written as \.mp3'\(''\?'.*'\)''\?'$ ?
 
 
Guidance regarding this will be of great value to me.
 
 
thanks & Regards,
 
Bilal 
 
 
 
 
  

RE: [squid-users] Youtube -An error occured, please try again later

2010-05-31 Thread GIGO .

Hi Amos
 
 
Yes, the problem seems to be gone, and that could be the reason. Thanks for 
explaining.
 
regards,
 
Bilal
 
 
 
 



> Date: Mon, 31 May 2010 20:32:43 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Youtube -An error occured, please try again later
>
> GIGO . wrote:
>> Hi henrik,
>>
>> Right now i don't have my access.log. (will share it with you after the 
>> weekend) However let me tell you that after setting the negative_ttl to 0. 
>> Apparently the problem was resolved. But i need to be sure about it.
>>
>> Do you think that this had resolved the problem?
>
> Quite probably.
>
> negative_ttl forces Squid to cache and provide ALL clients with the 4xx
> or 5xx error page for a certain length of time. Even if it was only a
> temporary issue due to a single client request failure. It's a manually
> added DoS vulnerability to every Squid which uses it.
>
> It's rarely useful nowdays even for its original purpose of reducing 404
> flooding of backend servers.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.3 
>   

RE: [squid-users] Youtube -An error occured, please try again later

2010-05-28 Thread GIGO .

Hi henrik,
 
Right now I don't have my access.log (I will share it with you after the weekend). 
However, let me tell you that after setting negative_ttl to 0 the problem was 
apparently resolved. But I need to be sure about it.
 
Do you think that this had resolved the problem?
 
 
regards,
 
Bilal 



> Subject: Re: [squid-users] Youtube -An error occured, please try again later
> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Fri, 28 May 2010 18:53:19 +0200
>
> fre 2010-05-28 klockan 05:33 + skrev GIGO .:
>> Hi all,
>>
>> For some of my youtube videos i am getting the following error.
>>
>>
>> "An error occured, please try again later".
>
> What does access.log say?
>
> Regards
> Henrik
> 

RE: [squid-users] Youtube -An error occured, please try again later

2010-05-28 Thread GIGO .

Amos/Maurizio :) Though I am sure I did not get the joke completely (had I, I would 
have enjoyed it more), hopefully you have understood the problem, and that is what 
is important. I request you to please guide me regarding it and help resolve it.
(It also reminds me of Quentin Tarantino, whose movies are without any sequence, 
but fun to watch as you have to think randomly.)
 
 
P.S. Sorry for the mail being sent out of sequence: after I had sent the mail I 
realized that the store.log would be important for diagnosing and solving this, so 
I appended it at the beginning, which definitely was a mistake.
 
 
 
regards,
 
Bilal
 
 
 
 
 



> Date: Fri, 28 May 2010 18:39:39 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: FW: [squid-users] Youtube -An error occured, please try again 
> later
>
> Maurizio Marini wrote:
>> On Fri, 28 May 2010 06:15:32 +
>> "GIGO ." wrote:
>>
>>> My store.logs are following
>>
>> A. Because people read from top to bottom.
>> Q. Why should I not top post?
>>
>
>
> Ah, fun...
>
> you know.
> sdrawkcab daer lla tnac ew
> so please dont
> posting above the reference
> what is top posting?
>
>
> and my favourite: (can be read by both top and bottom posters. :)
>
> top posting.
> why do people still do it?
> how can people still do it?
> such a worrysome activity
> reading upwards 

FW: [squid-users] Youtube -An error occured, please try again later

2010-05-27 Thread GIGO .

My store.log entries follow:
 
1275025642.358 SWAPOUT 00 8152 04FD0DB17EE9789F06B1386F1D6CDA4D  200 
1275025483 1234793502 1275047700 video/x-flv 5142132/5142132 GET 
http://r8.ts-bru5.c.youtube.com/videoplayback?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor%2Coc%3AU0dWSlBPVl9FSkNNNl9ISVpB&fexp=907111&algorithm=throttle-factor&itag=34&ipbits=0&burst=40&sver=3&expire=1275048000&key=yt1&signature=538993A5EE74B6B699669E1D6A89F101C061148B.D937688FE5C5DD2447558E4AB677F51AF69E8A4E&factor=1.25&id=8190a1a6ed3647ed&redirect_counter=1&st=ts
1275025600.989 RELEASE -1  85FE590AE3CDAB37631292367AE052AA  200 
1275025644-1  41629446 text/xml 66/66 GET 
http://www.youtube.com/set_awesome?feature=related&video_id=gZChpu02R-0&el=detailpage&l=125.84&w=0.8026064844246662&plid=AASHoQDhJ0Nv8M8p&t=vjVQa1PpcFO19wc78YvxNbP1S8x1t9MmvNUKqqI8EHk=
1275025495.423 RELEASE -1  E52CA19FA8D0AFC4DD582D9D0B53745B  204 
1275025538-1  41629446 text/html 0/0 GET 
http://www.youtube.com/player_204?rt=63.047&shost=v12.lscache8.c.youtube.com&v=m336FlPPbEw&plid=AASHoQCBv-E--QW6&fv=WIN%2010,0,45,2&fmt=5&el=detailpage&scoville=1&ec=100&fexp=907111&event=streamingerror
1275025495.109 RELEASE -1  6B918E2BFCBE3D4B485CF5E1CE53DE7D  504
-1-1-1 text/html 4230/4230 GET 
http://v12.lscache8.c.youtube.com/videoplayback?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor%2Coc%3AU0dWSlBPVl9FSkNNNl9ISVpB&fexp=907111&algorithm=throttle-factor&itag=5&ipbits=0&burst=40&sver=3&expire=1275048000&key=yt1&signature=7AF53A87CCB5E0C654C6BE521682B95A981A3A1F.D5A310DFCDF9C4061F378070ACEBDAAE0FA71050&factor=1.25&id=9b7dfa1653cf6c4c&;
1275025494.782 RELEASE -1  36400B0F0D0E460A97CBBDA20D9D13FF  504
-1-1-1 text/html 4230/4230 GET 
http://v12.lscache8.c.youtube.com/generate_204?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor%2Coc%3AU0dWSlBPVl9FSkNNNl9ISVpB&fexp=907111&algorithm=throttle-factor&itag=5&ipbits=0&burst=40&sver=3&expire=1275048000&key=yt1&signature=7AF53A87CCB5E0C654C6BE521682B95A981A3A1F.D5A310DFCDF9C4061F378070ACEBDAAE0FA71050&factor=1.25&id=9b7dfa1653cf6c4c
1275025447.415 RELEASE -1  1C82FB35508E2A7C1628DE606EB7B4AB  204 
1275025490-1  41629446 text/html 0/0 GET 
http://www.youtube.com/player_204?rt=15.015&shost=v12.lscache8.c.youtube.com&v=m336FlPPbEw&plid=AASHoQCBv-E--QW6&fv=WIN%2010,0,45,2&fmt=5&el=detailpage&scoville=1&ec=102&fexp=907111&event=streamingerror


> From: gi...@msn.com
> To: squid-users@squid-cache.org
> Date: Fri, 28 May 2010 05:33:08 +
> Subject: [squid-users] Youtube -An error occured, please try again later
>
>
> Hi all,
>
> For some of my youtube videos i am getting the following error.
>
>
> "An error occured, please try again later".
>
>
> I have confirmed that this only occur when squid is being used. find below 
> the relevant information in this regard.
>
> cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
> cache_mem 1000 MB
> range_offset_limit -1 KB
> maximum_object_size 4194304 KB
> maximum_object_size_in_memory 1024 KB
> minimum_object_size 10 KB
> quick_abort_min -1 KB
>
> #specific for youtube custom refreshpatterns belowones
> refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?) 5259487 
> % 5259487 override-expire ignore-reload
> refresh_pattern ^http://*.youtube.com/.* 720 100% 4320
> refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire 
> ignore-private
> refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|mpg|swf|x-flv)$ 43200 90% 
> 432000 override-expire ignore-no-cache ignore-private
>
> acl store_rewrite_list urlpath_regex 
> \/(get_video\?|videodownload\?|videoplayback.*id)
> acl video urlpath_regex 
> \.((mpeg|ra?m|avi|mp(g|e|4)|mov|divx|asf|qt|wmv|m\dv|rv|vob|asx|ogm|flv|3gp)(\?.*)?)$
>  (get_video\?|videoplayback\?|videodownload\?|\.flv(\?.*)?)
> storeurl_rewrite_children 1
> storeurl_rewrite_concurrency 10
>
> The storeurl.pl script i am using is by:
> # by chudy_fernan...@yahoo.com
> # Updates at 
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion
> I also have applied the bug fix (src/client_side.c)
>
>
>
> Now what is causing this error to occur? And how to resolve it
>
>
>
>
>
> thanking you
>
> &
> regards,
>
> Bilal
>
>   

[squid-users] Youtube -An error occured, please try again later

2010-05-27 Thread GIGO .

Hi all,
 
For some of my YouTube videos I am getting the following error.

 
"An error occured, please try again later". 
 

I have confirmed that this only occurs when Squid is being used. Find below the 
relevant information in this regard.
 
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
cache_mem 1000 MB
range_offset_limit -1 KB
maximum_object_size 4194304 KB
maximum_object_size_in_memory 1024 KB
minimum_object_size 10 KB
quick_abort_min -1 KB
 
#specific for youtube custom refreshpatterns belowones
refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?) 5259487 
% 5259487 override-expire ignore-reload
refresh_pattern ^http://*.youtube.com/.* 720 100% 4320
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire 
ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|mpg|swf|x-flv)$ 43200 90% 432000 
override-expire ignore-no-cache ignore-private
 
acl store_rewrite_list urlpath_regex 
\/(get_video\?|videodownload\?|videoplayback.*id)
acl video urlpath_regex 
\.((mpeg|ra?m|avi|mp(g|e|4)|mov|divx|asf|qt|wmv|m\dv|rv|vob|asx|ogm|flv|3gp)(\?.*)?)$
 (get_video\?|videoplayback\?|videodownload\?|\.flv(\?.*)?)
storeurl_rewrite_children 1
storeurl_rewrite_concurrency 10
 
The storeurl.pl script i am using is by: 
# by chudy_fernan...@yahoo.com
# Updates at 
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion
I also have applied the bug fix (src/client_side.c)
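For what it is worth, a quick way to check whether the rewritten video objects 
are actually being served from cache is to watch the caching instance's own 
access log rather than the user-facing one (a sketch; log path as in this setup):

grep -E 'get_video\?|videoplayback\?' /var/logs/inst2access.log | awk '{print $4}' | sort | uniq -c
# TCP_HIT/200 or TCP_MEM_HIT/200 counts here mean the store-URL rewriting is paying off;
# only TCP_MISS entries mean nothing is being reused.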
 
 

What is causing this error to occur, and how can it be resolved?
 
 
 
 
 
 thanking you
 
& 
regards,
 
Bilal 
  

RE: [squid-users] Running Multiple instances and reporting confusion.

2010-05-27 Thread GIGO .

Hi Amos,
 
Related to my earlier query about handling reports with multiple instances: the 
problem is that inst1access.log tracks client activity correctly but gives 
incorrect information about objects returned from the cache, since the caching 
is actually done by Instance 2. The SARG reports (which parse inst1access.log) 
therefore misrepresent what was served from the cache.
 
I have just thought of an idea: pointing both instances at the same cache might 
solve the problem if instance 1 has the no-store option set. Please read below 
and guide me; I would be thankful.
 
 
# INSTANCE-2 Cache directory setup of the instance that is doing the 
caching/fetching part
---
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
coredump_dir /cachedisk1/var/spool/squid
cache_mem 1000 MB
range_offset_limit -1 KB
maximum_object_size 4194304 KB
maximum_object_size_in_memory 1024 KB
quick_abort_min -1 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
 
 
# INSTANCE-1 Proposed cache directory setup of the instance that is user facing
-
cache_peer 127.0.0.1  parent 1975 0 default no-digest no-query proxy-only
prefer_direct off  
# point to the directory of instance 1?
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256 no-store
cache_dir aufs /var/spool/squid 1 16 256
coredump_dir /var/spool/squid
cache_replacement_policy heap GDSF 
 
 
1. Is it possible for one instance to point to the cache directory of the other 
instance in read-only mode?
 
 
 
2. My original intention for multiple instances was cache directory failover. 
If the setup mentioned above is possible, would it remain fault-tolerant, or 
would a failure of /cachedisk1 now terminate both instances, so that it is no 
longer fault-tolerant?
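For what it is worth, two running Squid processes cannot safely open the same 
cache_dir: each instance keeps its own swap.state index for the directory, so 
sharing /cachedisk1 between instance 1 and instance 2 would corrupt the store. 
A sketch of the supported shape (sizes as in the mail):

# instance 2 (caching) keeps its own directory:
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
# instance 1 (user-facing) keeps a separate, smaller directory, or runs cacheless:
cache_dir aufs /var/spool/squid 1 16 256
# cache deny all        # alternative: disable caching on the front entirely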
 
 
 
 
 
regards,
 
Bilal
 
 


> Date: Sat, 22 May 2010 02:18:51 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Running Multiple instances and reporting confusion.
> 
> GIGO . wrote:
>> Hi all,
>> 
>> I am running multiple instances of squid on the same machine. One
>> instance is taking the clients request and forwarding to its parent
>> peer at 127.0.0.1. All is going well. However there is a confusion
>> related to reporting through sarg. To capture the client activity
>> sarge is parsing the access.log file of the instance i.e user facing
>> which is correct. However obvioulsy it is depicting a wrong in-cache
>> out-cache figures as this value should be instead of the instance
>> which is managing/doing caching.
>> 
>> Is there a way/trick to manage this? Is it possible that a cache_hit
>> from a parent cache be recorded as in-cache in the child?
>> 
> 
> The parent cache with the hier_code ACL type may be able to log only the 
> requests that did not get sent to the child.
> 
> The child cache using follow_x_forwarded_for trusting the parent proxy 
> and log_uses_indirect_client should be able to log the remote client IP 
> which connected to the parent with its received requests.
> 
> Combining the parent and child proxies logs line-wise for analysis 
> should then give you the result you want.
> 
> That combination is a bit tricky though, since we have only just added 
> TCP reliable logging to Squid-3.2. UDP logging is available for 2.7 and 
> 3.1, but may result in some lost records under high load. With either of 
> those methods you just need a daemon to receive the log traffic and 
> store it in the one file.
> 
> Amos
> -- 
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.3 
>   
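A minimal sketch of the follow_x_forwarded_for arrangement Amos describes, 
assuming the user-facing instance forwards to the caching instance on 127.0.0.1 
and that both were built with that feature enabled:

# on the user-facing instance (clients connect here):
forwarded_for on

# on the caching instance (parent at 127.0.0.1):
acl frontend src 127.0.0.1/32
follow_x_forwarded_for allow frontend
follow_x_forwarded_for deny all
log_uses_indirect_client on

With this in place the caching instance's access.log records the real client 
address, so SARG can be pointed at that log for the hit/miss accounting.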

[squid-users] Squid 2.7 working with reference to storeurl/caching?

2010-05-23 Thread GIGO .

Hi all,

Please read my squid.conf file and guide me on the ordering of the directives 
and on any other issues, as I am unable to cache a single thing. Does the order 
of definition of the following matter?
 
1. storeurl program
2. refresh patterns
3. storeurl rewrite lists...
 
I assume :
 
1. Whenever a user opens a page in his user agent, Squid first of all checks 
the refresh patterns to decide whether to search in the cache or go to the web. 
Am I right?

2. If the request matches the storeurl rewrite lists, it is forwarded to the 
storeurl program, which then checks whether the object is available in the 
cache; if so it is returned, otherwise the object is fetched from the web and 
stored under the store URL for future reference. Please guide me, I am totally 
unclear on this.
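For what it is worth, the helper does not look anything up in the cache itself; 
it only returns a normalized URL which Squid then uses as the cache key for 
both lookup and storage, and the refresh_pattern rules are consulted afterwards 
to decide whether a stored object is still fresh. A hedged illustration with 
the wiki storeurl.pl (the URL is made up):

requested URL: http://v12.lscache8.c.youtube.com/videoplayback?ip=1.2.3.4&itag=34&id=abc123def456
store key:     http://video-srv.youtube.com.SQUIDINTERNAL/itag=34&id=abc123def456

Two requests for the same video from different YouTube cache servers therefore 
map onto one stored object.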


3. With the following squid.conf not a single object is being cached, and I am 
not sure what is happening.
 

# This is the configuration file for instance 2 which is doing all the caching. 
squid v 2.7 stable 9 is chosen for its store_url feature.

visible_hostname squidlhr1
unique_hostname squidlhr1cache
cache_effective_user proxy

# Directives to enhance security.
allow_underscore off
httpd_suppress_version_string on
forwarded_for off
log_mime_hdrs on

pid_filename /var/run/inst2squid.pid
access_log /var/logs/inst2access.log squid
cache_log /var/logs/inst2cache.log
cache_store_log /var/logs/inst2store.log
http_port 1975
icp_port 0
# This option must be supported through giving at compilation
snmp_port 7172
#Explicit definition of all is must in squid 2.7 version
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# If peering with ISA then following two directives will be required. Otherwise 
not
#cache_peer 10.1.82.205 parent 8080 0 default no-digest no-query no-delay
#never_direct allow all
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
coredump_dir /cachedisk1/var/spool/squid
cache_swap_low 75
#should be 1/4 of the physical memory installed in the system
cache_mem 1000 MB
range_offset_limit -1 KB
maximum_object_size 4194304 KB
minimum_object_size 10 KB
quick_abort_min -1 KB
cache_replacement_policy heap LFUDA

# This portion is not understood yet well
# Let the clients favorite video site through with full caching
# - they can come from any of a number of youtube.com subdomains.
# - this is NOT ideal, the 'merging' of identical content is really needed here
acl youtube dstdomain .youtube.com
cache allow youtube

#-Refresh Pattern Portion--
# Custom Refresh patterns will come first
# Updates windows/debian etc..
refresh_pattern windowsupdate.com/.*.(cab|exe)(\?|$) 518400 100% 518400 
reload-into-ims
refresh_pattern update.microsoft.com/.*.(cab|exe)(\?|$) 518400 100% 518400 
reload-into-ims
refresh_pattern download.microsoft.com/.*.(cab|exe)(\?|$) 518400 100% 518400 
reload-into-ims
refresh_pattern download.windowsupdate.com/.*\.(cab|exe|dll|msi) 1440 100% 
43200 reload-into-ims
refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
refresh_pattern .deb$ 518400 100% 518400 override-expire
#specific for youtube custom refreshpatterns belowones
refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?) 5259487 
% 5259487 override-expire ignore-reload
# Break HTTP standard for flash videos. Keep them in cache even if asked not to.
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire 
ignore-private
# Other long-lived items
refresh_pattern -i .(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|flv)(\?|$) 161280 3000% 
525948 override-expire reload-into-ims

#Trial/Test
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|mpg|swf|flv|x-flv)$ 43200 90% 
432000 override-expire ignore-no-cache ignore-private
refresh_pattern -i \.(deb|rpm|exe|ram|bin|pdf|ppt|doc|tiff)$ 10080 90% 43200 
override-expire ignore-no-cache ignore-private
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire 
ignore-no-cache ignore-private
refresh_pattern -i \.(zip|gz|arj|lha|lzh|tar|tgz|cab|rar)$ 10080 95% 43200 
override-expire ignore-no-cache ignore-private
refresh_pattern -i \.(php|asp|aspx|cgi|html|htm|css|js) 1440 40% 40320
refresh_pattern ^http://*.gmail.*/.* 720 100% 4320
refresh_pattern ^http://*.twitter.*/.* 720 100% 4320
refresh_pattern ^http://*.yimg.*/.* 720 100% 4320
refresh_pattern ^http://*.ymail.*/.* 720 100% 4320
refresh_pattern ^http://*.hotmail.*/.* 720 100% 4320
refresh_pattern ^http://*.live.*/.* 720 100% 4320
refresh_pattern ^http://*.wikipedia.*/.* 720 100% 4320
refresh_pattern ^http://wiki.*.*/.* 720 100% 4320
refresh_pattern ^http://*.profile/.* 720 100% 4320
refresh_pattern ^http://*.yahoo.*/.* 720 100% 4320
refresh_pattern ^http://*.microsoft.*/.* 720 100% 4320
refresh_

[squid-users] Startup/shutdown script which was working perfectly alright for squid 3.0stable25 is not working for squid 2.7 stable9.0

2010-05-22 Thread GIGO .

Hi all,
 
I am able to run Squid manually, however whenever I try to run it through the 
startup/shutdown script it fails. The same script works for Squid 3.0 STABLE25, 
but I cannot figure out why it is failing on Squid 2.7 STABLE9. Neither of the 
instances starts with system startup.
 
 
Please guide me, I would be thankful. My startup script and the tail of 
cache.log for both instances are below.
 
 
#!/bin/sh
#
#my script
case "$1" in
start)
/usr/sbin/squid -D -s -f /etc/squid/squidcache.conf
/usr/sbin/squid -D -s -f /etc/squid/squid.conf
#The below line is to automatically start apache  with system startup
/usr/sbin/httpd -k start
#KRB5_KTNAME=/etc/squid/HTTP.keytab
#export KRB5_KTNAME
#KRB5RCACHETYPE=none
#export KRB5RCACHETYPE
;;
stop)
/usr/sbin/squid -k shutdown -f /etc/squid/squidcache.conf
echo "Shutting down squid secondary process"
/usr/sbin/squid -k shutdown -f /etc/squid/squid.conf
echo "Shutting down squid main process"
# The below line is to automatically stop apache at system shutdown
/usr/sbin/httpd -k stop
;;
esac
 
tail> instance 2 cache file:
 
2010/05/22 06:05:18| Beginning Validation Procedure
2010/05/22 06:05:18|   Completed Validation Procedure
2010/05/22 06:05:18|   Validated 0 Entries
2010/05/22 06:05:18|   store_swap_size = 0k
2010/05/22 06:05:18| storeLateRelease: released 0 objects
2010/05/22 06:09:28| Preparing for shutdown after 62 requests
2010/05/22 06:09:28| Waiting 30 seconds for active connections to finish
2010/05/22 06:09:28| FD 16 Closing HTTP connection
2010/05/22 06:09:28| WARNING: store_rewriter #1 (FD 7) exited
2010/05/22 06:09:28| Too few store_rewriter processes are running
2010/05/22 06:09:28| Starting new helpers
2010/05/22 06:09:28| helperOpenServers: Starting 1 'storeurl.pl' processes

 
tail> instance 1 cache file:
 
2010/05/22 06:05:25| 0 Objects expired.
2010/05/22 06:05:25| 0 Objects cancelled.
2010/05/22 06:05:25| 0 Duplicate URLs purged.
2010/05/22 06:05:25| 0 Swapfile clashes avoided.
2010/05/22 06:05:25|   Took 0.3 seconds (   0.0 objects/sec).
2010/05/22 06:05:25| Beginning Validation Procedure
2010/05/22 06:05:25|   Completed Validation Procedure
2010/05/22 06:05:25|   Validated 0 Entries
2010/05/22 06:05:25|   store_swap_size = 0k
2010/05/22 06:05:25| storeLateRelease: released 0 objects
2010/05/22 06:09:28| Preparing for shutdown after 63 requests
2010/05/22 06:09:28| Waiting 30 seconds for active connections to finish
2010/05/22 06:09:28| FD 15 Closing HTTP connection

 
 
regards,
 
 
Bilal 
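A quick way to narrow down why the init script fails while manual starts work 
(a sketch; paths as in the script above):

/usr/sbin/squid -k parse -f /etc/squid/squidcache.conf   # syntax-check each config
/usr/sbin/squid -k parse -f /etc/squid/squid.conf
/usr/sbin/squid -N -d1 -f /etc/squid/squidcache.conf     # run one instance in the foreground, debug to stderr

Anything the boot environment does differently (PATH, SELinux context, missing 
/var/run permissions) usually shows up immediately this way.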

[squid-users] Runcache script- Lot of confusion

2010-05-22 Thread GIGO .

Hi all,
 
Please guide me about the RunCache script behaviour. Up till now I have only 
understood that this script checks and automatically restarts Squid in case of 
failure. I also assume that this script should be registered with init.d for 
startup/shutdown. Where is this script located (2.7 version)? Is it already 
built along with the Squid code? Is it deprecated now? Must Squid be run 
through the RunCache script?
 
 
 
thanking you
&
Regards
 
Bilal
 
  My Startup/Shutdown Script for reference:

#!/bin/sh
#
#my script
case "$1" in
start)
/usr/sbin/squid -D -s -f /etc/squid/squidcache.conf
/usr/sbin/squid -D -s -f /etc/squid/squid.conf
#The below line is to automatically start apache  with system startup
/usr/sbin/httpd -k start
#KRB5_KTNAME=/etc/squid/HTTP.keytab
#export KRB5_KTNAME
#KRB5RCACHETYPE=none
#export KRB5RCACHETYPE
;;
stop)
/usr/sbin/squid -k shutdown -f /etc/squid/squidcache.conf
echo "Shutting down squid secondary process"
/usr/sbin/squid -k shutdown -f /etc/squid/squid.conf
echo "Shutting down squid main process"
# The below line is to automatically stop apache at system shutdown
/usr/sbin/httpd -k stop
;;
esac
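 
RunCache aside, a hand-written script like the one above can be registered with 
init.d on RHEL/CentOS once it carries a chkconfig header; a sketch (runlevels, 
priorities and the script name are illustrative):

#!/bin/sh
# chkconfig: 2345 90 10
# description: start/stop both squid instances
...

# then, assuming it is saved as /etc/init.d/squid-multi:
chmod +x /etc/init.d/squid-multi
chkconfig --add squid-multi
chkconfig squid-multi on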
  

RE: [squid-users] Memory Considerations when you are running multiple instances of squid on the same server.

2010-05-21 Thread GIGO .

Thank you for explaining well
 
regards,

Bilal


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Fri, 21 May 2010 09:53:06 +0200
> Subject: Re: [squid-users] Memory Considerations when you are running 
> multiple instances of squid on the same server.
>
> fre 2010-05-21 klockan 06:38 + skrev GIGO .:
>
>> can it be said as a generalization that one can allocate/fix 1/4 of
>> physical ram for cache mem objects. Will it holds true even when you
>> are running multiple instances???
>
> I would not generalize a rule like that. It is a reasonable
> recommendation when sizing the system, but also depends on how your
> Squid is being used. A reverse proxy benefits much more from cache_mem
> than a normal forward proxy, and in a forward proxy you may want to give
> priority to on-disk cache instead.
>
> memory usage per Squid = cache size (in GB) * 10 MB + cache_mem + 10MB.
>
> memory usage by OS: Leave at least 25%. In smaller configurations up to
> 50%.
>
> system memory requirement = sum(squid instances) + system memory =
> sum(squid instances) / 0.75.
>
>
> If you inverse the above calculation then you'll notice that cache size
> is a function of cache_mem. If one is increased then the other need to
> be decreased.
>
> Note: if you also log in on the sever using graphical desktop (not
> recommended) then reserve about 1GB for that.
>
>> please guide that how memory handling will be occuring in multiple
>> instances setup???cache_mem will influencing per instance and not the
>> program as whole. right?
>
> Right.
>
> Regards
> Henrik
> 
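A worked example under Henrik's formula, using the figures from this setup 
(50 GB cache_dir, cache_mem 1000 MB) and assuming one caching instance plus one 
near-cacheless front instance:

caching instance ~ 50 * 10 MB + 1000 MB + 10 MB ~ 1.5 GB
front instance   ~ 1 * 10 MB + its cache_mem + 10 MB, a few hundred MB at most
sum / 0.75       ~ roughly 2.5 GB of system RAM as a floor

which sits comfortably inside the 8 GB installed, leaving room to grow 
cache_mem or the disk cache.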

[squid-users] Memory Considerations when you are running multiple instances of squid on the same server.

2010-05-20 Thread GIGO .

Hi All,
 
 
Can it be said as a generalization that one can allocate 1/4 of physical RAM 
for cache_mem objects? Does that hold true even when you are running multiple 
instances?
 
 
Please guide me on how memory handling occurs in a multiple-instance setup: 
cache_mem influences each instance separately and not the program as a whole, 
right?
 
 
 
My setup:
I am running multiple instances. I have 8 GB of physical memory installed. The 
OS is installed on RAID1, which holds a 10 GB cache for instance 1; this only 
comes into play if my cache disk fails. For the actual caching there is a 71 GB 
15K SAS disk, of which 50 GB is defined as the cache directory controlled by 
the second instance.
 
 
 
 
 
Thanking you
 
regards,
 
Bilal
 
 
  

RE: [squid-users] squid_kerb_auth & Squid_kerb_ldap (Squid 2.7)

2010-05-20 Thread GIGO .

Thank you!
 
regards,
 
Bilal


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Thu, 20 May 2010 11:24:15 +0200
> Subject: Re: [squid-users] squid_kerb_auth & Squid_kerb_ldap (Squid 2.7)
>
> tor 2010-05-20 klockan 07:52 + skrev GIGO .:
>
>> Does squid_kerb_auth & squid_kerb_ldap work fine in squid 2.7 like squid 3.x.
>
> Yes.
>>
>> ./configure *...*--enable-basic-auth-helpers="LDAP" 
>> --enable-auth="basic,negotiate,ntlm" 
>> --enable-external-acl-helpers="wbinfo_group,ldap_group" 
>> --enable-negotiate-auth-helpers="squid_kerb_auth"
>
> Looks reasonable to me.
>
>> One more question is that i not mentioned squid_kerb_ldap here is it being 
>> covered through --enable-external-acl-helpers=ldap_group ???
>
> squid_kerb_ldap is not (yet) included in the Squid distribution and need
> to be compiled separately.
>
> Regards
> Henrik
> 
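Once squid_kerb_ldap has been downloaded and built separately (typically a 
plain ./configure && make && make install), wiring it in is done through an 
external ACL rather than a configure option. A sketch, with the install path, 
group name and helper flags purely illustrative (check the helper's own README 
for its exact arguments):

external_acl_type kerb_ldap ttl=3600 %LOGIN /usr/local/bin/squid_kerb_ldap -g internet-users
acl AllowedGroup external kerb_ldap
http_access allow AllowedGroup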

[squid-users] squid_kerb_auth & Squid_kerb_ldap (Squid 2.7)

2010-05-20 Thread GIGO .

Hi all,
 
Does squid_kerb_auth & squid_kerb_ldap work fine in squid 2.7 like squid 3.x.
 
 
Are these the correct options?
 
./configure *...*--enable-basic-auth-helpers="LDAP" 
--enable-auth="basic,negotiate,ntlm" 
--enable-external-acl-helpers="wbinfo_group,ldap_group" 
--enable-negotiate-auth-helpers="squid_kerb_auth"

 
One more question: I did not mention squid_kerb_ldap here; is it covered 
through --enable-external-acl-helpers=ldap_group?
 
 
 
regards,
 
Bilal 

[squid-users] Squid 3.1.3 & squid 2.7 running together on the same server.

2010-05-19 Thread GIGO .

Hi All,
 
I was successfully running multiple instances of Squid 3.0 STABLE25 on the same 
server. However I now intend to run Squid 2.7 and 3.1.3 on the same server, the 
reason being 2.7's enhanced support for dynamic content caching. (Earlier, the 
main intention of using multiple instances was to give fault tolerance against 
cache failure.)
 
 
My question is whether this is possible, and whether there are any special 
changes I would require.
 
 
 
Below is a copy of the squid.conf for instance 2, which I will be using for 
caching; please peruse it specifically in the context of YouTube/Facebook 
caching. If you notice any other drawback or discrepancy, please guide me about 
that as well; I would be really thankful.
 
(I have also altered client_side.c as per the guide available on the Squid 
Cache web site.)
-
visible_hostname squidl...@virtual.local
unique_hostname squidlhr1cache
pid_filename /var/run/inst2squid.pid
http_port 1975
icp_port 0
snmp_port 7172
access_log /var/logs/inst2access.log squid
cache_log /var/logs/inst2cache.log
cache_store_log /var/logs/inst2store.log
cache_effective_user proxy 
cache_mgr squidadm...@virtual.local
# If peering with ISA then following options will be required. Otherwise not
#cache_peer 10.1.82.205 parent 8080 0 default no-digest no-query no-delay 
#never_direct allow all 
 
# Hard disk size 71gb SAS 15k dedicated for caching. Operating system is on 
RAID1.
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
coredump_dir /cachedisk1/var/spool/squid
cache_swap_low 75
#should be 1/4 of the physical memory installed in the system
cache_mem 1000 MB
 
 
range_offset_limit -1 KB
maximum_object_size 4 GB
minimum_object_size 10 KB
quick_abort_min -1 KB
 
# not yet sure that what options during compilation should be provided and if i 
have defined this directive correctly
cache_replacement_policy heap
 
 
 
#-Refresh Pattern Portion--

# Custom Refresh patterns will come first
#specific for youtube custom refreshpatterns belowones
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487 % 
5259487 override-expire ignore-reload
 
# Break HTTP standard for flash videos. Keep them in cache even if asked not to.

refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire 
ignore-private
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

# This portion is not understood yet well what does it mean?
# Let the clients favorite video site through with full caching
# - they can come from any of a number of youtube.com subdomains.
# - this is NOT ideal, the 'merging' of identical content is really needed here
acl youtube dstdomain .youtube.com
cache allow youtube

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

acl store_rewrite_list urlpath_regex 
\/(get_video\?|videodownload\?|videoplayback.*id)
# storeurl rewrite helper program
storeurl_rewrite_program /usr/local/etc/squid/storeurl.pl
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_children 1
storeurl_rewrite_concurrency 10
#Allow access from localhost only
http_access allow localhost
http_access deny all
-
 
This is the script I am looking to use, as per the configuration guide.
--
#!/usr/bin/perl
# adjust the interpreter path to your perl location; the original note said /bin/perl
$|=1;
while (<>) {
@X = split;
$x = $X[0] . " ";   # helper channel ID plus separator (storeurl_rewrite_concurrency is enabled)
$_ = $X[1];         # the requested URL
if (m/^http:\/\/([0-9.]{4}|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com).*?\&(itag=[0-9]*).*?\&(id=[a-zA-Z0-9]*)/) {
# collapse the many per-server YouTube URLs into one canonical store URL
print $x . "http://video-srv.youtube.com.SQUIDINTERNAL/" . $2 . "&" . $3 . "\n";
} else {
print $x . $_ . "\n";
}
}
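The helper can be exercised from the command line before wiring it into Squid. 
With storeurl_rewrite_concurrency enabled, each input line is "channel-ID URL", 
so a quick test looks like this (the URL is made up):

echo '0 http://v24.lscache3.c.youtube.com/videoplayback?ip=1.2.3.4&itag=34&id=0123456789abcdef' | perl storeurl.pl
# expected output: 0 http://video-srv.youtube.com.SQUIDINTERNAL/itag=34&id=0123456789abcdef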

 
 
 
 
Just for completeness' sake, here is a copy of my user-facing squid.conf as 
well. If somebody could give suggestions on it too, I would definitely be 
really thankful.
 
 
# This is the configuration file for instance 1, which serves the user requests 
# by forwarding them to the local parent peer. All the logic for 
# authentication/access control is built here. Name this file squidinst1.conf
 
#---Adminsitrative Section-
visible_hostname squidLhr1
unique_hostname squidlhr1main
pid_filename /var/run/inst1squid.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/inst1access.log squid
cache_log /var/logs/inst1cache.log
cache_store_log /var/logs/inst1store.log
cache_effective_user proxy 
cache_mgr squid

RE: [squid-users] SELINUX issue(confined>unconfined)

2010-05-19 Thread GIGO .

Hi,
 
I use CentOS 5.3 and currently have no knowledge of SELinux; yesterday was the 
first time I studied it. As you may have guessed, I am a newbie in the Linux 
field. Yes, I have been assigned the project of migrating from ISA to Squid 
(management, having confidence in my capability to learn and understand things, 
assigned it to me).
 
I assume it would take quite some time to be able to build the policy myself, 
and I am short of time, so I am thinking of postponing it until some future 
time and concentrating on the other issues and stabilization necessary for the 
required basic functionality. Once the project is piloted and management shows 
confidence in me, I can take on more challenging tasks like this.
 
But if you think it is really necessary, then I will definitely look to 
complete this task before piloting. Any tips or guidance will be warmly 
welcomed.
 
 
Thanking you
 
&
 
regards,
 
Bilal 
 
 
 



> Date: Wed, 19 May 2010 11:33:40 +0200
> From: tiery.de...@gmail.com
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>
> Hi,
>
> In permissive mode, you only get log, but selinux will not be active
> (it will not forbid unauthorized access). Usually you put selinux in
> permissive mode only in order to get all access denied log in
> audit.log in order to build policy module or adjust filecontexts.
>
> I suggest you to spend some time on selinux, it can realy increase the
> security of your proxy server.
>
> But you will need to build a policy module for squid_kerb_auth witch
> is not currently supported by selinux policy on redhat-like systems.
>
> What distrib do you use ?
>
>
> Tiery
>
>
> On Wed, May 19, 2010 at 6:17 AM, GIGO . wrote:
>>
>> Thank you i will give it a try. However i am also thinking of running 
>> SELinux in permissive mode for my proxy server. what do you say about it?
>>
>>
>> regards,
>>
>> Bilal
>>
>> 
>>> Date: Tue, 18 May 2010 15:00:05 +0200
>>> From: tiery.de...@gmail.com
>>> To: gi...@msn.com
>>> CC: squid-users@squid-cache.org
>>> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>>>
>>> okay,
>>>
>>> I have also worked on a similar project (squid/kerberos/selinux).
>>> I installed squid in /usr/local/squid but I had to modify
>>> /etc/selinux/targeted/contexts/files/file_contexts and adapt it to my
>>> squid directory.
>>>
>>> /usr/local/squid/etc(/.*)? system_u:object_r:squid_conf_t:s0
>>> /usr/local/squid/var/logs(/.*)? system_u:object_r:squid_log_t:s0
>>> /usr/local/squid/share(/.*)? system_u:object_r:squid_conf_t:s0
>>> /usr/local/squid/var/cache(/.*)? system_u:object_r:squid_cache_t:s0
>>> /usr/local/squid/sbin/squid -- system_u:object_r:squid_exec_t:s0
>>> /usr/local/squid/var/logs/squid\.pid -- system_u:object_r:squid_var_run_t:s0
>>> /usr/local/squid/libexec(/.*)? system_u:object_r:lib_t:s0
>>> /usr/local/squid -d system_u:object_r:bin_t:s0
>>> /usr/local/squid/var -d system_u:object_r:var_t:s0
>>>
>>> Then restore context (with restorecon or .autorelabel and reboot).
>>>
>>> But i am not sure modifing this file is the best way.
>>> It you update your selinux policy, changement will not be persistent.
>>>
>>> I think it is better to build a selinux module for our squid.
>>>
>>> Tiery
>>>
>>>
>>>
>>> On Tue, May 18, 2010 at 2:34 PM, GIGO . wrote:
>>>>
>>>> Yes i am using a compiled version. I have used this command chcon -t 
>>>> unconfined_exec_t /usr/sbin/squid and its working now. Is this a security 
>>>> issue?
>>>>
>>>> regards,
>>>>
>>>> Bilal
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> 
>>>>> Date: Tue, 18 May 2010 14:26:06 +0200
>>>>> From: tiery.de...@gmail.com
>>>>> To: squid-users@squid-cache.org
>>>>> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>>>>>
>>>>> Hi,
>>>>>
>>>>> ps -Z => squid_t and getenforce => enforcing
>>>>> squid is started with selinux
>>>>>
>>>>> Redhat/centos platform:
>>>>> If squid is installed with yum, squid will be started with a squid_t
>>>>> selinux context.
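
For the policy-module route Tiery mentions, the usual RHEL/CentOS 5 workflow is 
roughly the following (a sketch; the module name is arbitrary):

# collect the denials while running in permissive mode, then:
grep squid /var/log/audit/audit.log | audit2allow -M squidlocal
semodule -i squidlocal.pp
# switch back to enforcing and re-test
setenforce 1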

[squid-users] Squid Compilation

2010-05-18 Thread GIGO .

Hi All,
 
Your guidance is required regarding compilation.
 
 
 
I had compiled  squid-3.0.STABLE25 with the following options:
 

./configure --prefix=/usr --includedir=/usr/include --datadir=/usr/share 
--bindir=/usr/sbin --libexecdir=/usr/lib/squid --localstatedir=/var 
--sysconfdir=/etc/squid --enable-cache-digests --enable-removal-policies=lru 
--enable-delay-pools --enable-storeio=aufs,ufs --with-large-files 
--disable-ident-lookups --with-default-user=proxy 
--enable-basic-auth-helpers="LDAP" --enable-auth="basic,negotiate,ntlm" 
--enable-external-acl-helpers="wbinfo_group,ldap_group" 
--enable-negotiate-auth-helpers="squid_kerb_auth"

 
I wonder how the squid_kerb_ldap helper I was using for authorization was 
working, as I did not mention it during compilation.
 
 
 
 
The second question: I have decided to upgrade to 3.1.3, where I also want to 
include heap support. How do I do it? Just add the option 
--enable-removal-policies=lru,heap while keeping the other options the same?
 
 
 
 
 
In my squid.conf I had this directive:
 
cache_replacement_policy lru
 
How should it be redefined/changed for optimal performance? (A single 71 GB 15K 
SAS hard disk is being used for caching, of which 50 GB is allocated to the 
cache directory.)
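For what it is worth, the heap policies take a sub-policy argument and need 
heap support compiled in; a sketch of both pieces, keeping the rest of the 
configure line unchanged:

./configure ... --enable-removal-policies=lru,heap

cache_replacement_policy heap LFUDA    # disk cache: keep frequently used objects
memory_replacement_policy heap GDSF    # memory cache: favour small, popular objects

A bare "cache_replacement_policy heap" without LFUDA/GDSF/LRU is not a complete 
setting.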
 
 
 
 
 
 
regards,
 
Bilal
 
 
 
 
 
 
 
 
 
 
  

RE: [squid-users] SELINUX issue

2010-05-18 Thread GIGO .

Mine is a compiled version of Squid; does that matter? Is it true that binaries 
available through a distro run in a confined domain by default, while a 
self-compiled Squid runs in an unconfined domain?
 
So I assume that my Squid runs in an unconfined domain, yet it was still giving 
that error.
 
 
 
Your further guidance will be really valuable.
 
thanking you
 
Bilal


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Tue, 18 May 2010 21:12:52 +0200
> Subject: Re: [squid-users] SELINUX issue
>
> tis 2010-05-18 klockan 06:02 + skrev GIGO .:
>
>> 2010/05/18 10:31:52| storeLateRelease: released 0 objects
>> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
>
> setsebool -P squid_connect_any true
>
> should help there.
>
> Regards
> Henrik
> 
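To confirm the boolean took and persists across reboots (a sketch):

getsebool squid_connect_any          # should report: squid_connect_any --> on
setsebool -P squid_connect_any on    # Henrik's command again; -P makes it persistent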

RE: [squid-users] SELINUX issue(confined>unconfined)

2010-05-18 Thread GIGO .

Thank you, I will give it a try. However I am also thinking of running SELinux 
in permissive mode for my proxy server; what do you say about that?
 
 
regards,
 
Bilal


> Date: Tue, 18 May 2010 15:00:05 +0200
> From: tiery.de...@gmail.com
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>
> okay,
>
> I have also worked on a similar project (squid/kerberos/selinux).
> I installed squid in /usr/local/squid but I had to modify
> /etc/selinux/targeted/contexts/files/file_contexts and adapt it to my
> squid directory.
>
> /usr/local/squid/etc(/.*)? system_u:object_r:squid_conf_t:s0
> /usr/local/squid/var/logs(/.*)? system_u:object_r:squid_log_t:s0
> /usr/local/squid/share(/.*)? system_u:object_r:squid_conf_t:s0
> /usr/local/squid/var/cache(/.*)? system_u:object_r:squid_cache_t:s0
> /usr/local/squid/sbin/squid -- system_u:object_r:squid_exec_t:s0
> /usr/local/squid/var/logs/squid\.pid -- system_u:object_r:squid_var_run_t:s0
> /usr/local/squid/libexec(/.*)? system_u:object_r:lib_t:s0
> /usr/local/squid -d system_u:object_r:bin_t:s0
> /usr/local/squid/var -d system_u:object_r:var_t:s0
>
> Then restore context (with restorecon or .autorelabel and reboot).
>
> But i am not sure modifing this file is the best way.
> It you update your selinux policy, changement will not be persistent.
>
> I think it is better to build a selinux module for our squid.
>
> Tiery
>
>
>
> On Tue, May 18, 2010 at 2:34 PM, GIGO . wrote:
>>
>> Yes i am using a compiled version. I have used this command chcon -t 
>> unconfined_exec_t /usr/sbin/squid and its working now. Is this a security 
>> issue?
>>
>> regards,
>>
>> Bilal
>>
>>
>>
>>
>>
>>
>>
>> 
>>> Date: Tue, 18 May 2010 14:26:06 +0200
>>> From: tiery.de...@gmail.com
>>> To: squid-users@squid-cache.org
>>> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>>>
>>> Hi,
>>>
>>> ps -Z => squid_t and getenforce => enforcing
>>> squid is started with selinux
>>>
>>> Redhat/centos platform:
>>> If squid is installed with yum, squid will be started with a squid_t
>>> selinux context.
>>>
>>> If you compile your squid and installed it, you will have to change
>>> squid files contexts manually.
>>>
>>> As i see you have squid_kerb_plugin, you should have compile you squid
>>> to support kerberos, no?
>>>
>>> ---
>>>
>>> For your problem:
>>>
>>> try to check selinux log:
>>> audit2allow -al
>>> or cat /var/log/audit/audit.log | audit2allow
>>>
>>> You can also try to restore selinux context for all squid files:
>>> restorecon -R /etc/squid
>>> restorecon -R /var/log/squid
>>>
>>> etc...
>>>
>>> or touch /.autorelabel and reboot
>>>
>>>
>>> Tiery
>>>
>>> On Tue, May 18, 2010 at 9:47 AM, GIGO . wrote:
>>>>
>>>> Dear All,
>>>>
>>>> Your guidance is required. Please help.
>>>>
>>>> It looks that squid process run by default as a confined process whether 
>>>> its a compiled version or a version that come with the linux distro. It 
>>>> means that the squid software is SELINUX aware.Am i right?
>>>>
>>>> [r...@squidlhr ~]# ps -eZ | grep squid
>>>> system_u:system_r:squid_t 3173 ? 00:00:00 squid
>>>> system_u:system_r:squid_t 3175 ? 00:00:00 squid
>>>> system_u:system_r:squid_t 3177 ? 00:00:00 squid
>>>> system_u:system_r:squid_t 3179 ? 00:00:00 squid
>>>> system_u:system_r:squid_t 3222 ? 00:00:00 unlinkd
>>>> system_u:system_r:squid_t 3223 ? 00:00:00 unlinkd
>>>>
>>>>
>>>> it was successful before i changed the selinux to enforcing.Now i even 
>>>> cannot start squid process that access the parent at localhost(3128) 
>>>> manually even. The other process starts normally if i do manually.
>>>>
>>>> When running as an unconfined process by the following command the problem 
>>>> had resolved
>>>>
>>>> chcon -t unconfined_exec_t /usr/sbin/squid
>>>>
>>>> However it doesnot feel appropriate to me. Please guide me on this.
>>>>
>>>>
>>>>
>>>> I am starting squid with the following init script if it has 

RE: [squid-users] SELINUX issue(confined>unconfined)

2010-05-18 Thread GIGO .

Yes, I am using a compiled version. I have used the command chcon -t 
unconfined_exec_t /usr/sbin/squid and it is working now. Is this a security 
issue?
 
regards,
 
Bilal
 
 
 
 
 



> Date: Tue, 18 May 2010 14:26:06 +0200
> From: tiery.de...@gmail.com
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>
> Hi,
>
> ps -Z => squid_t and getenforce => enforcing
> squid is started with selinux
>
> Redhat/centos platform:
> If squid is installed with yum, squid will be started with a squid_t
> selinux context.
>
> If you compile your squid and installed it, you will have to change
> squid files contexts manually.
>
> As i see you have squid_kerb_plugin, you should have compile you squid
> to support kerberos, no?
>
> ---
>
> For your problem:
>
> try to check selinux log:
> audit2allow -al
> or cat /var/log/audit/audit.log | audit2allow
>
> You can also try to restore selinux context for all squid files:
> restorecon -R /etc/squid
> restorecon -R /var/log/squid
>
> etc...
>
> or touch /.autorelabel and reboot
>
>
> Tiery
>
> On Tue, May 18, 2010 at 9:47 AM, GIGO . wrote:
>>
>> Dear All,
>>
>> Your guidance is required. Please help.
>>
>> It looks that squid process run by default as a confined process whether its 
>> a compiled version or a version that come with the linux distro. It means 
>> that the squid software is SELINUX aware.Am i right?
>>
>> [r...@squidlhr ~]# ps -eZ | grep squid
>> system_u:system_r:squid_t 3173 ? 00:00:00 squid
>> system_u:system_r:squid_t 3175 ? 00:00:00 squid
>> system_u:system_r:squid_t 3177 ? 00:00:00 squid
>> system_u:system_r:squid_t 3179 ? 00:00:00 squid
>> system_u:system_r:squid_t 3222 ? 00:00:00 unlinkd
>> system_u:system_r:squid_t 3223 ? 00:00:00 unlinkd
>>
>>
>> it was successful before i changed the selinux to enforcing.Now i even 
>> cannot start squid process that access the parent at localhost(3128) 
>> manually even. The other process starts normally if i do manually.
>>
>> When running as an unconfined process by the following command the problem 
>> had resolved
>>
>> chcon -t unconfined_exec_t /usr/sbin/squid
>>
>> However it doesnot feel appropriate to me. Please guide me on this.
>>
>>
>>
>> I am starting squid with the following init script if it has something to do 
>> with the problem:
>>
>> #!/bin/sh
>> #
>> #my script
>> case "$1" in
>> start)
>> /usr/sbin/squid -D -sYC -f /etc/squid/squidcache.conf
>> /usr/sbin/squid -D -sYC -f /etc/squid/squid.conf
>> #The below line is to automatically start apache with system startup
>> /usr/sbin/httpd -k start
>> #KRB5_KTNAME=/etc/squid/HTTP.keytab
>> #export KRB5_KTNAME
>> #KRB5RCACHETYPE=none
>> #export KRB5RCACHETYPE
>> ;;
>> stop)
>>
>> /usr/sbin/squid -k shutdown -f /etc/squid3/squidcache.conf
>> echo "Shutting down squid secondary process"
>> /usr/sbin/squid -k shutdown -f /etc/squid3/squid.conf
>> echo "Shutting down squid main process"
>> # The below line is to automatically stop apache at system shutdown
>> /usr/sbin/httpd -k stop
>> ;;
>> esac
>>
>>
>> Thanking you & regards,
>>
>> Bilal
>>
>>
>> 
>>> From: gi...@msn.com
>>> To: squid-users@squid-cache.org
>>> Date: Tue, 18 May 2010 06:02:35 +
>>> Subject: [squid-users] SELINUX issue
>>>
>>>
>>> Hi all,
>>>
>>> When i change SELINUX from permissive mode to Enforcing mode. My multiple 
>>> instance setup fail to start. Please guide how to overcome this.
>>>
>>> ---Excerpts from cache.log-
>>>
>>> 2010/05/18 10:31:51| TCP connection to 127.0.0.1/3128 failed
>>> 2010/05/18 10:31:51| Store rebuilding is 7.91% complete
>>> 2010/05/18 10:31:52| Done reading /var/spool/squid swaplog (51794 entries)
>>> 2010/05/18 10:31:52| Finished rebuilding storage from disk.
>>> 2010/05/18 10:31:52| 51794 Entries scanned
>>> 2010/05/18 10:31:52| 0 Invalid entries.
>>> 2010/05/18 10:31:52| 0 With invalid flags.
>>> 2010/05/18 10:31:52| 51794 Objects loaded.
>>> 2010/05/18 10:31:52| 0 Objects expired.
>>> 2010/05/18 10:31:52| 0 Objects cancelled.
>>> 2010/05/18 10:31:52| 0 Duplicate URLs purged.
>>> 2010/05/18 10:31:52| 0 Swapfile clashe

[squid-users] Running Multiple instances and reporting confusion.

2010-05-18 Thread GIGO .

Hi all,

I am running multiple instances of Squid on the same machine. One instance 
takes the client requests and forwards them to its parent peer at 127.0.0.1. 
All is going well, however there is some confusion related to reporting through 
SARG. To capture client activity, SARG is parsing the access.log file of the 
user-facing instance, which is correct; however it obviously depicts wrong 
in-cache/out-of-cache figures, since those values belong instead to the 
instance which is managing/doing the caching.
 
Is there a way/trick to manage this? Is it possible for a cache hit from a 
parent cache to be recorded as in-cache in the child?
 
 
Instance 1:
# Fulfils client requests and provides fault tolerance in case of a cache disk failure.
cache_peer 127.0.0.1  parent 3128 0 default no-digest no-query proxy-only
 
Instance 2:
 
Directly connected to the internet and doing all the caching.
Only allows access from localhost.
 
 
 
Thanks & regards,
 
Bilal 

RE: [squid-users] SELINUX issue(confined>unconfined)

2010-05-18 Thread GIGO .

Dear All,
 
Your guidance is required. Please help.
 
It looks like the Squid process runs by default as a confined process, whether 
it is a compiled version or a version that comes with the Linux distro. Does 
that mean the Squid software is SELinux-aware? Am I right?
 
[r...@squidlhr ~]# ps -eZ | grep squid
system_u:system_r:squid_t3173 ?00:00:00 squid
system_u:system_r:squid_t3175 ?00:00:00 squid
system_u:system_r:squid_t3177 ?00:00:00 squid
system_u:system_r:squid_t3179 ?00:00:00 squid
system_u:system_r:squid_t3222 ?00:00:00 unlinkd
system_u:system_r:squid_t3223 ?00:00:00 unlinkd

 
It was successful before I changed SELinux to enforcing. Now I cannot even 
start the Squid process that accesses the parent at localhost (3128) manually. 
The other process starts normally if I start it by hand.
 
When it runs as an unconfined process via the following command, the problem is 
resolved:
 
chcon -t unconfined_exec_t /usr/sbin/squid
 
However it does not feel appropriate to me. Please guide me on this.
 
 
 
I am starting Squid with the following init script, in case it has something to 
do with the problem:
 
#!/bin/sh
#
#my script
case "$1" in
start)
/usr/sbin/squid -D -sYC -f /etc/squid/squidcache.conf
/usr/sbin/squid -D -sYC -f /etc/squid/squid.conf
#The below line is to automatically start apache  with system startup
/usr/sbin/httpd -k start
#KRB5_KTNAME=/etc/squid/HTTP.keytab
#export KRB5_KTNAME
#KRB5RCACHETYPE=none
#export KRB5RCACHETYPE
;;
stop)

/usr/sbin/squid -k shutdown -f /etc/squid3/squidcache.conf
echo "Shutting down squid secondary process"
/usr/sbin/squid -k shutdown -f /etc/squid3/squid.conf
echo "Shutting down squid main process"
# The below line is to automatically stop apache at system shutdown
/usr/sbin/httpd -k stop
;;
esac

 
Thanking you & regards,
 
Bilal



> From: gi...@msn.com
> To: squid-users@squid-cache.org
> Date: Tue, 18 May 2010 06:02:35 +
> Subject: [squid-users] SELINUX issue
>
>
> Hi all,
>
> When i change SELINUX from permissive mode to Enforcing mode. My multiple 
> instance setup fail to start. Please guide how to overcome this.
>
> ---Excerpts from cache.log-
>
> 2010/05/18 10:31:51| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:51| Store rebuilding is 7.91% complete
> 2010/05/18 10:31:52| Done reading /var/spool/squid swaplog (51794 entries)
> 2010/05/18 10:31:52| Finished rebuilding storage from disk.
> 2010/05/18 10:31:52| 51794 Entries scanned
> 2010/05/18 10:31:52| 0 Invalid entries.
> 2010/05/18 10:31:52| 0 With invalid flags.
> 2010/05/18 10:31:52| 51794 Objects loaded.
> 2010/05/18 10:31:52| 0 Objects expired.
> 2010/05/18 10:31:52| 0 Objects cancelled.
> 2010/05/18 10:31:52| 0 Duplicate URLs purged.
> 2010/05/18 10:31:52| 0 Swapfile clashes avoided.
> 2010/05/18 10:31:52| Took 1.13 seconds (45641.00 objects/sec).
> 2010/05/18 10:31:52| Beginning Validation Procedure
> 2010/05/18 10:31:52| Completed Validation Procedure
> 2010/05/18 10:31:52| Validated 103614 Entries
> 2010/05/18 10:31:52| store_swap_size = 913364
> 2010/05/18 10:31:52| storeLateRelease: released 0 objects
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| Detected DEAD Parent: 127.0.0.1
> 2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
> 2010/05/18 10:31:52| Failed to select source for 
> 'http://1.channel19.facebook.com/p'
> 2010/05/18 10:31:52| always_direct = 0
> 2010/05/18 10:31:52| never_direct = 1
> 2010/05/18 10:31:52| timedout = 0
> 2010/05/18 10:31:57| Failed to select source for 
> 'http://0.channel19.facebook.cm
>
> 
>
>
> regards,
>
> Bilal

[squid-users] SELINUX issue

2010-05-17 Thread GIGO .

Hi all,
 
When I change SELinux from permissive mode to enforcing mode, my 
multiple-instance setup fails to start. Please guide me on how to overcome 
this.
 
---Excerpts from cache.log-
 
2010/05/18 10:31:51| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:51| Store rebuilding is 7.91% complete
2010/05/18 10:31:52| Done reading /var/spool/squid swaplog (51794 entries)
2010/05/18 10:31:52| Finished rebuilding storage from disk.
2010/05/18 10:31:52| 51794 Entries scanned
2010/05/18 10:31:52| 0 Invalid entries.
2010/05/18 10:31:52| 0 With invalid flags.
2010/05/18 10:31:52| 51794 Objects loaded.
2010/05/18 10:31:52| 0 Objects expired.
2010/05/18 10:31:52| 0 Objects cancelled.
2010/05/18 10:31:52| 0 Duplicate URLs purged.
2010/05/18 10:31:52| 0 Swapfile clashes avoided.
2010/05/18 10:31:52|   Took 1.13 seconds (45641.00 objects/sec).
2010/05/18 10:31:52| Beginning Validation Procedure
2010/05/18 10:31:52|   Completed Validation Procedure
2010/05/18 10:31:52|   Validated 103614 Entries
2010/05/18 10:31:52|   store_swap_size = 913364
2010/05/18 10:31:52| storeLateRelease: released 0 objects
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| Detected DEAD Parent: 127.0.0.1
2010/05/18 10:31:52| TCP connection to 127.0.0.1/3128 failed
2010/05/18 10:31:52| Failed to select source for 
'http://1.channel19.facebook.com/p'
2010/05/18 10:31:52|   always_direct = 0
2010/05/18 10:31:52|never_direct = 1
2010/05/18 10:31:52|timedout = 0
2010/05/18 10:31:57| Failed to select source for 'http://0.channel19.facebook.cm
 

 
 
regards,
 
Bilal 

RE: [squid-users] Dynamic Content Caching/Windowsupdate/Facebook/youtube

2010-05-17 Thread GIGO .

You recommended changing the order of the refresh_patterns, and the same is 
written in the reference materials. I tried to understand the reason for that 
but have no clue yet; please guide me. Further, for Windows clients (XP with 
Service Pack 3 and later versions of Windows, mostly), do I need to manually 
configure the WinHTTP proxy settings through proxycfg.exe on each computer?
 
regards,
 
Bilal 
---
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
 
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487 % 
5259487
 
Amos>> The youtube pattern and all other custom refresh_patterns' need to be 
configured above the default set (ftp:, gopher:, cgi-bin, and . ).
 
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 
 
Amos>> This dynamic content needs to be between the refresh_pattern ^gopher: 
and the refresh_pattern . patterns. 
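The reason for the ordering is that refresh_pattern rules are checked top-down 
and the first regex that matches wins, so anything placed after the catch-all 
"." line can never be reached. A sketch of the intended order, with the custom 
values elided as in the mail:

# site-specific patterns first
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) ... override-expire ignore-reload
# then the stock defaults, with the dynamic-content rule just before the catch-all
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320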
 
 
 



> Date: Sat, 15 May 2010 18:57:18 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Dynamic Content 
> Caching/Windowsupdate/Facebook/youtube
>
> GIGO . wrote:
>> All,
>>
>> I am really sorry i was looking at the access.log file of squid instance 
>> that is user facing and not the instance that is doing the fetching/caching 
>> and there i can see mp4 files being cached. However i am not very much 
>> confident about my settings so please read my queries and the configuration 
>> file and advice.
>>
>> I would be really thankful.
>>
>>
>> 
>>> From: gi...@msn.com
>>> To: squid-users@squid-cache.org
>>> Date: Fri, 14 May 2010 12:00:46 +
>>> Subject: [squid-users] Dynamic Content 
>>> Caching/Windowsupdate/Facebook/youtube
>>>
>>>
>>>
>>> Dear All,
>>>
>>>
>>> I require your help and guidance regarding dynamic content caching. 
>>> Following are the quries.
>>>
>>>
>>> 1. I am running squid in multiple instances mode (For Cache Disk Failure 
>>> Protection). I dont think that it has any effect on internet object 
>>> caching? I am confused that if connect methods are to be duplicate on both 
>>> of the instances or i have configured it right specially in perspective of 
>>> windows update.
>>>
>
> Depends on whether the port the cache instance is listening on is
> reachable to external people, if it is then its Squid will definitely
> need the http_access security settings turned on as well.
>
>>>
>>> 2. As rewrite_url is not exported in new versions(version 3 and above) of 
>>> squid is it still possible for squid to cache facebook/youtube videos? Have 
>>> i configured it correctly? As i have seen no TCP_HIT for mp3,mp4 etc so i 
>>> think caching is not done.
>>>
>
> If you meant to write "storeurl_rewrite"? then yes. That particular
> method of caching them is not possible yet in 3.x. YouTube will still
> cache using the low-efficiency duplicate-object way it does most places.
>
>>>
>>> 3. Please can u please check my configuration for windows updates? is there 
>>> anything else which i have missed there? How could i assure that windows 
>>> update is being cached properly?
>>>
>
> You don't show any http_access rules from the cache instance.
> The default is to block all access through that instance.
>
> The main instance is okay.
>
>>>
>>>
>>> Through studying online tutorials mailarchive support and best of my 
>>> understanding i have configured squid as follows. Please peruse and guide.
>>>
>>> --
>>> Squid Cache Instance:
>>>
>>> visible_hostname squidlhr.v.local
>>> unique_hostname squidcacheinstance
>>> pid_filename /var/run/squidcache.pid
>>>
>>>
>>> cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
>>> coredump_dir /cachedisk1/var/spool/squid
>>>
>>> cache_swap_low 75
>>> cache_mem 1000 MB
>>> range_offset_limit -1
>>> maximum_object_size 4096 MB
>>> minimum_object_size 10 KB
>>> quick_abort_min -1
>>> cache_replacement_policy heap
>>>
>>> refresh_pattern ^ftp: 1440 20% 10080
>>> refresh_pattern ^gopher: 1440 0% 1440
>>> refresh_pattern . 0 20% 4320
>>>
>>> #specific for youtube belowone
>>> ref

[squid-users] never_direct/always_direct

2010-05-17 Thread GIGO .

Dear all,
 
never_direct/always_direct
 

Why did two directives have to be created, when one directive could have done 
the trick? Please guide me.
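They answer two different questions, which is why one directive cannot cover 
both: always_direct forces matching requests to skip the peers and go straight 
to the origin, while never_direct forbids going direct and forces matching 
requests through a peer; anything matched by neither is left to the normal 
peer-selection logic. A small sketch, with the internal domain purely 
illustrative:

acl intranet dstdomain .virtual.local
always_direct allow intranet      # internal sites: never send these to the parent
never_direct allow all            # everything else: must go through the parent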
 
 
regards,
 
Bilal 

[squid-users] Access.log

2010-05-14 Thread GIGO .

Hi all,
 
 
Can anybody please explain to me what this error means and why it occurs? It 
happened while I was testing YouTube/Facebook caching.

TCP_NEGATIVE_HIT/204
 
Does this suggest that some object in the cache has been corrupted? If so, how 
do I rectify the error?
 
 
 
 
Does the following error only mean that the user has aborted the transfer, or 
can it occur for some other reason as well?
 
TCP_MISS/000
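For what it is worth, TCP_NEGATIVE_HIT does not indicate a corrupted object: it 
means a recently cached negative reply was served again, for as long as 
negative_ttl allows, and 204 is simply the "No Content" status some 
YouTube/Facebook beacon requests return. A status of 000, as in TCP_MISS/000, 
generally means the connection was closed before any reply was written, which 
includes user aborts but also upstream resets. If negative caching gets in the 
way while testing, a sketch:

negative_ttl 0 seconds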
 
thanks & regards,
 
Bilal 

RE: [squid-users] Dynamic Content Caching/Windowsupdate/Facebook/youtube

2010-05-14 Thread GIGO .

All,
 
I am really sorry, I was looking at the access.log file of the user-facing 
Squid instance rather than the instance that is doing the fetching/caching, and 
there I can see mp4 files being cached. However I am not very confident about 
my settings, so please read my queries and the configuration file and advise.
 
I would be really thankful.
 
 

> From: gi...@msn.com
> To: squid-users@squid-cache.org
> Date: Fri, 14 May 2010 12:00:46 +
> Subject: [squid-users] Dynamic Content Caching/Windowsupdate/Facebook/youtube
>
>
>
> Dear All,
>
>
> I require your help and guidance regarding dynamic content caching. Following 
> are the quries.
>
>
> 1. I am running squid in multiple instances mode (For Cache Disk Failure 
> Protection). I dont think that it has any effect on internet object caching? 
> I am confused that if connect methods are to be duplicate on both of the 
> instances or i have configured it right specially in perspective of windows 
> update.
>
>
> 2. As rewrite_url is not exported in new versions(version 3 and above) of 
> squid is it still possible for squid to cache facebook/youtube videos? Have i 
> configured it correctly? As i have seen no TCP_HIT for mp3,mp4 etc so i think 
> caching is not done.
>
>
> 3. Please can u please check my configuration for windows updates? is there 
> anything else which i have missed there? How could i assure that windows 
> update is being cached properly?
>
>
>
>
>
>
>
>
> Through studying online tutorials mailarchive support and best of my 
> understanding i have configured squid as follows. Please peruse and guide.
>
> --
> Squid Cache Instance:
>
> visible_hostname squidlhr.v.local
> unique_hostname squidcacheinstance
> pid_filename /var/run/squidcache.pid
>
>
> cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
> coredump_dir /cachedisk1/var/spool/squid
>
> cache_swap_low 75
> cache_mem 1000 MB
> range_offset_limit -1
> maximum_object_size 4096 MB
> minimum_object_size 10 KB
> quick_abort_min -1
> cache_replacement_policy heap
>
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern . 0 20% 4320
>
> #specific for youtube belowone
> refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487 
> % 5259487
>
> # For any dynamic content caching.
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
>
> --
> Squid Main Instance:
> visible_hostname squidlhr
> unique_hostname squidmain
> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query
> prefer_direct off
>
> cache_dir aufs /var/spool/squid 1 16 256
> coredump_dir /var/spool/squid
> cache_swap_low 75
> cache_replacement_policy lru
>
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern . 0 20% 4320
>
>
> #Defining & allowing ports section
> acl SSL_ports port 443 # https
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
>
> # Deny request to unknown ports
> http_access deny !Safe_ports
>
> # Deny request to other than SSL ports
> http_access deny CONNECT !SSL_ports
>
> #Allow access from localhost
> http_access allow localhost
>
>
> # Windows Update Section...
> acl windowsupdate dstdomain windowsupdate.microsoft.com
> acl windowsupdate dstdomain .update.microsoft.com
> acl windowsupdate dstdomain download.windowsupdate.com
> acl windowsupdate dstdomain redir.metaservices.microsoft.com
> acl windowsupdate dstdomain images.metaservices.microsoft.com
> acl windowsupdate dstdomain c.microsoft.com
> acl windowsupdate dstdomain www.download.windowsupdate.com
> acl windowsupdate dstdomain wustat.windows.com
> acl windowsupdate dstdomain crl.microsoft.com
> acl windowsupdate dstdomain sls.microsoft.com
> acl windowsupdate dstdomain productactivation.one.microsoft.com
> acl windowsupdate dstdomain ntservicepack.microsoft.com
> acl wuCONNECT dstdomain www.update.microsoft.com
> acl wuCONNECT dstdomain sls.microsoft.com
> http_access allow CONNECT wuCONNECT all
> http_access allow windowsupdate all
>
>
> regards & thanks
>
> Bilal

[squid-users] Dynamic Content Caching/Windowsupdate/Facebook/youtube

2010-05-14 Thread GIGO .


Dear All,
 
 
I require your help and guidance regarding dynamic content caching. Following 
are the queries.
 
 
1. I am running Squid in multiple-instance mode (for cache disk failure 
protection). I don't think that has any effect on internet object caching? I am 
confused about whether the CONNECT rules need to be duplicated on both 
instances, or whether I have configured it right, especially from the 
perspective of Windows Update.
 
 
2. As rewrite_url is not available in newer versions (version 3 and above) of 
Squid, is it still possible for Squid to cache Facebook/YouTube videos? Have I 
configured it correctly? I have seen no TCP_HIT for mp3, mp4 etc., so I think 
caching is not being done.
 
 
3. Can you please check my configuration for Windows Update? Is there anything 
else which I have missed there? How can I verify that Windows Update content is 
being cached properly?
 
 
 
 
 
 
 
 
Through studying online tutorials, mail archive support, and the best of my 
understanding, I have configured Squid as follows. Please peruse and guide.
 
--
Squid Cache Instance:
 
visible_hostname squidlhr.v.local
unique_hostname squidcacheinstance
pid_filename /var/run/squidcache.pid
 
 
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
coredump_dir /cachedisk1/var/spool/squid

cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 10 KB
quick_abort_min -1
cache_replacement_policy heap
 
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern . 0 20% 4320
 
#specific for youtube belowone
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487 % 
5259487
 
# For any dynamic content caching.
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
 
--
Squid Main Instance:
visible_hostname squidlhr
unique_hostname squidmain
cache_peer 127.0.0.1  parent 3128 0 default no-digest no-query
prefer_direct off 

cache_dir aufs /var/spool/squid 1 16 256
coredump_dir /var/spool/squid
cache_swap_low 75
cache_replacement_policy lru
 
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
 
 
#Defining & allowing ports section
acl SSL_ports port 443  # https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
 
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
 
# Deny request to unknown ports
http_access deny !Safe_ports
 
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
 
#Allow access from localhost
http_access allow localhost
 
 
# Windows Update Section...
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT all
http_access allow windowsupdate all
 
 
regards & thanks
 
Bilal 
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Squid in Deamon Mode

2010-05-10 Thread GIGO .

Hi,
 
I currently start squid as follows:

 
/usr/sbin/squid -D -f /etc/squid/squid.conf
 
 
Would there be any benefit in running it in daemon mode? Could somebody please explain in detail?
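 
(For reference, my understanding from the squid 2.x documentation, please correct me if wrong: -D only disables the initial DNS tests and has nothing to do with daemonizing; squid already detaches and runs as a daemon by default unless -N is given.)
 
# runs as a daemon (the default); -D skips the startup DNS tests
/usr/sbin/squid -D -f /etc/squid/squid.conf
# stays in the foreground (useful for debugging or running under a supervisor)
/usr/sbin/squid -N -f /etc/squid/squid.conf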
 
 
Thanking in advance
 
regards,
 
Bilal 
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Cache Contents.

2010-05-10 Thread GIGO .

Dear All,
 
I want to confirm that youtube/facebook and Windows Update content is being cached as I configured. How can I verify this? I would also like to see what the contents of my cache are. Please guide me in this respect.
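 
(What I plan to try, as a sketch: it assumes the cache manager is reachable from localhost and squidclient is installed; 8080 stands for whatever http_port is in use, and the object listing can be very large.)
 
# summary of cache usage
squidclient -p 8080 mgr:info
# list objects currently held in the cache
squidclient -p 8080 mgr:objects
# or simply watch for hits in the access log
grep TCP_HIT /var/log/squid/access.log | tail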
 
 
 
Thanking you
 
&
 
regards,
 
Bilal 
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Yum Updates and Squid

2010-05-04 Thread GIGO .

Dear All,

 

Is it safe to enable automatic yum updates on the squid server machine? Are there any strict package version requirements, i.e. must the versions stay the same as those squid was originally installed with?

Automatic updates will also upgrade the kernel, so is that OK?

Your guidance would be much appreciated.
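
(If pinning packages turns out to be the recommendation, this is what I have in mind, only a sketch using yum's documented exclude directive; the package globs are just examples.)

# /etc/yum.conf
[main]
# keep squid and the kernel out of automatic updates
exclude=squid* kernel*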

 

 

Thanks in advance.

 

 

Regards,

 

Bilal 
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Authentication Reverse Proxy

2010-05-02 Thread GIGO .

Hi,
 
What is the behaviour/mechanism of authentication when the squid proxy is used both as a forward proxy and as a reverse proxy?
 
I have successfully set it up as a forward proxy using the helper files by 
Markus and the following tutorial:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
 
 
Two scenarios come to mind. In the first, squid performs the authentication; in the second, the web server provides authentication/authorization and squid simply forwards the requests to it. Please guide/suggest/comment on this.
 
 
My plan is that the web server (Outlook Web Access) should be the one taking care of authentication, and squid should simply act as a forwarder. However, I am not sure which approach to adopt, what special configuration is required, and what the implications of each approach are.
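 
(For discussion, a minimal reverse-proxy sketch of the second scenario. Host names, IP and port are placeholders, and whether additional options such as login=PASS or connection-oriented authentication are needed for OWA is exactly what I am unsure about.)
 
# squid as plain accelerator, authentication left to the origin server
http_port 80 accel defaultsite=owa.example.com
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=owa
acl owa_site dstdomain owa.example.com
cache_peer_access owa allow owa_site
http_access allow owa_site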
 
 
 
regards,
 
Bilal 
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

[squid-users] squid_kerb_ldap/squid_kerb_auth in Single Forest Multidomains Active Directory.

2010-04-25 Thread GIGO .

Dear All,
 
The problem under discussion is a continuation of the SPN creation / Single Forest Multi-Domain (Active Directory) topic. 
 
@ Markus
Yes, my infrastructure is Active Directory based (forest root domain A with two child domains, B (80% of users) and C (20% of users), in their own trees). Only the squid proxy is installed, on CentOS, and it is not joined to any domain. Markus, you are right: I observed that clients in the child domains are able to use the squid proxy without any changes to the krb5.conf file (no need to define a [capaths] section). I understand that, by design of an Active Directory forest, parent and child domains have two-way transitive trusts, so the Active Directory/DNS infrastructure manages this itself, and clients in any domain can work out which domain holds the service principal and acquire a service ticket from that domain. Right??
 
 
 

If the Unix server (proxy) does not belong to any domain, then the choice of default_realm does not really matter and I can pick any of my domains, since as I understand it default_realm is compulsory and cannot be left blank. Similarly, if I am not going to use any other kerberised service from my squid proxy Unix server, then the .linux.home mapping is unimportant; otherwise it is a must. Right??
 
 
 
 
// krb5.conf for an Active Directory single forest with multiple domains; it is working 
correctly
[libdefaults]
 default_realm = A.COM.PK
 dns_lookup_realm = false
 dns_lookup_kdc = false
 default_keytab_name = /etc/krb5.keytab

; for windows 2003 encryption type configuration.
default_tgs_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
default_tkt_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
permitted_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
[realms]
 A.COM.PK = {
   kdc = dc1.a.com.pk
   admin_server = dc1.a.com.pk
  }
 b.A.COM.PK = {
   kdc = childdc.b.a.com.pk
   admin_server = childdc.b.a.com.pk
}
[domain_realm]
.linux.home = A.COM.PK
.a.com.pk = A.COM.PK
a.com.pk = A.COM.PK
.b.a.com.pk = b.A.COM.PK
b.a.com.pk = b.A.COM.PK
[logging]
kdc = FILE:/var/log/kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/kdc.log
\\
Any suggestions/guidance required??
 
 
 
 
My squid.conf portion related to Authentication/Authorization along with the 
questions.
 
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
# basic auth ACL controls to make use of it are.
#acl auth proxy_auth REQUIRED
#http_access deny !auth
#http_access allow auth
 
 
I think the commented-out directives above are no longer required, as squid_kerb_ldap has taken over that role. Right???
 
 
 
#external_acl_type squid_kerb1 ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g 
gro...@a.com.pk:gro...@a.com.pk:gro...@a.com.pk:g...@b.a.com.pk:gro...@b.a.com.pk:gro...@b.a.com.pk

external_acl_type g1_parent ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g gro...@a.com.pk
 
external_acl_type g2_parent ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g gro...@a.com.pk
 
external_acl_type g2_child ttl=3600  negative_ttl=3600  %LOGIN 
/usr/libexec/squid/squid_kerb_ldap -g gro...@a.b.com.pk
 
 
 
Although the commented-out single-line version was working properly for me and looks more appropriate, I had to split it into multiple lines; I could not think of another way to handle ACLs based on user group membership. Please guide me if there is a better way to do this, as it feels like I am now calling the helper multiple times instead of once.
 
 
 
(There are further groups expected from the child and parent domains, so I am worried this will affect performance.)
 
 
acl ldap_group_check1 external g1_parent
acl ldap_group_check2 external g2_parent
acl ldap_group_check3 external g2_child
 
 
Definition of YouTube.
## The videos come from several domains
acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com

http_access deny  ldap_group_check1 youtube_domains
http_access allow ldap_group_check2
http_access allow ldap_group_check1
http_access allow ldap_group_check3
http_access deny  all

 

As I understand it, the http_access rules in squid.conf are evaluated from top to bottom and evaluation stops at the first matching line, so putting the rules for the groups containing most of the users first should improve performance, as in the annotated sketch below. Can an if-else structure be used in squid.conf, and if so how? I am not sure, please guide...
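 
(To check that I have understood the evaluation order, the same rules annotated. This relies only on the documented behaviour that http_access lines are checked top-down, the first matching line decides, and ACLs on one line are ANDed while separate lines act like OR. The deny for group 1 plus youtube has to stay above "allow ldap_group_check1", otherwise it would never be reached.)
 
# evaluated top-down; the first line whose ACLs all match decides the request
http_access deny  ldap_group_check1 youtube_domains   # both ACLs must match (AND)
http_access allow ldap_group_check2                    # put the largest group first
http_access allow ldap_group_check1
http_access allow ldap_group_check3
http_access deny  all                                  # nothing matched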
 
 
 
 
Thanking you 
 
&
 
regards,
 
 
Bilal
 
 
 
  
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Single Forest Multiple Domains kebreos setup (squid_kerb_ldap)

2010-04-22 Thread GIGO .

Dear Markus/All,
 
Please guide me on the matter discussed below:

 
Single Forest Multiple Domain setup 
 
 
    A
   / \
  /   \
 B     C
 
Problem:
 
A single forest with multiple domains, where the root domain A is empty with no users. Domains B & C have no trust configured directly between each other. The internet users belong to Domain B & Domain C. We want to enable users from both domains to authenticate via Kerberos and be authorized through LDAP.
 
 
Guides and Helpers used:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
http://mailman.mit.edu/pipermail/kerberos/2009-March/014751.html
& squid_kerb_ldap readme file
 
>>>If you serve multiple Kerberos realms add a HTTP/f...@realm service 
>>>principal per realm to the 
HTTP.keytab file and use the -s GSS_C_NO_NAME option with squid_kerb_auth..
 
 
I think this is the only change required in the squid configuration to authenticate and authorize users from multiple domains. Is that right?
 
 
 
 
Please confirm whether I should create the SPNs as below for this setup to work.
 
 
(SPNs for both the domains)
 
Creation of keytab/SPN/computer object for domain B:
 
msktutil -c -b "CN=COMPUTERS" -s HTTP/squidlhr.b.com -h squidlhr.b.com -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/squidlhr.b.com 
--server dcofbdomain.b.com --verbose
 
Appending the SPN/keys for domain C to the same keytab:
 
msktutil -c -b "CN=COMPUTERS" -s HTTP/squidlhr.c.com -h squidlhr.c.com -k 
/etc/squid/HTTP.keytab --computer-name whatever-http --upn HTTP/squidlhr.c.com 
--server dcofcdomain.c.com --verbose
 
 
 
Please guide me on the changes that would be required in the krb5.conf file.
 

My current working krb5.conf file, as per Markus's guidance (Kerberos is working; the 
authorization portion is yet to be implemented):
 
[libdefaults]
 default_realm = B.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 default_keytab_name = /etc/krb5.keytab

; for windows 2003 encryption type configuration.
default_tgs_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
default_tkt_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
permitted_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
[realms]
 B.COM = {
  kdc = b.com
  admin_server = dc.b.com  }
[domain_realm]
.linux.home = B.COM
.b.com = B.COM
b.com = B.COM
[logging]
kdc = FILE:/var/log/kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/kdc.log
-
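 
(My guess at the krb5.conf additions, sketched from the single-realm file above: essentially a second realm entry plus its domain_realm mappings, with the KDC names taken from the msktutil commands. Please correct me.)
 
[realms]
 B.COM = {
   kdc = dcofbdomain.b.com
   admin_server = dcofbdomain.b.com
 }
 C.COM = {
   kdc = dcofcdomain.c.com
   admin_server = dcofcdomain.c.com
 }
[domain_realm]
 .b.com = B.COM
 b.com = B.COM
 .c.com = C.COM
 c.com = C.COM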
 
 
 
regards,
 
Bilal
 
  
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969

[squid-users] SPN case sensitivity culprit for Negotiate/Kerberos Failures +msktutil

2010-04-21 Thread GIGO .


Dear Markus/Nick/All,
 
After a great struggle, and with the help I got from you people, I managed to resolve the issue; however, I have a few remaining points of confusion that I would like to ask about.
 
 
1. First of all, I traced my problem down to SPN name case sensitivity: the case of the servicePrincipalName attribute as seen through the ADSIEdit.msc tool was different from the value my klist -ke was showing.  
 
 
 
According to ADSIEdit.msc:
 
 
servicePrincipalName == HTTP/squidlhrtest.v.local 
userPrincipalName == HTTP/squidlhrtest.v.lo...@v.local
 
Whereas klist shows the SPN as stored in my keytab as:
2 HTTP/squidlhrtest.v.lo...@v.local (DES cbc mode with CRC-32)
2 HTTP/squidlhrtest.v.lo...@v.local (DES cbc mode with RSA-MD5)
2 HTTP/squidlhrtest.v.lo...@v.local (ArcFour with HMAC/md5)  
 
After diagnosing the problem I tried recreating the keytab/SPN with the msktutil utility, but to no benefit. Then I changed the squid machine's hostname entirely to lowercase, recreated the keytab, and it worked; I confirmed it matched the value stored in Active Directory, and Kerberos/Negotiate was working. Although I have read that Microsoft SPNs are case insensitive, does this also mean that Microsoft will always store the SPN in lower case, no matter how the name is given in the msktutil command?
 
 
Secondly, what is the role of the UPN here? I mean, why is a UPN required when creating the SPN on a computer object? I understand it is some kind of linkage, but I am not clear about its purpose. 
 
 
Also, why does the SPN attribute have no realm name appended in the output, while the UPN does have the realm appended, when viewed through ADSIEdit.msc?
 
 
Another question: as I am using SARG configured with Apache, I would also like to set up SSO for Apache with Kerberos. The keytab/SPN for squid SSO has already been created as follows:
 
msktutil -c -b "CN=COMPUTERS" -s HTTP/squidlhrtest.v.local -h 
squidlhrtest.v.local -k /etc/squid/HTTP.keytab --computer-name squid-http --upn 
HTTP/squidlhrtest.v.local --server vdc.v.local --verbose  
 
To my understanding a keytab can hold keys for multiple services, so does this mean I can use the same keytab for both squid and Apache? For example, I think the following command will append new keys to the keytab file; I guess only --computer-name needs to change and the rest of the same command will do, as far as keytab creation is concerned. (Apache-specific settings are a separate story, definitely out of scope here.)
 
The command to my understanding which will append keys to be used by Apache:
 
msktutil -c -b "CN=COMPUTERS" -s HTTP/squidlhrtest.v.local -h 
squidlhrtest.v.local -k /etc/squid/HTTP.keytab --computer-name apache-http 
--upn HTTP/squidlhrtest.v.local --server vdc.v.local --verbose   
 
 
But why shouldn't Apache and squid share a single keytab, since they are both HTTP in the end? Isn't creating a separate key/SPN for Apache redundant, or is it a must?
 
 
 
A somewhat similar question: my Active Directory setup has a single forest with one parent domain A and two child domains B and C. The internet users are only in the child domains B and C. What would be the way to handle SSO there? I don't have much clarity on this; can anybody please advise? How would I point to the multiple realms? Would I duplicate the exact setup I have done for one domain and somehow (I am unclear here) update squid accordingly?
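 
(From what I have read in the helper documentation, the approach seems to be: add an HTTP/fqdn@REALM principal per realm to the keytab, and start squid_kerb_auth with -s GSS_C_NO_NAME so it accepts tickets from any of them. A sketch against my current paths:)
 
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth -s GSS_C_NO_NAME
auth_param negotiate children 10
auth_param negotiate keep_alive on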
 
 
 
 
I would be really thankful to all of you for guidance/help. 
 
 
 
best regards,
 
Bilal Aslam   
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

RE: [squid-users] Re: Re: Re: Creating a kerberos Service Principal.

2010-04-16 Thread GIGO .

Markus,
 
What should I do now, and why is the browser behaving this way, given that I have confirmed that Windows Integrated Authentication is enabled, the IE version is capable of Kerberos, and the proxy is configured by DNS name? The only thing missing is the DNS reverse lookup zone on my domain controller/DNS. I checked on two clients. This is a virtual environment built on VMware.
 

 
How do I move forward from here?
 

> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Fri, 16 Apr 2010 15:18:27 +0100
> Subject: [squid-users] Re: Re: Re: Creating a kerberos Service Principal.
> 
> Hi Bilal,
> 
> In your case the browser is returning a NTLM token not a Kerberos token whu 
> squid_kerb_auth will deny access.
> 
> Regards
> Markus
> 
> "GIGO ."  wrote in message 
> news:snt134-w155de8e05828b08d15c09ab9...@phx.gbl...
> 
> Dear Nick,
> 
> This was the result of my klist -k command:
> 
> [r...@squidlhrtest log]# klist -k /etc/squid/HTTP.keytab
> Keytab name: FILE:/etc/squid/HTTP.keytab
> KVNO Principal
>  
> --
> 2 HTTP/vdc.v.com...@v.com.pk
> 2 HTTP/vdc.v.com...@v.com.pk
> 2 HTTP/vdc.v.com...@v.com.pk
> ---
> 
> i recreated the spn as follows in my new lab ( domaincontroller name is now 
> vdc.v.local and proxyname is squidLhrTest)
> msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.local -h squidLhrTest.v.local -k 
> /etc/squid/HTTP.keytab --computer-name squid-http --upn 
> HTTP/squidLhrTest.v.local --server vdc.v.local --verbose
> 
> 
> 
> However whenever a client try to access the internet this error appears:
> 
> CacheHost: squidLhrTest
> ErrPage: ERR_CACHE_ACCESS_DENIED
> Err: [none]
> TimeStamp: Fri, 16 Apr 2010 10:43:51 GMT
> ClientIP: 10.1.82.54
> HTTP Request:
> GET /isapi/redir.dll?prd=ie&ar=hotmail HTTP/1.1
> Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, 
> application/x-shockwave-flash, */*
> Accept-Language: en-us
> User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
> Accept-Encoding: gzip, deflate
> Proxy-Connection: Keep-Alive
> Host: www.microsoft.com
> Proxy-Authorization: Negotiate 
> TlRMTVNTUAABB4IIogAFASgKDw==
> 
> 
> 
> thank you so much for you consideration Nick. yes despite doing lots of 
> efforts not being able to get this thing to work and am frustated now. 
> however in the journey at least learnt many things :)
> 
> 
> 
> regards,
> 
> Bilal Aslam
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>> From: nick.cairncr...@condenast.co.uk
>> To: gi...@msn.com
>> Date: Fri, 16 Apr 2010 09:39:11 +0100
>> Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.
>>
>> Bilal,
>>
>> I understand your frustration! First off: What happens when you klist -k 
>> /etc/squid/HTTP.keytab
>> As I understand it, shouldn't you be specifying the spn as 
>> HTTP/yoursquidproxy and not your DC? You want to be able to authenticate 
>> from the squid proxy, using the HTTP service to the squid-http computer 
>> account.
>>
>> Nick
>>
>>
>>
>>
>>
>> On 16/04/2010 08:43, "GIGO ." wrote:
>>
>>
>>
>> Dear Nick/Markus,
>>
>> I am totally lost in translation and am not sure what to do i need your 
>> help please. The problem is that my kerberos authentication is not 
>> working. In my virtual environment i have two machines one configured as 
>> Domain Controller and the other one as SquidProxy. I am trying to use the 
>> internet from my domain controller( internet explorer 7 & DNS name is 
>> given instead of the ip). However it only popup a authentication window 
>> and never works like it should.
>>
>>
>>
>>
>> I have setup the squid authentication as follows:
>>
>>
>> Steps:
>>
>> I copied the squid_kerb_auth files to correct directory. (SELinux is 
>> enabled)
>>
>> cp -r squid_kerb_auth /usr/libexec/squid/
>>
>> I then Installed the msktutil software
>>
>> step No 1: i changed my krb5.conf file as follows;
>>
>> krb5.conf-
>> [logging]
>> default = FILE:/var/log/krb5libs.log
>> kdc = FILE:/var/log/krb5kdc.log
>> admin_server = FILE:/var/log/kadmind.log
>> [libdefaults]
>> default_realm = V.COM.PK
>> dns_lookup_realm = no
>> dns_lookup_kdc = no
>> ticket_lifetime = 24h
>

RE: [squid-users] Re: Re: Creating a kerberos Service Principal.

2010-04-16 Thread GIGO .

Dear Nick,
 
This was the result of my klist -k command:

[r...@squidlhrtest log]# klist -k /etc/squid/HTTP.keytab
Keytab name: FILE:/etc/squid/HTTP.keytab
KVNO Principal
 --
2 HTTP/vdc.v.com...@v.com.pk
2 HTTP/vdc.v.com...@v.com.pk
2 HTTP/vdc.v.com...@v.com.pk
---

I recreated the SPN as follows in my new lab (the domain controller name is now 
vdc.v.local and the proxy name is squidLhrTest):
msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.local -h squidLhrTest.v.local -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn 
HTTP/squidLhrTest.v.local --server vdc.v.local --verbose
 
 
 
However, whenever a client tries to access the internet this error appears:
 
CacheHost: squidLhrTest
ErrPage: ERR_CACHE_ACCESS_DENIED
Err: [none]
TimeStamp: Fri, 16 Apr 2010 10:43:51 GMT
ClientIP: 10.1.82.54
HTTP Request:
GET /isapi/redir.dll?prd=ie&ar=hotmail HTTP/1.1
Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, 
application/x-shockwave-flash, */*
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Host: www.microsoft.com
Proxy-Authorization: Negotiate 
TlRMTVNTUAABB4IIogAFASgKDw==
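 
(For what it's worth, a quick check, assuming a GNU base64 is available: the Proxy-Authorization token above starts with TlRMTVNTUAAB, which decodes to the NTLMSSP signature, i.e. the browser is sending an NTLM type 1 message rather than a Kerberos token.)
 
echo 'TlRMTVNTUAAB' | base64 -d | cat -v
# prints: NTLMSSP^@^A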

 
 
Thank you so much for your consideration, Nick. Yes, despite a lot of effort I have not been able to get this working and I am frustrated now; however, along the way I have at least learnt many things :)
 
 
 
regards,
 
Bilal Aslam
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
> From: nick.cairncr...@condenast.co.uk
> To: gi...@msn.com
> Date: Fri, 16 Apr 2010 09:39:11 +0100
> Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.
>
> Bilal,
>
> I understand your frustration! First off: What happens when you klist -k 
> /etc/squid/HTTP.keytab
> As I understand it, shouldn't you be specifying the spn as 
> HTTP/yoursquidproxy and not your DC? You want to be able to authenticate from 
> the squid proxy, using the HTTP service to the squid-http computer account.
>
> Nick
>
>
>
>
>
> On 16/04/2010 08:43, "GIGO ." wrote:
>
>
>
> Dear Nick/Markus,
>
> I am totally lost in translation and am not sure what to do i need your help 
> please. The problem is that my kerberos authentication is not working. In my 
> virtual environment i have two machines one configured as Domain Controller 
> and the other one as SquidProxy. I am trying to use the internet from my 
> domain controller( internet explorer 7 & DNS name is given instead of the 
> ip). However it only popup a authentication window and never works like it 
> should.
>
>
>
>
> I have setup the squid authentication as follows:
>
>
> Steps:
>
> I copied the squid_kerb_auth files to correct directory. (SELinux is enabled)
>
> cp -r squid_kerb_auth /usr/libexec/squid/
>
> I then Installed the msktutil software
>
> step No 1: i changed my krb5.conf file as follows;
>
> krb5.conf-
> [logging]
> default = FILE:/var/log/krb5libs.log
> kdc = FILE:/var/log/krb5kdc.log
> admin_server = FILE:/var/log/kadmind.log
> [libdefaults]
> default_realm = V.COM.PK
> dns_lookup_realm = no
> dns_lookup_kdc = no
> ticket_lifetime = 24h
> forwardable = yes
> default_keytab_name= /etc/krb5.keytab
> ; for windows 2003
> default_tgs_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
> default_tkt_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
> permitted_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
> [realms]
> V.LOCAL = {
> kdc = vdc.v.com.pk:88
> admin_server = vdc.v.com.pk:749
> default_domain = v.com.pk
> }
> [domain_realm]
> .linux.home = V.COM.PK
> .v.com.pk=V.COM.PK
> v.local=V.COM.PK
>
> [appdefaults]
> pam = {
> debug = false
> ticket_lifetime = 36000
> renew_lifetime = 36000
> forwardable = true
> krb4_convert = false
> }
>
> Step 2: I verified the settings in resolv.conf & hosts file
> --etc/resolv.conf---
> nameserver 10.1.82.51 (My domain conroller and DNS)
>
> /etc/hosts 
> file
> 127.0.0.1 squidLhrTest localhost.localdomain localhost
> 10.1.82.52 squidLhrTest.v.com.pk
> ::1 localhost6.localdomain6 localhost6
> ---
>
>
> Step 3:
> i created the keytab as follows:
> kinit administra...@v.local
>
> msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.com.pk -h squidLhrTest.v.com.pk 
> -k /etc/squid/HTTP.keytab --computer-name squid-http --up

[squid-users] Kerberos Authentication in Relation to Connect ACLs

2010-04-15 Thread GIGO .

I get the following error whenever I try to use squid (currently I am trying to use it from the AD machine, which is also the KDC that squid uses for authentication):
 
Access Denied:
Access control configuration prevents your request from being allowed at this 
time. Please contact your service provider if you feel this is incorrect.
(No authentication prompt pops up; whenever I try to open any webpage I just get this error.)

 
However, I don't think I have configured anything that should block users. I am not sure what is happening, please guide. Is it something to do with the CONNECT method ACLs?
 
 
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny request to unknown ports
http_access deny !Safe_ports
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
#Allow access from localhost
http_access allow localhost
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
http_access deny !auth
http_access allow auth
http_access deny all
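 
(One thing I could do to narrow it down, as a sketch: I am assuming debug section 28, access control, is the right one, and this should be reverted afterwards because the log gets noisy.)
 
# in squid.conf: keep everything at level 1, raise access-control tracing
debug_options ALL,1 28,3
# then watch which http_access/acl line actually denies the request
tail -f /var/log/squid/cache.log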
 
 
please guide
 
regards,
Bilal
  
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

RE: [squid-users] ipcCreate error:

2010-04-15 Thread GIGO .

Hi Henrik,

I created another setup, but I am now facing the ipcCreate issue again, although I have copied squid_kerb_auth from my build directory to /usr/libexec/squid with the cp -r command,
 
and I have also pointed to it in squid.conf as:
 
auth_param negotiate program /usr/libexec/squid/squid_kerb_auth
 
What could be the issue now?
 
Please help; I will be thankful.
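 
(Given the earlier SELinux hint, the checks I can run, as a sketch, assuming the standard SELinux tools are installed:)
 
# does the helper carry the expected SELinux context and execute bit?
ls -Z /usr/libexec/squid/squid_kerb_auth
# restore the default context for that path if the copy brought the wrong label
restorecon -v /usr/libexec/squid/squid_kerb_auth
chmod 755 /usr/libexec/squid/squid_kerb_auth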
 
regards,
 
Bilal 
 


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Wed, 14 Apr 2010 09:34:28 +0200
> Subject: RE: [squid-users] ipcCreate error:
>
> ons 2010-04-14 klockan 04:47 + skrev GIGO .:
>> Hi Henrik,
>>
>> Thank you this problem is resolved by placing the squid_kerb_auth in
>> the libexec folder. Now i beleive that i also have to place any other
>> helpers like squid_ldap_group in the same location to get it to work.
>
> Yes. if you have selinux enabled on the host then the security policy
> for squid restricts it to execute helpers in /usr/libexec/squid/ only.
> Which is a good thing in terms of security.
>
> Regards
> Henrik
>
> 
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Unable to create keytab Msktutil ldap_set_option failed (local errror)

2010-04-15 Thread GIGO .

Dear All,
 
Once again I have failed to create the keytab properly. Below are the details of how I performed this task.
 
step No 1: i changed my krb5.conf file as follows;

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
 
[libdefaults]
 default_realm = V.LOCAL
 dns_lookup_realm = no
 dns_lookup_kdc = no
 ticket_lifetime = 24h
 forwardable = yes
 default_keytab_name= /etc/krb5.keytab

; for windows 2003
 default_tgs_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
 default_tkt_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
 permitted_enctypes= rc4-hmac des-cbc-crc des-cbc-md5
 
[realms]
 V.LOCAL = {
  kdc = vdc.v.local:88
  admin_server = vdc.v.local:749
  default_domain = v.local
  }
 
[domain_realm]
.linux.home = V.LOCAL
 .v.local=V.LOCAL
 v.local=V.LOCAL

[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }
 
Step 2:
i tried to create the keytab as follows:
kinit administra...@v.local 
 
msktutil -c -b "CN=COMPUTERS" -s HTTP/vdc.v.local -h squidLhrTest.v.local -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/vdc.v.local 
--server vdc.v.local --verbose
 
However, I got the following error:
 
SASL/GSSAPI authentication started
Error: ldap_set_option failed (Local error)
Error: ldap_connect failed
 -- krb5_cleanup: Destroying Kerberos Context
 -- ldap_cleanup: Disconnecting from LDAP server
 -- init_password: Wiping the computer password structure

 
My other settings are as follows:
 
 
/etc/resolv.conf
nameserver 10.1.82.51
# 10.1.82.51 is my domain controller and DNS server
 
/etc/hosts file
 
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   squidLhrTest localhost.localdomain localhost
10.1.82.52  squidLhrTest.v.local
::1 localhost6.localdomain6 localhost6
However, running hostname --fqdn shows only squidLhrTest.
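 
(One thing I suspect, sketched here for comment: the short hostname is listed on the 127.0.0.1 line, so the FQDN lookup never reaches the 10.1.82.52 entry. The /etc/hosts layout I intend to try, with the FQDN first and the short name as an alias:)
 
127.0.0.1   localhost.localdomain localhost
10.1.82.52  squidLhrTest.v.local  squidLhrTest
::1         localhost6.localdomain6 localhost6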
 
 
 
Please help me out and guide.
 
regards,
 
Bilal Aslam
 
 
 
 

  
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

RE: [squid-users] Re: Re: Creating a kerberos Service Principal.(Nick/Markus)

2010-04-15 Thread GIGO .

Dear Nick,
 
I was able to create the keytab successfully; however, I used the credentials of the domain admin instead of the squidadmin account.
 
 
Markus,
 
Please tell me what I am doing wrong that I am unable to create the keytab with the squidadmin account, even though I tried to follow your guidance. What am I missing?
 
 
 
 
please guide
 
regards,
 
Bilal Aslam
 



> From: gi...@msn.com
> To: nick.cairncr...@condenast.co.uk; hua...@moeller.plus.com; 
> squid-users@squid-cache.org
> Date: Thu, 15 Apr 2010 10:17:40 +
> Subject: RE: [squid-users] Re: Re: Creating a kerberos Service Principal.
>
>
> Nick,
>
> I tried but with not much success.
>
> .
> No computer account for squid-http found, creating a new one.
> Error: ldap_add_ext_s failed (Insufficient access)
> Error: ldap_check_account failed (No CSI structure available)
> Error: set_password failed
> -- krb5_cleanup: Destroying Kerberos Context
> -- ldap_cleanup: Disconnecting from LDAP server
> -- init_password: Wiping the computer password structure
> ...
>
>
>
>
> regards,
>
>
> Bilal
> 
>> From: nick.cairncr...@condenast.co.uk
>> To: gi...@msn.com; hua...@moeller.plus.com; squid-users@squid-cache.org
>> Date: Thu, 15 Apr 2010 09:31:40 +0100
>> Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.
>>
>> Bilal,
>>
>> I think we're doing a similar thing here! See my post earlier about SPN. I 
>> think you need to be using the fqdn of the machine in the HTTP/ spn & upn 
>> and not just the domain. Also check your DNS and host local host entries.
>>
>> E.g.: msktutil -c -b "CN=COMPUTERS" -s HTTP/squid1.[mydomain] -k 
>> /etc/squid/HTTP.keytab --computer-name auth1 --upn HTTP/squid1 --server dc1 
>> -verbose
>>
>> Nick
>>
>>
>>
>> On 15/04/2010 07:22, "GIGO ." wrote:
>>
>>
>>
>> Dear Markus/all,
>>
>>
>> I am unable to create the keytab using mskutil please help me out i followed 
>> the following steps:
>>
>> 1. I created a OU and named it UnixOU
>> 2. I created a group account in the UnixOU and named it as UnixAdmins
>> 3. I make my windows account bilal_admin part of UnixAdmins group.
>> 4. I set the settings of UnixOU to be managed by UnixAdmins.
>> 5. Then i synch time of Squid Machine and Active directory.
>> 6. My domain fully qualified domain name is v.local and netbios names is V.
>> 7. My domain controller name is vdc (fqdn=vdc.v.local)
>> 8. The following lines were changed in the krb5.conf while rest being 
>> untouched.
>>
>> [libdefaults]
>> default_realm=V.LOCAL
>>
>>
>> [realms]
>>
>> V.LOCAL = {
>> kdc = vdc.v.local:88
>> admin_server = kerberos.example.com:749 (e.g this not changed does it matter 
>> at the step of creation of keytab)
>> default_domain = example.com (unchanged)
>> }
>>
>>
>>
>>
>> The i run the following commands to create the keytab:
>>
>> kinit squidad...@v.local
>>
>>
>> msktutil -c -b "OU=unixPrincipals" -s HTTP/v.local -h squidLhrTest.v.local 
>> -k /etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/v.local 
>> --server vdc.v.local --verbose
>>
>> Output of the Command:
>>
>> -- init_password: Wiping the computer password structure
>> -- finalize_exec: Determining user principal name
>> -- finalize_exec: User Principal Name is: HTTP/v.lo...@v.local
>> -- create_fake_krb5_conf: Created a fake krb5.conf file: 
>> /tmp/.mskt-3550krb5.conf
>> -- get_krb5_context: Creating Kerberos Context
>> -- try_machine_keytab: Using the local credential cache: 
>> /tmp/.mskt-3550krb5_ccache
>> -- try_machine_keytab: krb5_get_init_creds_keytab failed (Client not found 
>> in Kerberos database)
>> -- try_machine_keytab: Unable to authenticate using the local keytab
>> -- try_ldap_connect: Connecting to LDAP server: vdc.v.local
>> -- try_ldap_connect: Connecting to LDAP server: vdc.v.local
>> SASL/GSSAPI authentication started
>> SASL username: squidad...@v.local
>> SASL SSF: 56
>> SASL installing layers
>> -- ldap_get_base_dn: Determining default LDAP base: dc=v,dc=local
>> Warning: No DNS entry found for squidLhrTest.v.local
>> -- get_short_hostname: Determined short hostname: squidLhrTest-v-local
>> -- finalize_exec: SAM Account Name is: squid-http$
>> Updating all entries for squidLhr

RE: [squid-users] Re: Re: Creating a kerberos Service Principal.

2010-04-15 Thread GIGO .

Nick,
 
I tried but with not much success. 
 
.
No computer account for squid-http found, creating a new one.
Error: ldap_add_ext_s failed (Insufficient access)
Error: ldap_check_account failed (No CSI structure available)
Error: set_password failed
 -- krb5_cleanup: Destroying Kerberos Context
 -- ldap_cleanup: Disconnecting from LDAP server
 -- init_password: Wiping the computer password structure
...
 

 
 
regards,
 
 
Bilal

> From: nick.cairncr...@condenast.co.uk
> To: gi...@msn.com; hua...@moeller.plus.com; squid-users@squid-cache.org
> Date: Thu, 15 Apr 2010 09:31:40 +0100
> Subject: Re: [squid-users] Re: Re: Creating a kerberos Service Principal.
>
> Bilal,
>
> I think we're doing a similar thing here! See my post earlier about SPN. I 
> think you need to be using the fqdn of the machine in the HTTP/ spn & upn and 
> not just the domain. Also check your DNS and host local host entries.
>
> E.g.: msktutil -c -b "CN=COMPUTERS" -s HTTP/squid1.[mydomain] -k 
> /etc/squid/HTTP.keytab --computer-name auth1 --upn HTTP/squid1 --server dc1 
> -verbose
>
> Nick
>
>
>
> On 15/04/2010 07:22, "GIGO ." wrote:
>
>
>
> Dear Markus/all,
>
>
> I am unable to create the keytab using mskutil please help me out i followed 
> the following steps:
>
> 1. I created a OU and named it UnixOU
> 2. I created a group account in the UnixOU and named it as UnixAdmins
> 3. I make my windows account bilal_admin part of UnixAdmins group.
> 4. I set the settings of UnixOU to be managed by UnixAdmins.
> 5. Then i synch time of Squid Machine and Active directory.
> 6. My domain fully qualified domain name is v.local and netbios names is V.
> 7. My domain controller name is vdc (fqdn=vdc.v.local)
> 8. The following lines were changed in the krb5.conf while rest being 
> untouched.
>
> [libdefaults]
> default_realm=V.LOCAL
>
>
> [realms]
>
> V.LOCAL = {
> kdc = vdc.v.local:88
> admin_server = kerberos.example.com:749 (e.g this not changed does it matter 
> at the step of creation of keytab)
> default_domain = example.com (unchanged)
> }
>
>
>
>
> The i run the following commands to create the keytab:
>
> kinit squidad...@v.local
>
>
> msktutil -c -b "OU=unixPrincipals" -s HTTP/v.local -h squidLhrTest.v.local -k 
> /etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/v.local --server 
> vdc.v.local --verbose
>
> Output of the Command:
>
> -- init_password: Wiping the computer password structure
> -- finalize_exec: Determining user principal name
> -- finalize_exec: User Principal Name is: HTTP/v.lo...@v.local
> -- create_fake_krb5_conf: Created a fake krb5.conf file: 
> /tmp/.mskt-3550krb5.conf
> -- get_krb5_context: Creating Kerberos Context
> -- try_machine_keytab: Using the local credential cache: 
> /tmp/.mskt-3550krb5_ccache
> -- try_machine_keytab: krb5_get_init_creds_keytab failed (Client not found in 
> Kerberos database)
> -- try_machine_keytab: Unable to authenticate using the local keytab
> -- try_ldap_connect: Connecting to LDAP server: vdc.v.local
> -- try_ldap_connect: Connecting to LDAP server: vdc.v.local
> SASL/GSSAPI authentication started
> SASL username: squidad...@v.local
> SASL SSF: 56
> SASL installing layers
> -- ldap_get_base_dn: Determining default LDAP base: dc=v,dc=local
> Warning: No DNS entry found for squidLhrTest.v.local
> -- get_short_hostname: Determined short hostname: squidLhrTest-v-local
> -- finalize_exec: SAM Account Name is: squid-http$
> Updating all entries for squidLhrTest.v.local in the keytab 
> /etc/squid/HTTP.keytab
> -- try_set_password: Attempting to reset computer's password
> -- ldap_check_account: Checking that a computer account for squid-http$ exists
> No computer account for squid-http found, creating a new one.
> Error: ldap_add_ext_s failed (Insufficient access)
> Error: ldap_check_account failed (No CSI structure available)
> Error: set_password failed
> -- krb5_cleanup: Destroying Kerberos Context
> -- ldap_cleanup: Disconnecting from LDAP server
> -- init_password: Wiping the computer password structure
>
>
> please help me resolving the issue.
>
> regards,
>
> Bilal Aslam
>
>
>
>
> 
>> To: squid-users@squid-cache.org
>> From: hua...@moeller.plus.com
>> Date: Fri, 9 Apr 2010 08:10:19 +0100
>> Subject: [squid-users] Re: Re: Creating a kerberos Service Principal.
>>
>> Hi Bilal,
>>
>> I create a new OU in Active Directory like OU=UnixPrincipals,DC=... I
>> then create a Windows Group UnixAdministrators and add the W

RE: [squid-users] Re: Re: Creating a kerberos Service Principal.

2010-04-14 Thread GIGO .

Dear Markus/all,
 
 
I am unable to create the keytab using msktutil; please help me out. I followed these steps:
 
1. I created an OU and named it UnixOU.
2. I created a group account in the UnixOU and named it UnixAdmins.
3. I made my Windows account bilal_admin a member of the UnixAdmins group.
4. I set UnixOU to be managed by UnixAdmins.
5. Then I synchronised the time between the squid machine and Active Directory.
6. My domain's fully qualified name is v.local and its NetBIOS name is V.
7. My domain controller's name is vdc (fqdn = vdc.v.local).
8. The following lines were changed in krb5.conf, the rest being left untouched:
 
   [libdefaults]
default_realm=V.LOCAL
 
 
[realms]

V.LOCAL = {
 kdc = vdc.v.local:88
 admin_server = kerberos.example.com:749 (left unchanged; does it matter at the keytab creation step?)
 default_domain = example.com (unchanged)
 }
 
 
 
 
Then I ran the following commands to create the keytab:
 
kinit squidad...@v.local
 
 
msktutil -c -b "OU=unixPrincipals" -s HTTP/v.local -h squidLhrTest.v.local -k 
/etc/squid/HTTP.keytab --computer-name squid-http --upn HTTP/v.local --server 
vdc.v.local --verbose
 
Output of the Command:

 -- init_password: Wiping the computer password structure
 -- finalize_exec: Determining user principal name
 -- finalize_exec: User Principal Name is: HTTP/v.lo...@v.local
 -- create_fake_krb5_conf: Created a fake krb5.conf file: 
/tmp/.mskt-3550krb5.conf
 -- get_krb5_context: Creating Kerberos Context
 -- try_machine_keytab: Using the local credential cache: 
/tmp/.mskt-3550krb5_ccache
 -- try_machine_keytab: krb5_get_init_creds_keytab failed (Client not found in 
Kerberos database)
 -- try_machine_keytab: Unable to authenticate using the local keytab
 -- try_ldap_connect: Connecting to LDAP server: vdc.v.local
 -- try_ldap_connect: Connecting to LDAP server: vdc.v.local
SASL/GSSAPI authentication started
SASL username: squidad...@v.local
SASL SSF: 56
SASL installing layers
 -- ldap_get_base_dn: Determining default LDAP base: dc=v,dc=local
Warning: No DNS entry found for squidLhrTest.v.local
 -- get_short_hostname: Determined short hostname: squidLhrTest-v-local
 -- finalize_exec: SAM Account Name is: squid-http$
Updating all entries for squidLhrTest.v.local in the keytab 
/etc/squid/HTTP.keytab
 -- try_set_password: Attempting to reset computer's password
 -- ldap_check_account: Checking that a computer account for squid-http$ exists
No computer account for squid-http found, creating a new one.
Error: ldap_add_ext_s failed (Insufficient access)
Error: ldap_check_account failed (No CSI structure available)
Error: set_password failed
 -- krb5_cleanup: Destroying Kerberos Context
 -- ldap_cleanup: Disconnecting from LDAP server
 -- init_password: Wiping the computer password structure
 
 
Please help me resolve the issue.
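 
(For comparison, the variant that uses the proxy's own FQDN for -s and --upn rather than the bare domain, which is the form used in the follow-ups to this thread; the hostnames are just my lab names, and the "Insufficient access" error itself still points to missing rights on the OU:)
 
msktutil -c -b "OU=unixPrincipals" -s HTTP/squidLhrTest.v.local -h squidLhrTest.v.local \
  -k /etc/squid/HTTP.keytab --computer-name squid-http \
  --upn HTTP/squidLhrTest.v.local --server vdc.v.local --verbose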
 
regards,
 
Bilal Aslam
 
 



> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Fri, 9 Apr 2010 08:10:19 +0100
> Subject: [squid-users] Re: Re: Creating a kerberos Service Principal.
>
> Hi Bilal,
>
> I create a new OU in Active Directory like OU=UnixPrincipals,DC=... I
> then create a Windows Group UnixAdministrators and add the Windows account
> of the UnixAdministrators to it. Finally I change the permissions on the
> OU=UnixPrincipals so that the members of the group UnixAdministrators have
> full rights (or limited rights ) for objects under this OU.
>
> Regards
> Markus
>
> "GIGO ." wrote in message
> news:snt134-w395b3433738667ded2186eb9...@phx.gbl...
>
> Markus could not get you please can you elaborate a bit.
>
>
> thank you all!
>
> regards,
>
> Bilal
>
> 
>> To: squid-users@squid-cache.org
>> From: hua...@moeller.plus.com
>> Date: Thu, 8 Apr 2010 20:04:30 +0100
>> Subject: [squid-users] Re: Creating a kerberos Service Principal.
>>
>> BTW You do not need Administrator rights. You can set permission for
>> different Groups on OUs for example for Unix Kerberos Admins.
>>
>> Markus
>>
>> "Khaled Blah" wrote in message
>> news:n2j4a3250ab1004080957id2f4a051xb31445428c62b...@mail.gmail.com...
>> Hi Bilal,
>>
>> 1. ktpass and msktutil practically do the same, they create keytabs
>> which include the keys that squid will need to decrypt the ticket it
>> receives from the user. However ktpass only creates a file which you
>> will then have to securely transfer to your proxy server so that squid
>> can access it. Using msktutil on your proxy server, you can get the
>> same keytab without having to transfer it. Thus, msktutil saves you
>> some time and hassle. AFAIR both need "Administrator" rig

RE: [squid-users] ipcCreate error:

2010-04-13 Thread GIGO .

Hi Henrik,
 
Thank you; this problem is resolved by placing squid_kerb_auth in the libexec folder. I now believe that I also have to place any other helpers, such as squid_ldap_group, in the same location to get them to work.
 
 
regards,
 
Bilal 


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Sat, 10 Apr 2010 19:44:31 +0200
> Subject: Re: [squid-users] ipcCreate error:
>
> lör 2010-04-10 klockan 09:23 + skrev GIGO .:
>>
>> I have created a user proxy in Centos from which i am running my squid
>> successfully with all the rights properly configured until i change my
>> configuration file for Negotiate/Kerboros.
>
> Do you have selinux enabled?
>
> Try moving the helper to /usr/libexec/squid/ instead of /usr/sbin/...
>
>>
>> Now i have no idea how to use scripts from within squid.conf. And at which 
>> place should i place this script in the squid.conf in relation to the 
>> following?
>
> Instead of the normal program.
>>
>> auth_param negotiate program /usr/sbin/squid_kerb_auth
>
>
> Regards
> Henrik
> 
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Upgradtion to Squid 3.1.1

2010-04-12 Thread GIGO .

When you upgrade, is it possible to keep using the existing cache directories created by the previous version (squid 3), or do you have to rebuild your cache?
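 
(If a rebuild does turn out to be necessary, my understanding of the procedure, as a sketch, with paths per my existing config:)
 
# stop squid, clear the old cache, recreate the swap directories, start again
squid -k shutdown
rm -rf /var/spool/squid/*
squid -z
service squid start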
 
 
regards,
 
Bilal Aslam   
_
Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Authorization via LDAP group

2010-04-12 Thread GIGO .

Authorizing users via LDAP group:
 
 
The squid_ldap_group man page says that using -D binddn -W secretfile is to be preferred over -D binddn -w password. While that avoids printing the password in plaintext inside squid.conf, doesn't the LDAP query itself still go in clear text over the network? If this is a risk, how should it be handled?

1. Should we create a special account with the minimum rights required to query 
Active Directory?


2. Or should the query be performed over TLS, and if so, how can that be done? (See the sketch after this list.)


3. Allowing anonymous queries can also be configured in Active Directory, but that does not look appropriate, although maybe it is harmless in a completely private setup.
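
(Regarding point 2: before touching squid I would first verify that the DC accepts STARTTLS at all. A sketch using the standard OpenLDAP client, with server name, bind DN, base DN and user as placeholders; I believe squid_ldap_group then has a corresponding -Z switch, per its man page.)

# -ZZ fails unless STARTTLS is negotiated successfully
ldapsearch -ZZ -H ldap://dc1.example.local \
  -D "cn=squid-bind,cn=Users,dc=example,dc=local" -W \
  -b "dc=example,dc=local" "(sAMAccountName=testuser)" cn memberOf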

 
Please your guidance is required. 
 

regards,
Bilal 
  
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969

[squid-users] ipcCreate error:

2010-04-10 Thread GIGO .

 
I have created a user 'proxy' on CentOS, under which I run squid successfully with all rights properly configured, until I change the configuration file for Negotiate/Kerberos.
 
 
I am receiving the following error when trying to start squid:
 
2010/04/09 05:06:12| helperOpenServers: Starting 10/10 'squid_kerb_auth' 
processes
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| ipcCreate: /usr/sbin/squid_kerb_auth: (13) Permission 
denied
2010/04/09 05:06:12| Unlinkd pipe opened on FD 20

 
For troubleshooting I have just installed strace and created a script as per Markus's recommendation.
---
#!/bin/sh

strace -f -F -o /tmp/strace.out.$$ squid_kerb_auth $*
--
 
Now I have no idea how to call a script from within squid.conf. Where should I reference this script in squid.conf, in relation to the following line?
 
auth_param negotiate program /usr/sbin/squid_kerb_auth
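 
(My guess, based on the answer that the wrapper simply replaces the normal program: the script name and location are placeholders, and with SELinux enabled it probably also needs to live under /usr/libexec/squid/.)
 
chmod 755 /usr/libexec/squid/kerb_auth_strace.sh
# in squid.conf, point auth_param at the wrapper instead of the helper itself
auth_param negotiate program /usr/libexec/squid/kerb_auth_strace.sh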
 
 
 
regards,
 
Bilal Aslam
 
 
 
  
_
Hotmail: Trusted email with powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969

RE: [squid-users] Re: Creating a kerberos Service Principal.

2010-04-08 Thread GIGO .

Markus, I could not follow you; please could you elaborate a bit?
 
 
thank you all!
 
regards,
 
Bilal


> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Thu, 8 Apr 2010 20:04:30 +0100
> Subject: [squid-users] Re: Creating a kerberos Service Principal.
>
> BTW You do not need Administrator rights. You can set permission for
> different Groups on OUs for example for Unix Kerberos Admins.
>
> Markus
>
> "Khaled Blah" wrote in message
> news:n2j4a3250ab1004080957id2f4a051xb31445428c62b...@mail.gmail.com...
> Hi Bilal,
>
> 1. ktpass and msktutil practically do the same, they create keytabs
> which include the keys that squid will need to decrypt the ticket it
> receives from the user. However ktpass only creates a file which you
> will then have to securely transfer to your proxy server so that squid
> can access it. Using msktutil on your proxy server, you can get the
> same keytab without having to transfer it. Thus, msktutil saves you
> some time and hassle. AFAIR both need "Administrator" rights, which
> means the account used for ktpass/msktutil needs to be a member of the
> Administrator group.
>
>
> 2. To answer this question, one would need more information about your
> network and your setup. Basically, mixing any other authentication
> method with Kerberos is not a good idea. That's because if the other
> method is insecure or less secure an attacker who gains access to a
> user's credentials will be able to impersonate that user against
> Kerberos and those be able to use ALL services that this user has
> access to. In any case DO NOT use basic auth with Kerberos in a
> public, set-up. That's a recipe for disaster. Digest auth and NTLM
> (v2) might be suitable but these are in fact less secure than Kerberos
> and thus not preferrable. One down-side to Kerberos is that it's an
> "all-or-nothing" service, either you use Kerberos and only Kerberos or
> you risk security breaches in any "mixed" situation.
>
> HTH
>
> Khaled
>
> 2010/4/6 GIGO . :
>>
>> Dear All,
>>
>> Please guide me in regard to SSO setup with Active Directory(No
>> winbind/Samba). I have the following questions in this regard.
>>
>>
>>
>> 1. Creating a Kerberos service principal and keytab file that is used by
>> the Squid what is the effective method? Difference between using Ktpass vs
>> Msktutil package? What rights would i be required in Active Directory and
>> if none then why so?
>>
>>
>>
>>
>>
>>
>> 2. How to configure the fallback Authentication scheme if Kerberos fails?
>> Ldap authentication using basic looks to be an option but isnt it less
>> secure? is there a better approach possible.
>>
>>
>>
>>
>> regards,
>>
>> Bilal Aslam
>> _
>> Hotmail: Powerful Free email with security by Microsoft.
>> https://signup.live.com/signup.aspx?id=60969
>
> 
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969

RE: [squid-users] Re: Re: Re: SSO with Active Directory-Squid Clients

2010-04-08 Thread GIGO .

Hi Markus/Nick,
 
I have chosen the following method of creating the keytab; can you give me your advice/experience regarding it?

1. I have created a user account for the SPN in Active Directory, with "password never expires" and "pre-authentication not required" checked.
 
squidLhr-proxy
Pwd: X

C:\Program Files\Support Tools>
setspn -A HTTP/squidLhr-proxy.v.mcb.com.pk squidLhr-proxy
 
Creating keytab:
ktpass -out c:\squidLhr-proxy.keytab -princ 
HTTP/squidlhr-proxy.v.com...@myrealm.v.com.pk -mapUser V\squidLhr-proxy -mapOp 
set -pass * -crypto DES-CBC-MD5 -pType KRB_NT_PRINCIPAL
 

regards,
 
Bilal 
 
 
 
 
 
 


> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Thu, 8 Apr 2010 20:08:10 +0100
> Subject: [squid-users] Re: Re: Re: SSO with Active Directory-Squid Clients
>
> Hi Nick,
>
> Did you use samba to create the keytab. I have seen that if you use samba
> for more then squid (e.g. cifs, winbind, etc) it will update regularly the
> AD entry and key for the host/fqdn principal which is the same as for
> HTTP/fqdn. I usually use msktutil and create a second AD entry called
> -HTTP to be independent of samba which usually uses
> .
>
> Regards
> Markus
>
> "Nick Cairncross" wrote in message
> news:c7e35da9.1eb06%nick.cairncr...@condenast.co.uk...
> Bilal,
>
> I'm working on much the same thing, with added Apple Mac just to complicate
> things. My aim is to create an SSO environment for all my Windows, OSX and
> nix machines. I want to use Kerberos as my primary authentication as IE7 and
> FF onwards are moving that way..but for my situation some browsers or
> applications do not support this and I must also use NTLM. However, Opera
> on my Macs seems to not like either and prefers Basic.. It's been a struggle
> to get each element to work but not impossible.
>
> I have found that all Negotiate/Kerberos supporting browsers have worked
> extremely well with the helper developed by Markus. Many of the
> authentication breaking elements have disappeared when compared to my Blue
> Coat and ISA experiences. Those machines joined to the domain using browsers
> that support Neg/Kerb work seamlessly with Kerberos - FF and IE - and pass
> through credentials. Mac Safari relies on NTLM and prompts as such. Mac
> Opera prompts for Basic. Therefore if you're just Windows I would answer
> fairly confidently that your question 1 answer is Yes.
>
> Users not on the domain would be prompted for credentials. I haven't tested
> this and depending on which helper you are using (Samba or Squids) and
> whether you're joined to the domain I believe Negotiate should fall back to
> NTLM and work providing you supply a valid domain user/pass! So the answer
> to 2 would be 'depends..' :)
>
> As for the issue of not being to able to use Squid at all and taking into
> account what I said earlier, then yes there could be a scenario where Squid
> will not work for your users. However, it is less of a problem in just
> Windows. It's all about testing your various Windows configurations, apps
> and browsers until you are sure you have covered the conceivable setups of
> all your users.
> Finally, I have been struggling against an issue where my KVNO Keytab
> increments in AD and gets out of sync with the exported version making Squid
> un-useable until it's regenerated. Have you experienced this? Happy to
> discuss any of this off list or on.
>
> Cheers,
> Nick
>
>
>
> On 08/04/2010 04:06, "GIGO ." wrote:
>
>
>
> If i select negotiate/Kerberos as authentication protocol for my Squid on
> Linux and configure no FallBack Authentication.what would be the consequence
> ?
>
>
>
> 1. Isnt it that all of my users who have logged into Active Directory and
> where browser is supported will be able to use squid?
>
>
>
> 2. Only those users who will try to use squid from a workgroup giving their
> domain passoword (domainname/userid) will fail as there will be no fallback
> aviablable.
>
>
>
> 3. Is there any other scenario in which these users will not be able to use
> squid?
>
>
>
> I would be really thankful if you guide me further as i am failing to
> understand why a fallback authentication is necessary if it is. What could
> be the scenario when windows clients have no valid TGT even if they are
> login to the domain? I hope you can understand me and help me to clear my
> self.
>
>
> regards,
>
> Bilal Aslam
>
>
>
>
>
>
>
>
>
> 
>> To: squid-users@squid-cache.org
>> From: hua...@moeller.plus.com
>> Date: Wed, 7 Apr 2010 20:17:20 +0100
>> Su

RE: [squid-users] Re: Re: SSO with Active Directory-Squid Clients

2010-04-08 Thread GIGO .

Nick,
 
Thank you so much for your support. I am now much more confident about Negotiate/Kerberos and have decided to jump into the real thing (enough theory). As for the KVNO issue, I have not experienced it yet (as I have not implemented this in practice), but I may in due course, and I will surely share it with you; in fact I will share my whole experience. 
 
regards,
 
Bilal
 
 
 
 



> From: nick.cairncr...@condenast.co.uk
> To: gi...@msn.com; hua...@moeller.plus.com; squid-users@squid-cache.org
> Date: Thu, 8 Apr 2010 10:17:13 +0100
> Subject: Re: [squid-users] Re: Re: SSO with Active Directory-Squid Clients
>
> Bilal,
>
> I'm working on much the same thing, with added Apple Mac just to complicate 
> things. My aim is to create an SSO environment for all my Windows, OSX and 
> nix machines. I want to use Kerberos as my primary authentication as IE7 and 
> FF onwards are moving that way..but for my situation some browsers or 
> applications do not support this and I must also use NTLM. However, Opera on 
> my Macs seems to not like either and prefers Basic.. It's been a struggle to 
> get each element to work but not impossible.
>
> I have found that all Negotiate/Kerberos supporting browsers have worked 
> extremely well with the helper developed by Markus. Many of the 
> authentication breaking elements have disappeared when compared to my Blue 
> Coat and ISA experiences. Those machines joined to the domain using browsers 
> that support Neg/Kerb work seamlessly with Kerberos - FF and IE - and pass 
> through credentials. Mac Safari relies on NTLM and prompts as such. Mac Opera 
> prompts for Basic. Therefore if you're just Windows I would answer fairly 
> confidently that your question 1 answer is Yes.
>
> Users not on the domain would be prompted for credentials. I haven't tested 
> this and depending on which helper you are using (Samba or Squids) and 
> whether you're joined to the domain I believe Negotiate should fall back to 
> NTLM and work providing you supply a valid domain user/pass! So the answer to 
> 2 would be 'depends..' :)
>
> As for the issue of not being to able to use Squid at all and taking into 
> account what I said earlier, then yes there could be a scenario where Squid 
> will not work for your users. However, it is less of a problem in just 
> Windows. It's all about testing your various Windows configurations, apps and 
> browsers until you are sure you have covered the conceivable setups of all 
> your users.
> Finally, I have been struggling against an issue where my KVNO Keytab 
> increments in AD and gets out of sync with the exported version making Squid 
> un-useable until it's regenerated. Have you experienced this? Happy to 
> discuss any of this off list or on.
>
> Cheers,
> Nick
>
>
>
> On 08/04/2010 04:06, "GIGO ." wrote:
>
>
>
> If i select negotiate/Kerberos as authentication protocol for my Squid on 
> Linux and configure no FallBack Authentication.what would be the consequence ?
>
>
>
> 1. Isnt it that all of my users who have logged into Active Directory and 
> where browser is supported will be able to use squid?
>
>
>
> 2. Only those users who will try to use squid from a workgroup giving their 
> domain passoword (domainname/userid) will fail as there will be no fallback 
> aviablable.
>
>
>
> 3. Is there any other scenario in which these users will not be able to use 
> squid?
>
>
>
> I would be really thankful if you guide me further as i am failing to 
> understand why a fallback authentication is necessary if it is. What could be 
> the scenario when windows clients have no valid TGT even if they are login to 
> the domain? I hope you can understand me and help me to clear my self.
>
>
> regards,
>
> Bilal Aslam
>
>
>
>
>
>
>
>
>
> 
>> To: squid-users@squid-cache.org
>> From: hua...@moeller.plus.com
>> Date: Wed, 7 Apr 2010 20:17:20 +0100
>> Subject: Re: [squid-users] Re: Re: SSO with Active Directory-Squid Clients
>>
>> Sorry I knew that but forgot to mention that I was talking about the Unix
>> version.
>>
>> Thank you
>> Markus
>>
>> "Guido Serassio" wrote in message
>> news:58fd293ce494af419a59ef7e597fa4e6400...@hermes.acmeconsulting.loc...
>> Hi Markus,
>>
>>> If you have a Windows client and the proxy send WWW-Proxy-Authorize:
>>> Negotiate the Windows client will try first to get a Kerberos ticket
>> and
>>> if that succeeds sends a Negotiate response with a Kerberos token to
>> the
>>> proxy.
>>>

RE: [squid-users] Re: Re: SSO with Active Directory-Squid Clients

2010-04-07 Thread GIGO .

If I select Negotiate/Kerberos as the authentication protocol for my Squid on Linux 
and configure no fallback authentication, what would be the consequences?
 
 
 
1. Isn't it the case that all of my users who have logged into Active Directory, 
and whose browser supports it, will be able to use Squid?
 
 
 
2. Only those users who try to use Squid from a workgroup machine, supplying their 
domain password (domainname/userid), will fail, as there will be no fallback 
available.
 
 
 
3. Is there any other scenario in which these users will not be able to use 
Squid?
 
 
 
I would be really thankful if you could guide me further, as I am failing to 
understand why fallback authentication is necessary, if indeed it is. In what 
scenario would Windows clients have no valid TGT even though they are logged into 
the domain? I hope you can understand me and help me clear this up.
 
 
regards,
 
Bilal Aslam
 
 
 
 
 
 
 



> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Wed, 7 Apr 2010 20:17:20 +0100
> Subject: Re: [squid-users] Re: Re: SSO with Active Directory-Squid Clients
>
> Sorry I knew that but forgot to mention that I was talking about the Unix
> version.
>
> Thank you
> Markus
>
> "Guido Serassio" wrote in message
> news:58fd293ce494af419a59ef7e597fa4e6400...@hermes.acmeconsulting.loc...
> Hi Markus,
>
>> If you have a Windows client and the proxy send WWW-Proxy-Authorize:
>> Negotiate the Windows client will try first to get a Kerberos ticket
> and
>> if that succeeds sends a Negotiate response with a Kerberos token to
> the
>> proxy.
>> If the Windows client fails to get a Kerberos ticket the client will
> send
>> a Negotiate response with a NTLM token to the proxy. Unfortunately
> there> is yet no squid helper which can handle both a
> Negotiate/Kerberos response
>> and a Negotiate/NTLM response (although maybe the samba ntlm helper
> can).> So there is a fallback when you use Negotiate, but it has some
> caveats.
>
> This is not true when Squid is running on Windows: the Windows native
> Negotiate Helper can handle both Negotiate/Kerberos and Negotiate/NTLM
> responses.
>
> Regards
>
>
> Guido Serassio
> Acme Consulting S.r.l.
> Microsoft Gold Certified Partner
> VMware Professional Partner
> Via Lucia Savarino, 1 10098 - Rivoli (TO) - ITALY
> Tel. : +39.011.9530135 Fax. : +39.011.9781115
> Email: guido.seras...@acmeconsulting.it
> WWW: http://www.acmeconsulting.it
>
> 
_
Hotmail: Trusted email with powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
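For reference, a minimal squid.conf sketch of the Negotiate/Kerberos-only case 
being discussed (no fallback scheme configured); the helper path and service 
principal are placeholders, and the keytab is normally pointed to with the 
KRB5_KTNAME environment variable in the Squid startup script:

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/squidproxy.example.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb_users proxy_auth REQUIRED
http_access allow kerb_users
http_access deny all

With only these lines, domain-joined clients whose browsers speak Negotiate 
authenticate silently; a client that cannot obtain a Kerberos ticket is left with 
a proxy authentication challenge it cannot answer.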

[squid-users] Negotiate/NTLM Authentication a safer option then Negotiate/Kerberos??

2010-04-06 Thread GIGO .

Hi All,
 
In our environment we are currently using ISA Server with user-based 
authentication. We are using Windows 2003 Active Directory, and almost all of our 
users run a Windows-based OS. We want to migrate our users to Squid seamlessly. 
I have not yet reached any conclusion despite a lot of study, effort and Squid 
support, so I would like you to guide me in detail, please.
 
If Negotiate/Kerberos in Squid has the limitation that its only fallback scheme 
is Basic/LDAP, then isn't it a safer option to use Negotiate/NTLM if all users 
belong to Microsoft Active Directory only?
 
 
 
 
Since every logged-in domain user will always possess a valid NTLM token, even if 
they do not have a valid Kerberos ticket, this scheme should not require any 
fallback authentication mechanism to be defined. I would probably need to 
enumerate Active Directory users through some mechanism (which I am not sure 
about at this moment) to get this scheme working. Am I right? Please guide me in 
detail.
 
 
 
Another thing that is confusing me: will the NTLM token (and hence the user's 
credentials), like Kerberos, be passed to Squid automatically, so that the user 
never needs to give a password explicitly? Am I right?
 
 
 
What will happen if the user is not logged into the domain but is on a workstation 
that is part of a workgroup? I assume that in that case a password prompt will 
appear, the user will give his/her credentials in domainname/user format, and 
that will work?

 
 
 
 
 
 
 
regards,
 
Bilal Aslam   
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969
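As a point of comparison, a rough sketch of the usual Samba-based NTLM setup (this 
assumes Samba/winbind is installed and the proxy has been joined to the domain, 
e.g. with 'net ads join'; the helper path is the common Debian/Ubuntu location and 
may differ on other systems):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on
acl ad_users proxy_auth REQUIRED
http_access allow ad_users
http_access deny all

In this scheme, browsers on domain-joined Windows machines normally send the NTLM 
token automatically, while users on workgroup machines are typically prompted and 
can enter DOMAIN\user credentials, which is roughly the behaviour asked about 
above.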

RE: [squid-users] Re: Re: SSO with Active Directory-Squid Clients

2010-04-06 Thread GIGO .

Dear Markus,
 
 
That cleared up and explained a lot for me and has given me direction for 
developing a better understanding of the whole concept. Thanks a lot.
 
 
 
regards,
 
Bilal 
 
 



> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Tue, 6 Apr 2010 20:14:32 +0100
> Subject: [squid-users] Re: Re: SSO with Active Directory-Squid Clients
>
> Hi Bilal,
>
> It is a bit more complicated. it is not a pure Kerberos authentication but
> a Negotiate/Kerberos authentication.
>
> If you have a Windows client and the proxy send WWW-Proxy-Authorize:
> Negotiate the Windows client will try first to get a Kerberos ticket and if
> that succeeds sends a Negotiate response with a Kerberos token to the proxy.
> If the Windows client fails to get a Kerberos ticket the client will send a
> Negotiate response with a NTLM token to the proxy. Unfortunately there is
> yet no squid helper which can handle both a Negotiate/Kerberos response and
> a Negotiate/NTLM response (although maybe the samba ntlm helper can). So
> there is a fallback when you use Negotiate, but it has some caveats.
>
> Regarding your second point I can not really judge which one is better I
> think it will depend on your environment.
>
> Regards
> Markus
>
> "GIGO ." wrote in message
> news:snt134-w101cbed44254f957cda154b9...@phx.gbl...
>
> Dear Markus,
>
> Please i have few confusions which i want to satisfy.
>
> 1. If kerberos Authentication fails then what would be the fallback behavior
> would the Basic authentication to Ldap will be used instead? Does it need to
> be defined? what is the best strategy as Basic Authentication will be in
> clear text. In microsoft Environment the fallback is to NTLM authentication
> if kerberos fails isnt it a better strategy.
>
>
>
> 2. Isnt it better to use the combinition of kerberos/ldap only for SSO with
> active directory? Why winbind/Samba is referred in many tutorials while to
> me it look redundant? does it give any additional benefit or is it more
> stable? can u please enlighten me.
>
>
>
>
> regards,
> Bilal
>
> 
>> To: squid-users@squid-cache.org
>> From: hua...@moeller.plus.com
>> Date: Sat, 3 Apr 2010 13:34:15 +0100
>> Subject: [squid-users] Re: SSO with Active Directory-Squid Clients
>>
>> Have a look at
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos and
>> http://sourceforge.net/projects/squidkerbauth/files/squidkerbldap/squid_kerb_ldap-1.2.1/squid_kerb_ldap-1.2.1.tar.gz/download
>>
>> Regards
>> Markus
>>
>> "GIGO ." wrote in message
>> news:snt134-w171836624ce7937ad90d3eb9...@phx.gbl...
>>
>> Dear All/Amos,
>>
>> I want to allow certain(not all) Active Directory users to use squid by
>> way
>> of SSO with Active Directory. So means when any one from those specific
>> users will login into Active Directory they should have automatically
>> access
>> to internet via Squid Proxy. Other AD users which have not permissions
>> granted in Squid will be disallowed. Is it possible? How please guide in
>> detail.
>>
>>
>> This was my assumption of how it would be done:
>>
>> I needed to compile squid with these additional
>> options --enable-basic-auth-helpers="LDAP" 
>> --enable-auth="basic,negotiate,ntlm"
>> --enable-external-acl-helpers="wbinfo_group,ldap_group" 
>> --enable-negotiate-auth-helpers="squid_kerb_auth"
>> Right??
>>
>>
>> I need to configure krb5.conf to point to AD as Default_realm on CENTOS
>> 5.4
>> to right?
>>
>>
>> I think that i must need to make Centos 5.4 member of the domain? Am i
>> right
>> or its not necessary
>>
>>
>> How these specific AD users(with internet access allowed) will be
>> told/mentioned to the squid?
>>
>>
>>
>> I have also studied your article
>> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap?action=print
>>
>> However this is allowing all(not specific) Active Directory or LDAP users
>> internet access. This logic is just checking the validity of user account
>> with Active directory by popping up a login/password and if succeeded
>> network access is granted. Am i right?
>>
>>
>>
>> Bottom line is that i am completely lost and have not much idea what and
>> how
>> to do it. We previously are using Microsoft ISA server and are about to
>> move
>> to Squid and this requirement is very necessary.
>>
>>
>> regards,
>>
>> Bilal Aslam
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> _
>> Hotmail: Free, trusted and rich email service.
>> https://signup.live.com/signup.aspx?id=60969
>>
>>
> _
> Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
> https://signup.live.com/signup.aspx?id=60969
>
> 
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Creating a kerberos Service Principal.

2010-04-06 Thread GIGO .

Dear All,
 
Please guide me regarding an SSO setup with Active Directory (no winbind/Samba). 
I have the following questions in this regard.
 
 
 
1. What is the most effective method of creating the Kerberos service principal 
and keytab file used by Squid? What is the difference between using ktpass and 
the msktutil package? What rights would I need in Active Directory, and if none, 
why not?
 
 
 
 


2. How do I configure the fallback authentication scheme if Kerberos fails? LDAP 
authentication using Basic looks like an option, but isn't it less secure? Is a 
better approach possible?
 
 
 
 
regards,
 
Bilal Aslam   
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969
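As an illustration of the ktpass route for question 1 (msktutil is the alternative 
covered on the Kerberos wiki page referenced elsewhere in this thread), something 
along these lines is typically run on a Windows 2003 domain controller against a 
dedicated AD account created for the proxy; every name below is a placeholder:

ktpass -princ HTTP/squidproxy.example.com@EXAMPLE.COM -mapuser squidproxy -crypto RC4-HMAC-NT -ptype KRB5_NT_PRINCIPAL -pass * -out squid.keytab

The resulting keytab is copied to the Squid box and pointed to via the KRB5_KTNAME 
environment variable; creating the account and mapping the SPN onto it is the part 
that requires rights in Active Directory.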

RE: [squid-users] Re: SSO with Active Directory-Squid Clients

2010-04-05 Thread GIGO .

Dear Markus,
 
Please, I have a few points of confusion which I want to resolve.
 
1. If Kerberos authentication fails, what would the fallback behaviour be? Would 
Basic authentication against LDAP be used instead? Does it need to be defined? 
What is the best strategy, given that Basic authentication sends credentials in 
clear text? In a Microsoft environment the fallback is to NTLM authentication if 
Kerberos fails; isn't that a better strategy?
 
 
 
2. Isn't it better to use the combination of Kerberos/LDAP only for SSO with 
Active Directory? Why is winbind/Samba referred to in many tutorials, when to me 
it looks redundant? Does it give any additional benefit, or is it more stable? 
Can you please enlighten me?
 
 
 
 
regards,
Bilal


> To: squid-users@squid-cache.org
> From: hua...@moeller.plus.com
> Date: Sat, 3 Apr 2010 13:34:15 +0100
> Subject: [squid-users] Re: SSO with Active Directory-Squid Clients
>
> Have a look at
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos and
> http://sourceforge.net/projects/squidkerbauth/files/squidkerbldap/squid_kerb_ldap-1.2.1/squid_kerb_ldap-1.2.1.tar.gz/download
>
> Regards
> Markus
>
> "GIGO ." wrote in message
> news:snt134-w171836624ce7937ad90d3eb9...@phx.gbl...
>
> Dear All/Amos,
>
> I want to allow certain(not all) Active Directory users to use squid by way
> of SSO with Active Directory. So means when any one from those specific
> users will login into Active Directory they should have automatically access
> to internet via Squid Proxy. Other AD users which have not permissions
> granted in Squid will be disallowed. Is it possible? How please guide in
> detail.
>
>
> This was my assumption of how it would be done:
>
> I needed to compile squid with these additional
> options --enable-basic-auth-helpers="LDAP" 
> --enable-auth="basic,negotiate,ntlm"
> --enable-external-acl-helpers="wbinfo_group,ldap_group" 
> --enable-negotiate-auth-helpers="squid_kerb_auth"
> Right??
>
>
> I need to configure krb5.conf to point to AD as Default_realm on CENTOS 5.4
> to right?
>
>
> I think that i must need to make Centos 5.4 member of the domain? Am i right
> or its not necessary
>
>
> How these specific AD users(with internet access allowed) will be
> told/mentioned to the squid?
>
>
>
> I have also studied your article
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap?action=print
>
> However this is allowing all(not specific) Active Directory or LDAP users
> internet access. This logic is just checking the validity of user account
> with Active directory by popping up a login/password and if succeeded
> network access is granted. Am i right?
>
>
>
> Bottom line is that i am completely lost and have not much idea what and how
> to do it. We previously are using Microsoft ISA server and are about to move
> to Squid and this requirement is very necessary.
>
>
> regards,
>
> Bilal Aslam
>
>
>
>
>
>
>
>
>
>
> _
> Hotmail: Free, trusted and rich email service.
> https://signup.live.com/signup.aspx?id=60969
>
> 
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969

[squid-users] SSO with Active Directory-Squid Clients

2010-04-03 Thread GIGO .

Dear All/Amos,
 
I want to allow certain (not all) Active Directory users to use Squid by way of 
SSO with Active Directory. That means that when any of those specific users logs 
into Active Directory, they should automatically have access to the internet via 
the Squid proxy. Other AD users who have not been granted permission in Squid 
will be disallowed. Is this possible? Please guide me in detail.
 
 
This was my assumption of how it would be done:
 
I needed to compile squid with these additional options 
--enable-basic-auth-helpers="LDAP" --enable-auth="basic,negotiate,ntlm" 
--enable-external-acl-helpers="wbinfo_group,ldap_group" 
--enable-negotiate-auth-helpers="squid_kerb_auth"
Right?? 
 
 
I need to configure krb5.conf on CentOS 5.4 to point to AD as the default_realm, 
right?
 
 
I think I must make the CentOS 5.4 machine a member of the domain? Am I right, or 
is that not necessary?
 
 
How will these specific AD users (those allowed internet access) be made known to 
Squid?
 
 
 
I have also studied your article 
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap?action=print
 
However, this allows all (not just specific) Active Directory or LDAP users 
internet access. That logic just checks the validity of the user account against 
Active Directory by popping up a login/password prompt, and if that succeeds, 
network access is granted. Am I right?
 
 
 
The bottom line is that I am completely lost and do not have much idea of what to 
do or how to do it. We are currently using Microsoft ISA Server and are about to 
move to Squid, and this requirement is essential.
 
 
regards,
 
Bilal Aslam
 
 
 
 





  
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969
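A hedged sketch of how the "only certain AD users" requirement is usually handled 
with the squid_kerb_ldap helper linked above: Negotiate/Kerberos authenticates the 
user, then an external ACL checks membership of an AD group. The group name and 
helper path below are placeholders:

external_acl_type ad_internet ttl=3600 negative_ttl=3600 %LOGIN /usr/lib/squid/squid_kerb_ldap -g InternetUsers
acl internet_users external ad_internet
http_access allow internet_users
http_access deny all

Users outside the InternetUsers group still authenticate successfully but are then 
denied by the external ACL, which gives the selective-access behaviour asked for.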

[squid-users] Upgrade to 3.1.1

2010-04-02 Thread GIGO .

Is it possible to upgrade from Squid 3.0 to Squid 3.1.1 by applying a patch/diff? 
Is there any howto available that can be referred to? Should everybody upgrade?
_
Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Squid Reporting.

2010-03-31 Thread GIGO .

Is there a trick to tracing cache hits and cache misses in SARG, in a more 
readable format, and also, in detailed and summarised form, how much data has 
come through the cache? Or do I have to use some other tool, and if so, which one?
 
What are the best reporting tools to use with Squid? Can someone give a 
suggestion, please?
 
regards,
 
Bilal Aslam

  
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969
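If SARG does not break hits and misses out the way you want, a quick hedged 
alternative is to count the result codes straight from the native access.log 
(field 4 is the Squid result, e.g. TCP_HIT/200, and field 5 is the size in bytes); 
adjust the log path to your own setup:

awk '{split($4, a, "/"); n[a[1]]++; b[a[1]] += $5} END {for (k in n) print k, n[k], "requests,", b[k], "bytes"}' /var/log/squid/access.log

This prints one line per result type (TCP_HIT, TCP_MISS, TCP_DENIED, ...) with the 
request count and the volume of data served for each.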

[squid-users] DNS Related Problem resolved your further guidance is required.

2010-03-30 Thread GIGO .

Dear Amos,
 
This problem was resolved by disabling the following lines in my setup:
 
#Define Local Servers
# acl localServers dst 10.0.0.0/8
# Local server should never be forwarded to neighbour/peers and they should 
never be cached.
#always_direct allow localservers
#cache deny LocalServers

By disabling these directives no DNS server is required at all, as the ISA 
cache_peer is doing the trick now and the ISA server's DNS settings (whatever 
they are) are being used instead, right?
 
OK, what was happening when these lines were not commented out was that Squid was 
trying to evaluate the above ACL on every request; I do not have a very clear 
picture of this. Shouldn't it have been able to resolve DNS easily through the 
settings in my /etc/resolv.conf? Or was it in reality trying to use the DNS 
configuration of the ISA server, which has external DNS servers configured and 
therefore has no idea of the local network? What is the behaviour? Please guide 
me.
 
 
 
 
However, I just wonder what good these lines are when users on your local network 
are bound to reach local servers directly anyway, by configuring their browsers 
with the "no proxy/bypass proxy for local network web servers" setting. Is there 
a way to reach even local servers through the proxy? I have developed the 
understanding that for local servers you have to bypass the Squid proxy.
 
 
 
 
Please enlighten me.
 
 
Thanks in advance
 
regards,
 
Bilal Aslam
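For what it is worth, a minimal sketch of how the per-request lookups can be 
avoided while keeping a local-servers rule, assuming an internal DNS server 
(10.1.1.10 is a placeholder) and local sites sharing a common domain suffix; 
matching by dstdomain means Squid does not have to resolve every destination to 
an IP address first:

# resolver that knows the internal names
dns_nameservers 10.1.1.10
# match local sites by domain instead of by resolved address
acl localServers dstdomain .example.local
always_direct allow localServers
cache deny localServers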
 

> From: gi...@msn.com
> To: squ...@treenet.co.nz; squid-users@squid-cache.org
> Date: Tue, 30 Mar 2010 05:43:48 +
> Subject: RE: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts 
> please help.
>
>
> Dear Amos,
>
> Thank you so much i will try troubleshooting on the lines you suggested.
>
>
> regards,
>
> Bilal Aslam
>
> 
>> Date: Tue, 30 Mar 2010 17:05:50 +1300
>> From: squ...@treenet.co.nz
>> To: squid-users@squid-cache.org
>> Subject: Re: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts 
>> please help.
>>
>> GIGO . wrote:
>>> I am using ISA server as cache_peer parent and runing multiple instances on 
>>> my squid Sever. However i am failing to understand that why the behaviour 
>>> of Squid is extremely slow. At home where i have direct access to internet 
>>> the same setup works fine.Please somebody help me out
>>>
>>> regards,
>>>
>>> Bilal Aslam
>>>
>>
>> First thing to check is access times on the ISA and whether the problem
>> is actually Squid or something else down the software chain.
>>
>> Extremely slow times are usually the result of DNS failures. Each of the
>> proxies needs to do its own lookups, so any small failure will compound
>> into a big delay very fast.
>>
>> Your squid does its own DNS lookup on every request to figure out if
>> it's part of localservers ACL or not (in both the always_direct and
>> cache access controls).
>>
>> Amos
>>
>>>
>>> ---
>>> My squid server has internet access by being a secureNat client of ISA 
>>> Server.
>>>
>>> My Configuration file for first Instance:
>>> visible_hostname squidLhr
>>> unique_hostname squidMain
>>> pid_filename /var/run/squid.pid
>>> http_port 8080
>>> icp_port 0
>>> snmp_port 3161
>>> access_log /var/logs/access.log squid
>>> cache_log /var/logs/cache.log
>>> cache_store_log /var/logs/store.log
>>> cache_effective_user proxy
>>> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query
>>> prefer_direct off
>>> # never_direct allow all (handy to test that if the processes are working 
>>> in collaboration)
>>>
>>> cache_dir aufs /var/spool/squid 1 16 256
>>> coredump_dir /var/spool/squid
>>> cache_swap_low 75
>>> cache_replacement_policy lru
>>> refresh_pattern ^ftp: 1440 20% 10080
>>> refresh_pattern ^gopher: 1440 0% 1440
>>> refresh_pattern . 0 20% 4320
>>> acl manager proto cache_object
>>> acl localhost src 127.0.0.1/32
>>> acl to_localhost dst 127.0.0.0/8
>>> #Define Local Network.
>>> acl FcUsr src "/etc/squid/FcUsr.conf"
>>> acl PUsr src "/etc/squid/PUsr.conf"
>>> acl RUsr src "/etc/squid/RUsr.conf"
>>> #Define Local Servers
>>> acl localServers dst 10.0.0.0/8
>>> #Defining & allowing ports section
>>> acl SSL_ports port 443 #https
>>> acl Safe_ports port 80 # htt

RE: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts please help.

2010-03-29 Thread GIGO .

Dear Amos,

Thank you so much, I will try troubleshooting along the lines you suggested.


regards,

Bilal Aslam


> Date: Tue, 30 Mar 2010 17:05:50 +1300
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts 
> please help.
>
> GIGO . wrote:
>> I am using ISA server as cache_peer parent and runing multiple instances on 
>> my squid Sever. However i am failing to understand that why the behaviour of 
>> Squid is extremely slow. At home where i have direct access to internet the 
>> same setup works fine.Please somebody help me out
>>
>> regards,
>>
>> Bilal Aslam
>>
>
> First thing to check is access times on the ISA and whether the problem
> is actually Squid or something else down the software chain.
>
> Extremely slow times are usually the result of DNS failures. Each of the
> proxies needs to do its own lookups, so any small failure will compound
> into a big delay very fast.
>
> Your squid does its own DNS lookup on every request to figure out if
> it's part of localservers ACL or not (in both the always_direct and
> cache access controls).
>
> Amos
>
>>
>> ---
>> My squid server has internet access by being a secureNat client of ISA 
>> Server.
>>
>> My Configuration file for first Instance:
>> visible_hostname squidLhr
>> unique_hostname squidMain
>> pid_filename /var/run/squid.pid
>> http_port 8080
>> icp_port 0
>> snmp_port 3161
>> access_log /var/logs/access.log squid
>> cache_log /var/logs/cache.log
>> cache_store_log /var/logs/store.log
>> cache_effective_user proxy
>> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query
>> prefer_direct off
>> # never_direct allow all (handy to test that if the processes are working in 
>> collaboration)
>>
>> cache_dir aufs /var/spool/squid 1 16 256
>> coredump_dir /var/spool/squid
>> cache_swap_low 75
>> cache_replacement_policy lru
>> refresh_pattern ^ftp: 1440 20% 10080
>> refresh_pattern ^gopher: 1440 0% 1440
>> refresh_pattern . 0 20% 4320
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32
>> acl to_localhost dst 127.0.0.0/8
>> #Define Local Network.
>> acl FcUsr src "/etc/squid/FcUsr.conf"
>> acl PUsr src "/etc/squid/PUsr.conf"
>> acl RUsr src "/etc/squid/RUsr.conf"
>> #Define Local Servers
>> acl localServers dst 10.0.0.0/8
>> #Defining & allowing ports section
>> acl SSL_ports port 443 #https
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> # Only allow cachemgr access from localhost
>> http_access allow manager localhost
>> http_access deny manager
>> # Deny request to unknown ports
>> http_access deny !Safe_ports
>> # Deny request to other than SSL ports
>> http_access deny CONNECT !SSL_ports
>> #Allow access from localhost
>> http_access allow localhost
>> # Local server should never be forwarded to neighbour/peers and they should 
>> never be cached.
>> always_direct allow localservers
>> cache deny LocalServers
>> # Windows Update Section...
>> acl windowsupdate dstdomain windowsupdate.microsoft.com
>> acl windowsupdate dstdomain .update.microsoft.com
>> acl windowsupdate dstdomain download.windowsupdate.com
>> acl windowsupdate dstdomain redir.metaservices.microsoft.com
>> acl windowsupdate dstdomain images.metaservices.microsoft.com
>> acl windowsupdate dstdomain c.microsoft.com
>> acl windowsupdate dstdomain www.download.windowsupdate.com
>> acl windowsupdate dstdomain wustat.windows.com
>> acl windowsupdate dstdomain crl.microsoft.com
>> acl windowsupdate dstdomain sls.microsoft.com
>> acl windowsupdate dstdomain productactivation.one.microsoft.com
>> acl windowsupdate dstdomain ntservicepack.microsoft.com
>> acl wuCONNECT dstdomain www.update.microsoft.com
>> acl wuCONNECT dstdomain sls.microsoft.com
>> http_access allow CONNECT wuCONNECT FcUsr
>> http_access allow CONNECT wuCONNECT PUsr
>&

[squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts please help.

2010-03-29 Thread GIGO .

I am using ISA Server as a cache_peer parent and running multiple instances on my 
Squid server. However, I am failing to understand why Squid's behaviour is 
extremely slow. At home, where I have direct access to the internet, the same 
setup works fine. Please, somebody, help me out.
 
regards,
 
Bilal Aslam
 
 
---
My squid server has internet access by being a secureNat client of ISA Server.
 
My Configuration file for first Instance:
visible_hostname squidLhr
unique_hostname squidMain
pid_filename /var/run/squid.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/access.log squid
cache_log /var/logs/cache.log
cache_store_log /var/logs/store.log
cache_effective_user proxy 
cache_peer 127.0.0.1  parent 3128 0 default no-digest no-query
prefer_direct off 
# never_direct allow all (handy to test that if the processes are working in 
collaboration)

cache_dir aufs /var/spool/squid 1 16 256
coredump_dir /var/spool/squid
cache_swap_low 75
cache_replacement_policy lru
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern . 0 20% 4320
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
#Define Local Network.
acl FcUsr src "/etc/squid/FcUsr.conf"
acl PUsr src "/etc/squid/PUsr.conf"
acl RUsr src "/etc/squid/RUsr.conf"
#Define Local Servers
acl localServers dst 10.0.0.0/8
#Defining & allowing ports section
acl SSL_ports port 443  #https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny request to unknown ports
http_access deny !Safe_ports
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
#Allow access from localhost
http_access allow localhost
# Local server should never be forwarded to neighbour/peers and they should 
never be cached.
always_direct allow localservers
cache deny LocalServers
# Windows Update Section...
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT FcUsr
http_access allow CONNECT wuCONNECT PUsr
http_access allow CONNECT wuCONNECT RUsr
http_access allow CONNECT wuCONNECT localhost
http_access allow windowsupdate all
http_access allow windowsupdate localhost
acl workinghours time MTWHF 09:00-12:59
acl workinghours time MTWHF 15:00-17:00
acl BIP dst "/etc/squid/Blocked.conf"
Definitions for BlockingRules#
###Definition of MP3/MPEG
acl FTP proto FTP
acl MP3url urlpath_regex \.mp3(\?.*)?$
acl Movies rep_mime_type video/mpeg
acl MP3s rep_mime_type audio/mpeg
###Definition of Flash Video
acl deny_rep_mime_flashvideo rep_mime_type video/flv
###Definition of  Porn
acl Sex urlpath_regex sex
acl PornSites url_regex "/etc/squid/pornlist"
Definition of YouTube.
## The videos come from several domains
acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com
###Definition of FaceBook
acl facebook_sites dstdomain .facebook.com
 Definition of MSN Messenger
acl msn urlpath_regex -i gateway.dll
acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
acl msn1 req_mime_type application/x-msn-messenger
Definition of Skype
acl numeric_IPs url_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype^
##Definition of Yahoo! Messenger
acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
acl ym dstdomain .us.il.yimg.com .msg.yahoo.com .pager.yahoo.com
acl ym dstdomain .rareedge.com .ytunnelpro.com .chat.yahoo.com
acl ym dstdomain .voice.yahoo.com
acl ymregex url_regex yupdater.yim ymsgr myspaceim
## Other protocols Yahoo!Messenger uses ??
acl ym dstdomain .skype.com .imvu.com
###Definition for Disallowing download of executa

[squid-users] cache_peer

2010-03-28 Thread GIGO .

I want my second peer to be used only if my first listed cache peer goes down.
 
 
-
cache_peer 127.0.0.1 3128 0 default no-digest no-query no-delay (only if this 
is unavailable then the second one listed is used)
 
cache_peer 10.1.82.205 8080 0 default proxy-only no-query no-digest default
 
 
Please guide me on how to do this. What configuration would be required?
 
Thanks
 
&
 
regards,
 
 
 
Bilal Aslam
 
  
_
Hotmail: Trusted email with powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
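A minimal sketch of how this is commonly arranged, assuming the two parents above 
(note that cache_peer needs the peer type, here "parent", before the port 
numbers): list the preferred parent first and mark only the standby with "default" 
so it acts as the parent of last resort:

cache_peer 127.0.0.1   parent 3128 0 no-query no-digest no-delay
cache_peer 10.1.82.205 parent 8080 0 no-query no-digest proxy-only default
never_direct allow all
prefer_direct off

Since no-query disables ICP, Squid only notices that the first parent is down 
after TCP connection failures, so failover is not instantaneous; once the first 
peer is considered alive again it is preferred once more.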

RE: [squid-users] SquidCompilationproblem -squid_ldap_auth.c:123:18: error: lber.h: No such file or directory

2010-03-26 Thread GIGO .

Dear Amos,
 
It did work on Ubuntu, but now I am facing the same problem on RHEL. Can you 
please advise which package I would require?
 
regards,
 
Bilal



> Date: Sun, 21 Mar 2010 22:06:09 +
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] SquidCompilationproblem -squid_ldap_auth.c:123:18: 
> error: lber.h: No such file or directory
>
> On Sun, 21 Mar 2010 19:37:56 +, "GIGO ." wrote:
>> Please guide me on this whats wrong. I am unable to compile
>>
>> Squid3stable24 on Ubuntu 8.04 LTS server.
>>
>> I want to use active directory authentication(my clients should be able
> to
>> authenticate themselves with active directory accounts) Following is my
>> command:
>>
>
> You need the LDAP packages to be installed.
>
> Make sure you have the package build dependencies listed here:
> https://launchpad.net/ubuntu/lucid/+source/squid3
>
>
> Amos
_
Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
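On RHEL the missing lber.h and ldap.h headers normally come from the 
openldap-devel package, so installing it (as root) before re-running configure 
should be enough:

yum install openldap-devel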

[squid-users] Rebuilding storage in /var/spool/squid3 (DIRTY) ?

2010-03-26 Thread GIGO .


What is the meaning of "Rebuilding storage in /var/spool/squid3 (DIRTY)"?
 
regards,
  
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Extreme Slow Resposne from Squid ( Test environment only 4 users at the moment)

2010-03-25 Thread GIGO .

From the multiple-instance setup using Squid 3.0.STABLE25 I have shifted to the 
squid3 STABLE1 package bundled with Ubuntu 8.04 LTS. However, I am unable to 
understand why it is so slow. What is wrong? Please, anybody, help out. Is it 
something to do with the operating system, or does Squid initially run that 
slowly? I feel helpless. Please guide me.
 
My Hardware:
Physical Server IBM 3650
Physical RAID 1 plus a volume disk, each 73 GB in size; currently I am doing 
caching on the RAID 1 array.
RAM 4GB
 
My Conf File:
 
visible_hostname squidLhr
unique_hostname squidDefault
pid_filename /var/run/squid3.pid
http_port 10.1.82.53:8080
icp_port 0
snmp_port 0
access_log /var/log/squid3/access.log squid
cache_log /var/log/squid3/cache.log
cache_peer 10.1.82.205  parent 8080 0 default no-digest no-query
#cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay use in the multiple setup
#temporarily Directive
never_direct allow all
#prefer_direct off use in the multiple setup while ponder on the above 
directive as well as it may not be needed with direct internet access.
cache_dir aufs /var/spool/squid3 1 32 320
coredump_dir /var/spool/squid3
cache_swap_low 75
cache_mem 100 MB
range_offset_limit 0 KB
maximum_object_size 4096 MB
minimum_object_size 0 KB
quick_abort_min 16 KB
cache_replacement_policy lru
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern . 0 20% 4320
#specific for youtube belowone
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487 % 
5259487
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
#Define Local Network.
acl FcUsr src "/etc/squid3/FcUsr.conf"
acl PUsr src "/etc/squid3/PUsr.conf"
acl RUsr src "/etc/squid3/RUsr.conf"
#Define Local Servers
acl localServers dst 10.0.0.0/8
#Defining & allowing ports section
acl SSL_ports port 443  #https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny request to unknown ports
http_access deny !Safe_ports
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
#Allow access from localhost
http_access allow localhost
# Local server should never be forwarded to neighbour/peers and they should 
never be cached.
always_direct allow localservers
cache deny LocalServers
# Windows Update Section...
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT FcUsr
http_access allow CONNECT wuCONNECT PUsr
http_access allow CONNECT wuCONNECT RUsr
http_access allow CONNECT wuCONNECT localhost
http_access allow windowsupdate all
http_access allow windowsupdate localhost
acl workinghours time MTWHF 09:00-12:59
acl workinghours time MTWHF 15:00-17:00
acl BIP dst "/etc/squid3/Blocked.conf"
Definitions for BlockingRules#
###Definition of MP3/MPEG
acl FTP proto FTP
acl MP3url urlpath_regex \.mp3(\?.*)?$
acl Movies rep_mime_type video/mpeg
acl MP3s rep_mime_type audio/mpeg
###Definition of Flash Video
acl deny_rep_mime_flashvideo rep_mime_type video/flv
###Definition of  Porn
acl Sex urlpath_regex sex
acl PornSites url_regex "/etc/squid3/pornlist"
Definition of YouTube.
## The videos come from several domains
acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com
###Definition of FaceBook
acl facebook_sites dstdomain .facebook.com
 Definition of MSN Messenger
acl msn urlpath_regex -i gateway.dll
acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
acl msn1 req_mime_type application/x-msn-messenger
Definition of Skype
acl numeric_IPs url_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype^
##Definition of Yahoo! Messenger
acl ym dst

RE: [squid-users] After Running Multiple Instances my Squid speed/response is extremely slow.

2010-03-25 Thread GIGO .

Please, I want to add some information to my previous query. My previous 
single-instance setup was running fine. Another change is that this time I 
compiled my new setup with more options, such as delay pools, cache digests and 
Active Directory authentication support. Is the issue below in any way related 
to this as well? Your support is required, please.


> From: gi...@msn.com
> To: squid-users@squid-cache.org
> Date: Thu, 25 Mar 2010 11:31:01 +
> Subject: [squid-users] After Running Multiple Instances my Squid 
> speed/response is extremely slow.
>
>
> DearAll,
>
> Please help me on this as after setting up multiple instances on the same 
> server for (cache Directory fault tolerance myy squid speed/response is 
> extremely slow and even most of the sites keep on opening and opening. I am 
> failing to figure out whats wrong. Please guide me on this i am enclosing my 
> configuration files for your reference.
>
>
>
> Instance 1 with which all the users are connected:
>
>
> visible_hostname squidLhr
> unique_hostname squidMainProcess
> pid_filename /var/run/squid3main.pid
> http_port 8080
> icp_port 0
> snmp_port 3161
> access_log /var/logs/access.log
> cache_log /var/logs/cache.log
> cache_effective_user proxy
> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
> no-delay
>
> #temporarily Directive
> never_direct allow all
>
> prefer_direct off
> cache_dir aufs /var/spool/squid3 1 32 320
> coredump_dir /var/spool/squid3
> cache deny all
>
> acl localServers dst 10.0.0.0/8
> always_direct allow localservers
> cache deny LocalServers
> acl localhost src 127.0.0.1/32
> acl to_localhost dst 127.0.0.0/8
> http_access allow localhost
> acl FcUsr src "/etc/squid3/FcUsr.conf"
> acl PUsr src "/etc/squid3/PUsr.conf"
> acl RUsr src "/etc/squid3/RUsr.conf"
> acl BIP dst "/etc/squid3/Blocked.conf"
> acl CONNECT method CONNECT
> # Windows Update Section...
> acl windowsupdate dstdomain windowsupdate.microsoft.com
> acl windowsupdate dstdomain .update.microsoft.com
> acl windowsupdate dstdomain download.windowsupdate.com
> acl windowsupdate dstdomain redir.metaservices.microsoft.com
> acl windowsupdate dstdomain images.metaservices.microsoft.com
> acl windowsupdate dstdomain c.microsoft.com
> acl windowsupdate dstdomain www.download.windowsupdate.com
> acl windowsupdate dstdomain wustat.windows.com
> acl windowsupdate dstdomain crl.microsoft.com
> acl windowsupdate dstdomain sls.microsoft.com
> acl windowsupdate dstdomain productactivation.one.microsoft.com
> acl windowsupdate dstdomain ntservicepack.microsoft.com
> acl wuCONNECT dstdomain www.update.microsoft.com
> acl wuCONNECT dstdomain sls.microsoft.com
> http_access allow CONNECT wuCONNECT FcUsr
> http_access allow CONNECT wuCONNECT PUsr
> http_access allow CONNECT wuCONNECT RUsr
> http_access allow CONNECT wuCONNECT localhost
> http_access allow windowsupdate FcUsr
> http_access allow windowsupdate PUsr
> http_access allow windowsupdate RUsr
> http_access allow windowsupdate localhost
> #Defining & allowing ports section
> acl SSL_ports port 443 #https
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> acl manager proto cache_object
> http_access allow manager localhost
> http_access deny manager
> acl workinghours time MTWHF 09:00-12:59
> acl workinghours time MTWHF 15:00-17:00
> Definitions for BlockingRules#
> ###Definition of MP3/MPEG
> acl FTP proto FTP
> acl MP3url urlpath_regex \.mp3(\?.*)?$
> acl Movies rep_mime_type video/mpeg
> acl MP3s rep_mime_type audio/mpeg
>
> ###Definition of Flash Video
> acl deny_rep_mime_flashvideo rep_mime_type video/flv
> ###Definition of Porn
> acl Sex urlpath_regex sex
> acl PornSites url_regex "/etc/squid3/pornlist"
>
> Definition of YouTube.
> ## The videos come from several domains
> acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com
> ###Definition of FaceBook
> acl facebook_sites dstdomain .facebook.com
>
>  Definition of MSN Messenger
> acl msn urlpath_regex -i gateway.dll
> acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
> acl msn1 req_mime_type application/x-msn-messenger
>
> Definition of Skype
> acl numeric_IPs url_regex 
> ^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
> acl Skype_UA browser ^skype^
> ##Definition of Yahoo! Messenger
> acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
> acl ym dstdomain .us.il.yimg.com .msg.yahoo.com .pager.yahoo.com
> acl ym dstdomain .rareedge.com .ytunnelpro.com .chat.yahoo.com
> acl ym dstdomain .voice

[squid-users] After Running Multiple Instances my Squid speed/response is extremely slow.

2010-03-25 Thread GIGO .

Dear All,
 
Please help me with this: after setting up multiple instances on the same server 
(for cache directory fault tolerance), my Squid speed/response is extremely slow, 
and most sites just keep loading and loading. I am failing to figure out what is 
wrong. Please guide me on this; I am enclosing my configuration files for your 
reference.
 
 
 
Instance 1 with which all the users are connected:
 
 
visible_hostname squidLhr
unique_hostname squidMainProcess
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log  /var/logs/access.log
cache_log /var/logs/cache.log
cache_effective_user proxy 
cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay

#temporarily Directive
never_direct allow all
 
prefer_direct off
cache_dir aufs /var/spool/squid3 1 32 320
coredump_dir /var/spool/squid3
cache deny all

acl localServers dst 10.0.0.0/8
always_direct allow localservers
cache deny LocalServers
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
http_access allow localhost
acl FcUsr src "/etc/squid3/FcUsr.conf"
acl PUsr src "/etc/squid3/PUsr.conf"
acl RUsr src "/etc/squid3/RUsr.conf"
acl BIP dst "/etc/squid3/Blocked.conf"
acl CONNECT method CONNECT
# Windows Update Section...
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT FcUsr
http_access allow CONNECT wuCONNECT PUsr
http_access allow CONNECT wuCONNECT RUsr
http_access allow CONNECT wuCONNECT localhost
http_access allow windowsupdate FcUsr
http_access allow windowsupdate PUsr
http_access allow windowsupdate RUsr
http_access allow windowsupdate localhost
#Defining & allowing ports section
acl SSL_ports port 443  #https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443  # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210  # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280  # http-mgmt
acl Safe_ports port 488  # gss-http
acl Safe_ports port 591  # filemaker
acl Safe_ports port 777  # multiling http
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl manager proto cache_object
http_access allow manager localhost
http_access deny manager
acl workinghours time MTWHF 09:00-12:59
acl workinghours time MTWHF 15:00-17:00
Definitions for BlockingRules#
###Definition of MP3/MPEG
acl FTP proto FTP
acl MP3url urlpath_regex \.mp3(\?.*)?$
acl Movies rep_mime_type video/mpeg
acl MP3s rep_mime_type audio/mpeg

###Definition of Flash Video
acl deny_rep_mime_flashvideo rep_mime_type video/flv
###Definition of  Porn
acl Sex urlpath_regex sex
acl PornSites url_regex "/etc/squid3/pornlist"

Definition of YouTube.
## The videos come from several domains
acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com
###Definition of FaceBook
acl facebook_sites dstdomain .facebook.com

 Definition of MSN Messenger
acl msn urlpath_regex -i gateway.dll
acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
acl msn1 req_mime_type application/x-msn-messenger

Definition of Skype
acl numeric_IPs url_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype^
##Definition of Yahoo! Messenger
acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
acl ym dstdomain .us.il.yimg.com .msg.yahoo.com .pager.yahoo.com
acl ym dstdomain .rareedge.com .ytunnelpro.com .chat.yahoo.com
acl ym dstdomain .voice.yahoo.com
acl ymregex url_regex yupdater.yim ymsgr myspaceim
## Other protocols Yahoo!Messenger uses ??
acl ym dstdomain .skype.com .imvu.com
###Definition for Disallowing download of executables from web#
acl downloads url_regex "/etc/squid3/download.conf"
###Definiton of Torrentz
acl torrentSeeds urlpath_regex \.torrent(\?.*)?$
###Definition of Rapidshare###
acl dlSites dstdomain .rapidshare.com .rapidsharemegaupload.com .filespump.com
###-
http_access deny  PornSites
http_access deny Sex
#http_access deny RUsr PornSites 
#http_access deny PUsr PornSites #deny everyone porn sites 
#http_access deny RUsr Sex 
#http_access deny PUsr Sex 
http_access deny PUsr msnd 
http_access deny RUsr msnd 
http_access deny PUsr msn 
http_access deny RUsr 

[squid-users] Squid Compilation and Active Directory Authentication

2010-03-24 Thread GIGO .


Purpose:
 
To authenticate Squid users against Active Directory before allowing them access 
to the internet.
 
 
Compile Options: 
 
./configure --prefix=/usr --localstatedir=/var --libexecdir=${prefix}/lib/squid 
--srcdir=. --datadir=${prefix}/shares/squid --sysconfdir=/etc/squid3 
--enable-cache-digests --enable-removal-policies=lru --enable-delay-pools 
--enable-storeio=aufs,ufs --with-large-files --disable-ident-lookups 
--with-default-user=proxy --enable-basic-auth-helpers="LDAP" 
--enable-auth="basic,negotiate,ntlm" 
--enable-external-acl-helpers="wbinfo_group,ldap_group" 
--enable-negotiate-auth-helpers="squid_kerb_auth"
 
 
Question:
 
1. Does the --enable-digest-auth-helpers="list of helpers" option have any role in 
authentication through Active Directory?
 
2. Does compiling with more options than you currently require have a downside, 
or is it a good idea to compile with as many options as you guess you may need in 
the future?
 
3. Could you point me to a complete online guide for authenticating Squid users 
through Active Directory? Currently I am referring to these, hoping that they are 
the latest and most complete:
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory
 
regards,
 
 
  
_
Hotmail: Trusted email with powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
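To make the fallback part concrete, a hedged sketch of the Basic/LDAP lines that 
usually sit underneath the Negotiate/Kerberos ones from the wiki pages above (the 
base DN, bind account, password file and domain controller name are placeholders 
for your own AD):

auth_param basic program /usr/lib/squid/squid_ldap_auth -R -b "dc=example,dc=com" -D squid@example.com -W /etc/squid/ldappw -f sAMAccountName=%s -h dc1.example.com
auth_param basic children 5
auth_param basic realm Internet proxy
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all

Because Basic sends the password in clear text between browser and proxy, it is 
normally kept only as a last resort for clients that cannot do Negotiate.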

[squid-users] Allowing ports used by Squid through Iptables.

2010-03-24 Thread GIGO .

I want to harden the security of my Squid server with iptables. I intend to have 
no rules on outbound traffic; inbound traffic, however, would be restricted. 
Please advise what the minimum set of ports is that needs to be open in iptables.
 
 
Following is what I thought:
 
Allow all incoming traffic on the loopback adapter
Allow incoming SSH traffic
Allow ports 80, 443, 161 and 389 (389 because I intend to authenticate my clients 
against Active Directory)
Allow the Squid http_port (I am using 8080)
Allow the SNMP port according to the defined directive (mine are 3161 & 7172)
Deny all other incoming traffic
Anything else I am perhaps not accounting for?
 
Please guide me.
 
thanks
 
Regards,
 
  
_
Hotmail: Trusted email with powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
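A hedged iptables sketch along the lines listed above; with a stateful 
ESTABLISHED,RELATED rule the replies to Squid's own outbound connections (80, 
443, 389 to the domain controllers, and so on) come back automatically, so those 
ports need no separate inbound rules. Port numbers are the ones mentioned in this 
thread:

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # SSH management
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT   # Squid http_port
iptables -A INPUT -p udp --dport 3161 -j ACCEPT   # snmp_port, first instance
iptables -A INPUT -p udp --dport 7172 -j ACCEPT   # snmp_port, second instance
iptables -P INPUT DROP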

FW: [squid-users] Peering squid multiple instances.

2010-03-24 Thread GIGO .
some good read for knowledge/concepts builder? I have get 
> hold of squid definitve guide though a very good one however isnt'it a bit 
> outdated.Can you recommend please? Specially on the topics of Authenticating 
> Active directory users in squid proxy.
>
>
>
>
>
>
>
>
> 
>> Date: Wed, 24 Mar 2010 18:06:46 +1300
>> From: squ...@treenet.co.nz
>> To: squid-users@squid-cache.org
>> Subject: Re: [squid-users] Peering squid multiple instances.
>>
>> GIGO . wrote:
>>> I have successfully setup running of multiple instances of squid for the 
>>> sake of surviving a Cache directory failure. However I still have few 
>>> confusions regarding peering multiple instances of squid. Please guide me 
>>> in this respect.
>>>
>>>
>>> In my setup i percept that my second instance is doing caching on behalf of 
>>> requests send to Instance 1? Am i correct.
>>>
>>
>> You are right in your understanding of what you have configured. I've
>> some suggestions below on a better topology though.
>>
>>>
>>>
>>> what protocol to select for peers in this scenario? what is the 
>>> recommendation? (carp, digest, or icp/htcp)
>>>
>>
>> Under your current config there is no selection, ALL requests go through
>> both peers.
>>
>> Client -> Squid1 -> Squid2 -> WebServer
>>
>> or
>>
>> Client -> Squid2 -> WebServer
>>
>> thus Squid2 and WebServer are both bottleneck points.
>>
>>>
>>>
>>> If syntax of my cache_peer directive is correct or local loop back address 
>>> should not be used this way?
>>>
>>
>> Syntax is correct.
>> Use of localhost does not matter. It's a useful choice for providing
>> some security and extra speed to the inter-proxy traffic.
>>
>>
>>>
>>> what is the recommended protocol for peering squids with each other?
>>>
>>
>> Does not matter to your existing config. By reason of the "parent"
>> selection.
>>
>>>
>>>
>>> what is the recommended protocl for peering squid with ISA Server.
>>>
>>
>> "parent" is the peering method for origin web servers. With
>> "originserver" selection method.
>>
>>>
>>> Instance 1:
>>>
>>> visible_hostname vSquidlhr
>>> unique_hostname vSquidMain
>>> pid_filename /var/run/squid3main.pid
>>> http_port 8080
>>> icp_port 0
>>> snmp_port 3161
>>> access_log /var/logs/access.log
>>> cache_log /var/logs/cache.log
>>>
>>> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
>>> no-delay
>>> prefer_direct off
>>> cache_dir aufs /var/spool/squid3 100 256 16
>>> coredump_dir /var/spool/squid3
>>> cache deny all
>>>
>>>
>>>
>>> Instance 2:
>>>
>>> visible_hostname SquidProxylhr
>>> unique_hostname squidcacheprocess
>>> pid_filename /var/run/squid3cache.pid
>>> http_port 3128
>>> icp_port 0
>>> snmp_port 7172
>>> access_log /var/logs/access2.log
>>> cache_log /var/logs/cache2.log
>>>
>>>
>>> coredump_dir /cache01/var/spool/squid3
>>> cache_dir aufs /cache01/var/spool/squid3 5 48 768
>>> cache_swap_low 75
>>> cache_mem 1000 MB
>>> range_offset_limit -1
>>> maximum_object_size 4096 MB
>>> minimum_object_size 12 bytes
>>> quick_abort_min -1
>>>
>>
>> What I suggest for failover is two proxies configured identically:
>>
>> * a cache_peer "sibling" type between them. Using digest selection. To
>> localhost (different ports)
>> * permitting both to cache data from the origin (optionally from the
>> peer).
>> * a cache_peer "parent" type to the web server. With "originserver"
>> and "default" selection enabled.
>>
>>
>> This topology utilizes a single layer of multiple proxies. Possibly with
>> hardware load balancing in iptables etc sending alternate requests to
>> each of the two proxies listening ports.
>> Useful for small-medium businesses requiring scale with minimal
>> hardware. Probably their own existing load balancers already purchased
>> from earlier attempts. IIRC the benchmark for this is somewhere around
>> 600-700 req/sec.
>>
>>
>> The next step up in performance and HA is to have an additional layer of
>> Squid acting as the load-balancer doing CARP to reduce cache duplication
>> and remove sibling data transfers. This form of scaling out is how
>> WikiMedia serve their sites up.
>> It is documented somewhat in the wiki as ExtremeCarpFrontend. With a
>> benchmark so far for a single box reaching 990 req/sec.
>>
>>
>> These maximum speed benchmarks are only achievable by reverse-proxy
>> people. Regular ISP setups can expect their maximum to be somewhere
>> below 1/2 or 1/3 of that rate due to the content diversity and RTT lag
>> of remote servers.
>>
>> Amos
>> --
>> Please be using
>> Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>> Current Beta Squid 3.1.0.18
> _
> Hotmail: Free, trusted and rich email service.
> https://signup.live.com/signup.aspx?id=60969  
>   
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969
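To make the sibling layout Amos describes concrete, a hedged sketch using the port 
numbers from this thread; with cache digests compiled in (as in the configure 
options used earlier) and no-digest left unset, each instance learns what the 
other holds and only asks its sibling for likely hits, while proxy-only stops it 
re-caching what the sibling already has:

# in the instance listening on 8080
cache_peer 127.0.0.1 sibling 3128 0 proxy-only no-query

# in the instance listening on 3128
cache_peer 127.0.0.1 sibling 8080 0 proxy-only no-query

Both instances would otherwise be configured identically and both allowed to cache 
and to go out (direct or via the ISA parent), so either one can serve all clients 
if the other's cache directory fails.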

[squid-users] Peering squid multiple instances.

2010-03-23 Thread GIGO .

I have successfully set up multiple instances of Squid for the sake of surviving 
a cache directory failure. However, I still have a few points of confusion 
regarding peering multiple instances of Squid. Please guide me in this respect.
 
 
In my setup, my understanding is that my second instance is doing the caching on 
behalf of requests sent to instance 1? Am I correct?
 
 
 
What protocol should I select for the peers in this scenario? What is the 
recommendation (CARP, digest, or ICP/HTCP)?
 
 
 
Is the syntax of my cache_peer directive correct, or should the local loopback 
address not be used this way?
 
 
 
What is the recommended protocol for peering Squids with each other?
 
 
 
What is the recommended protocol for peering Squid with ISA Server?
 
 
 
Instance 1:

visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log  /var/logs/access.log
cache_log /var/logs/cache.log

cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all
 
 
 
Instance 2:
 
visible_hostname SquidProxylhr
unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log
 

coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 5 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1
 
 
 
regards,
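For the cache_peer question above, a sketch of how the Instance 1 peer line could
instead be written as a sibling relationship, using the ports from the two configs
shown (treat it as an illustration, not a confirmed recommendation; with icp_port 0
in both instances, sibling hit detection would have to rely on cache digests, which
assumes both binaries were built with digest support):

# In Instance 1 (http_port 8080), pointing at Instance 2 (http_port 3128) on the same host:
cache_peer 127.0.0.1 sibling 3128 0 proxy-only no-query
# sibling    - only objects the peer already holds are fetched from it; misses go direct
# proxy-only - objects fetched from the peer are not stored again by this instance
# no-query   - no ICP queries are sent (icp_port is 0 in these configs anyway)

The existing "parent ... default" line is also valid syntax: it simply means every
miss in Instance 1 is forwarded through Instance 2, which then does all the caching,
matching the behaviour described in the first question above.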

  

[squid-users] SquidCompilationproblem -squid_ldap_auth.c:123:18: error: lber.h: No such file or directory

2010-03-21 Thread GIGO .

Please guide me on what is wrong here. I am unable to compile 
 
Squid 3.0.STABLE24 on an Ubuntu 8.04 LTS server.
 
I want to use Active Directory authentication (my clients should be able to 
authenticate themselves with Active Directory accounts). The following is my configure command:
 

./configure --sbindir=/usr/sbin --sysconfdir=/etc/squid3 
--enable-removal-policies=lru --enable-delay-pools --enable-storeio=aufs,ufs 
--with-large-files --disable-ident-lookups --with-default-user=proxy 
--enable-basic-auth-helpers="LDAP" --enable-auth="basic,negotiate,ntlm"  
--enable-external-acl-helpers="wbinfo_group,ldap_group" 
--enable-negotiate-auth-helpers="squid_kerb_auth"
 
 
The error I am getting is:
 
Making all in basic_auth
make[2]: Entering directory `/home/bilal/squid-3.0.STABLE24/helpers/basic_auth'
Making all in LDAP
make[3]: Entering directory 
`/home/bilal/squid-3.0.STABLE24/helpers/basic_auth/LDAP'
gcc -DHAVE_CONFIG_H -I. -I../../../include -I../../../include -m32 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Wall -g -O2 -MT squid_ldap_auth.o 
-MD -MP -MF .deps/squid_ldap_auth.Tpo -c -o squid_ldap_auth.o squid_ldap_auth.c
squid_ldap_auth.c:123:18: error: lber.h: No such file or directory
squid_ldap_auth.c:124:18: error: ldap.h: No such file or directory
squid_ldap_auth.c:137: error: 'LDAP_SCOPE_SUBTREE' undeclared here (not in a function)
squid_ldap_auth.c:141: error: 'LDAP_DEREF_NEVER' undeclared here (not in a function)
squid_ldap_auth.c:147: error: 'LDAP_NO_LIMIT' undeclared here (not in a function)
squid_ldap_auth.c:154: error: expected ')' before '*' token
squid_ldap_auth.c:208: error: expected ')' before '*' token
squid_ldap_auth.c:213: error: expected ')' before '*' token
squid_ldap_auth.c:218: error: expected ')' before '*' token
squid_ldap_auth.c:226: error: expected ')' before '*' token
squid_ldap_auth.c:231: error: expected ')' before '*' token
squid_ldap_auth.c:249: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
squid_ldap_auth.c: In function 'main':
squid_ldap_auth.c:348: error: 'LDAP' undeclared (first use in this function)
squid_ldap_auth.c:348: error: (Each undeclared identifier is reported only once
squid_ldap_auth.c:348: error: for each function it appears in.)
squid_ldap_auth.c:348: error: 'ld' undeclared (first use in this function)
squid_ldap_auth.c:350: error: 'LDAP_PORT' undeclared (first use in this function)
squid_ldap_auth.c:410: error: 'LDAP_SCOPE_BASE' undeclared (first use in this function)
squid_ldap_auth.c:412: error: 'LDAP_SCOPE_ONELEVEL' undeclared (first use in this function)
squid_ldap_auth.c:440: error: 'LDAP_DEREF_ALWAYS' undeclared (first use in this function)
squid_ldap_auth.c:442: error: 'LDAP_DEREF_SEARCHING' undeclared (first use in this function)
squid_ldap_auth.c:444: error: 'LDAP_DEREF_FINDING' undeclared (first use in this function)
squid_ldap_auth.c:586: warning: implicit declaration of function 'open_ldap_connection'
squid_ldap_auth.c:587: warning: implicit declaration of function 'checkLDAP'
squid_ldap_auth.c:588: warning: implicit declaration of function 'squid_ldap_errno'
squid_ldap_auth.c:588: error: 'LDAP_INVALID_CREDENTIALS' undeclared (first use in this function)
squid_ldap_auth.c:590: warning: implicit declaration of function 'ldap_unbind'
squid_ldap_auth.c:594: warning: implicit declaration of function 'ldap_err2string'
squid_ldap_auth.c:594: warning: format '%s' expects type 'char *', but argument 2 has type 'int'
squid_ldap_auth.c:598: error: 'LDAP_SUCCESS' undeclared (first use in this function)
squid_ldap_auth.c: At top level:
squid_ldap_auth.c:640: error: expected ')' before '*' token
make[3]: *** [squid_ldap_auth.o] Error 1
make[3]: Leaving directory 
`/home/bilal/squid-3.0.STABLE24/helpers/basic_auth/LDAP'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/bilal/squid-3.0.STABLE24/helpers/basic_auth'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/bilal/squid-3.0.STABLE24/helpers'
make: *** [all-recursive] Error 1
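All of the errors above stem from the missing OpenLDAP client headers (lber.h and
ldap.h). A likely fix on Ubuntu is to install the OpenLDAP development package
before re-running configure; the package name below is the usual Ubuntu/Debian one,
but verify it on 8.04:

# install the OpenLDAP client development headers used by squid_ldap_auth
sudo apt-get install libldap2-dev

# then re-run the same ./configure command shown above, followed by
make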

  

RE: [squid-users] Cache_dir size considerations

2010-03-19 Thread GIGO .

Yes, you are right about asking a lot of questions at once; I will be more careful.
 
Thank you


> Date: Fri, 19 Mar 2010 16:44:18 +0100
> From: mrom...@ottotecnica.com
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] Cache_dir size considerations
>
> GIGO . wrote:
>> Well, I want to make sure that my settings are optimized and want to
>> learn more about the cache_dir settings... let me go into the details.
>
> Gigo,
> you are asking a lot of questions all at once.
> This is a volunteer-based support list, so your chances of getting
> (good) responses are maximized if you ask specific questions, one or two
> per post (possibly related).
>
> That said, I'll try to answer with what I know...
>
>>
>>
>> I have installed Squid 3.0.STABLE24 on Ubuntu 8.04 on an IBM System x3650
>> server with two hard disks in a hardware RAID 1 array. The Squid server is
>> to serve 1000 users, of which 250 are power users; the rest are normal
>> users for whom there are many restrictions (YouTube, Facebook, MSN
>> Messenger, Yahoo Messenger, mp3/mpg files, etc.).
>
> OK
>
>>
>> I have tuned my settings specifically to ensure that Windows updates
>> are cached, and my maximum_object_size is 256 MB. I am also looking to
>> cache YouTube content (for which I have no updated script or settings so
>> far; the one on the internet uses the storeurl directive, which is
>> deprecated)...
>>
>>
>> Now my cache directory size is 50 GB with 16 L1 and 256 L2 directories. I
>> think better would be
>>
>> Cache_dir_size aufs 50 GB 48(L1) 768(L2)
>>
>>
>> As far as the L1 and L2 settings go, I am clear that there should be no
>> more than around 100 files per L2 directory, so the settings should be
>> adjusted accordingly. However, I am not sure whether making the cache too
>> large (50 GB) will itself affect performance. Secondly, at the moment the
>> cache directory is on the same hard drive on which the OS is installed. I
>> know the cache would be better moved to a spare hard drive, but what
>> about high availability? Failure of a disk could result in the failure of
>> the proxy?
>
> To maximize performance you want one disk for the OS and logs, and one
> disk per cache_dir, without any RAID.
> With only two disks, obviously if either one dies you have an outage.
> So to achieve HA for Squid you would need two physical Squid boxes, I
> think. I haven't tried it myself, so I cannot guide you on how to set that up...
>
>>
>> Another thing I am unsure about is cache_effective_user. I have set
>>
>> cache_effective_user proxy
>>
>> but I do not really understand the concept. I have read a SANS Institute
>> white paper published in 2003 saying that Squid should not be run as the
>> nobody user but as a sandboxed user with no shell. However, I am not sure
>> what this is all about and whether the information is still valid seven
>> years later.
>
> Squid should not be run as root.
> You should have a dedicated user account for it.
> Squid cache dirs should be readable and writable by that Squid account, obviously.
> I believe most distros (at least server-oriented ones) take care of this
> setup when you install Squid via the package manager.
>
>>
>> Please also guide me on the risks involved with these settings, which I
>> have made for Windows Update:
>>
>> range_offset_limit -1
>> maximum_object_size 256 MB
>> quick_abort_min -1
>
> No risk, but if a user interrupts a huge download, Squid will continue
> it until it finishes, possibly wasting a lot of bandwidth on the WAN side.
>
>>
>>
>> Further after giving squid too many longs list of blocked site say
>> containg 100+ sites. I have noticed that its slowed down however i am
>> not sure that if it is the reason? please guide..
>>
>
> Well, blocking sites involves checking every request's URL against all
> the sites in the blacklist. This might have a noticeable impact on the
> server load. Also, if you have many regexes in the blacklist(s) the load
> will be significantly higher.
> You might want to have a look at squidGuard or another external helper, to
> take advantage of the multiple CPU cores your server might have.
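As an illustration of that difference, a plain domain blacklist can be loaded into
a dstdomain ACL from a file, which Squid matches far more cheaply than a long
url_regex list (the file path and ACL names below are made up for the example):

# /etc/squid3/blocked_domains.txt holds one domain per line, e.g. ".example.com"
acl blockedsites dstdomain "/etc/squid3/blocked_domains.txt"
http_access deny blockedsites

# regex matching is only needed for patterns that are not whole domains:
# acl blockedurls url_regex -i "/etc/squid3/blocked_url_patterns.txt"
# http_access deny blockedurls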
>
>>
>> Please guide in detail it will be really beneficial for me as concept
>> building...i would be really thankful..
>>
>> regards,
>>
>>
>
> HTH
>
>>
>>
>>
>>
>> ---
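On the cache_dir question quoted above: the directive is cache_dir and its size is
given in megabytes, so a 50 GB aufs cache with the proposed 48/768 directory split
would look like the line below (the path is borrowed from the other configs in this
archive; the arithmetic assumes the commonly quoted ~13 KB mean object size, so
treat it as a rough sanity check only):

# 50 GB = 51200 MB, with 48 first-level and 768 second-level directories
cache_dir aufs /cache01/var/spool/squid3 51200 48 768

# rough check against the ~100-files-per-L2-directory rule of thumb:
#   51200 MB / ~13 KB per object  ~= 4,000,000 objects
#   48 * 768                       =    36,864 L2 directories
#   4,000,000 / 36,864             ~=       110 objects per L2 directory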

[squid-users] Absolute Beginner help required on concepts related to Cache_effective_user.

2010-03-19 Thread GIGO .


On a compiled Squid 3.0.STABLE24, I am unable to run Squid as root on Ubuntu, so 
the cache_effective_user defined in squid.conf never comes into play. Is this a 
security concern? What is cache_effective_user for?
 
 
 
Is it right to run Squid as the default Ubuntu user created when the OS was installed? 
 
 
 
On Ubuntu there is another user, proxy (UID 13), with group proxy. For what 
purpose does this user exist, and does it have any relation to Squid?
 
 
 
Startup scripts in /etc/init.d run with root privileges on system startup; 
however, my startup script never succeeds because permission is denied when 
running Squid as root. Is there a way to fix this issue?
 
 
 
 
If somebody could enlighten me about these concepts I would be really thankful, 
as I am unable to get them right myself.
 
regards,
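A short sketch of the usual arrangement, based on the details in this thread (the
init-script name, binary path and log/cache paths below are assumptions): Squid is
started as root, normally by the init script, and then drops privileges itself to
the account named by cache_effective_user, so the directive only appears to "never
come into play" when Squid is started as an unprivileged user in the first place.

# squid.conf: the unprivileged account Squid switches to when it is started as root
cache_effective_user proxy

# one-time ownership fix so that unprivileged account can write the cache and logs
sudo chown -R proxy:proxy /var/spool/squid3 /var/logs

# start Squid as root and let it drop privileges itself
sudo /etc/init.d/squid3 start        # or: sudo /usr/sbin/squid -f /etc/squid3/squid.conf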
  

RE: [squid-users] Squid cache_dir failed - can squid survive?

2010-03-18 Thread GIGO .

Is it possible to run two instances/processes of Squid on the same physical 
machine, one with a cache and the other in proxy-only mode? Is that what you 
mean? How?
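What Henrik describes can be done with a single Squid binary and two separate
configuration files, each with its own http_port, pid_filename, logs and (for the
caching instance only) a cache_dir. A minimal sketch, with the file names assumed:

# start the caching back-end instance
squid -f /etc/squid3/squid-cache.conf

# start the front-end, proxy-only instance (no cache_dir, peered to the back-end)
squid -f /etc/squid3/squid-frontend.conf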


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: gina...@gmail.com; squid-users@squid-cache.org
> Date: Thu, 18 Mar 2010 09:54:34 +0100
> Subject: RE: [squid-users] Squid cache_dir failed - can squid survive?
>
> Thu 2010-03-18 at 06:16 + GIGO . wrote:
>> Dear Henrik,
>>
>> If you have only one physical machine, what is the best strategy for
>> minimizing the downtime: rebuild the cache directory again, or start
>> using Squid without the cache directory? I assume we have to reinstall
>> the Squid software? Please guide.
>
> The approach I proposed earlier with two Squid processes running in
> cooperation will make the service survive automatically for as long as the
> system disk is working.
>
> If using just one process, then making Squid stop trying to use the
> cache is as simple as removing the cache_dir specifications from
> squid.conf and starting Squid again. You do not need to reinstall unless
> the system/OS partition has been damaged. This change to squid.conf can
> easily be automated with a little script if you want.
>
> Regards
> Henrik
>
>
> 
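A minimal sketch of the "little script" Henrik mentions, assuming the configuration
lives at /etc/squid3/squid.conf and the init script is named squid3 (both are
assumptions for this example):

#!/bin/sh
# Comment out every cache_dir line so Squid runs without the failed disk cache,
# keeping a backup of the original configuration, then restart the service.
sed -i.bak 's/^[[:space:]]*cache_dir/#cache_dir/' /etc/squid3/squid.conf
/etc/init.d/squid3 restart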
