Re: [squid-users] Re: Constant Login Prompt for NTLM Auth against Samba PDC

2008-11-05 Thread Amos Jeffries
> I figured it out to a point:
>
> I had this config, which worked on another setup:
>
> #Samba PDC Auth
> auth_param ntlm program /usr/bin/ntlm_auth
> --helper-protocol=squid-2.5-ntlmssp
> #auth_param ntlm max_challenge_reuses 0
> #auth_param ntlm max_challenge_lifetime 2 minutes
> auth_param ntlm children 40
> auth_param basic program /usr/bin/ntlm_auth
> --helper-protocol=squid-2.5-basic
> auth_param basic children 40
> auth_param basic realm Cache NTLM Authentication
> auth_param basic credentialsttl 2 hours
>
> Though this setup now works:
> auth_param ntlm program /usr/lib/squid/ntlm_auth 01Networks/Debian-PDC
> auth_param ntlm children 5
> #auth_param ntlm max_challenge_reuses 0
> #auth_param ntlm max_challenge_lifetime 2 minutes
>
>
> The reason I have two lines commented out in each is that, even though
> tons of sites claim to use the max_challenge options, they always error
> out. Did something change?

This is a perfect example of the confusion between the Squid-bundled and
Samba-bundled ntlm_auth helpers.

The top config uses the Samba helper for full NTLM auth, reportedly with some
Kerberos support. It also accepts basic auth input as a backup if the client
fails the NTLM handshake.

The second config uses the Squid-bundled helper for partial SMB LanManager auth.
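
For anyone hitting the same prompt problem, two quick sanity checks of the
Samba-bundled helper from a shell on the proxy box (a sketch only; the domain
and account names come from this thread, and paths depend on your Samba
package):

# verify winbind's trust with the domain controller
wbinfo -t
# verify ntlm_auth can validate a known-good account (it prompts for the password)
/usr/bin/ntlm_auth --domain=01Networks --username=adam

If either of these fails, the Samba helper cannot authenticate anyone and the
browsers fall back to the login popup.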

Amos

>
> On Wed, Nov 5, 2008 at 12:50 AM, Adam McCarthy
> <[EMAIL PROTECTED]> wrote:
>> I currently have a Samba 3 PDC.
>>
>> Everything seems to work, except IE/Firefox both bring up a prompt for
>> username and password.
>>
>> I'm using the exact same config files from another setup that worked
>> fine.
>>
>> For some reason you can't type in just the username and password, as
>> you would expect.
>>
>> For example, my workgroup is 01Networks, and even though the XP Pro
>> machine is logged in successfully with that same name, unless I type in
>>
>> 01Networks/adam and password, the prompts never go away.
>>
>> After I type those in they work.
>>
>> Why is this setup acting strangely when a previous setup done exactly
>> the same way works fine?
>>
>> Also, why would I be required to put in my Domain/User instead of just
>> User when normally I only ever needed User?
>>
>> Also normally IE/Firefox just sent out my info.
>>
>




Re: [squid-users] Vedio streming erros

2008-11-05 Thread Amos Jeffries
> Hi,
>
> We want to go to the website below, which contains streaming video. When we
> get there we see all the images, but we do NOT get the streaming video. If we
> bypass squid, we get the streaming video.
>
> http://uticctv.mine.nu/index.htm
>
> The above site has a user name and password; I cannot give it to you,
> sorry about that.
>
> Anyway, this is the squid version, please see below:
>
>  Squid Cache: Version 2.6.STABLE6
>

Please verify that the video is actually sent via HTTP.

The most common breakage with streaming media is people blocking the RTSP
protocol ports or other custom streaming ports while assuming the media client
and server can use the proxy.
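
If it turns out not to be HTTP, the player needs those ports opened at the
firewall rather than a proxy setting. A rough sketch for an iptables firewall
(the interface name is an assumption; 554 is RTSP, 1755 is MMS):

iptables -A FORWARD -i eth1 -p tcp -m multiport --dports 554,1755 -j ACCEPT
iptables -A FORWARD -i eth1 -p udp -m multiport --dports 554,1755 -j ACCEPT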

Amos



[squid-users] Re: Constant Login Prompt for NTLM Auth against Samba PDC

2008-11-05 Thread Adam McCarthy
I figured it out to a point:

I had this config, which worked on another setup:

#Samba PDC Auth
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
#auth_param ntlm max_challenge_reuses 0
#auth_param ntlm max_challenge_lifetime 2 minutes
auth_param ntlm children 40
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 40
auth_param basic realm Cache NTLM Authentication
auth_param basic credentialsttl 2 hours

Though this setup now works:
auth_param ntlm program /usr/lib/squid/ntlm_auth 01Networks/Debian-PDC
auth_param ntlm children 5
#auth_param ntlm max_challenge_reuses 0
#auth_param ntlm max_challenge_lifetime 2 minutes


The reason I have two lines commented out in each is that, even though
tons of sites claim to use the max_challenge options, they always error
out. Did something change?


On Wed, Nov 5, 2008 at 12:50 AM, Adam McCarthy
<[EMAIL PROTECTED]> wrote:
> I currently have a Samba 3 PDC.
>
> Everything seems to work, except IE/Firefox both bring up a prompt for
> username and password.
>
> I'm using the exact same config files from another setup that worked fine.
>
> For some reason you can't type in just the username and password, as
> you would expect.
>
> For example, my workgroup is 01Networks, and even though the XP Pro
> machine is logged in successfully with that same name, unless I type in
>
> 01Networks/adam and password, the prompts never go away.
>
> After I type those in they work.
>
> Why is this setup acting strangely when a previous setup done exactly
> the same way works fine?
>
> Also, why would I be required to put in my Domain/User instead of just
> User when normally I only ever needed User?
>
> Also normally IE/Firefox just sent out my info.
>


Re: [squid-users] SSL Site Problem...

2008-11-05 Thread Henrik Nordstrom
Most likely a window scaling issue. There are still very many broken
firewalls out there.

Squid FAQ, System Weirdness - Linux - Some sites load extremely slowly or not
at all:
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#head-4920199b311ce7d20b9a0d85723fd5d0dfc9bc84
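
If fixing or replacing the broken firewall is not an option, one commonly used
stop-gap on the Squid box is to disable TCP window scaling (my suggestion, not
a quote from that FAQ entry; it trades throughput for compatibility):

echo 0 > /proc/sys/net/ipv4/tcp_window_scaling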

Regards
Henrik

On Wed, 2008-11-05 at 15:07 +, Andy McCall wrote:
> Hi Folks,
> 
> I have a problem accessing an SSL site through my Squid setup, IE just spins 
> its blue circle forever, and doesn't seem to ever actually time out.  The 
> same site works when going direct.  I have tried multiple browsers to 
> eliminate the browser as the issue.
> 
> Any help is appreciated, as I am really stuck now...
> 
> The site is:
> 
> https://secure.crtsolutions.co.uk
> 
> I am using:
> 
> Squid Cache: Version 2.6.STABLE18
> configure options:  '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' 
> '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' 
> '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' 
> '--enable-async-io' '--with-pthreads' 
> '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' 
> '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' 
> '--enable-snmp' '--enable-delay-pools' '--enable-htcp' 
> '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' 
> '--enable-useragent-log' '--enable-auth=basic,digest,ntlm' '--enable-carp' 
> '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 
> 'i386-debian-linux' 'build_alias=i386-debian-linux' 
> 'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux' 'CFLAGS=-Wall 
> -g -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
> 
> The entry in access.log is:
> 
> 1225894644.785   3106 10.XX.XX.XX TCP_MISS/200 174 CONNECT 
> secure.crtsolutions.co.uk:443 - DIRECT/195.114.102.18 -
> 
> The cache.log entry is (if there is too much here, I apologise, I am not sure 
> how much to post!):
> 
> 2008/11/05 14:17:25| parseHttpRequest: Client HTTP version 1.0.
> 2008/11/05 14:17:25| parseHttpRequest: Method is 'CONNECT'
> 2008/11/05 14:17:25| parseHttpRequest: URI is 'secure.crtsolutions.co.uk:443'
> 2008/11/05 14:17:25| parseHttpRequest: req_hdr = {User-Agent: Mozilla/4.0 
> (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media 
> Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
> Proxy-Connection: Keep-Alive^M
> Content-Length: 0^M
> Host: secure.crtsolutions.co.uk^M
> Pragma: no-cache^M
> ^M
> }
> 2008/11/05 14:17:25| parseHttpRequest: end = {}
> 2008/11/05 14:17:25| parseHttpRequest: prefix_sz = 294, req_line_sz = 48
> 2008/11/05 14:17:25| parseHttpRequest: Request Header is
> User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET 
> CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
> Proxy-Connection: Keep-Alive^M
> Content-Length: 0^M
> Host: secure.crtsolutions.co.uk^M
> Pragma: no-cache^M
> ^M
> 
> 2008/11/05 14:17:25| parseHttpRequest: Complete request received
> 2008/11/05 14:17:25| conn->in.offset = 0
> 2008/11/05 14:17:25| commSetTimeout: FD 44 timeout 86400
> 2008/11/05 14:17:25| init-ing hdr: 0x191b82c8 owner: 2
> 2008/11/05 14:17:25| parsing hdr: (0x191b82c8)
> User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET 
> CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
> Proxy-Connection: Keep-Alive^M
> Content-Length: 0^M
> Host: secure.crtsolutions.co.uk^M
> Pragma: no-cache^M
> 
> 2008/11/05 14:17:25| creating entry 0x1a1f39a0: near 'User-Agent: Mozilla/4.0 
> (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media 
> Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)'
> 2008/11/05 14:17:25| created entry 0x1a1f39a0: 'User-Agent: Mozilla/4.0 
> (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media 
> Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)'
> 2008/11/05 14:17:25| 0x191b82c8 adding entry: 50 at 0
> 2008/11/05 14:17:25| creating entry 0x1a1ea180: near 'Proxy-Connection: 
> Keep-Alive'
> 2008/11/05 14:17:25| created entry 0x1a1ea180: 'Proxy-Connection: Keep-Alive'
> 2008/11/05 14:17:25| 0x191b82c8 adding entry: 41 at 1
> 2008/11/05 14:17:25| creating entry 0x82b4a88: near 'Content-Length: 0'
> 2008/11/05 14:17:25| created entry 0x82b4a88: 'Content-Length: 0'
> 2008/11/05 14:17:25| 0x191b82c8 adding entry: 14 at 2
> 2008/11/05 14:17:25| creating entry 0x1a1f38d0: near 'Host: 
> secure.crtsolutions.co.uk'
> 2008/11/05 14:17:25| created entry 0x1a1f38d0: 'Host: 
> secure.crtsolutions.co.uk'
> 2008/11/05 14:17:25| 0x191b82c8 adding entry: 27 at 3
> 2008/11/05 14:17:25| creating entry 0x1a1f3910: near 'Pragma: no-cache'
> 2008/11/05 14:17:25| created entry 0x1a1f3910: 'Pragma: no-cache'
> 2008/11/05 14:17:25| 0x191b82c8 adding entry: 37 at 4
> 2008/11/05 14:17:25| 0x191b82c8 lookup for 20
> 2008/11/05 14:17:25| clientSetKeepaliveFlag: http_ver = 1.0
> 2008/11/05 14:17:25| clientSetKeepaliveFlag: method = CONNECT
> 2008/11/05 

Re: [squid-users] squid 2.6/block https

2008-11-05 Thread Henrik Nordstrom
On Wed, 2008-11-05 at 17:57 +0530, sohan krishi wrote:

> My configuration is Ubuntu-iptables-squid2.6/Transparent Proxy. I
> block gmail to all employees in my company. My problem is, squid does
> not block https://gmail.com. And does not even log https://gmail.com !
> I didn't know this until I saw one of our employees browsing gmail!

It's because https is encrypted on port 443.

> I did add this to my iptables : #iptables -t nat -A PREROUTING -i eth1
> -p tcp --dport 443 -j DNAT --to eth0:3128 but I get this message in
> access.log : error:unsupported-request-method

It's because https is encrypted. It sort of works if you redirect it to
an https_port, but that's probably not what you want, as it breaks many things.

The proper solution to all this is to use proxy settings. It's fairly
easy to roll out proxy settings company wide using group policies or
login scripts or even auto discovery using WPAD, and then use
interception and firewalling only as a backup method for those who for
some reason did not get the proxy settings.

> Can anyone please help me with how to block gmail? I want to block
> gmail/gtalk for all IPs except a couple of IPs.

You'll have to block port 443 traffic to almost all of the addresses used by
Google's servers.
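
For the clients that do use the proxy settings, the ACL side could look roughly
like this (the exempt addresses and the domain list are examples only, and this
does nothing for port-443 traffic that never reaches Squid):

acl gmail_exempt src 192.168.0.10 192.168.0.11
acl gmail_dst dstdomain .gmail.com .mail.google.com .googlemail.com .talk.google.com
http_access allow gmail_exempt
http_access deny gmail_dst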

Regards
Henrik




Re: [squid-users] WCCP load balancing and TPROXY fully transparent interception

2008-11-05 Thread Bin Liu
Thanks for your reply.

> The redirection in both directions must match for this to work. See the
> wiki for a configuration example
>
> http://wiki.squid-cache.org/ConfigExamples/FullyTransparentWithTPROXY

The configuration example does not mention the scenario where one
router talks to *MULTIPLE* squid servers. As far as I know, Cisco
routers do not fully track connections; they just redirect packets
by their IP addresses and source/destination ports. With TPROXY
enabled, the router cannot tell which outgoing request packet to the original
destination server was sent by which squid server, as the source IP
address is the original client's address. So the question arises:

I have 2 squid servers, squid A and squid B, both implementing TPROXY and
connected to the same Cisco router:

            Internet
                |
                |
squid A ---- Router ---- squid B
                |
                |
            Customers

Here squid A wants to send an HTTP request to the original destination
server; the router just forwards this packet, which is OK. But when the
response packet from the original server comes back in, how does the
router redirect that packet? Redirect it to squid A or squid B? As
there is no connection table in router memory, nor any mark in the
packet, how can the router determine that this response packet should
be forwarded to squid A?

squid A -- (request to original server) --> router --> original server
-- (response) --> router --> squid A or B?



Many thanks again.
Regards


[squid-users] SSL Site Problem...

2008-11-05 Thread Andy McCall
Hi Folks,

I have a problem accessing an SSL site through my Squid setup, IE just spins 
its blue circle forever, and doesn't seem to ever actually time out.  The same 
site works when going direct.  I have tried multiple browsers to eliminate the 
browser as the issue.

Any help is appreciated, as I am really stuck now...

The site is:

https://secure.crtsolutions.co.uk

I am using:

Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' 
'--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' 
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' 
'--enable-async-io' '--with-pthreads' 
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' 
'--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' 
'--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' 
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log' 
'--enable-auth=basic,digest,ntlm' '--enable-carp' 
'--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 
'i386-debian-linux' 'build_alias=i386-debian-linux' 
'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux' 'CFLAGS=-Wall 
-g -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='

The entry in access.log is:

1225894644.785   3106 10.XX.XX.XX TCP_MISS/200 174 CONNECT 
secure.crtsolutions.co.uk:443 - DIRECT/195.114.102.18 -

The cache.log entry is (if there is too much here, I apologise, I am not sure 
how much to post!):

2008/11/05 14:17:25| parseHttpRequest: Client HTTP version 1.0.
2008/11/05 14:17:25| parseHttpRequest: Method is 'CONNECT'
2008/11/05 14:17:25| parseHttpRequest: URI is 'secure.crtsolutions.co.uk:443'
2008/11/05 14:17:25| parseHttpRequest: req_hdr = {User-Agent: Mozilla/4.0 
(compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media Center 
PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
Proxy-Connection: Keep-Alive^M
Content-Length: 0^M
Host: secure.crtsolutions.co.uk^M
Pragma: no-cache^M
^M
}
2008/11/05 14:17:25| parseHttpRequest: end = {}
2008/11/05 14:17:25| parseHttpRequest: prefix_sz = 294, req_line_sz = 48
2008/11/05 14:17:25| parseHttpRequest: Request Header is
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 
2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
Proxy-Connection: Keep-Alive^M
Content-Length: 0^M
Host: secure.crtsolutions.co.uk^M
Pragma: no-cache^M
^M

2008/11/05 14:17:25| parseHttpRequest: Complete request received
2008/11/05 14:17:25| conn->in.offset = 0
2008/11/05 14:17:25| commSetTimeout: FD 44 timeout 86400
2008/11/05 14:17:25| init-ing hdr: 0x191b82c8 owner: 2
2008/11/05 14:17:25| parsing hdr: (0x191b82c8)
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 
2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)^M
Proxy-Connection: Keep-Alive^M
Content-Length: 0^M
Host: secure.crtsolutions.co.uk^M
Pragma: no-cache^M

2008/11/05 14:17:25| creating entry 0x1a1f39a0: near 'User-Agent: Mozilla/4.0 
(compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media Center 
PC 5.0; .NET CLR 3.0.04506; InfoPath.2)'
2008/11/05 14:17:25| created entry 0x1a1f39a0: 'User-Agent: Mozilla/4.0 
(compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media Center 
PC 5.0; .NET CLR 3.0.04506; InfoPath.2)'
2008/11/05 14:17:25| 0x191b82c8 adding entry: 50 at 0
2008/11/05 14:17:25| creating entry 0x1a1ea180: near 'Proxy-Connection: 
Keep-Alive'
2008/11/05 14:17:25| created entry 0x1a1ea180: 'Proxy-Connection: Keep-Alive'
2008/11/05 14:17:25| 0x191b82c8 adding entry: 41 at 1
2008/11/05 14:17:25| creating entry 0x82b4a88: near 'Content-Length: 0'
2008/11/05 14:17:25| created entry 0x82b4a88: 'Content-Length: 0'
2008/11/05 14:17:25| 0x191b82c8 adding entry: 14 at 2
2008/11/05 14:17:25| creating entry 0x1a1f38d0: near 'Host: 
secure.crtsolutions.co.uk'
2008/11/05 14:17:25| created entry 0x1a1f38d0: 'Host: secure.crtsolutions.co.uk'
2008/11/05 14:17:25| 0x191b82c8 adding entry: 27 at 3
2008/11/05 14:17:25| creating entry 0x1a1f3910: near 'Pragma: no-cache'
2008/11/05 14:17:25| created entry 0x1a1f3910: 'Pragma: no-cache'
2008/11/05 14:17:25| 0x191b82c8 adding entry: 37 at 4
2008/11/05 14:17:25| 0x191b82c8 lookup for 20
2008/11/05 14:17:25| clientSetKeepaliveFlag: http_ver = 1.0
2008/11/05 14:17:25| clientSetKeepaliveFlag: method = CONNECT
2008/11/05 14:17:25| 0x191b82c8 lookup for 41
2008/11/05 14:17:25| 0x191b82c8: joining for id 41
2008/11/05 14:17:25| 0x191b82c8: joined for id 41: Keep-Alive
2008/11/05 14:17:25| 0x191b82c8 lookup for 52
2008/11/05 14:17:25| 0x191b82c8 lookup for 41
2008/11/05 14:17:25| 0x191b82c8: joining for id 41
2008/11/05 14:17:25| 0x191b82c8: joined for id 41: Keep-Alive
2008/11/05 14:17:25| commSetSelect: FD 44 type 1
2008/11/05 14:17:25| commSetEvents(fd=44)
2008/11/05 14:17:25| 0x191b82c8 lookup for 59
2008/11/05 14:17:25| cbdataLock: 0x82a9c78
2008/11/05 14:1

Re: [squid-users] Ignoring query string from url

2008-11-05 Thread Amos Jeffries

nitesh naik wrote:

Hi All,

The issue was with disk I/O. I have used a null cache dir and squid
response is much faster now.

 cache_dir null /empty

Thanks everyone for your help.

Regards
Nitesh


Oh dear, I can't believe I overlooked this.
cache_dir aufs (Linux) or diskd (FreeBSD) is likely to solve the disk
speed issues, particularly if you don't use a blocking IOEngine (leaving
it unset is usually best).
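
For reference, roughly what that looks like against the cache_dir already in
your config (sizes kept as posted; the commented line is the FreeBSD diskd
alternative):

cache_dir aufs /home/zdn/squid/var/cache 6000 16 256
#cache_dir diskd /home/zdn/squid/var/cache 6000 16 256 Q1=64 Q2=72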


Amos



On Tue, Nov 4, 2008 at 9:40 AM, nitesh naik <[EMAIL PROTECTED]> wrote:

Do these redirector statistics mean the url rewrite helper program is
slowing down squid's response? The avg service time is 1550 msec.

Redirector Statistics:
program: /home/zdn/bin/redirect_parallel.pl
number running: 2 of 2
requests sent: 1069753
replies received: 1069752
queue length: 0
avg service time: 1550 msec


#   FD  PID   # Requests  Flags   Time    Offset  Request
1   10  18237   12645   B   0.002   38  (none)
2   15  18238   12335   2.144   0   (none)

Regards
Nitesh

On Mon, Nov 3, 2008 at 2:46 PM, nitesh naik <[EMAIL PROTECTED]> wrote:

Not sure if the url rewrite helper is slowing down the process, because the
cache manager interface didn't show any connection backlog. What
information should I look for in the cache manager to find out the cause
of the slow serving of requests?

Redirector Statistics:
program: /home/zdn/bin/redirect_parallel.pl
number running: 2 of 2
requests sent: 155697
replies received: 155692
queue length: 0
avg service time: 0 msec


#   FD  PID   # Requests  Flags   Time    Offset  Request
1   8   21149   104125
BW  0.033   38  http://s2.xyz.com/1821/78/570/1789/563/i88.js?z=4258
81.52.249.106/- - GET myip=10.0.0.165 myport=80\n
2   9   21150   51572   BW  0.039   0   
http://s2.xyz.com/1813/2/570/1781/563/i7.js?z=8853
81.52.249.106/- - GET myip=10.0.0.165 myport=80\n


Following are my squid settings.

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1
acl to_localhost dst 127.0.0.0/255.0.0.0
acl localnet src 10.0.0.0/255.0.0.0
acl SSL_ports port 443
acl Safe_ports port 80 21 443 70 210 1025-65535 280 488 591 777
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
htcp_clr_access deny all
ident_lookup_access deny all
http_port 0.0.0.0:80 defaultsite=s1.xyz.com vhost
cache_peer 10.0.0.175 parent 80 0 no-query round-robin originserver
cache_peer 10.0.0.177 parent 80 0 no-query round-robin originserver
cache_peer 10.0.0.179 parent 80 0 no-query round-robin originserver
cache_peer 10.0.0.181 parent 80 0 no-query round-robin originserver
dead_peer_timeout 10 seconds
hierarchy_stoplist cgi-bin
hierarchy_stoplist ?
cache_mem 0 bytes
maximum_object_size_in_memory 1048576 bytes
memory_replacement_policy lru
cache_replacement_policy lru
cache_dir ufs /home/zdn/squid/var/cache 6000 16 256 IOEngine=Blocking
store_dir_select_algorithm least-load
max_open_disk_fds 0
minimum_object_size 0 bytes
maximum_object_size 4194304 bytes
cache_swap_low 90
cache_swap_high 95
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /home/zdn/squid/var/logs/access.log squid
cache_log /home/zdn/squid/var/logs/cache.log
cache_store_log /home/zdn/squid/var/logs/store.log
logfile_rotate 10
emulate_httpd_log off
log_ip_on_direct on
mime_table /home/zdn/squid/etc/mime.conf
log_mime_hdrs off
pid_filename /home/zdn/squid/var/logs/squid.pid
debug_options ALL,1
log_fqdn off
client_netmask 255.255.255.255
strip_query_terms off
buffered_logs off
url_rewrite_program /home/zdn/bin/redirect_parallel.pl
url_rewrite_children 2
url_rewrite_concurrency 2000
url_rewrite_host_header off
url_rewrite_bypass off
refresh_pattern ^ftp: 1440 20% 10080

refresh_pattern ^gopher: 1440 0% 1440

refresh_pattern (cgi-bin|\?) 0 0% 0

refresh_pattern . 0 20% 4320

quick_abort_min 16 KB
quick_abort_max 16 KB
quick_abort_pct 95
read_ahead_gap 16384 bytes
negative_ttl 0 seconds
positive_dns_ttl 21600 seconds
negative_dns_ttl 60 seconds
range_offset_limit 0 bytes
minimum_expiry_time 60 seconds
store_avg_object_size 13 KB
store_objects_per_bucket 20
request_header_max_size 20480 bytes
reply_header_max_size 20480 bytes
request_body_max_size 0 bytes
via off
ie_refresh off
vary_ignore_expire off
request_entities off
relaxed_header_parser on
forward_timeout 240 seconds
connect_timeout 10 seconds
peer_connect_timeout 5 seconds
read_timeout 120 seconds
request_timeout 10 seconds
persistent_request_timeout 120 seconds
client_lifetime 86400 seconds
half_closed_clients off
pconn_timeout 60 seconds
ident_timeout 10 seconds
shutdown_lifetime 30 seconds
cache_mgr webmaster
mail_program mail
cache_effective_user zdn
httpd_suppress_version

[squid-users] squid 2.6/block https

2008-11-05 Thread sohan krishi
Hi All,

My configuration is Ubuntu-iptables-squid2.6/Transparent Proxy. I
block gmail to all employees in my company. My problem is, squid does
not block https://gmail.com, and does not even log https://gmail.com!
I didn't know this until I saw one of our employees browsing gmail!

I did add this to my iptables : #iptables -t nat -A PREROUTING -i eth1
-p tcp --dport 443 -j DNAT --to eth0:3128 but I get this message in
access.log : error:unsupported-request-method


Can anyone please help me with how to block gmail? I want to block
gmail/gtalk for all IPs except a couple of IPs.

Thanks

Sohan Krishi


Re: [squid-users] squid cache proxy + Exchange 2007 problems

2008-11-05 Thread Amos Jeffries

Retaliator wrote:

Hello,

I found out after a few months that I have problems with clients using Office 2007
against Exchange 2007.
If the proxy is enabled, Out of Office and other features won't work because squid
blocks them; the Autodiscover service is a part of Exchange 2007. If you
remove the proxy it works.


Please define what you mean by "proxy is enabled".
 Do you mean browsers configured with a proxy?
 A proxy sitting in the middle intercepting traffic?
 A proxy reversed for acceleration of the Exchange server?


My proxy IP is a real IP; the Exchange servers are internal.
While the user tries to use Out of Office on his client with Office 2007 he
receives an error:

"your out of office settings cannot be displayed, because the server is
currently unavailable"

In the squid log I see
TCP_MISS/404 0 CONNECT SERVERNAME.SUBDOMAIN.beeper.co.il:443 - DIRECT/- -
servername and subdomain are something else; I changed them.


It appears that browsers are configured to tunnel requests through a proxy.
You likely need to set up a correct reverse-proxy config to access the
internal servers while outside.
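
For the record, such a reverse-proxy setup on Squid 2.6/2.7 usually looks
roughly like the sketch below (hostnames, addresses and certificate paths are
placeholders rather than values from this thread; it assumes Squid was built
with SSL support):

https_port 443 accel cert=/etc/squid/owa.pem key=/etc/squid/owa.key defaultsite=mail.example.com
cache_peer 192.168.0.5 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER login=PASS name=exchange
acl exchange_sites dstdomain mail.example.com
cache_peer_access exchange allow exchange_sites
http_access allow exchange_sites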



If I add exceptions for the Exchange servers to Internet Explorer it's OK,
but how can I fix this in the squid configuration so that the local domain or
those two servers will be allowed/found?




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1


Re: [squid-users] MSNT authentication - login window

2008-11-05 Thread Luciano Cassemiro

It worked!
Thanks so much for your help.

Henrik Nordstrom wrote:
> On Mon, 2008-11-03 at 09:25 -0200, Luciano Cassemiro wrote:
> 
> 
>> http_access deny our_networks users forbidden_sites !directors
> 
> This line requests authentication as the last acl on the line is
> authentication related (directors).
> 
> Rewrite it to
> 
> http_access deny our_networks !directors forbidden_sites
> 
> and it will show an "access denied" message instead. And it also makes
> deny_info more natural if you want a custom error message based on
> forbidden_sites.
> 
> Regards
> Henrik


Re: [squid-users] Squid-3 + Tproxy4 clarification

2008-11-05 Thread Amos Jeffries

Arun Srinivasan wrote:

Thanks for the response.

" - does the client IP have access to use the hidden peer proxy?"
Yes. To ensure this I tried it out with an 'nc' utility instead of peer proxy.

"- do the connections between peers go over lo interface? I'm not sure
what the special kernel behavior with public IPs on localhost
interface would be."
Yes. I could see the connections go over lo interface. However, it is
not getting handled by the stack.


Aha, there is the problem then.
Henrik's other post described the problem clearly, so I won't repeat it.

To get this to work you will likely need to try having both squid
instances listening on different ports of the machine's public IP.
You will still lose the spoofing ability within the second-hop proxy,
but the traffic should at least flow properly.
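
Something along these lines, with illustrative addresses and ports (untested;
as noted, the second hop then sees the first proxy's address rather than the
client's):

http_port 192.0.2.10:3129 tproxy
cache_peer 192.0.2.10 parent 8080 0 no-query default
never_direct allow all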


Amos



2008/11/4 Amos Jeffries <[EMAIL PROTECTED]>:

Arun Srinivasan wrote:

Hi List,

Has anyone successfully used cache_peer support with tproxy4 enabled?

Not that I'm aware of at this point.


The scenario is running Squid proxy with tproxy4 enabled and another
http proxy (no tproxy4) on the same box.

First Squid would receive the request from the user, then connects to
its cache_peer which is the other http proxy.

With tproxy enabled, am not able to establish connection between Squid
and the other proxy. However, in interception mode, am able to do
this.

Please advise if I am missing out anything.

Following are the packages and its versions used:
Kernel version: 2.6.26
Tproxy version: tproxy4-2.6.26-200809262032
iptables version: tproxy-iptables-1.4.0-20080521-113954-1211362794
Squid version: squid-3.HEAD-20081021

The new TPROXY/Squid interaction is that it natively spoofs the client IP on
all outbound links newly made for that request.

Two things to check are:
 - does the client IP have access to use the hidden peer proxy?

 - do the connections between peers go over lo interface? I'm not sure what
the special kernel behavior with public IPs on localhost interface would be.


Amos
--
Please be using
 Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
 Current Beta Squid 3.1.0.1








--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1


Re: [squid-users] CACHEMGR - What`s wrong?

2008-11-05 Thread Henrik Nordstrom
On Tue, 2008-11-04 at 14:22 -0300, Rodrigo de Oliveira Gomes wrote:
>Cache Manager Error
> 
>target 192.168.47.89:3128 not allowed in cachemgr.conf
>  __

>cachemgr.conf:
>localhost
>192.168.47.89:3128
> 
> Am I doing something wrong? Missing configuration? Permissions? I'm looking
> forward to a hand.

Can cachemgr.cgi open cachemgr.conf?

Is cachemgr.conf in the proper location? Either the same directory as
cachemgr.cgi (or, to be exact, the current working directory when
cachemgr.cgi runs; usually the same directory, but it depends on the web
server setup), or if not there then /etc/cachemgr.conf.

If you are unsure about the location then strings cachemgr.cgi
| grep cachemgr.conf should tell you.
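
For example (the CGI path here is a guess; adjust it to wherever your web
server keeps cachemgr.cgi):

strings /usr/lib/cgi-bin/cachemgr.cgi | grep cachemgr.conf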

Regards
Henrik




Re: [squid-users] Squid-3 + Tproxy4 clarification

2008-11-05 Thread Henrik Nordstrom
On Tue, 2008-11-04 at 22:37 +0530, Arun Srinivasan wrote:

> Yes. I could see the connections go over lo interface. However, it is
> not getting handled by the stack.

Public addresses cannot talk to loopback addresses (127.X). This is an
intentional security restriction in the TCP/IP stack.

Also I don't think using TPROXY internally on the same server is even
intended to work. Its intended use is on traffic being routed by the
proxy box to some other servers (i.e. the Internet).

Regards
Henrik




Re: [squid-users] squid cache proxy + Exchange 2007 problems

2008-11-05 Thread Henrik Nordstrom
On Tue, 2008-11-04 at 01:58 -0800, Retaliator wrote:

> on the squid log i see
> TCP_MISS/404 0 CONNECT SERVERNAME.SUBDOMAIN.beeper.co.il:443 - DIRECT/- -
> servername and subdomain are something else; I changed them.

From this it looks like your Squid cannot resolve the requested hostname
into an IP.

Check your DNS.
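
For example, from a shell on the Squid box (using the placeholder name from the
log line above):

host SERVERNAME.SUBDOMAIN.beeper.co.il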

Regards
Henrik




[squid-users] Vedio streming erros

2008-11-05 Thread Indunil Jayasooriya
Hi,

We want to go to the website below, which contains streaming video. When we
get there we see all the images, but we do NOT get the streaming video. If we
bypass squid, we get the streaming video.

http://uticctv.mine.nu/index.htm

The above site has a user name and password; I cannot give it to you,
sorry about that.

Anyway, this is the squid version, please see below:

 Squid Cache: Version 2.6.STABLE6


Your ideas are welcome.




-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] Timezone issue

2008-11-05 Thread Henrik Nordstrom
On Tue, 2008-11-04 at 18:02 +1100, Rod Taylor wrote:

> My squid is running on a machine that is set to local time in both
> software and hardware. Squid shows GMT in all error messages and uses
> GMT in the ACLs. How do I set Squid to use local time not GMT. Squid is
> the only program to do this...

Squid FAQ: I want to use local time zone in error messages.
http://wiki.squid-cache.org/SquidFaq/SquidAcl#head-de11286b4accdede48d411359ab365725673c88a

Regards
Henrik





Re: [squid-users] R: [squid-users] Connection to webmail sites problem using more than one parent proxy

2008-11-05 Thread Henrik Nordstrom
On Tue, 2008-11-04 at 19:49 +0100, Sergio Marchi wrote:

> cache_peer myparentproxy1.dipvvf.it parent 3128 3130 sourcehash
> round-robin no-query

Don't mix round-robin and sourcehash. I'm not sure what will happen in such
a confusing setup.

But you should indeed use no-query if you use sourcehash or round-robin.
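
In other words, one balancing method per peer line, for example (only the first
parent name comes from your mail; the other two are placeholders):

cache_peer myparentproxy1.dipvvf.it parent 3128 0 sourcehash no-query
cache_peer myparentproxy2.dipvvf.it parent 3128 0 sourcehash no-query
cache_peer myparentproxy3.dipvvf.it parent 3128 0 sourcehash no-query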

> It seems to work, but the connections are established only on one
> parent proxy, even if the clients' IP addresses are different.

How many addresses did you try with? There is a 1/3 probability that two
addresses end up on the same parent when you have 3 sourcehash parents.

Regards
Henrik




Re: [squid-users] MSNT authentication - login window

2008-11-05 Thread Henrik Nordstrom
On Mon, 2008-11-03 at 09:25 -0200, Luciano Cassemiro wrote:


> http_access deny our_networks users forbidden_sites !directors

This line requests authentication as the last acl on the line is
authentication related (directors).

Rewrite it to

http_access deny our_networks !directors forbidden_sites

and it will show an "access denied" message instead. And it also makes
deny_info more natural if you want a custom error message based on
forbidden_sites.
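
For example (the list file path and the error page name are placeholders; the
custom page must exist in Squid's errors directory):

acl forbidden_sites dstdomain "/etc/squid/forbidden_sites"
deny_info ERR_FORBIDDEN_SITE forbidden_sites
http_access deny our_networks !directors forbidden_sites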

Regards
Henrik




Re: [squid-users] WCCP load balancing and TPROXY fully transparent interception

2008-11-05 Thread Henrik Nordstrom
On Mon, 2008-11-03 at 16:57 +0800, Bin Liu wrote:
> Hi,
> 
> I'm going to deploy multiple squid servers in an ISP for HTTP traffic
> caching. I'm now considering using WCCP for load balancing and TPROXY
> for fully transparent interception.
> 
> Here is the problem. As far as I know, the Cisco WCCP module does not
> maintain connection status; it just redirects packets based on their IP
> addresses and ports. I'm just wondering if it's possible that one
> squid server (squid A, for example) sends an outbound request, but the
> router redirects the corresponding inbound response to another
> squid (squid B)? Then everything is totally messed up.

The redirection in both directions must match for this to work. See the
wiki for a configuration example

http://wiki.squid-cache.org/ConfigExamples/FullyTransparentWithTPROXY
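
The key part of that example is the pair of WCCP service groups: one hashed on
the client (source) address for the outbound direction, one hashed on the
client (destination) address for the return traffic, so both directions of a
flow land on the same cache. Roughly (the router address is a placeholder, and
older Squid 2.6 releases want numeric values for the forwarding/return method):

wccp2_router 192.0.2.1
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80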

Regards
Henrik

