Re: [squid-users] Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread Helmut Hullen
Hello, HillTopsGM,

You wrote on 03.09.13:

 I have basically done everything the FAQ has said with regard to
 optimizing the settings for windows updates.
 I have done a few, and I am wondering if there is any way to check
 what has been cached?

You should really use something like wsusoffline - it does a better  
job for this purpose than fiddling around with squid.

Best regards!
Helmut


[squid-users] Re: ERROR: This proxy does not support the 'rock' cache type. Ignoring.

2013-09-04 Thread Ahmad
Hi Amos,

Thanks a lot for the clarification.

However, I still have a problem.

After I corrected the rock storage settings and recompiled Squid, it
worked, but WCCP is no longer working with the router, although the
proxy still works if I set the port explicitly in my browser.

When I comment out the SMP options and restart Squid, I notice that
WCCP works with the router again.


Here is a sample startup log with SMP disabled:
2013/09/04 09:19:05 kid1| Squid Cache (Version 3.3.8): Exiting normally.
2013/09/04 09:19:10 kid1| Starting Squid Cache version 3.3.8 for
i486-pc-linux-gnu...
2013/09/04 09:19:10 kid1| Process ID 21334
2013/09/04 09:19:10 kid1| Process Roles: worker
2013/09/04 09:19:10 kid1| With 65536 file descriptors available
2013/09/04 09:19:10 kid1| Initializing IP Cache...
2013/09/04 09:19:10 kid1| DNS Socket created at [::], FD 7
2013/09/04 09:19:10 kid1| DNS Socket created at 0.0.0.0, FD 8
2013/09/04 09:19:10 kid1| Adding nameserver x.x.x.x from squid.conf
2013/09/04 09:19:10 kid1| Adding nameserver x.x.x.x from squid.conf
2013/09/04 09:19:10 kid1| Adding nameserver 8.8.8.8 from squid.conf
2013/09/04 09:19:10 kid1| Logfile: opening log
daemon:/var/log/squid3/access.log
2013/09/04 09:19:10 kid1| Logfile Daemon: opening log
/var/log/squid3/access.log
2013/09/04 09:19:10 kid1| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2013/09/04 09:19:10 kid1| Store logging disabled
2013/09/04 09:19:10 kid1| Swap maxSize 0 + 524288 KB, estimated 40329
objects
2013/09/04 09:19:10 kid1| Target number of buckets: 2016
2013/09/04 09:19:10 kid1| Using 8192 Store buckets
2013/09/04 09:19:10 kid1| Max Mem  size: 524288 KB
2013/09/04 09:19:10 kid1| Max Swap size: 0 KB
2013/09/04 09:19:10 kid1| Using Least Load store dir selection
2013/09/04 09:19:10 kid1| Set Current Directory to /var/cache/squid
2013/09/04 09:19:10 kid1| Loaded Icons.
2013/09/04 09:19:10 kid1| Accepting WCCPv2 messages on port 2048, FD 11.
2013/09/04 09:19:10 kid1| Initialising all WCCPv2 lists
2013/09/04 09:19:10 kid1| HTCP Disabled.
2013/09/04 09:19:10 kid1| Squid plugin modules loaded: 0
2013/09/04 09:19:10 kid1| Adaptation support is off.
2013/09/04 09:19:10 kid1| Accepting HTTP Socket connections at
local=x.x.x.x:3128 remote=[::] FD 12 flags=9
2013/09/04 09:19:10 kid1| Accepting TPROXY spoofing HTTP Socket connections
at local=x.x.x.x:64000 remote=[::] FD 13 flags=25
2013/09/04 09:19:10 kid1| Accepting SNMP messages on 0.0.0.0:3401
2013/09/04 09:19:10 kid1| Sending SNMP messages from 0.0.0.0:3401
2013/09/04 09:19:11 kid1| storeLateRelease: released 0 objects


Here is what happens when I add the following and start Squid:
#SMP implementations##
# Rockstore filesystem
workers 2
cache_dir rock /squid-cache/rock-1 3000 max-size=31000 max-swap-rate=250
swap-timeout=350
###


FATAL: kid1 registration timed out

Re: [squid-users] Re: Log Squid logs to Syslog server on the Network

2013-09-04 Thread Kinkie
On Wed, Sep 4, 2013 at 7:32 AM, Sachin Gupta ching...@gmail.com wrote:
 Hi,
Hi Sachin,

 Is there a way to log SQUID log messages to a Syslog server listening on the
 network?

Yes: http://wiki.squid-cache.org/Features/LogModules#Module:_System_Log
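
A minimal squid.conf fragment along the lines that wiki page describes
might look like this (the facility and priority here are illustrative
choices, not requirements):

```
# Log HTTP access records via the local syslog daemon, which can be
# configured to forward them to a remote syslog server on the network.
access_log syslog:daemon.info squid
```

The trailing `squid` token names the logformat; forwarding to a remote
host is then configured in the syslog daemon itself (e.g. rsyslog), not
in Squid.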


-- 
/kinkie


[squid-users] Re: squid active directory integration

2013-09-04 Thread Sandeep
Hi David,

You can try /usr/lib64/squid/wbinfo_group.pl. For testing purposes, run
the following in a terminal:

 # echo "username windowsgroup" | /usr/lib64/squid/wbinfo_group.pl -d

And if it returns something like:

Sending OK to squid
OK

Then you can use it in your Squid config. If you are using Kerberos, you
have to modify the /usr/lib64/squid/wbinfo_group.pl file, as this helper
is designed for NTLM-style naming. Just open the file in any text editor
and change the sub check {
section to look as follows:

sub check {
    local($user, $group) = @_;

    # Strip the Kerberos @REALM suffix so the bare username is matched
    my @DATA = split(/\@/, $user);
    $user = $DATA[0];


Make sure that you take a copy of this file before editing.
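
The effect of that Perl change - stripping the Kerberos @REALM suffix
from the username - can be sketched in plain shell as well (the
username here is a made-up example):

```shell
# Kerberos-style name, roughly as a Negotiate-authenticated Squid
# would pass it to the helper
user='bob@EXAMPLE.COM'
# Drop everything from the first '@' onward, like split(/\@/, $user)
short="${user%%@*}"
echo "$short"   # -> bob
```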

Hope this helps.

Best Regards,
Sandeep



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-active-directory-integration-tp4661575p4661955.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid active directory integration

2013-09-04 Thread Sandeep
Hi David,

Add the following line to the samba configuration:

  idmap config * : backend = rid

Restart the winbind service and give it a try. Let me know the result.

Best Regards,
Sandeep





[squid-users] Re: squid active directory integration

2013-09-04 Thread Sandeep
Great, thank you.

I have been testing it for two days; it works without problems.

May I ask you one question?

I added these lines to my squid.conf:
--
external_acl_type InetUsers_Ldap ttl=0 children=5 %LOGIN
/usr/lib64/squid/wbinfo_group.pl
acl InetUsers external InetUsers_Ldap Internet_Users
-

I am wondering: is it possible to use group names containing spaces, for
example (Internet Users, Admin Users, ), or do I need to rename those
groups and remove or change the spaces in them?

Thanks.






[squid-users] Re: squid active directory integration

2013-09-04 Thread Sandeep
Hi,

I tried wbinfo_group.pl with the default config and then with the
modification, but I received the error below.

-
[root@SQUIDSRV01 squid]# echo “squidtest restrictedinet” |
/usr/lib64/squid/wbinfo_group.pl -d
Debugging mode ON.
Got “squidtest restrictedinet” from squid
failed to call wbcLookupName: WBC_ERR_DOMAIN_NOT_FOUND
Could not lookup name restrictedinet”
failed to call wbcStringToSid: WBC_ERR_INVALID_PARAM
Could not convert sid  to gid
User:  -“squidtest-
Group: -restrictedinet”-
SID:   --
GID:   --
Sending ERR to squid
ERR
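
Note the curly quotes in the transcript above: the debug output shows
`Could not lookup name restrictedinet”`, with the closing quote attached
to the group name. Curly quotes pasted from a word processor are not
stripped by the shell the way ASCII double quotes are, so they reach the
helper as part of the arguments. A quick illustration:

```shell
plain="squidtest restrictedinet"      # ASCII quotes: removed by the shell
curly='“squidtest restrictedinet”'    # curly quotes: passed through literally
echo "$plain"
echo "$curly"
```

Retesting with plain ASCII quotes typed directly in the terminal (not
pasted) would rule this out as the cause.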



wbinfo -t, wbinfo -u and wbinfo -g all work fine.



Here is my smb.conf; I have no idea what is wrong.

#--authconfig--start-line--

# Generated by authconfig on 2013/08/09 16:14:11
# DO NOT EDIT THIS SECTION (delimited by --start-line--/--end-line--)
# Any modification may be deleted or altered by authconfig in future

   workgroup = MYORG
   password server = 192.168.1.35
   realm = MYORG.EXAMPLE.LOCAL
   security = ads
   idmap config * : range = 16777216-33554431
   winbind separator = +
   template shell = /bin/false
   winbind use default domain = yes
   winbind offline logon = false

#--authconfig--end-line--
kerberos method = dedicated keytab
dedicated keytab file = /etc/squid/HTTP.keytab
winbind enum groups = Yes
winbind enum users = Yes
idmap config * : range = 1 - 2
idmap config * : backend = tdb
idmap config myorg : backend = tdb
idmap config myorg : range = 2 - 2000
map untrusted to domain = Yes
client ntlmv2 auth = Yes
client lanman auth = No
winbind normalize names = No
winbind nested groups = Yes
winbind nss info = rfc2307
winbind reconnect delay = 30
winbind cache time = 1800
winbind refresh tickets = true
allow trusted domains = Yes
server signing = auto
client signing = auto
lm announce = No
ntlm auth = no
lanman auth = No
preferred master = No
wins support = No
encrypt passwords = yes
printing = bsd
load printers = no
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192







Re: [squid-users] Re: Re: ext_kerberos_ldap_group_acl vs ext_ldap_group_acl

2013-09-04 Thread Eugene M. Zheganin
Hi.

On 04.09.2013 11:01, Markus Moeller wrote:

 Are you still interested in tcpdump captures you mentioned in previous
 letter ?


 Yes I would still like to see it.

(It looks like the mailing list tracker ate this message for some reason -
my relay says it was sent, but it doesn't appear in the mailing list;
it was probably marked as spam because of the URLs, so here's a copy
I'm sending to you directly.)

Here's the pcap capture:
http://unix.zhegan.in/files/ext_kerberos_ldap_group_acl.pcap
Console log for the exchange:
http://unix.zhegan.in/files/ext_kerberos_ldap_group_acl.txt

The capture contains network exchange from the following sequence of
actions:

- tcpdump was started as 'tcpdump -s 0 -w
ext_kerberos_ldap_group_acl.pcap -ni vlan1 port 53 or port 389 or port 88'
- helper was started in shell, arguments:

/usr/local/libexec/squid/ext_kerberos_ldap_group_acl \
-i \
-a \
-m 16 \
-d \
-D NORMA.COM \
-b cn=Users,dc=norma,dc=com \
-u proxy5-backup \
-p  \
-N soft...@norma.com \
-S hq-gc.norma@norma.com

- line 'emz Internet%20Users%20-%20Proxy1' was typed 5 times (5 'OK'
answers were received).
- helper was stopped
- tcpdump was stopped

From my point of view the initial pause and the subsequent ones are the
same.

Addresses:

192.168.13.3 - the address of the machine where the helper was run
192.168.3.45 - one of the AD controllers

The machine was idle for the duration of the experiment (this is a backup
gateway with VRRP, in inactive state).
This machine runs named, and its resolver uses it via the lo0
interface, so no DNS exchange can be seen, as all of the answers were
cached by named.
If seeing the DNS exchange is vital for understanding the pause, I can
probably recapture the exchange using an external DNS server.

Eugene.




[squid-users] Reiser vs ext4

2013-09-04 Thread Alfredo Rezinovsky
The page recommends reiserfs, so I tried to set up a Squid 3.3 instance
for 200Mbps using reiser.


I had a problem: every five to ten minutes the system froze and the
bandwidth dropped to half for about 10 seconds.


I tried tuning Squid in many ways, and I noticed that reiser flushes the
buffers all at once, freezing the I/O - I suppose even the network I/O.
The solution was to reformat the cache disks using ext4 (noatime and
so on). Now the proxy runs smoothly.
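
For reference, the "noatime and so on" mount options might look like
this in /etc/fstab (device and mount point are placeholders):

```
# Cache disk: skip access-time updates to avoid a metadata write on
# every cache hit
/dev/sdb1  /var/cache/squid  ext4  noatime,nodiratime  0  2
```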


I know there should be a reason for recommending reiser in the first
place, but I had a very bad experience with it.


Also... I know of people leaving Linux in favor of BSD to be able to use
ZFS. Does anyone know the status and usability of ZFS on Linux?


--
Alfrenovsky


Re: [squid-users] Reiser vs ext4

2013-09-04 Thread Antony Stone
On Wednesday 04 September 2013 at 12:35:50, Alfredo Rezinovsky wrote:

 I know of people leaving Linux in favor of BSD to be able to use
 ZFS. Does anyone know the status and usability of ZFS on Linux?

It's a licensing issue, not a technical one.  ZFS works fine on Linux, but its 
licence is incompatible with the GPL, so it's not included in any standard 
distros.

You can easily add it to your own system afterwards, though - for example see 
http://zfsonlinux.org/


Antony.

-- 
Users don't know what they want until they see what they get.

 Please reply to the list;
   please don't CC me.


Re: [squid-users] Reiser vs ext4

2013-09-04 Thread Helmut Hullen
Hello, Alfredo,

You wrote on 04.09.13:

 The page recommends reiserfs. So I tried to set up a Squid 3.3
 instance for 200Mbps using reiser.

For the squid cache? Strange.
That's a cache, not an archive. Journalling shouldn't be necessary.

Best regards!
Helmut


Re: [squid-users] Reiser vs ext4

2013-09-04 Thread Eliezer Croitoru
Hey there,

On 09/04/2013 01:35 PM, Alfredo Rezinovsky wrote:
 The page recommends reiserfs. So I tried to set up a Squid 3.3
 instance for 200Mbps using reiser.
 
Squid uses L1 and L2 cache directories because the design had to work
within the limits every FS has; reiser apparently hits such a limit on
your specific setup, and maybe some others do too.

 I had a problem, every five to ten minutes, the system freezes and the
 bandwith drops to a half for about 10 seconds.
If you can send more details on the problem you encountered, it will
help when comparing extX to reiser FS.
 
 I tried tuning squid in many ways. And I noticed reiser flushes the
 buffers all at a time, freezing the I/O, I suppose even the network I/O.
 The solution was to reformart the cache disks using ext4 (noatime an
 so). Now the proxy runs smoothly.
Squid cannot do a thing in kernel land; in my experience I/O
scheduling belongs to the OS, not to the application level.
 
 I know there should be a reason for recomending reiser in the 1st place
 but I had very bad experience with it.
Can't we ask the person who recommended it? Or maybe we can?
 
 Also... I know people leaving linux in favor of BSD to be able to use
 zfs. Anyone knows the status and usability of zfs in linux ?
Well, maybe small groups of people do leave for or try ZFS, but
enterprise-class storage is the real issue, and I have not yet seen a
BSD vendor like RH; I would be happy to see that kind of vendor.

Eliezer
 
 -- 
 Alfrenovsky



Re: [squid-users] Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread Eliezer Croitoru
It really depends on the environment and the link speeds you have now.
If you do one very specific thing, which is to use a cache_peer only for
Windows updates, with a hierarchy like this:
Main LB and router proxy - domain-specific proxy cache_peer

On the domain-specific cache_peer instance or machine, use:
#start
range_offset_limit -1
maximum_object_size 2 GB # or any other size that you think is
worthwhile and makes sense.
quick_abort_min -1
#end
I am not following Windows updates too closely, but I assume that these
updates have a pattern in the URL and/or headers which can clarify how
they *should* be cached safely.

This wiki page: http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
is not up to date with all the settings, but it's a good starting point
for researching Windows updates on the internet.
If you have a local Windows network and server, you can just add the
WSUS service, which will build an updates store; that is better for many
networks.
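
For reference, the Windows Update rule that FAQ page has suggested is
along these lines (treat it as a starting point; the exact domains and
extensions may have changed since):

```
# Keep Windows Update payload files cacheable for up to 30 days
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200
```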


Eliezer

On 09/04/2013 06:40 AM, HillTopsGM wrote:
 the settings for windows updates.
 I have done a few, and I am wondering if there is any way to check what has
 been cached?
 
 Thanks
 
 
 
 



Re: [squid-users] Re: ERROR: This proxy does not support the 'rock' cache type. Ignoring.

2013-09-04 Thread Amos Jeffries

On 4/09/2013 7:28 p.m., Ahmad wrote:

Hi Amos,
thanks a lot for the clarification,

but again I still have problems,

after I corrected the rock storage and recompiled Squid,
it worked, but WCCP is no longer working with the router, although if I
use the port in my browser it works.


You may have some luck with WCCP by enabling

However, the kid registration and IPC shm_open() issues are more 
difficult. I don't have any clues here at this point. The 
http://wiki.squid-cache.org/Features/SmpScale wiki page itself does not 
have much to say on the "no such file" error coming out of it. We know 
about permissions errors and strange "invalid parameter" problems on 
MacOS - but not "no such file".




here is what happens when I add the following and start squid
#SMP implementations##
# Rockstore filesytem
workers 2
cache_dir rock /squid-cache/rock-1 3000 max-size=31000 max-swap-rate=250
swap-timeout=350
###


FATAL: kid1 registration timed out
Squid Cache (Version 3.3.8): Terminated abnormally.
CPU Usage: 0.050 seconds = 0.000 user + 0.050 sys
Maximum Resident Size: 77328 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
 total space in arena:4612 KB
 Ordinary blocks: 4553 KB 15 blks
 Small blocks:   0 KB  1 blks
 Holding blocks: 36892 KB  8 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:  58 KB
 Total in use:   41445 KB 899%
 Total free:59 KB 1%
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-squid-page-pool.shm): (2) No such file or directory

Squid Cache (Version 3.3.8): Terminated abnormally.
CPU Usage: 0.000 seconds = 0.000 user + 0.000 sys
Maximum Resident Size: 19552 KB
Page faults with physical i/o: 8
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-squid-page-pool.shm): (2) No such file or directory

Squid Cache (Version 3.3.8): Terminated abnormally.
CPU Usage: 0.000 seconds = 0.000 user + 0.000 sys
Maximum Resident Size: 19552 KB
Page faults with physical i/o: 0
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-squid-page-pool.shm): (2) No such file or directory

Squid Cache (Version 3.3.8): Terminated abnormally.
CPU Usage: 0.000 seconds = 0.000 user + 0.000 sys
Maximum Resident Size: 19552 KB
Page faults with physical i/o: 0
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-squid-page-pool.shm): (2) No such file or directory

Squid Cache (Version 3.3.8): Terminated abnormally.
CPU Usage: 0.000 seconds = 0.000 user + 0.000 sys
Maximum Resident Size: 19568 KB
Page faults with physical i/o: 0
=
Here are the permissions and the mounts:
root@drvirus:/home/squid-3.3.8# ls -l /var/run/squid
total 0
srwxr-x--- 1 proxy proxy 0 Sep  4 09:28 coordinator.ipc
srwxr-x--- 1 proxy proxy 0 Sep  4 09:28 kid-1.ipc
srwxr-x--- 1 proxy proxy 0 Sep  4 09:28 kid-2.ipc
srwxr-x--- 1 proxy proxy 0 Sep  4 09:28 kid-3.ipc
srwxr-x--- 1 proxy proxy 0 Sep  3 15:37 kid-4.ipc
===
root@drvirus:/home/squid-3.3.8# mount
/dev/sda1 on / type ext4 (rw,noatime)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
none on /opt/vyatta/config type tmpfs (rw,nosuid,nodev,mode=775,nr_inodes=0)
/opt/vyatta/etc/config on /config type none (rw,bind)
shm on /dev/shm type tmpfs (rw,noexec,nosuid,nodev)
shm on /dev/shm type tmpfs (rw,noexec,nosuid,nodev)
root@drvirus:/home/squid-3.3.8#




What is wrong?

Again, note that although the error above still exists, I can still use
the proxy by port, but WCCP is no longer up with the router.



regards




-
Mr.Ahmad




[squid-users] Re: squid active directory integration

2013-09-04 Thread cheitac
Hi all,

I am wondering: is it possible to use group names containing spaces, for
example (Internet Users, Admin Users, ), or do I need to rename those
groups and remove or change the spaces in them, like this (InternetUsers
AdminUsers)?





[squid-users] Re: Squid 3 doesn't overwrite/replace cached objects(?)

2013-09-04 Thread uners
Thanks Antony for your explanation. Sounds reasonable.

As the production Squid3 process runs as user proxy and the cache disk
contents belong to the same user, there shouldn't be a problem for Squid3
overwriting/recycling the cached objects.

The thread is marked resolved.

Regards,
Bob





[squid-users] Re: Squid 3 doesn't overwrite/replace cached objects(?)

2013-09-04 Thread uners
Thanks Antony for your explanation. Sounds reasonable to me.

The thread is marked solved.

Regards,
Bob





Re: [squid-users] Re: squid active directory integration

2013-09-04 Thread Amos Jeffries

On 4/09/2013 9:44 p.m., Sandeep wrote:

Great, thank you.

I'm testing it two days. it works without problem.

may ask you one question?

I add in my these lines in my squid.conf
--
external_acl_type InetUsers_Ldap ttl=0 children=5 %LOGIN
/usr/lib64/squid/wbinfo_group.pl
acl InetUers external InetUsers_Ldap Internet_Users
-

I'm interesting is it possible to use space separated group? for example
(Internet Users, Admin Users, ) or i need to rename those groups and remove
or change spaces in it?


In the current stable releases you can load the ACL values from a file 
and write each group name on one line of the file. Squid will load each 
line, spaces and all, as a single value for the ACL.


Like this:
in file /etc/squid/groups:
Internet Users
Other People

in file /etc/squid/squid.conf:
  acl InternetUsers external InetUsers_Ldap "/etc/squid/groups"


In version 3.4 (beta) you will be able to use the 
configuration_includes_quoted_values directive to turn string quoting of 
configuration tokens on and off. (But please wait for the 3.4.0.2 
release; there are some problems in 3.4.0.1.)


Amos


[squid-users] Re: squid active directory integration

2013-09-04 Thread Sandeep
Hi David,

You can try the steps below:

1. Assume there is a user bob who is a member of the Admin Users group.
Execute the following command in a terminal:

# echo "bob Admin%20Users" | /usr/lib64/squid/wbinfo_group.pl

And if you get OK as the result, that means you can go ahead.

2. Keep a backup copy of the /usr/lib64/squid/wbinfo_group.pl file,
then open the original file in an editor (vi or nano). Search for the
main loop section and edit the foreach loop as follows:

 foreach $group (@groups) {
    $group =~ s/%([0-9a-fA-F][0-9a-fA-F])/pack("c", hex($1))/eg;

    # Add this line
    $group =~ s/%20/ /g;

    $ans = check($user, $group);
    last if $ans eq "OK";
}


You have to add one line, shown above under the comment.


3. In your squid configuration, replace the space in the group name
with %20 so that it looks like Admin%20Users.

4. Restart the squid.


Try these steps and let us know the result.
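
The %20 encoding and decoding used in the steps above can be checked
quickly in a shell (the group name is just an example):

```shell
encoded='Admin%20Users'
# Same substitution as the added Perl line: turn %20 back into spaces
decoded=$(printf '%s' "$encoded" | sed 's/%20/ /g')
echo "$decoded"   # -> Admin Users
```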

Best Regards
Sandeep



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-active-directory-integration-tp4661575p4661973.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread HillTopsGM
Hi Helmut Hullen,

Thanks for this tip - I know you have mentioned it before, but what I am
trying to avoid, on an ongoing basis (every week when there is an
update), is having all 12 computers download the same file.

This is a great tool if you are always doing fresh installs - that's fine -
but it doesn't help me day to day.

The other thing is that it doesn't look like the tool is updated as
frequently as Windows updates come out - in other words, it doesn't
appear to update itself incrementally.

Am I correct in assuming this?

This is why I'd really prefer to have the proxy work properly - just set it
up and forget it.  That's the dream.





[squid-users] Squid applying ACLs differently when transparent vs non-transparent proxy

2013-09-04 Thread Andrew Wood
My Squid proxy, which is used to prevent access to inappropriate sites
and to display a session splash/AUP page to visitors on the public wifi
VLAN subnet, works great when transparently intercepting traffic via
NAT/iptables, but it intermittently fails to block content when the
client is set to use the proxy explicitly. Does Squid see the source or
destination IP differently in this case?


Is it possible to block Squid from accepting traffic which hasn't been
transparently intercepted, so that clients can't manually set the proxy
to circumvent the ACLs?
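
One way this is commonly approached (a sketch only, not something tested
here; names and ports are examples) is to give the intercepting port a
name and refuse anything that arrives on another port, using the
myportname ACL:

```
# Hypothetical squid.conf fragment
http_port 3129 intercept name=intercept_port
acl from_intercept myportname intercept_port
# Refuse service to anything that did not arrive via interception
http_access deny !from_intercept
```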

If I block non-transparently-intercepted traffic, I have a further issue:
I need to allow HTTPS through Squid somehow, and as I understand it there
are 3 ways to do it:

1. Transparently intercept port 443 with bump-client-first (man in the middle)

2. Configure clients to explicitly use the proxy for https via a CONNECT tunnel

3. Transparently intercept port 443 with bump-server-first and dynamic 
certificate generation

Option 1 is ruled out, as visitors will be spooked by the browser warnings.

Option 2 requires the client to be explicitly configured, which with
BYOD means a PAC file set via DHCP or DNS; but this is problematic with
many browsers, and it means Squid will need to accept traffic that was
not transparently intercepted, which as mentioned at the start causes
problems with the ACLs.

Option 3 is promising, but how transparent is the dynamic cert
generation? Do browsers still need to be configured to accept our
gateway as a CA, or is the remote server cert passed through verbatim?


I hope this makes sense. I've experimented with many things, but it's
looking increasingly like I'm going to have to block non-intercepted
traffic (how?) and go with option 3.

Many thanks
Andrew

Sent from iPhone

[squid-users] HTTPS Caching between Squid's Parent and Child

2013-09-04 Thread Ghassan Gharabli
Hi,

I am trying to set up SSL-Bump between a parent Squid proxy and a child proxy.

I am using Squid version 3.3.8 for both the parent and the child,
installed on the same system (Fedora 64-bit).

Configure options: --enable-ssl --enable-ssl-crtd --enable-icap-client
--with-filedescriptors=65536 --enable-ltdl-convenience

My goal is to cache HTTPS traffic, due to the very expensive bandwidth;
I have also noticed that most websites are moving to the HTTPS protocol.

I am having difficulties establishing a connection between the parent
and the child Squid.

I am able to cache HTTPS traffic by installing a certificate file on
each customer's PC or phone.

Is there any possible way to make the parent proxy cache just the HTTPS
traffic and let the child proxy negotiate with the parent to establish
the SSL connection, using the required certificate, so that the child
could share the connection without requiring customers to install the
certificate?
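
For the child-to-parent leg, the usual building block is a TLS
cache_peer link (a sketch only; the host, port and flags are
assumptions, and this does not by itself answer the client-side
certificate question):

```
# Hypothetical child-side squid.conf line: encrypt traffic to the parent
cache_peer 192.168.0.10 parent 3129 0 no-query ssl sslflags=DONT_VERIFY_PEER
```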

Parent Proxy Settings:
-

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl SSL method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
#http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 0.0.0.0:9000
http_port 0.0.0.0:3128 intercept ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
cert=/usr/local/squidparent/ssl_cert/myCA.pem
https_port 3129 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=16MB
cert=/usr/local/squidparent/ssl_cert/myCA.pem

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /usr/local/squidparent/var/cache/squid 1 16 256

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squidparent/var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern ^https:\/\/.*\.(jp(eg|g|e|2)|tiff?|bmp|gif|png|kmz|eot|css|js) 129600 99% 129600 ignore-no-cache ignore-no-store reload-into-ims override-expire ignore-must-revalidate store-stale ignore-private ignore-auth
refresh_pattern \.(class|css|cssz|js|jsz|xml|jhtml|txt|tif|swf|zsci|arc|asc) 129600 99% 129600 ignore-no-cache ignore-no-store reload-into-ims override-expire ignore-must-revalidate store-stale ignore-private ignore-auth
refresh_pattern \.(doc|xls|ppt|ods|odt|odp|pdf|rtf|inf|ini) 129600 99% 129600 ignore-no-cache ignore-no-store reload-into-ims override-expire ignore-must-revalidate store-stale ignore-private
refresh_pattern \.(jp(eg|g|e|2)|tiff?|bmp|gif|png|kmz|eot) 129600 99% 129600 ignore-no-cache ignore-no-store override-lastmod reload-into-ims override-expire ignore-must-revalidate store-stale ignore-private ignore-auth
refresh_pattern \.(z(ip|[0-9]{2})|r(ar|[0-9]{2})|jar|tgz|bz2|grf|gpf|lz|lzh|lha|arj|sis|gz|ipa|tar|rpm|vpu|amz|img) 129600 99% 129600 ignore-no-cache ignore-no-store override-lastmod reload-into-ims override-expire ignore-must-revalidate store-stale ignore-private
refresh_pattern \.(mp(2|3|4)|wav|og(g|a)|flac|mid|midi?|r(m|mvb)|aac|mka|ap(e|k)) 129600 99% 129600 ignore-no-cache ignore-no-store override-lastmod reload-into-ims override-expire ignore-must-revalidate store-stale ignore-private

[squid-users] Re: Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread HillTopsGM
Hey Eliezer Croitoru-2,

You mentioned:

 If you have a local Windows network and server, you can just add the WSUS
 service, which will build an updates store; that is better for many networks. 

Is that an actual part of Windows Server (WSUS), or is it referring to what
Helmut was referring to earlier, http://www.wsusoffline.net/ ?

Regarding:

 #start
 range_offset_limit -1
 maximum_object_size 2 GB # or any other size that you think is
 worthwhile and makes sense.
 quick_abort_min -1
 #end 

That is basically right out of the FAQ, and I have done that.

Here is what I noticed: there was one particular update (it was a Samsung
printer driver - OK, not really a Windows update per se, but it was one I
could identify easily) that appeared on all machines.  I updated one machine,
choosing that update only.  When I went to the next machine to do the same,
it appeared to take just as long, if not longer, to download.

It is not very scientific, but it leaves me wondering what to do next.
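One concrete way to check is to watch squid's access.log: the first download of an update object should log TCP_MISS, and a later machine fetching the same URL should log TCP_HIT. A minimal sketch (the log lines and path below are fabricated samples for illustration; point the greps at your real /var/log/squid3/access.log):

```shell
# Two fabricated access.log lines: first machine misses, second machine hits.
cat > /tmp/sample_access.log <<'EOF'
1378300000.123    512 192.168.1.10 TCP_MISS/200 10485760 GET http://download.windowsupdate.com/msdownload/update/driver.cab - HIER_DIRECT/1.2.3.4 application/octet-stream
1378300100.456     12 192.168.1.11 TCP_HIT/200 10485760 GET http://download.windowsupdate.com/msdownload/update/driver.cab - HIER_NONE/- application/octet-stream
EOF

# Count cached vs uncached update fetches; any TCP_HIT means the
# object really was served from squid's cache.
hits=$(grep -c 'TCP_HIT.*windowsupdate' /tmp/sample_access.log)
misses=$(grep -c 'TCP_MISS.*windowsupdate' /tmp/sample_access.log)
echo "hits=$hits misses=$misses"
```

If the second machine's request still logs TCP_MISS, the object was not cached; maximum_object_size and refresh_pattern rules are the usual suspects.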

I did see this added to a particular squid.conf file . . . I am not sure,
but does anyone think this would help:


 # compressed
 refresh_pattern -i \.gz$ 10080 90% 99 override-expire override-lastmod
 reload-into-ims ignore-reload
 refresh_pattern -i \.cab$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.bzip2$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.bz2$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.gz2$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.tgz$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.tar.gz$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.zip$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.rar$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.tar$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.ace$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.7z$ 10080 90% 99 override-expire override-lastmod
 reload-into-ims ignore-reload
 
 # documents
 refresh_pattern -i \.xls$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.doc$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.xlsx$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.docx$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.pdf$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.ppt$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.pptx$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.rtf\?$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 
 # multimedia
 refresh_pattern -i \.mid$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.wav$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.viv$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.mpg$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.mov$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.avi$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.asf$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.qt$ 10080 90% 99 override-expire override-lastmod
 reload-into-ims ignore-reload
 refresh_pattern -i \.rm$ 10080 90% 99 override-expire override-lastmod
 reload-into-ims ignore-reload
 refresh_pattern -i \.rmvb$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.mpeg$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.wmp$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.3gp$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.mp3$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 refresh_pattern -i \.mp4$ 10080 90% 99 override-expire
 override-lastmod reload-into-ims ignore-reload
 
 # images
 refresh_pattern -i 

Re: [squid-users] Re: Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread Helmut Hullen
Hello, HillTopsGM,

You wrote on 04.09.13:

[wsusoffline]

 Thanks for this tip - I know you have mentioned it before, but what I
 am trying to avoid is, on an ongoing basis (every week) when there is
 an update, having all 12 computers download the same file.

 This is a great tool if you are always doing fresh installs - that's
 fine - but it doesn't help me day to day.


But surely it helps! It's a database for all desired Windows versions,
and it is completed/refreshed whenever you want.

 The other thing is that it doesn't look like the tool is updated as
 frequently as windows updates come out - in other words, it doesn't
 appear to incrementally update itself.

Updating is a cron job. Only files that do not yet exist are downloaded
during such a run, and they stay in the directory for as long as Windows
looks for them - unlike files in the squid cache.

 Am I correct in assuming this?

No. Just give it a try for a few weeks. Microsoft has a fixed patch
day.

 This is why I'd really prefer to have the proxy work properly - just
 set it up and forget it.  That's the dream.

The squid cache deletes old files, because it's a cache.

By the way: my wsusoffline directory currently contains the updates for
Windows XP (32-bit) and Windows 7 (32-bit and 64-bit); that's nearly 7
GB. I wouldn't fill a cache with that many nearly static files.


Best regards!
Helmut


Re: [squid-users] Re: ERROR: This proxy does not support the 'rock' cache type. Ignoring.

2013-09-04 Thread Alex Rousskov
On 09/04/2013 01:28 AM, Ahmad wrote:

 after i corrected rock  storage and recompiled squid ,
 it worked , but wccp is no longer working with router ???!

I believe WCCP registration timeouts when using Rock Store have been
discussed before on this mailing list this year. I do not have a pointer
to that email and I am not sure there is a corresponding bug report, but
you should be able to find it using rock and wccp as keywords.
Consider trying foreground rebuild as a workaround.
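The foreground rebuild mentioned above corresponds to squid's -F command-line flag; a sketch (the config path is an assumption):

```
# -F: don't serve any requests until the store rebuild completes,
# so the rock index scan finishes before traffic arrives.
squid -F -f /etc/squid/squid.conf
```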


 FATAL: kid1 registration timed out
 Squid Cache (Version 3.3.8): Terminated abnormally.

 what's wrong ???

You may be suffering from problems discussed at the following URL.
The same comment suggests a workaround.
http://bugs.squid-cache.org/show_bug.cgi?id=3880#c1


The two issues discussed above are kind of related because they have a
similar underlying trigger, but fixing one of them will not fix the other.


Alex.



Re: [squid-users] HTTPS Caching between Squid's Parent and Child

2013-09-04 Thread Alex Rousskov
On 09/04/2013 09:50 AM, Ghassan Gharabli wrote:

 Is there any solution to cache HTTPS Traffic without installing a
 certificate file at customer's machines?.

In short, no.


 Do you think that using SSL Bump between the parent and child Squids
 would solve the problem?

No, it would not. Without Squid bumping the browser connection, the
browser would be using a secure connection with the origin server, and
any Squid within your hierarchy would not be able to see (and cache) any
HTTP requests.

Alex.
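For completeness, when installing a CA certificate on clients is acceptable, the bump has to happen on the Squid that terminates the browser's TLS. A hedged squid.conf sketch for squid 3.3 (paths and port are illustrative, not a recommended deployment):

```
# Terminate and re-encrypt client TLS so the HTTP requests inside
# become visible (and cacheable) to this Squid.
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/usr/local/squid/ssl_cert/myCA.pem
ssl_bump server-first all
```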



Re: [squid-users] Reiser vs etx4

2013-09-04 Thread Alex Rousskov
On 09/04/2013 04:35 AM, Alfredo Rezinovsky wrote:
 The page recommends reiserfs. So I tried to set up a squid for
 200 Mbps using reiser and squid 3.3
 
 I had a problem: every five to ten minutes, the system freezes and the
 bandwidth drops to half for about 10 seconds.

Yes, this is typical for either untuned or overloaded ext2 and ext4 file
systems as well.


 I tried tuning squid in many ways, and I noticed reiser flushes the
 buffers all at once, freezing the I/O - I suppose even the network I/O.

Yes. AFAIK, _any_ Linux process might be frozen indiscriminately when the
fs is running behind its flushing schedule.


 The solution was to reformat the cache disks using ext4 (noatime and
 so on). Now the proxy runs smoothly.

In our tests, it was easier to configure ext2 to do the right thing with
respect to flushing than ext4, but we were not using ufs-based
cache_dirs and YMMV.

Also, if you are using ufs-based cache_dirs then you are likely to start
seeing similar overload/flushing effects if your Squid load increases.
Besides fs tuning (to avoid excessive accumulation of unflushed data),
the key is in limiting disk traffic, which ufs-based cache_dirs cannot
do (yet?). On the other hand, ufs-based systems are not meant for
performance-sensitive environments anyway :-).


Cheers,

Alex.



[squid-users] Re: Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread HillTopsGM
Hi Helmut,

You have my attention now!


Helmut Hullen wrote
 Hello, HillTopsGM,
 
 You wrote on 04.09.13:
 
 [wsusoffline]
 
 Thanks for this tip - I know you have mentioned it before, but what I
 am trying to avoid is, on an ongoing basis (every week) when there is
 an update, having all 12 computers download the same file.
 
 This is a great tool if you are always doing fresh installs - that's
 fine - but it doesn't help me day to day.
 
 
 But surely it helps! It's a database for all desired Windows versions,
 and it is completed/refreshed whenever you want.

it is completed/refreshed whenever you want?
I was looking into it more, so can you confirm this for me:

If I run the UpdateGenerator.exe file, will that ONLY add the files I don't
have right at the moment?

If that is so, then any time Windows notifies me that there are updates, I'd
simply have to run this UpdateGenerator.exe file to get the new ones, and
then go to all the other machines and run the UpdateInstaller.exe file.  Is
that right?



Helmut Hullen wrote
 Updating is a cron job. Only files that do not yet exist are downloaded
 during such a run, and they stay in the directory for as long as Windows
 looks for them - unlike files in the squid cache.

. . . when you say it is a cron job, are you saying that it is part of the
*wsusoffline* program itself?

These updates that it collects come directly from Microsoft?

If this is the case, this would be tremendously helpful!

Oh, and what happens when http://www.wsusoffline.net/ comes up with a new
'version' - do you have to start all over again?

Thanks for your help!




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Any-Way-To-Check-If-Windows-Updates-Are-Cached-tp4661935p4661981.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Wondering Why This Isn't Caching

2013-09-04 Thread Geoffrey Schwartz
I have a page which gets cached by Firefox.  Google Page Speed says it's
cacheable as well.  Squid does not cache it.

I'm running Squid v3.3.8.  My configuration is pretty standard... I'll post
it if need be.

The page is being served through Apache's mod_asis so I can explicitly set
the headers.  Here it is:

=
Status: 200 A-OK
Date: Wed, 04 Sep 2013 15:32:28 GMT
Cache-Control: public
Expires: Thu, 05 Sep 2013 15:32:28 GMT
Vary: Accept-Encoding
Content-Type: text/html

<html>
<body>
hello, world
</body>
</html>
=

When I visit the page in Firefox (I have to disable browser caching in
order to get it to send the request in the first place), my browser
sends/receives the following headers (according to Firebug):
=
GET http://example.com/test.asis HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:23.0)
Gecko/20100101 Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive

HTTP/1.1 200 OK
Date: Wed, 04 Sep 2013 17:09:48 GMT
Server: Apache/2.2.25 (Unix) mod_ssl/2.2.25 OpenSSL/1.0.1e DAV/2
mod_perl/2.0.7 Perl/v5.12.4
Cache-Control: public
Expires: Thu, 05 Sep 2013 15:32:28 GMT
Vary: Accept-Encoding
Content-Length: 43
Content-Type: text/html
X-Cache: MISS from webcache-dev1.cc.columbia.edu
Via: 1.1 webcache-dev1.cc.columbia.edu (squid/3.3.8)
Connection: keep-alive
=

I get this response every time (as in, X-Cache: MISS).  I know the Date
header gets changed (probably by apache using the last-modified time) as
well as the Status, but the Expires header is still there (and still in the
future).  This should be cached, right?  Apparently Google's Page Speed
tool and Firefox think so.

Out of curiosity, I removed the Vary header.  Squid then started caching
the page.
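A quick way to confirm behaviour like this is to compare the X-Cache header across repeated fetches through the proxy. The extraction step can be sketched against a captured header block (the commented curl line, proxy address, and hostnames are illustrative):

```shell
# In practice: curl -s -D - -o /dev/null -x proxy.example:3128 http://example.com/test.asis
# Here the headers are a captured sample so the parsing is self-contained.
response='HTTP/1.1 200 OK
X-Cache: MISS from webcache-dev1.cc.columbia.edu
Via: 1.1 webcache-dev1.cc.columbia.edu (squid/3.3.8)'

# Pull out the HIT/MISS token; a second fetch returning HIT means cached.
xcache=$(printf '%s\n' "$response" | awk '/^X-Cache:/ {print $2}')
echo "$xcache"
```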

Did I run into a bug with 3.3.8?  I did several searches but wasn't able to
find anything about this issue.

--
Geoffrey Schwartz
Columbia University - CUIT
Ext. 19835


Re: [squid-users] Re: Any Way To Check If Windows Updates Are Cached?

2013-09-04 Thread Helmut Hullen
Hello, HillTopsGM,

You wrote on 04.09.13:

 it is completed/refreshed whenever you want?
 I was looking into it more, so can you confirm this for me:

 If I run the UpdateGenerator.exe file, will that ONLY add the files I
 don't have right at the moment?

Surely.
wget checks if the file already exists.

 If that is so, then any time Windows notifies me that there are
 updates, I'd simply have to run this UpdateGenerator.exe file to get
 the new ones, and then go to all the other machines and run the
 UpdateInstaller.exe file.  Is that right?

That's right.

 Helmut Hullen wrote
 Updating is a cron job. Only files that do not yet exist are downloaded
 during such a run, and they stay in the directory for as long as Windows
 looks for them - unlike files in the squid cache.

 . . . when you say it is a cron job, are you saying that it is part
 of the *wsusoffline* program itself?

No - writing a cron job is the administrator's job. But that's a very
simple job.
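Such a crontab entry might look like this (the install path, script name, and arguments are assumptions; use whatever download script your copy of wsusoffline ships):

```
# Run the wsusoffline download weekly, e.g. Wednesday 03:30,
# the day after Microsoft's patch Tuesday.
30 3 * * 3  /opt/wsusoffline/sh/DownloadUpdates.sh w61-x64 enu
```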

 These updates that it collects come directly from Microsoft?

Yes.

 If this is the case, this would be tremendously helpful!

 Oh, and what happens when  http://www.wsusoffline.net/
 http://www.wsusoffline.net/   comes up with a new 'version', do you
 have to start all over again?

That depends!
Windows: you're told that there is a newer version.
Linux: the administrator has to watch the wsusoffline website.

Installing the program: under Linux just copy it into your desired  
directory; it overwrites only the wsusoffline binaries/scripts.

But that's a wsusoffline problem (if it is a problem at all), not a squid problem.

Best regards!
Helmut


Re: [squid-users] One squid instance, two WAN links, how to failover from primary to secondary link?

2013-09-04 Thread Thomas Harold

On 8/26/2013 6:41 AM, Nishant Sharma wrote:

Hi Thomas,

Thomas Harold thomas-li...@nybeta.com wrote:

In an instance where you have a single instance of squid running on two
WAN links, where WAN #2 is very slow compared to WAN #1.

Is this simply handled by changing the default gateway of the server
using the ip route commands when we detect that WAN#1 is down?


Yes, it should work that way. Simple and easy.



Is it necessary to restart or reload squid when the default routes change?





Re: [squid-users] One squid instance, two WAN links, how to failover from primary to secondary link?

2013-09-04 Thread Alfredo Rezinovsky

On 04/09/13 17:22, Thomas Harold wrote:

On 8/26/2013 6:41 AM, Nishant Sharma wrote:

Hi Thomas,

Thomas Harold thomas-li...@nybeta.com wrote:
In an instance where you have a single instance of squid running on
two WAN links, where WAN #2 is very slow compared to WAN #1.

Is this simply handled by changing the default gateway of the server
using the ip route commands when we detect that WAN#1 is down?


Yes, it should work that way. Simple and easy.



Is it necessary to restart or reload squid when the default routes 
change?


You can balance with ip route using nexthop with different weights for
each link, and change to a single-link route if the other fails.


You only need to restart squid if the DNS servers change; if you have a
local DNS, no restart is needed, because for squid the DNS server will
always be localhost.
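The nexthop balancing described above can be sketched with iproute2 (gateway addresses and device names are placeholders):

```
# Weighted multipath default route across both WAN links
ip route replace default scope global \
    nexthop via 192.0.2.1    dev eth1 weight 10 \
    nexthop via 198.51.100.1 dev eth2 weight 1

# Failover: collapse to the surviving link if WAN #1 goes down
ip route replace default via 198.51.100.1 dev eth2
```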





[squid-users] Re: Squid Reverse Proxy. Attempted connections to domains we do not host?

2013-09-04 Thread PSA4444
Hi Amos,
 
We have not found a solution to this yet.
 
The workaround has been to disable HTTP (port 80) and only run HTTPS (port
443) with a firewall in front of the proxy server.  This has blocked 100% of
these requests for now, but I will need to re-enable HTTP later.
 
How can I disable this open-proxy relaying?

Config:

###
 
visible_hostname domain.com
 
 
https_port 443 accel cert=/usr/newrprgate/CertAuth/cert.cert
key=/usr/newrprgate/CertAuth/key.pem vhost defaultsite=www.domain.com
 
sslproxy_flags DONT_VERIFY_PEER
forwarded_for on
 
#Cache Peer 1
cache_peer one.domain.com parent 443 0 no-query originserver ssl
sslversion=3 connect-timeout=8 connect-fail-limit=2
sslflags=DONT_VERIFY_PEER front-end-https=on name=one login=PASSTHRU
acl sites_one dstdomain one.domain.com
cache_peer_access one allow sites_one
acl http proto http
acl https proto https
 
 
#Cache Peer 2
cache_peer two.domain.com parent 443 0 no-query originserver ssl
sslversion=3 connect-timeout=8 connect-fail-limit=2
sslflags=DONT_VERIFY_PEER front-end-https=on name=two login=PASSTHRU
acl sites_two dstdomain two.domain.com
cache_peer_access two allow sites_two
acl http proto http
acl https proto https
 
http_access allow all
 
header_replace Vary Accept-Encoding
request_header_access All allow all
 
 
###
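The relaying is permitted by the `http_access allow all` line above. A hedged sketch of a tightened rule set, reusing the ACLs already defined in this config, might be:

```
# Only serve requests for the domains this accelerator hosts;
# everything else is denied instead of being relayed.
http_access allow sites_one
http_access allow sites_two
http_access deny all
```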
 
Thanks,
Paul



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Reverse-Proxy-Attempted-connections-to-domains-we-do-not-host-tp4661522p4661988.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: URL blacklist/block and redirector

2013-09-04 Thread Amos Jeffries

On 4/09/2013 5:31 p.m., Sachin Gupta wrote:

Hi,


I came through quite a number of redirectors listed on the site:
http://www.squid-cache.org/Misc/redirectors.html

However, I was not able to decide which one of these suits my requirement.

I will have a list of URLs that need to be blocked, written in a file and
specified in the squid configuration file.
Also, when a URL is blocked, the user should be shown a custom page.

Please guide.

Regards


Alternatively: use an ACL blacklist to match requests, and the deny_info 
directive to redirect to some URL when the blacklist ACL matches. Then 
configure your http_access rules to use the ACL as appropriate.
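The ACL-plus-deny_info approach can be sketched as follows (the file path and landing-page URL are placeholders; when deny_info is given a URL, squid answers with a 302 redirect to it):

```
# Domains to block, one per line, in an external file
acl blacklist dstdomain "/etc/squid/blocked_domains.txt"
# Send denied requests to a custom page instead of the stock error
deny_info http://intranet.example/blocked.html blacklist
http_access deny blacklist
```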


A proper *redirector* can be just as effective, and more efficient at 
managing the blacklist contents. Some of them have flexible multi-type 
access controls as well. But you do need to be careful that they do a 
proper HTTP 30x redirection and are not faking it with a URL rewrite. 
URL rewriting introduces a lot of trouble and nasty side effects due to 
being a protocol violation.


Amos