[squid-users] squid always dies
Hi all, can someone please give me a hand with this issue? My Squid box randomly dies after running for some time, and I always need to restart it. I get this output in cache.log:

Page faults with physical i/o: 0
CPU Usage: 2.636 seconds = 1.034 user + 1.602 sys
Maximum Resident Size: 541392 KB
Page faults with physical i/o: 0
CPU Usage: 2.728 seconds = 0.996 user + 1.732 sys
Maximum Resident Size: 552320 KB
Page faults with physical i/o: 0
CPU Usage: 2.999 seconds = 1.158 user + 1.841 sys
Maximum Resident Size: 548464 KB
Page faults with physical i/o: 0

Thanks and best regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
[squid-users] cgi-bin
Hi, I have this set in my squid.conf, but it seems the directive is now obsolete. How can I convert it nicely for this version? Is the log right to suggest always_direct?

hierarchy_stoplist cgi-bin ? .js .jsp
acl QUERY urlpath_regex cgi-bin \? .js .jsp
no_cache deny QUERY

2015/06/10 20:53:42| ERROR: Directive 'hierarchy_stoplist' is obsolete.
2015/06/10 20:53:42| hierarchy_stoplist : Remove this line. Use always_direct or cache_peer_access ACLs instead if you need to prevent cache_peer use.

Just to confirm, is this the right way?

always_direct cgi-bin ? .js .jsp
acl QUERY urlpath_regex cgi-bin \? .js .jsp
no_cache deny QUERY

Thanks

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/cgi-bin-tp4671670.html
Sent from the Squid - Users mailing list archive at Nabble.com.
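For reference, a minimal sketch of what the replacement might look like, assuming the goal is simply to keep the matching URLs uncached and fetched directly. Note that always_direct takes an allow/deny keyword plus an ACL name (it does not accept a pattern list directly), and that no_cache was itself renamed to plain cache in later releases; the regex escaping of the dots is an assumption about the original intent:

```
# Match dynamic-looking URLs (ACL carried over from the old config)
acl QUERY urlpath_regex cgi-bin \? \.js \.jsp

# Do not cache them ("no_cache deny" is the deprecated spelling of this)
cache deny QUERY

# Fetch them directly instead of via any cache_peer,
# replacing the obsolete hierarchy_stoplist
always_direct allow QUERY
```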
Re: [squid-users] worker per cache_dir
ls -la /dev/shm/
total 0
drwxrwxrwt  2 proxy proxy   40 Jun  6 14:56 .
drwxr-xr-x 14 root  root  3920 Jun  6 02:07 ..

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/worker-per-cache-dir-tp4671510p4671576.html
Re: [squid-users] worker per cache_dir
Hi Amos, any hints to help me with this issue? I am not able to set more than one cache_dir per worker, because it always ends up with errors (no errors appear if only one cache_dir is set per worker).

2015/06/06 01:36:09 kid7| Starting eCAP service: ecap://www.vigos.com/ecap_gzip
2015/06/06 01:36:09 kid7| commBind: Cannot bind socket FD 9 to [::]: (13) Permission denied
2015/06/06 01:36:09 kid6| commBind: Cannot bind socket FD 21 to [::]: (13) Permission denied
2015/06/06 01:36:09 kid6| Store rebuilding is 0.00% complete
FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): (2) No such file or directory
Squid Cache (Version 3.5.5): Terminated abnormally.
CPU Usage: 0.016 seconds = 0.016 user + 0.000 sys
Maximum Resident Size: 31344 KB
Page faults with physical i/o: 0
2015/06/06 01:36:14 kid6| Store rebuilding is 43.40% complete
FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): (2) No such file or directory
Squid Cache (Version 3.5.5): Terminated abnormally.
CPU Usage: 0.016 seconds = 0.016 user + 0.000 sys
Maximum Resident Size: 31328 KB
Page faults with physical i/o: 0
FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): (2) No such file or directory
Squid Cache (Version 3.5.5): Terminated abnormally.
FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): (2) No such file or directory

One cache_dir per worker:

cache_dir rock /var/spool/squid3/cache10 46 min-size=1 max-size=31000 max-swap-rate=200 swap-timeout=300
# a 200GB x 8 caches of large (over 32KB) objects per-worker
if ${process_number} = 1
cache_dir aufs /cache0 350 32 256 min-size=31001 max-size=104857600
endif
if ${process_number} = 2
cache_dir aufs /cache1 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 3
cache_dir aufs /cache2 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 4
cache_dir aufs /cache3 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 5
cache_dir aufs /cache4 350 32 256 min-size=31001 max-size=1048576000
endif

Two cache_dir per worker:

#if ${process_number} = 4
#cache_dir aufs /cache6/${process_number} 350 32 256 min-size=31001 max-size=1048576000
#cache_dir aufs /cache7/${process_number} 350 32 256 min-size=31001 max-size=1048576000
#endif
#if ${process_number} = 5
#cache_dir aufs /cache8/${process_number} 350 32 256 min-size=31001 max-size=1048576000
#cache_dir aufs /cache9/${process_number} 350 32 256 min-size=31001 max-size=1048576000
#endif

Thanks

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/worker-per-cache-dir-tp4671510p4671571.html
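As a side note on that FATAL line: on Linux, glibc backs shm_open() with a file under /dev/shm, so the segment name in the error maps directly to a path worth checking. A minimal sketch of that mapping (the log line is copied from the output above; the /dev/shm prefix is standard shm_open behaviour, not something the log states):

```shell
# Pull the shared-memory segment name out of the FATAL log line
line='FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): (2) No such file or directory'
seg=$(printf '%s\n' "$line" | sed 's/.*shm_open(\([^)]*\)).*/\1/')
echo "segment: $seg"
# glibc implements shm_open() as a file under /dev/shm, so this is where to look
echo "expected file: /dev/shm$seg"
```

If that file never appears, the /dev/shm mount itself (as shown empty in the ls output elsewhere in this thread) is the first thing to inspect.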
Re: [squid-users] worker per cache_dir
Cool, thanks, but I get an error while doing that; maybe it could be the HDD size. By the way, Amos, what would you suggest for handling the disks? I have a JBOD with 15 disks of 4 TB each. I read one of your comments suggesting to set one cache_dir per drive (or am I totally wrong?). With this worker/disk distribution I end up with an error:

cache_dir rock /cache16 50 min-size=1 max-size=31000 max-swap-rate=200 swap-timeout=300
if ${process_number} = 1
cache_dir aufs /cache0 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache1 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 2
cache_dir aufs /cache2 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache3 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 3
cache_dir aufs /cache4 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache5 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 4
cache_dir aufs /cache6 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache7 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 5
cache_dir aufs /cache8 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache9 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 6
cache_dir aufs /cache10 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache11 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 7
cache_dir aufs /cache12 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache13 350 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 8
cache_dir aufs /cache14 350 32 256 min-size=31001 max-size=1048576000
cache_dir aufs /cache15 350 32 256 min-size=31001 max-size=1048576000
endif

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/worker-per-cache-dir-tp4671510p4671541.html
Re: [squid-users] TOS squid-3.5.0.4
Hi Amos, not really. After setting the TOS config on Squid, the idea is for the Mikrotik router to recognize the marked packets (as with the previous Squid 3.1.x) and mark cached content, so that Mikrotik can later deliver the already-cached content to users at full LAN speed, with no queue on cache content.

/ip firewall mangle
add action=mark-connection chain=postrouting comment="==SQUID - TOS 12==" disabled=no dscp=12 \
    new-connection-mark=squid-connection passthrough=yes protocol=tcp src-address=192.168.10.2
add action=mark-packet chain=postrouting connection-mark=squid-connection disabled=no \
    new-packet-mark=squid-packs passthrough=yes

On 3/6/15 at 5:28, Amos Jeffries [via Squid Web Proxy Cache] wrote:

  On 1/06/2015 1:19 p.m., Marcel Fossua wrote:

    No luck. Still not getting any result at all. I think the issue could be with my Mikrotik box:

    # Marking packets with DSCP (for Mikrotik 6.x) for cache-hit content coming from the Squid proxy
    /ip firewall mangle add action=mark-packet chain=prerouting disabled=no dscp=12 new-packet-mark=squid-connection passthrough=no comment="==SQUID - TOS 12=="

    http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4671467/Captura_de_pantalla_2015-05-29_a_las_21.png

  Um. Do you mean you are going with having the router mark the packets instead of Squid?

  Amos

--
Marcel Fossua
Unix/Linux Network Administrator
Tel: 0240 99448
www.guineanet.net
www.familyfossua.com

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/TOS-squid-3-5-0-4-tp4671459p4671503.html
[squid-users] worker per cache_dir
Hi all, I hope someone can give me a way to accomplish what I have in mind. I have 3.5.5 running with 9 workers active (the best number I get without errors), so I just set 1 worker per disk as in the schema below. But I have a JBOD with a lot of disks that I would like to add in squid.conf, so obviously the idea of 1 worker per disk is no longer a good deal. What is the best config to make, say, worker 1 deal with several caches (3 or 4, for instance), so I could set 3 or 4 cache_dir per worker? Thanks.

cache_dir rock /cache1 46 min-size=1 max-size=31000 max-swap-rate=200 swap-timeout=300
# 200GB x 8 caches of large (over 32KB) objects per-worker
if ${process_number} = 1
cache_dir aufs /cache2 275000 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 2
cache_dir aufs /cache3 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 3
cache_dir aufs /cache4 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 4
cache_dir aufs /cache5 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 5
cache_dir aufs /cache6 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 6
cache_dir aufs /cache7 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 7
cache_dir aufs /cache8 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 8
cache_dir aufs /cache9 19 32 256 min-size=31001 max-size=1048576000
endif
if ${process_number} = 9
cache_dir aufs /cache10 19 32 256 min-size=31001 max-size=1048576000
endif

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/worker-per-cache-dir-tp4671510.html
Re: [squid-users] TOS squid-3.5.0.4
No luck. Still not getting any result at all. I think the issue could be with my Mikrotik box:

# Marking packets with DSCP (for Mikrotik 6.x) for cache-hit content coming from the Squid proxy
/ip firewall mangle add action=mark-packet chain=prerouting disabled=no dscp=12 new-packet-mark=squid-connection passthrough=no comment="==SQUID - TOS 12=="

http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4671467/Captura_de_pantalla_2015-05-29_a_las_21.png

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/TOS-squid-3-5-0-4-tp4671459p4671467.html
Re: [squid-users] TOS squid-3.5.0.4
Thanks Amos, I will try it. Regards.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/TOS-squid-3-5-0-4-tp4671459p4671465.html
[squid-users] TOS squid-3.5.0.4
Hi all, let's see if some of you can help me troubleshoot the issue I have with squid-3.5.0.4 on CentOS 6.6, configured with TPROXY. The issue is related to the QoS setup; I just set things according to the manual:

qos_flows tos local-hit=0x30
qos_flows mark local-hit=0x30
qos_flows tos sibling-hit=0x31
qos_flows mark sibling-hit=0x31
qos_flows tos parent-hit=0x32
qos_flows mark parent-hit=0x32
qos_flows tos disable-preserve-miss

tcpdump output:

tcpdump -vni eth1 | grep 'tos 0x30'
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
01:37:24.787867 IP (tos 0x30, ttl 64, id 38723, offset 0, flags [DF], proto TCP (6), length 534)
01:37:24.788003 IP (tos 0x30, ttl 64, id 38724, offset 0, flags [DF], proto TCP (6), length 2920)
01:37:24.788019 IP (tos 0x30, ttl 64, id 38726, offset 0, flags [DF], proto TCP (6), length 1256)
01:37:24.788141 IP (tos 0x30, ttl 64, id 38727, offset 0, flags [DF], proto TCP (6), length 2920)

But it is definitely not marking anything by the time the traffic reaches my PPPoE BRAS (Mikrotik).

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/TOS-squid-3-5-0-4-tp4671459.html
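One detail worth keeping in mind when comparing the Squid side and the Mikrotik side of this setup: Squid's qos_flows tos values name the whole TOS/DS byte, while Mikrotik's dscp= matches only the upper 6 bits, so the two differ by a 2-bit shift. A quick sanity check of that arithmetic:

```shell
# DSCP occupies the top 6 bits of the TOS/DS byte: dscp = tos >> 2
tos=0x30
dscp=$(( tos >> 2 ))
echo "tos=0x30 -> dscp=$dscp"   # 0x30 = 48; 48 >> 2 = 12
```

So a tos 0x30 hit should appear as dscp=12 on the router. Note also that 0x31 and 0x32 set the low two (ECN) bits of the byte, which do not survive the shift, so those values are indistinguishable from 0x30 at the DSCP level.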