Re: [squid-users] Will a shared cache_dir be possible? Will it be possible to use a shared cache_dir or some shared backend like a DB with Squid?
IMHO the proper way would be to adopt ZeroMQ or a similar messaging library for communication with the rock diskers.

On Wed, Mar 26, 2014 at 2:23 AM, Eliezer Croitoru elie...@ngtech.co.il wrote: I have been wondering about the need for a shared cache_dir. Squid cluster setups typically use ICP and HTCP to dispense with the need for a shared cache_dir, by using the cluster itself as a backend for the whole data set. If someone has a nice idea/example to demonstrate it, I am looking for one. Eliezer
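The ICP-based clustering Eliezer mentions can be sketched in squid.conf; the hostname, port numbers and ACL name below are made-up examples, not taken from the thread:

```conf
# Hypothetical two-node sibling setup: this Squid asks its peer's cache
# via ICP (port 3130) before going to the origin server, which removes
# the need for a shared cache_dir.
# Assumes an acl named "localnet" is defined elsewhere in squid.conf.
cache_peer proxy2.example.com sibling 3128 3130 proxy-only
icp_port 3130
icp_access allow localnet
```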
Re: [squid-users] Is it possible to mark tcp_outgoing_mark (server side) with SAME MARK as incoming packet (client side)?
Hi

So the documentation is right, but the placement of the statement is possibly wrong; it is not highlighted right up front. I.e. qos_flows applies only to packets from server to client (Squid), NOT from client to server. Is it possible to do the reverse too? Or at least have an ACL where I can check the incoming MARK on a packet? Then I could make use of tcp_outgoing_mark.

I just noticed that the same discussion took place on the list previously (in 2013); here is the link: http://www.squid-cache.org/mail-archive/squid-users/201303/0421.html

Yes, I'm still really interested in implementing this. I got as far as doing some investigation a few weeks back. It seems *most* of the groundwork is there: I think there is space to store the incoming client connection mark, and there are facilities to set the outgoing upstream mark (to an ACL value). What is needed is:
- code to connect the two, i.e. set a default outgoing mark
- some thought on handling connection pipelining and re-use. At present Squid maintains a pool of connections, say to an upstream proxy; these now need to be selected not just because they are idle, but also because they have the correct connection mark set. This looks do-able, but slightly more tricky.

Ed W
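For reference, the existing one-direction facility looks like this in squid.conf (the ACL name and mark value are invented for illustration); what the thread asks for, copying the client's incoming mark to the outgoing connection, has no directive yet:

```conf
# Mark packets Squid sends upstream with netfilter mark 0x10, but only
# for requests matching this (hypothetical) client ACL.
acl lan_clients src 192.168.1.0/24
tcp_outgoing_mark 0x10 lan_clients
```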
[squid-users] Re: How to authorize SMTP and POP3 on SQUID
Although this is a Squid forum, and not one for email or firewalls:

Just completely remove the firewall (all ports on all interfaces open!). If email then works, it really is a firewall problem.

Then make sure your clients are allowed access to your mail server, and the mail server can access the internet. So, something like

# Allow access to mail server
iptables -A INPUT -p tcp --destination-port 25 -j ACCEPT
iptables -A INPUT -p tcp --destination-port 110 -j ACCEPT

should be in your firewall. You might restrict it to a specific interface:

iptables -A INPUT -i eth1 -p tcp --destination-port 25 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --destination-port 110 -j ACCEPT

(eth1 local interface, eth0 public)
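One more rule worth checking, assuming the INPUT chain has a default DROP policy (an assumption; adjust to the actual ruleset): without a stateful accept for return traffic, the SMTP/POP3 handshakes will stall even with the port rules above in place:

```conf
# Accept return packets for connections the host already initiated.
# This must come before (or alongside) the per-port ACCEPT rules.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```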
Re: [squid-users] squid cpu problem
Dear Amos,

Returning to our problem of CPU spikes on Squid 3.1.19 or 2.7.9: I see this in gdb, while ((t_off + p.len) offset), and other while loops (while L) hanging, and some errors like 0x7f4e932c8103 in epoll_wait () from /lib64/libc.so.6. These errors raise the CPU to 100% for a random time, then it comes back down to the normal value.

I don't think there is a hardware issue, because the problem occurs at any time, even when Squid is not loaded. As a note, I have 4 servers and only one of them has this problem. The only difference is that the server with the issue has a newer kernel: Linux proxy4 3.10.25-gentoo SMP, gcc version 4.5.4 (Gentoo 4.5.4 p1.2, pie-0.4.7), while the other servers have Linux proxy91 3.4.9-gentoo, gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7).

I had compile errors on proxy4 for Squid 3.1, so I added this to configure: CC=x86_64-pc-linux-gnu-gcc CFLAGS=-O2 -pipe -m64 -mtune=generic LDFLAGS=-Wl,-O1 -Wl,--as-needed CXXFLAGS= --cache-file=/dev/null --srcdir=. Could these cause these C loops?

Thanks, Amos

On 02/07/2014 03:29 PM, Ayham Abou Afach wrote: On 02/03/2014 10:37 PM, Amos Jeffries wrote: On 2014-02-04 00:00, Ayham Abou Afach wrote: Dear All, I have a problem with Squid: CPU spikes to 100%, sometimes for random periods from seconds to minutes. During this time traffic goes down and Squid seems to be hanging; afterwards everything becomes normal again and the process continues working without shutdown or even any change in PID. I tried Squid 3.1 and 2.7 with the same problem. I have solid-state disks for small objects, and the same config is used on three other servers. Could it be a hardware issue? (I don't see any errors in syslog.) Best regards, Ayham

Are you sure it is Squid and not something else in the system (e.g. memory swapping, kernel loading/unloading something big from RAM, SSD controllers recovering broken sectors with blocking I/O operations, schtuff like that...)? There is nothing on the system but Squid and named, and what is clear in top is that Squid sometimes takes 100% CPU; when I try to strace the process while it is hanging, strace hangs without getting into the process. vmstat shows wa (CPU wait) less than 2 (using SSD disks). How can I monitor the other things, like disk controllers?

Does it still occur with the latest stable Squid? (3.4.3 today.) I didn't try Squid 3.4 yet.

Amos
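Two low-impact checks for a spike like this, as a sketch: the PID and the presence of gdb are assumptions, which is why that line is left commented out. The second command reads raw CPU counters straight from /proc, so it works even while strace is hanging:

```shell
# 1) One-shot backtrace of the spinning worker (1234 is a placeholder PID
#    taken from top); -batch detaches and exits instead of hanging the way
#    an attached strace did here:
# gdb -batch -ex 'thread apply all bt' -p 1234 > /tmp/squid-bt.txt 2>&1

# 2) Raw CPU counters from /proc/stat (jiffies spent in user, kernel,
#    iowait). Sampled before and during a spike, the fastest-growing field
#    shows whether Squid (user), the kernel (system), or the disks/
#    controllers (iowait) are burning the time:
awk '/^cpu /{printf "user=%s system=%s iowait=%s\n", $2, $4, $6}' /proc/stat
```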
Re: [squid-users] squid cpu problem
Maybe I missed something, but: is there a bug report in Bugzilla? It is better tracked that way. What was the original issue, CPU spikes? How many users? Not related to hardware issues, but what are the specs of the machine? Why 3.1.19? Have you considered that one CPU cannot take the load, by any chance?

Eliezer

On 03/27/2014 03:55 PM, a.af...@hybridware.co wrote: Dear Amos returning to our problem CPU Spikes on squid 3.1.19 or 2.7.9 [...]
Re: [squid-users] Is it possible to mark tcp_outgoing_mark (server side) with SAME MARK as incoming packet (client side)?
On Thu, 2014-03-27 at 10:26 +, Ed W wrote: Yes, I'm still really interested to implement this. I got as far as doing some investigation a few weeks back. Thanks for looking into it. I'd like to sort it myself, but don't have the time at the moment. In the meantime, I'll aim to submit a patch to update the documentation! Andy
[squid-users] Connections are closing up?
I am using clients and it seems like the connection is breaking for an unknown reason. I was thinking that maybe the blame lies with client_idle_pconn_timeout. It's a CONNECT connection, and I can dig up the logs to see what happens, but it seems rather weird. Are there any settings in the Linux kernel, like sysctl, to turn on keep-alive for all TCP connections?

Eliezer
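On the sysctl question: Linux does expose keep-alive tuning knobs, but they only affect sockets whose application already set SO_KEEPALIVE; there is no sysctl that forces keep-alive on for every TCP connection. A sketch of the knobs (file name and values are illustrative, not recommendations):

```conf
# /etc/sysctl.d/keepalive.conf (hypothetical file name)
# Idle seconds before the first keep-alive probe:
net.ipv4.tcp_keepalive_time = 600
# Seconds between unanswered probes:
net.ipv4.tcp_keepalive_intvl = 60
# Unanswered probes before the kernel drops the socket:
net.ipv4.tcp_keepalive_probes = 5
```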
Re: [squid-users] Connections are closing up?
On 27/03/2014 2:09 p.m., Eliezer Croitoru wrote: I was thinking of maybe the blame is the: client_idle_pconn_timeout. it's a CONNECT connection [...] Are there any settings in the linux kernel like sysctl to turn on keep alive for all tcp connections?

It should not be the idle timeout if it's a CONNECT request, unless they are ssl-bumping it. It may be the read or the request total timeout. Some of the early 3.2 and older releases did not update those as traffic went through.

Amos
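The timeouts Amos points at map to these squid.conf directives; the values shown are their documented defaults (worth double-checking against the local version's release notes), so raising them temporarily is one way to test whether one of them is cutting the CONNECT tunnel:

```conf
# Max time to wait between successive reads from the server/tunnel side:
read_timeout 15 minutes
# Max time allowed for the client to send the complete request:
request_timeout 5 minutes
```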