RE: 'clear table table' clearing only bottom entry? (1.5dev7)
Good Morning! Is there a better place to report/track bugs?

-----Original Message-----
From: joe.pr...@vaisala.com [mailto:joe.pr...@vaisala.com]
Sent: 09 November 2011 09:32
To: haproxy@formilux.org
Subject: 'clear table table' clearing only bottom entry? (1.5dev7)

Using 'clear table backend1' on the socket seems to be clearing only the bottom entry of 'show table backend1', whereas the docs say "In the case where no options arguments are given all entries will be removed." I'm reasonably sure this worked fine in 1.4.

Also, my socat (1.7.1.2) doesn't work as in the docs' example; I needed to:

    echo clear table backend1 | socat unix-connect:haproxy.sock stdio

Apologies if this isn't the correct place to report possible bugs; I didn't see anywhere better on the website. My configuration follows:

global
    daemon
    maxconn 256
    #chroot chroot
    stats socket /home/vaisala/haproxy/haproxy.sock mode 0600 level admin

defaults
    log global
    option tcplog
    option logasap
    mode tcp
    timeout check 4s
    timeout connect 4s
    timeout client 2m
    timeout server 2m

backend backend1
    log 127.0.0.1 local0
    balance leastconn
    stick-table type integer size 10k expire 70m
    stick on dst_port
    default-server weight 100 maxconn 1 inter 30s downinter 2m fastinter 5s slowstart 15m
    server server1 address1:port check
    server server2 address2:port check
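For reference, a minimal sketch of the admin-socket setup being used above, with the working socat invocations as comments (paths are the poster's; the key-specific form of 'clear table' is documented for later 1.5 builds and may not be available in 1.5dev7):

```
# haproxy.cfg excerpt: expose an admin-level stats socket
global
    stats socket /home/vaisala/haproxy/haproxy.sock mode 0600 level admin

# shell usage with socat 1.7.x:
#   echo "show table backend1"  | socat unix-connect:/home/vaisala/haproxy/haproxy.sock stdio
#   echo "clear table backend1" | socat unix-connect:/home/vaisala/haproxy/haproxy.sock stdio
#   # clearing a single entry (newer 1.5 syntax, hypothetical for dev7):
#   echo "clear table backend1 key 443" | socat unix-connect:/home/vaisala/haproxy/haproxy.sock stdio
```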
HAProxy and TIME_WAIT
Hi, I'm testing HAProxy. What I've come up with, and it's really bothering me, is that there are a lot of network connections in the TIME_WAIT state. Here is my environment: on a CentOS 6 server I've set up HAProxy in tcp mode to split connections between 2 web servers with SSL (Jetty web server), all on the same machine. What happens is that when a browser opens a connection to HAProxy, HAProxy opens a connection to the web server: Browser - HAProxy - WebServer. After a while the browser drops the connection and creates a new one, so FIN packets are sent from the browser to HAProxy, and that side of the connection is closed. FIN packets are also exchanged between HAProxy and the web server, but that connection then goes into the TIME_WAIT state, and on a loaded server this will cause trouble. Isn't there a way for HAProxy to send an RST, so that the connection is dropped? Thank you!
Re: HAProxy and TIME_WAIT
On Mon, Nov 28, 2011 at 11:50 AM, Daniel Rankov daniel.ran...@gmail.com wrote: And on a loaded server this will cause trouble. Isn't there a way for HAProxy to send an RST, so that the connection is dropped? An RST packet won't make the TIME_WAIT socket disappear. It's part of the TCP protocol, and a socket will sit in that state for two minutes after closing. You can put `net.ipv4.tcp_tw_reuse = 1` in your sysctl.conf to allow sockets in TIME_WAIT to be reused as needed. -jim
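For reference, the sysctl mentioned above can be made persistent like this (Linux/IPv4; the related tcp_tw_recycle knob also exists but is widely discouraged because it breaks clients behind NAT, so this sketch only enables reuse):

```
# /etc/sysctl.conf excerpt: allow TIME_WAIT sockets to be reused
# for new outgoing connections when the kernel deems it safe
net.ipv4.tcp_tw_reuse = 1
```

Apply without a reboot with `sysctl -p`.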
Re: HAProxy and TIME_WAIT
Yeah, I'm aware of net.ipv4.tcp_tw_reuse and the need for the TIME_WAIT state, but still, if there were a way to send an RST (either a configuration or a compile-time parameter), the connection would be destroyed. 2011/11/28 James Bardin jbar...@bu.edu: On Mon, Nov 28, 2011 at 11:50 AM, Daniel Rankov daniel.ran...@gmail.com wrote: And on a loaded server this will cause trouble. Isn't there a way for HAProxy to send an RST, so that the connection is dropped? An RST packet won't make the TIME_WAIT socket disappear. It's part of the TCP protocol, and a socket will sit in that state for two minutes after closing. You can put `net.ipv4.tcp_tw_reuse = 1` in your sysctl.conf to allow sockets in TIME_WAIT to be reused as needed. -jim
halog manpage
Does a halog man page exist? If not, it would be great if someone who knows all the options could document them. The best reference I know of is the following thread, which does not cover many of the newer filters: http://www.mail-archive.com/haproxy@formilux.org/msg02962.html Thanks! -Joe -- Name: Joseph A. Williams Email: williams@gmail.com
Re: HAProxy and TIME_WAIT
On Mon, Nov 28, 2011 at 12:28 PM, Daniel Rankov daniel.ran...@gmail.com wrote: Yeah, I'm aware of net.ipv4.tcp_tw_reuse and the need for the TIME_WAIT state, but still, if there were a way to send an RST (either a configuration or a compile-time parameter), the connection would be destroyed. TIME_WAIT is usually not a problem when port reuse is enabled (I haven't seen an example otherwise), and you will usually see FIN_WAIT1 sockets when connections are terminating badly. I now recall that the socket option that makes close() send an RST is SO_LINGER with a zero timeout, and I noticed that there is an 'option nolinger' for both frontends and backends in haproxy. -jim
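The linger behavior under discussion can be demonstrated outside haproxy. The sketch below (plain Python, not haproxy code) sets SO_LINGER with a zero timeout so that close() aborts the connection with an RST instead of a FIN, which is the socket-level mechanism that an option like 'option nolinger' relies on; note the closing side then skips TIME_WAIT entirely:

```python
import socket
import struct
import time

# Set up a loopback TCP connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# SO_LINGER with l_onoff=1, l_linger=0: close() performs an abortive
# close, sending RST and skipping TIME_WAIT on this side.
cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
cli.close()
time.sleep(0.2)

# The peer sees a connection reset instead of a clean EOF.
try:
    conn.recv(1024)
    saw_reset = False
except ConnectionResetError:
    saw_reset = True
print("peer saw RST:", saw_reset)
conn.close()
srv.close()
```

The trade-off is that an abortive close can discard data still in flight, which is why RST-on-close is not the default behavior of TCP stacks.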
Re: Re: hashing + roundrobin algorithm
We add a new keyword 'vgroup' under the 'server' keyword:

    server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3

This means a request assigned to this server will be treated as if the backend had been set to 'subproxy1'. Then in backend 'subproxy1' you can configure any load balancing strategy. This can be recursive. In the source code: at the end of assign_server(), if we find that a server has the 'vgroup' property, we set the backend of cur_proxy and call assign_server() again.

From: Rerngvit Yanggratoke Date: 2011-11-26 08:33 To: wsq003 CC: Willy Tarreau; haproxy; Baptiste Subject: Re: Re: hashing + roundrobin algorithm

Hello wsq003, That sounds very interesting. It would be great if you could share your patch. If that is not possible, a guideline on how to implement it would be helpful as well. Thank you!

2011/11/23 wsq003 wsq...@sina.com: I've made a private patch to haproxy (just a few lines of code, though not elegant) which supports this feature. My situation is just like what you imagine: consistent hashing to a group, then round-robin within that group. Our design is that several 'server' entries will share a physical machine, and the 'servers' of one group will be distributed across several physical machines. So if one physical machine goes down, nothing will bypass the cache layer, because every group still works. Then we will get a chance to recover the cluster as we want.

From: Willy Tarreau Date: 2011-11-23 15:15 To: Rerngvit Yanggratoke CC: haproxy; Baptiste Subject: Re: hashing + roundrobin algorithm

Hi, On Fri, Nov 18, 2011 at 05:48:54PM +0100, Rerngvit Yanggratoke wrote: Hello All, First of all, pardon me if I'm not communicating very well; English is not my native language. We are running a static file distribution cluster. The cluster consists of many web servers serving static files over HTTP. We have a very large number of files, such that a single server simply cannot keep all of them (not enough disk space).
In particular, a file can be served only from a subset of the servers. Each file is uniquely identified by its URI; I will refer to this URI as the key. I am investigating deploying HAProxy as a front end to this cluster. We want HAProxy to provide load balancing and automatic failover; in other words, a request comes first to HAProxy, and HAProxy should forward the request to an appropriate backend server. More precisely, for a particular key there should be at least two servers that HAProxy can forward to, for the sake of load balancing. My question is: what load balancing strategy should I use? I could use hashing (based on the key) or consistent hashing. However, each file would then be served by a single server at any particular moment, which means I would have neither load balancing nor failover for a particular key.

This question is much more a question of architecture than of configuration. What matters is not what you can do with haproxy, but how you want your service to run. I suspect that if you acquired hardware and bandwidth to build your service, you have pretty clear ideas of how your files will be distributed and/or replicated between your servers. You also know whether you'll serve millions of files or just a few tens, which means in the first case that you can safely have one server per URL, and in the latter that you would risk overloading a server if everybody downloads the same file at once. Maybe you have installed caches to avoid overloading some servers. You have probably planned what will happen when you add new servers, and what is supposed to happen when a server temporarily fails. All of these are very important questions; they determine whether your site will work or fail. Once you're able to answer these questions, it becomes much more obvious what the LB strategy can be: whether you want to dedicate server farms to some URLs, or load-balance each hash among a few servers because you have a particular replication strategy.
And once you know what you need, we can study how haproxy can respond to it. Maybe it can't at all, maybe it's easy to modify to suit your needs, maybe it already responds pretty well. My guess from what you describe is that it could make a lot of sense to have one layer of haproxy in front of Varnish caches: the first layer of haproxy chooses a cache based on a consistent hash of the URL, and each Varnish is then configured to address a small group of servers in round robin. But this means that you need to assign servers to farms, and that if you lose a Varnish, all the servers behind it are lost too. If your files are present on all servers, it might make sense to use Varnish as explained above, but round-robining across all servers; that way you make the cache layer and the server layer independent of each other, though this can imply complex replication strategies. As you see, there is no single response, you really need to
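A minimal sketch of the first layer Willy describes, hashing URLs consistently across a set of caches ('balance uri' and 'hash-type consistent' are standard haproxy directives; the Varnish names and addresses are invented placeholders):

```
# Layer 1: pick a cache node by consistent hash of the request URI,
# so adding/removing a cache only remaps a fraction of the keys
backend caches
    balance uri
    hash-type consistent
    server varnish1 10.0.0.11:6081 check
    server varnish2 10.0.0.12:6081 check
    server varnish3 10.0.0.13:6081 check
```

Each Varnish instance would then round-robin across its own small group of origin servers, forming the second layer.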
Re: Re: hashing + roundrobin algorithm
Hi, On Tue, Nov 29, 2011 at 01:52:31PM +0800, wsq003 wrote: We add a new keyword 'vgroup' under the 'server' keyword. server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3 means a request assigned to this server will be treated as if the backend had been set to 'subproxy1'. Then in backend 'subproxy1' you can configure any load balancing strategy. This can be recursive. In the source code: at the end of assign_server(), if we find that a server has the 'vgroup' property, we set the backend of cur_proxy and call assign_server() again. Your trick sounds interesting but I'm not sure I completely understand how it works. There was a feature I wanted to implement some time ago: a sort of internal server which would map directly to a frontend (or maybe just a backend) without passing via a TCP connection. It looks like your trick does something similar, but I just fail to understand how the LB parameters are assigned to multiple backends for a given server. Regards, Willy
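As a reading aid only: the 'vgroup' keyword comes from wsq003's private patch and does not exist in mainline haproxy, so the following sketch of what such a configuration might look like is entirely hypothetical:

```
# HYPOTHETICAL syntax: 'vgroup' is from a private patch, not mainline haproxy.
# Consistent-hash to a group, then round-robin inside that group.
backend main
    balance uri
    hash-type consistent
    # each 'server' line redirects into a sub-backend instead of a real host
    server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3
    server wsqb 0.0.0.0 vgroup subproxy2 weight 32 check inter 4000 rise 3 fall 3

backend subproxy1
    balance roundrobin
    server s1 192.168.0.1:80 check
    server s2 192.168.0.2:80 check

backend subproxy2
    balance roundrobin
    server s3 192.168.0.3:80 check
    server s4 192.168.0.4:80 check
```

This matches the described mechanism: assign_server() picks a 'server' in 'main', sees the vgroup property, switches cur_proxy to the named sub-backend, and runs server assignment again there.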