RE: RES: RES: Help with kQueue
Hi Fred,

> It seems that with kqueue the CPU usage drops by about 50%, which is great.

Yep, the polling system is very important to performance, and select() is basically the worst of them.

> Do you know if they plan to add FTP protocol support?

I can't speak for Willy, but I did not see anything regarding FTP on the mailing list. I don't think this will land on the roadmap in the short term.

> I know how to configure it to work with passive mode, which isn't hard, but it would be nice to have native support for active mode!

Couldn't you just source-NAT the active data connection on the HAProxy box?

Regards,
Lukas
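For anyone wanting to reproduce this kind of comparison: haproxy normally picks the best poller available at runtime, but the global section lets you disable individual pollers so you can benchmark one against another on the same box. A minimal sketch (keywords as documented in the haproxy configuration manual; the comparison scenario itself is just an example):

```
# haproxy.cfg -- global section only. Disabling the better pollers forces
# haproxy to fall back to the next one, which is handy for benchmarking.
# By default haproxy picks the best available: kqueue (BSD/OS X) or
# epoll (Linux), then poll(), then select().
global
    nokqueue    # skip kqueue, even where available
    noepoll     # skip epoll
    nopoll      # skip poll(), leaving only select() for comparison
```

Removing the keywords one at a time lets you measure each poller's CPU usage under the same load.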
RE: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
Hi Arne,

> I ran sudo haproxy -d -f /etc/haproxy/haproxy.cfg > haproxy-d.log 2>&1 to capture the log output, I can't see anything obvious...

In fact, I don't see anything wrong with these logs either. Looking at the bisected commit, I strongly suspect an SNI-related regression. I assume SSLexplorer doesn't support SNI and sends the client_hello without a server name indication.

I don't have a lot of time to fully test SNI these days. Arne, would you be able to test SSL after that commit with an SNI-capable client and, more importantly, with a non-SNI-capable client (like Win XP + IE)? Perhaps that commit broke SSL for non-SNI-capable clients?

Anyway, I'm CC'ing Emmanuel, the author of that commit.

Regards,
Lukas
Re: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
On Fri, May 31, 2013 at 9:41 AM, Lukas Tribus luky...@hotmail.com wrote:

> Hi Arne,
>
>> I ran sudo haproxy -d -f /etc/haproxy/haproxy.cfg > haproxy-d.log 2>&1 to capture the log output, I can't see anything obvious...
>
> In fact, I don't see anything wrong with these logs either. Looking at the bisected commit, I strongly suspect an SNI-related regression. I assume SSLexplorer doesn't support SNI and sends the client_hello without a server name indication.
>
> I don't have a lot of time to fully test SNI these days. Arne, would you be able to test SSL after that commit with an SNI-capable client and, more importantly, with a non-SNI-capable client (like Win XP + IE)? Perhaps that commit broke SSL for non-SNI-capable clients?
>
> Anyway, I'm CC'ing Emmanuel, the author of that commit.
>
> Regards,
> Lukas

Apologies for not making this clearer: it is the SSLExplorer _Agent_ that fails. I can spin up an XP VM and test that IE 6 can connect to the SSLExplorer web interface over HAProxy 18-39, but as I'm not using SNI in the HAProxy config, I'm not sure how much use this would be. I would presume that if 18-39 broke non-SNI-capable clients, others might have already noticed and reported it?

Cheers
Arne
RE: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
> Apologies for not making this clearer, it is the SSLExplorer _Agent_ that fails.

By "agent" you mean the client, which is on the frontend from a HAProxy perspective?

> I can spin up an XP VM and test that IE 6 can connect to the SSLExplorer web interface over HAProxy 18-39 but as I'm not using SNI in the HAProxy config, I'm not sure how much use this would be. I would presume that if 18-39 broke non-SNI-capable clients, others might have already noticed and reported it?

The commit is pretty young; I don't think a lot of people run this code yet. Even if you don't use SNI, the commit could break SSL. What we know for certain is that this commit breaks things for you, and we also know for certain that this commit touches SNI/SSL, so it does make sense to check with SNI-capable and non-SNI-capable clients.

Regards,
Lukas
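For what it's worth, the SNI-capable / non-SNI-capable client pair can be simulated with plain openssl tools, without an XP VM. A self-contained sketch (port, filenames, and the throwaway certificate are all made up for the demo): `s_client` sends the server_name extension when given `-servername`, and omits it when connecting to a bare IP literal, which mimics a pre-SNI client such as IE on XP.

```shell
# Spin up a throwaway local TLS server, then handshake once WITH the
# server_name (SNI) extension and once WITHOUT it.
dir=$(mktemp -d); cd "$dir"

# Throwaway self-signed certificate for the test server.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 1 -subj "/CN=localhost" 2>/dev/null

openssl s_server -quiet -accept 4443 -cert cert.pem -key key.pem &
server=$!
sleep 1

with_sni=failed; without_sni=failed

# SNI-capable client: the ClientHello carries server_name=localhost.
if echo | openssl s_client -connect 127.0.0.1:4443 -servername localhost \
        >/dev/null 2>&1; then with_sni=ok; fi

# Non-SNI client: connecting to an IP literal, s_client sends no server_name.
if echo | openssl s_client -connect 127.0.0.1:4443 \
        >/dev/null 2>&1; then without_sni=ok; fi

kill "$server" 2>/dev/null
echo "handshake with SNI: $with_sni"
echo "handshake without SNI: $without_sni"
```

Pointing the same two commands at an haproxy frontend built from the suspect commit would show whether only the no-SNI case fails.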
Getting statistic through socket in multiprocess configuration
Hi everybody,

First of all I want to say thanks for the work you have been doing and the great product you produce.

Today all servers are equipped with many cores and a lot of memory, and not using them leads to inefficient utilization of resources, especially with 10G, keepalive and SSL. We use haproxy on all our frontend balancers and are trying to collect statistics from them through the socket, but unfortunately, as far as we can see, there is no proper way to do it in multiprocess mode. Maybe as a temporary workaround you could create one socket per process, with the PID in its name?

P.S. I'm pretty sure this question has been asked many times, but it is actually bothering us.

--
The more you know, the less you need.
Rgrds, Pavel Morozov
Re: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
Hi,

My bad… This fix should solve the issue:

diff -ru haproxy-ss-20130530/src/ssl_sock.c haproxy-ss-20130530-fix/src/ssl_sock.c
--- haproxy-ss-20130530/src/ssl_sock.c	2013-05-29 15:54:14.0 +0200
+++ haproxy-ss-20130530-fix/src/ssl_sock.c	2013-05-31 12:00:38.542448533 +0200
@@ -197,7 +197,7 @@
 	if (!servername) {
 		return (s->strict_sni ?
 			SSL_TLSEXT_ERR_ALERT_FATAL :
-			SSL_TLSEXT_ERR_ALERT_WARNING);
+			SSL_TLSEXT_ERR_NOACK);
 	}
 	for (i = 0; i < trash.size; i++) {

Regards,
Emmanuel

On 31 May 2013, at 11:38, Lukas Tribus wrote:

>> Apologies for not making this clearer, it is the SSLExplorer _Agent_ that fails.
>
> By "agent" you mean the client, which is on the frontend from a HAProxy perspective?
>
>> I can spin up an XP VM and test that IE 6 can connect to the SSLExplorer web interface over HAProxy 18-39 but as I'm not using SNI in the HAProxy config, I'm not sure how much use this would be. I would presume that if 18-39 broke non-SNI-capable clients, others might have already noticed and reported it?
>
> The commit is pretty young; I don't think a lot of people run this code yet. Even if you don't use SNI, the commit could break SSL. What we know for certain is that this commit breaks things for you, and we also know for certain that this commit touches SNI/SSL, so it does make sense to check with SNI-capable and non-SNI-capable clients.
>
> Regards,
> Lukas
RE: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
Arne, Emmanuel,

I can successfully reproduce the issue with an old wget build on win32. It seems to me the SSL_TLSEXT_ERR_ALERT_WARNING is upsetting certain clients.

Arne, could you try the following patch on top of current HEAD? Emmanuel, could you share your thoughts about this?

Regards,
Lukas

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 38e95a8..531cfa1 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -197,7 +197,7 @@ static int ssl_sock_switchctx_cbk(SSL *ssl, int *al, struct bind_conf *s)
 	if (!servername) {
 		return (s->strict_sni ?
 			SSL_TLSEXT_ERR_ALERT_FATAL :
-			SSL_TLSEXT_ERR_ALERT_WARNING);
+			SSL_TLSEXT_ERR_NOACK);
 	}
 	for (i = 0; i < trash.size; i++) {
Re: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
On Fri, May 31, 2013 at 11:14 AM, Lukas Tribus luky...@hotmail.com wrote:

> Arne, Emmanuel,
>
> I can successfully reproduce the issue with an old wget build on win32. It seems to me the SSL_TLSEXT_ERR_ALERT_WARNING is upsetting certain clients.
>
> Arne, could you try the following patch on top of current HEAD? Emmanuel, could you share your thoughts about this?
>
> Regards,
> Lukas
>
> diff --git a/src/ssl_sock.c b/src/ssl_sock.c
> index 38e95a8..531cfa1 100644
> --- a/src/ssl_sock.c
> +++ b/src/ssl_sock.c
> @@ -197,7 +197,7 @@ static int ssl_sock_switchctx_cbk(SSL *ssl, int *al, struct bind_conf *s)
>  	if (!servername) {
>  		return (s->strict_sni ?
>  			SSL_TLSEXT_ERR_ALERT_FATAL :
> -			SSL_TLSEXT_ERR_ALERT_WARNING);
> +			SSL_TLSEXT_ERR_NOACK);
>  	}
>  	for (i = 0; i < trash.size; i++) {

As there's nothing quite like displaying my incompetence in public :-) It was only yesterday that I used git bisect and checkout for the first time; I haven't got a clue how to apply a diff. If somebody could point me in the direction of a suitable howto it would be much appreciated. Apologies for asking newbie questions.

Cheers
Arne
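For other readers wondering the same thing: a diff posted to the list can be applied with `git apply` from the source root. A self-contained sketch of the motion (the scratch repo, file name, and contents are made up for the demo; in real use you would save the emailed patch as fix.diff inside your haproxy checkout):

```shell
# Build a scratch repo, save a change as a diff, then apply that diff
# with `git apply` -- the same motion as applying a patch from the list.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
printf 'old line\n' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm init

printf 'new line\n' > file.txt
git diff > fix.diff            # this plays the role of the emailed patch
git checkout -q -- file.txt    # back to the unpatched state

git apply fix.diff             # apply the emailed patch
cat file.txt                   # prints: new line
```

Outside a git checkout, `patch -p1 < fix.diff` from the source root does the same job.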
RE: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
Hi Arne, just git pull, the fix was committed 10 minutes ago (dev18-53). Lukas
Re: upgraded from 1.5dev18-30 to 1.5dev18-50 and it broke my SSL VPN :-(
On Fri, May 31, 2013 at 1:12 PM, Lukas Tribus luky...@hotmail.com wrote:

> Hi Arne, just git pull, the fix was committed 10 minutes ago (dev18-53).
>
> Lukas

18-53 works :-) Many thanks

Arne
Re: Getting statistic through socket in multiprocess configuration
Hi,

You can start up multiple HAProxy processes... Or you can also try this dirty trick: set up a stats backend per process and browse the URL with the ;csv parameter, like on the demo: http://demo.1wt.eu/;csv

That way, you'll be able to collect stats per process. You can even do it over HTTPS :)

Baptiste

On Fri, May 31, 2013 at 12:06 PM, Avatar avatar...@gmail.com wrote:

> Hi everybody,
>
> First of all I want to say thanks for the work you have been doing and the great product you produce.
>
> Today all servers are equipped with many cores and a lot of memory, and not using them leads to inefficient utilization of resources, especially with 10G, keepalive and SSL. We use haproxy on all our frontend balancers and are trying to collect statistics from them through the socket, but unfortunately, as far as we can see, there is no proper way to do it in multiprocess mode. Maybe as a temporary workaround you could create one socket per process, with the PID in its name?
>
> P.S. I'm pretty sure this question has been asked many times, but it is actually bothering us.
>
> --
> The more you know, the less you need.
> Rgrds, Pavel Morozov
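The per-process stats trick could look roughly like this in haproxy.cfg; the process count, listener names, and ports are illustrative, not a tested config, and `bind-process` is used to pin each stats listener to one process:

```
# Hypothetical sketch: one HTTP stats listener per process, so each
# process reports its own counters.
global
    nbproc 2

listen stats-proc1
    bind :8101
    bind-process 1      # served only by process 1
    mode http
    stats enable
    stats uri /stats

listen stats-proc2
    bind :8102
    bind-process 2      # served only by process 2
    mode http
    stats enable
    stats uri /stats
```

Fetching http://host:8101/stats;csv and http://host:8102/stats;csv then yields one CSV per process, which a collector can merge.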
Re: LB Layout Question
On Wed, May 29, 2013 at 6:46 AM, joris dedieu joris.ded...@gmail.com wrote:

> Hi Syd,
>
>> I'm guessing an NFS share from the 2 webservers to the 1 fileserver. However, from a bit of research with load-balanced Magento setups there seem to be a lot of negative comments about using NFS in this way.
>
> It's always better to avoid NFS as it introduces a point of failure.

It isn't always better. We have several TB of heavily accessed static media files being served to our web servers over NFS, and I don't think we could viably do this any other way. If the NFS server is built with redundant power, ECC memory and RAID, and is connected to a UPS, I wouldn't be too worried about using NFS, as long as the hardware is properly chosen to accommodate the workload and the OS and NFS are set up correctly.

> Sometimes just syncing the files on both servers with rsync / unison / snapshots / whatever is preferable (it strongly depends on the number of files and the number of file changes).

If the amount of data is small-ish and not so heavily accessed, maybe. I can see this getting difficult to manage pretty quickly, though.

> A crashy NFS server can leave inconsistent mount points on the webservers.

Agreed. It's important to make sure the server is set up correctly, well tested under load (sysbench, etc.) and built to withstand common failures like disk and power.

> Anyway it works, but you must qualify your server and client versions and setups before putting it in production. Avoid lockd unless it's absolutely necessary,

I'm not sure I'd worry about lockd at this point. I think it would be better for this workload to leave locking enabled unless disabling it is absolutely necessary. We run high volume with locking (a high amount of read traffic, a low amount of write traffic). I'm sure it makes sense to disable locks in some cases, but probably not in this one.

> enable jumbo frames,

I wouldn't worry about jumbo frames either at this point. Keep it simple. Our six-year-old NetApp filer is handling a LOT of traffic easily without jumbo frames.

> find the right rsize, wsize,

Agreed. Here are the mount options we use, mostly straight out of NetApp best practices (but I think they are good options in many cases):

proto=tcp,rw,nosuid,nodev,hard,nointr,timeo=600,retrans=2,rsize=32768,wsize=32768,bg,nfsvers=3,_netdev,actimeo=60

> check and recheck your disks' health, your RAID settings, your IO performance.

Totally agree. What type of disk, RAID levels, etc. are all workload dependent and important to get right. I always put a LOT of thought and research into this. Once the server has been built, I would use sysbench to test raw disk performance on the NFS server locally, then set up NFS and run sysbench over it to make sure performance is still acceptable before putting it into production. I can get near raw-disk performance over NFS pretty easily.

> If possible, use varnish on the web servers for caching static content, or serve the static files directly from the file server using nginx.

If these two web servers are Apache / PHP, it might just be simpler for now to set up Apache to serve both PHP and the static files. There is a ton of documentation out there on how to do this correctly, and I think it would go a LONG way. We have separate pools of servers for dynamic and static files, but if set up right I'm pretty sure we could easily use Apache for the whole thing. I prefer keeping things simple and tweaking only when necessary.

> Never forget that NFS is slow.

We're serving upwards of 400-500 Mbps (7,000 NFS ops/s) of consistent traffic from a single six-year-old NetApp filer over NFS. This thing has two single-core 32-bit Xeons and only 2 GB of memory. Granted, these boxes were VERY expensive and have 15K RPM fibre channel disks, but I wouldn't have a problem using an NFS server properly built from common hardware. In fact, we're set to replace the filer this year, and I've been looking at hardware from here: http://www.pc-pitstop.com/sas_expanders/

The only other thing I would add is that I'd probably be more comfortable using Redhat / CentOS as an NFS server than Debian/Ubuntu/others. We're even considering paying for Redhat entitlements and the storage add-on when the time comes. Redhat's NFS implementation seems more stable and better supported than others, and their documentation is very good.

Brendon
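For reference, the mount options quoted above would sit in /etc/fstab roughly like this (the server name and paths are made up; everything after the third field is the option string from this thread, verbatim):

```
# /etc/fstab -- NFSv3 mount of the media export with the options
# discussed above (hard mount, TCP, 32 KB read/write sizes, 60 s
# attribute cache).
filer01:/vol/media  /mnt/media  nfs  proto=tcp,rw,nosuid,nodev,hard,nointr,timeo=600,retrans=2,rsize=32768,wsize=32768,bg,nfsvers=3,_netdev,actimeo=60  0 0
```

With `_netdev` set, the mount waits for networking at boot; `bg` retries in the background if the filer is briefly unreachable.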