Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-05-05 Thread Krishna Kumar (Engineering)
Hi Willy, Pavlos,

Thank you once again for your advice.

 Requests per second:19071.55 [#/sec] (mean)
  Transfer rate:  9461.28 [Kbytes/sec] received

 These numbers are extremely low and very likely indicate an http
 close mode combined with an untuned nf_conntrack.


Yes, it was due to http close mode, and wrong irq pinning (nf_conntrack_max
was
set to 640K).


  mpstat (first 4 processors only, rest are almost zero):
  Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
  Average:    0   0.25   0.00   0.75     0.00  0.00  98.01    0.00    0.00    0.00   1.00

 This CPU is spending its time in softirq, probably due to conntrack
 spending a lot of time looking for the session for each packet in too
 small a hash table.


I had not done irq pinning. Today I am getting much better results with irq
pinning
and keepalive.
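
For reference, the kind of pinning and conntrack check involved looks roughly
like the following; the interface name, IRQ numbers and CPU masks are only
examples, not the actual values from this setup:

    # check conntrack sizing (a small hash table forces long lookups in softirq)
    sysctl net.netfilter.nf_conntrack_max
    cat /sys/module/nf_conntrack/parameters/hashsize

    # find the per-queue IRQs of the NIC and pin each one to its own CPU
    grep eth0 /proc/interrupts          # lists the IRQ numbers per RX/TX queue
    echo 1 > /proc/irq/53/smp_affinity  # example: queue 0 -> CPU0 (mask 0x1)
    echo 2 > /proc/irq/54/smp_affinity  # example: queue 1 -> CPU1 (mask 0x2)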

Note, this is about 2 Gbps. How is your network configured ? You should
 normally see either 1 Gbps with a gig NIC or 10 Gbps with a 10G NIC,
 because retrieving a static file is very cheap. Would you happen to be
 using bonding in round-robin mode maybe ? If that's the case, it's a
 performance disaster due to out-of-order packets and could explain some
 of the high %softirq.


My setup is as follows (no bonding, etc, and Sys stands for baremetal
system, each with
48 core, 128GB mem, ixgbe single ethernet port card).

Sys1-with-ab   -eth0-   Sys1-with-Haproxy, which uses two nginx backend
systems
over the same eth0 card (that is the current restriction, no extra ethernet
interface for
separate frontend/backend, etc). Today I am getting a high of 7.7 Gbps with
your
suggestions. Is it possible to get higher than that (direct to server gets
8.6 Gbps)?

Please retry without http-server-close to maintain keep-alive to the
 servers, that will avoid the session setup/teardown. If that becomes
 better, there's definitely something to fix in the conntrack or maybe
 in iptables rules if you have some. But in any case don't put such a
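
(For anyone reading along, the change being suggested amounts to something
like this in the haproxy configuration; a sketch only, not the actual config
used here:)

    defaults
        mode http
        # keep server-side connections alive instead of closing them after
        # each response, i.e. do not use "option http-server-close"
        option http-keep-alive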


There are a few iptables rules, which seem clean. The results now are:

ab -k -n 1000000 -c 500 http://haproxy:80/64   (I am getting some errors
though, which are not present when running against the backend directly):

Document Length:        64 bytes
Concurrency Level:      500
Time taken for tests:   6.181 seconds
Complete requests:      1000000
Failed requests:        18991
   (Connect: 0, Receive: 0, Length: 9675, Exceptions: 9316)
Write errors:           0
Keep-Alive requests:    990330
Total transferred:      296554848 bytes
HTML transferred:       63381120 bytes
Requests per second:    161783.42 [#/sec] (mean)
Time per request:       3.091 [ms] (mean)
Time per request:       0.006 [ms] (mean, across all concurrent requests)
Transfer rate:          46853.18 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       8
Processing:     0    3   6.2      3    1005
Waiting:        0    3   6.2      3    1005
Total:          0    3   6.3      3    1010

Percentage of the requests served within a certain time (ms)
  50%  3
  66%  3
  75%  3
  80%  3
  90%  4
  95%  5
  98%  6
  99%  8
 100%   1010 (longest request)

pidstat (some system numbers are very high, 50%, maybe due to small packet
sizes?):
Average:  UID     PID   %usr  %system  %guest    %CPU  CPU  Command
Average:  110   52601   6.00     9.33    0.00   15.33    -  haproxy
Average:  110   52602   6.33    11.83    0.00   18.17    -  haproxy
Average:  110   52603  11.33    17.83    0.00   29.17    -  haproxy
Average:  110   52604  17.50    30.33    0.00   47.83    -  haproxy
Average:  110   52605  20.50    38.50    0.00   59.00    -  haproxy
Average:  110   52606  24.50    51.33    0.00   75.83    -  haproxy
Average:  110   52607  22.50    51.33    0.00   73.83    -  haproxy
Average:  110   52608  23.67    47.17    0.00   70.83    -  haproxy

mpstat (of interesting cpus only):
Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
Average:  all   2.58   0.00   4.36     0.00  0.00   0.89    0.00    0.00    0.00  92.17
Average:    0   6.84   0.00  11.46     0.00  0.00   2.03    0.00    0.00    0.00  79.67
Average:    1  11.15   0.00  19.85     0.00  0.00   5.29    0.00    0.00    0.00  63.71
Average:    2   8.32   0.00  12.20     0.00  0.00   2.22    0.00    0.00    0.00  77.26
Average:    3   7.92   0.00  11.97     0.00  0.00   2.39    0.00    0.00    0.00  77.72
Average:    4   8.81   0.00  13.76     0.00  0.00   2.39    0.00    0.00    0.00  75.05
Average:    5   6.96   0.00  12.27     0.00  0.00   2.38    0.00    0.00    0.00  78.39
Average:    6   9.21   0.00  12.52     0.00  0.00   3.31    0.00    0.00    0.00  74.95
Average:    7   7.56   0.00  13.65     0.00  0.00

Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-05-05 Thread Krishna Kumar (Engineering)
Hi Baptiste,

On Wed, May 6, 2015 at 1:24 AM, Baptiste bed...@gmail.com wrote:

  Also, during the test, the status of various backends changes often
 between
  OK and DOWN,
  and then gets back to OK almost immediately:
 
  www-backend,nginx-3,0,0,0,10,3,184,23843,96517588,,0,,27,0,0,180,DOWN
 
 1/2,1,1,0,7,3,6,39,,7,3,1,,220,,2,0,,37,L4CON,,0,0,184,0,0,0,0,00,0,6,Out
  of local source ports on the system,,0,2,3,92,

 this error is curious with the type of traffic you're generating!
 Maybe you should let HAProxy manage the source ports on behalf of the
 server.
 Try adding the source 0.0.0.0:1024-65535 parameter in your backend
 description.


Yes, this has fixed the issue - I no longer see state changes after an hour
of testing. The performance didn't improve though. I will check the sysctl
parameters that were different between the haproxy/nginx nodes.
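
For completeness, the backend change amounts to something like the following;
the backend name, server names and addresses are only illustrative:

    backend www-backend
        # let haproxy choose the source port itself for outgoing connections
        source 0.0.0.0:1024-65535
        server nginx-1 10.0.0.11:80 check
        server nginx-2 10.0.0.12:80 check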

Thanks,
- Krishna Kumar


Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-05-05 Thread Krishna Kumar (Engineering)
Hi Pavlos

On Wed, May 6, 2015 at 1:24 AM, Pavlos Parissis pavlos.paris...@gmail.com
wrote:

Shall I assume that you have run the same tests without iptables and got
 the same results?


Yes, I had tried it yesterday and saw no measurable difference.

May I suggest also trying the httpress and wrk tools?


I tried it today, and will post the results below, after yours.


 Have you compared 'sysctl -a' between haproxy and nginx server?


Yes, the difference is very little:
11c11
< fs.dentry-state = 266125  130939  45  0  0  0
---
> fs.dentry-state = 19119  0  45  0  0  0
13,17c13,17
< fs.epoll.max_user_watches = 27046277
< fs.file-max = 1048576
< fs.file-nr = 1536  0  1048576
< fs.inode-nr = 262766  98714
< fs.inode-state = 262766  98714  0  0  0  0  0
---
> fs.epoll.max_user_watches = 27046297
> fs.file-max = 262144
> fs.file-nr = 1536  0  262144
> fs.inode-nr = 27290  8946
> fs.inode-state = 27290  8946  0  0  0  0  0

134c134
< kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 2305
---
> kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 3820

(and for each cpu, similar lb_cost)

Have you checked if you got all backends reported down at the same time?


Yes, I have checked; that has not happened. After Baptiste's suggestion of
adding the source port range, the issue has disappeared completely.

How many workers do you use on your Nginx which acts as LB?


I was using the default of 4. Increasing it to 16 seems to improve the numbers by 10-20%.
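
(The change in question is just the worker count in nginx.conf, roughly as
below; the worker_connections value is only an example:)

    worker_processes 16;

    events {
        worker_connections 4096;
    }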


 
 www-backend,nginx-3,0,0,0,10,3,184,23843,96517588,,0,,27,0,0,180,DOWN
 1/2,1,1,0,7,3,6,39,,7,3,1,,220,,2,0,,37,L4CON,,0,0,184,0,0,0,0,00,0,6,Out
  of local source ports on the system,,0,2,3,92,
 

 Hold on a second, what is this 'Out of local source ports on the
 system' message? ab reports 'Concurrency Level: 500' and you said
 that HAProxy runs in keep-alive mode (the default in 1.5 releases), which
 means there will be only 500 TCP connections opened from HAProxy towards
 the backends. That isn't high, and you shouldn't get that message
 unless net.ipv4.ip_local_port_range is very small (I don't think so).


It was set to net.ipv4.ip_local_port_range = 32768  61000. I have not seen
this issue after making the change Baptiste suggested, though I could also
increase the range and check.
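
Checking and widening the range is a one-liner if needed; the wider range
below is only an example:

    sysctl net.ipv4.ip_local_port_range                   # show the current range
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # widen it at runtime
    # to persist it, add to /etc/sysctl.conf:
    #   net.ipv4.ip_local_port_range = 1024 65535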


 # wrk --timeout 3s --latency -c 1000 -d 5m -t 24 http://a.b.c.d
 Running 5m test @ http://a.b.c.d
   24 threads and 1000 connections
   Thread Stats   Avg  Stdev Max   +/- Stdev
 Latency    87.07ms   593.84ms    7.85s    95.63%
 Req/Sec    16.45k      7.43k    60.89k    74.25%
   Latency Distribution
  50%1.75ms
  75%2.40ms
  90%3.57ms
  99%3.27s
   111452585 requests in 5.00m, 15.98GB read
   Socket errors: connect 0, read 0, write 0, timeout 33520
 Requests/sec: 371504.85
 Transfer/sec: 54.56MB


I get a very strange result:

# wrk --timeout 3s --latency -c 1000 -d 1m -t 24 http://haproxy
Running 1m test @ http://haproxy
  24 threads and 1000 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
    Latency     2.40ms    26.64ms     1.02s    99.28%
    Req/Sec     8.77k      8.20k     26.98k    62.39%
  Latency Distribution
 50%1.14ms
 75%1.68ms
 90%2.40ms
 99%6.14ms
  98400 requests in 1.00m, 34.06MB read
Requests/sec:   1637.26
Transfer/sec:580.36KB

# wrk --timeout 3s --latency -c 1000 -d 1m -t 24 http://nginx
Running 1m test @ http://nginx
  24 threads and 1000 connections
  Thread Stats   Avg  Stdev Max   +/- Stdev
    Latency     5.56ms    12.01ms   444.71ms   99.41%
    Req/Sec     8.53k     825.80     18.50k    90.91%
  Latency Distribution
 50%4.81ms
 75%6.80ms
 90%8.58ms
 99%   11.92ms
  12175205 requests in 1.00m, 4.31GB read
Requests/sec: 202584.48
Transfer/sec: 73.41MB

Thank you,

Regards,
- Krishna Kumar


Re: HA proxy configuration

2015-05-05 Thread Pavlos Parissis
On 05/05/2015 07:11 AM, ANISH S IYER wrote:
 Hi,
 
 I need to configure HAProxy with Apache servers as a load balancer.
 

It sounds a bit strange to have a 2-tier load balancing setup using a
software load balancer at both tiers, unless you do SSL offloading on
the first tier.

You can configure your Apache load balancers (does Apache have LB
capabilities?) as typical servers in your backend.

 Also let me know what type of protocol can be used in HAProxy for load
 balancing; can the SOCKS protocol be used in HAProxy?
 

HAProxy is a TCP/HTTP load balancer. When it runs in HTTP mode it can
analyze the request/response and, based on information provided by the HTTP
protocol, take decisions and route or filter traffic.
The same applies when it runs in TCP mode.

I don't think it supports the SOCKS protocol, but I don't think you need it,
as you can run in TCP mode and balance incoming traffic to a set of
SOCKS servers.
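
A minimal sketch of both points (names and addresses are made up; adjust for
your environment):

    # HTTP mode: balance across the Apache servers behind HAProxy
    frontend fe_http
        bind *:80
        mode http
        default_backend apache_servers

    backend apache_servers
        mode http
        balance roundrobin
        server apache1 192.168.1.11:80 check
        server apache2 192.168.1.12:80 check

    # TCP mode: pass SOCKS traffic through to a pool of SOCKS servers
    listen socks_pool
        bind *:1080
        mode tcp
        balance roundrobin
        server socks1 192.168.1.21:1080 check
        server socks2 192.168.1.22:1080 check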

For a list of load balancing algorithms have a look here
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-balance

Cheers,
Pavlos






Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-05-05 Thread Pavlos Parissis
On 05/05/2015 02:06 PM, Krishna Kumar (Engineering) wrote:
 Hi Willy, Pavlos,
 
 Thank you once again for your advice.
 
  Requests per second:19071.55 [#/sec] (mean)
  Transfer rate:  9461.28 [Kbytes/sec] received
 
 These numbers are extremely low and very likely indicate an http
 close mode combined with an untuned nf_conntrack.
 
 
 Yes, it was due to http close mode, and wrong irq pinning
 (nf_conntrack_max was
 set to 640K).
  
 
  mpstat (first 4 processors only, rest are almost zero):
 Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
 Average:    0   0.25   0.00   0.75     0.00  0.00  98.01    0.00    0.00    0.00   1.00
 
 This CPU is spending its time in softirq, probably due to conntrack
 spending a lot of time looking for the session for each packet in too
 small a hash table.
 
 
 I had not done irq pinning. Today I am getting much better results with
 irq pinning
 and keepalive.
 
 Note, this is about 2 Gbps. How is your network configured ? You should
 normally see either 1 Gbps with a gig NIC or 10 Gbps with a 10G NIC,
 because retrieving a static file is very cheap. Would you happen to be
 using bonding in round-robin mode maybe ? If that's the case, it's a
 performance disaster due to out-of-order packets and could explain some
 of the high %softirq.
 
 
 My setup is as follows (no bonding, etc, and Sys stands for baremetal
 system, each with
 48 core, 128GB mem, ixgbe single ethernet port card).
 
 Sys1-with-ab   -eth0-   Sys1-with-Haproxy, which uses two nginx
 backend systems
 over the same eth0 card (that is the current restriction, no extra
 ethernet interface for
 separate frontend/backend, etc). Today I am getting a high of 7.7 Gbps
 with your
 suggestions. Is it possible to get higher than that (direct to server
 gets 8.6 Gbps)?
 
 Please retry without http-server-close to maintain keep-alive to the
 servers, that will avoid the session setup/teardown. If that becomes
 better, there's definitely something to fix in the conntrack or maybe
 in iptables rules if you have some. But in any case don't put such a
 
  
 There are a few iptables rules, which seem clean. The results now are:
 

Shall I assume that you have run the same tests without iptables and got
the same results?
May I suggest to try also httpress and wrk tool?

Have you compared 'sysctl -a' between haproxy and nginx server?

 ab -k -n 1000000 -c 500 http://haproxy:80/64   (I am getting some errors
 though, which are not present when running against the backend directly):
 
 Document Length:        64 bytes
 Concurrency Level:      500
 Time taken for tests:   6.181 seconds
 Complete requests:      1000000
 Failed requests:        18991
    (Connect: 0, Receive: 0, Length: 9675, Exceptions: 9316)
 Write errors:           0
 Keep-Alive requests:    990330
 Total transferred:      296554848 bytes
 HTML transferred:       63381120 bytes
 Requests per second:    161783.42 [#/sec] (mean)
 Time per request:       3.091 [ms] (mean)
 Time per request:       0.006 [ms] (mean, across all concurrent requests)
 Transfer rate:          46853.18 [Kbytes/sec] received
 
 Connection Times (ms)
               min  mean[+/-sd] median   max
 Connect:        0    0   0.5      0       8
 Processing:     0    3   6.2      3    1005
 Waiting:        0    3   6.2      3    1005
 Total:          0    3   6.3      3    1010
 
 Percentage of the requests served within a certain time (ms)
   50%  3
   66%  3
   75%  3
   80%  3
   90%  4
   95%  5
   98%  6
   99%  8
  100%   1010 (longest request)
 
 pidstat (some system numbers are very high, 50%, maybe due to small
 packet sizes?):
 Average:  UID     PID   %usr  %system  %guest    %CPU  CPU  Command
 Average:  110   52601   6.00     9.33    0.00   15.33    -  haproxy
 Average:  110   52602   6.33    11.83    0.00   18.17    -  haproxy
 Average:  110   52603  11.33    17.83    0.00   29.17    -  haproxy
 Average:  110   52604  17.50    30.33    0.00   47.83    -  haproxy
 Average:  110   52605  20.50    38.50    0.00   59.00    -  haproxy
 Average:  110   52606  24.50    51.33    0.00   75.83    -  haproxy
 Average:  110   52607  22.50    51.33    0.00   73.83    -  haproxy
 Average:  110   52608  23.67    47.17    0.00   70.83    -  haproxy
 
 mpstat (of interesting cpus only):
 Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
 Average:  all   2.58   0.00   4.36     0.00  0.00   0.89    0.00    0.00    0.00  92.17
 Average:    0   6.84   0.00  11.46     0.00  0.00   2.03    0.00    0.00    0.00  79.67
 Average:    1  11.15   0.00  19.85     0.00  0.00   5.29    0.00    0.00    0.00  63.71
 Average:    2   8.32   0.00  12.20     0.00  0.00

Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-05-05 Thread Baptiste
 Also, during the test, the status of various backends changes often between
 OK and DOWN,
 and then gets back to OK almost immediately:

 www-backend,nginx-3,0,0,0,10,3,184,23843,96517588,,0,,27,0,0,180,DOWN
 1/2,1,1,0,7,3,6,39,,7,3,1,,220,,2,0,,37,L4CON,,0,0,184,0,0,0,0,00,0,6,Out
 of local source ports on the system,,0,2,3,92,

this error is curious with the type of traffic you're generating!
Maybe you should let HAProxy manage the source ports on behalf of the server.
Try adding the source 0.0.0.0:1024-65535 parameter in your backend
description.


 Please let me know if this can be fixed, as it might help performance even
 more.

 In short, for small file sizes, haproxy results are *much* better than
 running against a single
 backend server directly (with some failures as shown above). For big files,
 the numbers for
 haproxy are slightly lower.


devil might be in your sysctls.

Baptiste


