Just a quick survey on how robust ab should be.
I think that ab should not segfault on any user-supplied parameters,
and I could spend some time making it a little bit more robust.
Is this of any interest, or is the general thinking that if the user enters an out-of-range or bogus value and ab segfaults, that is the user's responsibility?
And what about the consistency of the results in the Connection Times "Total" line; is that bothering anyone?
Thanks,
JJ
>>> [EMAIL PROTECTED] 3/17/2004 10:31:20 AM >>>
Hi,
If the "-c" option is given an arbitrarily huge value, ab dumps core.
(Try: ab -c 2147483647 http://foo.com/)

Here's a patch that limits the concurrency to MAX_CONCURRENCY (= 20000).
The actual value of MAX_CONCURRENCY can be raised/lowered if you think
the value is not appropriate.

Thanks
-Madhu

Index: ab.c
===================================================================
RCS file: /home/cvs/httpd-2.0/support/ab.c,v
retrieving revision 1.139
diff -u -r1.139 ab.c
--- ab.c	17 Mar 2004 00:06:44 -0000	1.139
+++ ab.c	17 Mar 2004 17:14:38 -0000
@@ -245,6 +245,7 @@

 #define ap_min(a,b) ((a)<(b))?(a):(b)
 #define ap_max(a,b) ((a)>(b))?(a):(b)
+#define MAX_CONCURRENCY 20000

 /* --------------------- GLOBALS ---------------------------- */

@@ -2158,6 +2159,11 @@
         usage(argv[0]);
     }

+    if ((concurrency < 0) || (concurrency > MAX_CONCURRENCY)) {
+        fprintf(stderr, "%s: Invalid Concurrency [Range 0..%d]\n",
+                argv[0], MAX_CONCURRENCY);
+        usage(argv[0]);
+    }
     if ((heartbeatres) && (requests > 150)) {
         heartbeatres = requests / 10;    /* Print line every 10% of requests */
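As an aside on the broader robustness question above, here is a minimal standalone sketch (not part of the posted patch and not taken from ab.c itself) of how a numeric option such as -c could be parsed defensively with strtol instead of atoi, rejecting non-numeric and out-of-range input before ab goes on to allocate any per-connection state. MAX_CONCURRENCY mirrors the value in the patch; parse_concurrency() is a hypothetical helper used only for illustration.

/* Sketch: defensive parsing of a concurrency argument.
 * Rejects non-numeric input, overflow, and values outside
 * 0..MAX_CONCURRENCY before anything else is done with them.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_CONCURRENCY 20000   /* same limit as in the patch above */

static int parse_concurrency(const char *arg, int *out)
{
    char *end;
    long val;

    errno = 0;
    val = strtol(arg, &end, 10);
    if (errno == ERANGE || end == arg || *end != '\0') {
        return -1;              /* overflow or not a clean decimal number */
    }
    if (val < 0 || val > MAX_CONCURRENCY) {
        return -1;              /* outside the accepted range */
    }
    *out = (int)val;
    return 0;
}

int main(int argc, char **argv)
{
    int concurrency;

    if (argc != 2 || parse_concurrency(argv[1], &concurrency) != 0) {
        fprintf(stderr, "%s: Invalid concurrency [Range 0..%d]\n",
                argv[0], MAX_CONCURRENCY);
        return 1;
    }
    printf("concurrency = %d\n", concurrency);
    return 0;
}

The idea is simply that validating the value at option-parsing time keeps the rest of the program from ever seeing a bogus concurrency, rather than depending on later allocations failing gracefully.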
