On 24/04/15 07:44, Andrew Stuart wrote:
>> Before we speculate more, can you first test it again and see if the
>> numbers improved by some factor X?
> Start command is:
>
>   rumprun xen -di -M 1024 \
>     -n inet,static,192.168.1.33/24 \
>     -b images/data.iso,/data \
>     -b images/stubetc.iso,/etc \
>     -- nginx/objs/nginx -c /data/conf/nginx.conf
> The average transaction rate for Ubuntu and NetBSD was around 5,500
> transactions/sec.
>
> Repeatedly running the 20-second siege test results in this:
>   Transaction rate:  1381.10 trans/sec
>   Transaction rate:   554.21 trans/sec
>   Transaction rate:   435.25 trans/sec
>   Transaction rate:   371.04 trans/sec
>   Transaction rate:   331.51 trans/sec
>   Transaction rate:   302.39 trans/sec
>   Transaction rate:   280.16 trans/sec
>   Transaction rate:   261.12 trans/sec
>   [...]
Heh, interesting. Something is leaking and causing a slowdown. The good news
is that we can extrapolate that the actual performance will be >1.4k/s once
we find that leak. You can probably get a hint of that value by running a
2s blast.
I'll try to repro and find the problem (but most likely not until next
week).
> Until eventually the rumprun-nginx console repeatedly shows this:
>
>   base is 0x69a928 caller is 0x12e37
Opening the nginx binary in gdb and doing "l *0x12e37" might show
something useful (or it might not, I can't quite remember without
playing with it myself).
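For reference, the lookup would go roughly like this, assuming the nginx
binary was built with debug info (the address is the caller value from the
console message):

```
$ gdb nginx/objs/nginx
(gdb) l *0x12e37
```

`l *ADDR` asks gdb to list the source around the code at that address, which
may point at the call site producing the message if the symbols line up.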
> After the tests, the time consumed by rumprun-nginx now appears to be much
> lower than it was in previous tests, where it appeared to grow steadily.
>
>   root@contiki:/home/ubuntu/temp# xl list
>   Name                ID   Mem VCPUs      State   Time(s)
>   Domain-0             0  2759     4     r-----    8924.7
>   rumprun-nginx       14  1024     1     -b----     626.3
> The startup message for rumprun-nginx interestingly reports 507 MB of
> memory when 1024 MB is allocated to the VM:
>
>   The Regents of the University of California.  All rights reserved.
>   NetBSD 7.99.4 (RUMP-ROAST)
>   total memory = 507 MB
Right, and that has to do with some silliness in the memory management.
The application (in this case nginx and libc) and the rump kernel
(i.e. what provides the posix syscalls, TCP/IP, etc.) use separate
memory pools. Currently, they're just split 50/50. Making that more
sane is on the list of things to fix, but for now, just configure 2x
memory. If someone presents a case where they'd like to deploy in a
really memory-tight env, the actual fixing action will trickle to the
top of the list quicker ...
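In other words, for the -M 1024 guest above the split works out roughly like
this (a sketch, assuming a plain 50/50 division; the banner's 507 MB suggests
a little fixed overhead comes off the rump kernel's half first):

```shell
#!/bin/sh
# Sketch of the current 50/50 pool split (assumed, per the explanation above).
TOTAL_MB=1024                    # the -M value given to rumprun
RUMP_POOL_MB=$((TOTAL_MB / 2))   # rump kernel pool, ~512 MB
APP_POOL_MB=$((TOTAL_MB - RUMP_POOL_MB))   # application (nginx + libc) pool
echo "rump kernel pool: ~${RUMP_POOL_MB} MB, application pool: ~${APP_POOL_MB} MB"
```

So until the split is made smarter, doubling -M is the practical workaround.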
Thanks for the data, very useful.