On 2002.02.18, Simos Gabrielidis <[EMAIL PROTECTED]> wrote:
> Here are the results of the various file sizes and timings:
>
> 30KB: 0.28 sec, 60KB: 0.57 sec, 100KB: 1.1 sec, 1000KB: 13.41 secs,
> 1500KB:19.47 secs, 3000KB: 37.74 secs, 4000KB: 50.56 secs, 6000KB: 74.82
> secs.
Here are the results of my tests. The server is AOLserver 3.4.2 (nsd8x)
without zippy, no ACS, running on an old AMD K6-233 under Linux 2.4.17
with 96 MB RAM and 10 Mbit half-duplex ethernet (!).
30KB: 0.0694 sec
100KB: 0.2236 sec
1500KB: 3.6277 sec
6000KB: 14.6185 sec
At the 6MB end, the theoretical best-case time over 10 Mbit half-duplex
would be around 4.9154 sec. Considering IP and TCP overhead
(headers, etc.) and half-duplex losing bandwidth to the back-and-forth
syn/ack traffic, the practical maximum is around 70%, or 7 Mbit/sec.
So, a realistic best case would be around 7.0221 sec.
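As a quick sanity check of that arithmetic (the tiny differences from the
figures I quoted are just rounding), assuming 1 KB = 1024 bytes and
1 Mbit = 10^6 bits:

```python
# Wire-speed floor for the 6000KB test file.
bits = 6000 * 1024 * 8            # file size in bits
theoretical = bits / 10_000_000   # 10 Mbit/s line rate -> ~4.9152 sec
realistic = bits / 7_000_000      # ~70% after TCP/IP + half-duplex overhead
print(f"{theoretical:.4f} {realistic:.4f}")  # -> 4.9152 7.0217
```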
Admittedly, 14.6 sec compared to 7.0 sec sucks (double the time),
but considering that it's being shoved through AOLserver, it's
not too unreasonable.
Trying it out with the same setup but under Apache and a
Perl CGI, I get:
30KB: 0.0052 sec
100KB: 0.0161 sec
1500KB: 3.1372 sec
6000KB: 14.7145 sec
I'll attach the source for the Perl CGI to show how simple it
is (if I had actually used CGI.pm, I'm sure AOLserver would outperform
the Perl CGI without a doubt), but it at least assures me that
AOLserver is comparable to Apache.
I'm not sure why your times are so outrageously large, though.
Does doing the same tests over FTP yield timings that are
reasonable?
-- Dossy
Here's the evidence from the AOLserver runs:
$ dd if=/dev/zero of=test.30kb bs=1024 count=30
$ ls -l test.30kb
-rw------- 1 dossy users 30720 Feb 18 21:22 test.30kb
Time to write content to file: 69444 microseconds per iteration
$ dd if=/dev/zero of=test.100kb bs=1024 count=100
$ ls -l test.100kb
-rw------- 1 dossy users 102400 Feb 18 21:27 test.100kb
Time to write content to file: 223696 microseconds per iteration
$ dd if=/dev/zero of=test.1500kb bs=1024 count=1500
$ ls -l test.1500kb
-rw------- 1 dossy users 1536000 Feb 18 21:28 test.1500kb
Time to write content to file: 3627705 microseconds per iteration
$ dd if=/dev/zero of=test.6000kb bs=1024 count=6000
$ ls -l test.6000kb
-rw------- 1 dossy users 6144000 Feb 18 21:31 test.6000kb
Time to write content to file: 14618517 microseconds per iteration
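Working backwards from those timings, the effective throughput is
roughly flat across file sizes (about 3.4-3.7 Mbit/sec), i.e. about half
the ~7 Mbit realistic wire speed, so the overhead scales linearly rather
than blowing up with file size. A quick back-of-the-envelope (same unit
assumptions, 1 KB = 1024 bytes):

```python
# Effective throughput implied by the AOLserver timings above.
runs = {30: 0.0694, 100: 0.2236, 1500: 3.6277, 6000: 14.6185}
for kb, secs in runs.items():
    mbit = kb * 1024 * 8 / secs / 1_000_000
    print(f"{kb}KB: {mbit:.2f} Mbit/s")
```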
Here's the Perl CGI script I used:
------------------------------------------------------------------------
#!/usr/bin/perl -w
use strict;

print "Content-type: text/plain\n\n";

require 'sys/syscall.ph';

# gettimeofday(2) fills a struct timeval: two unsigned longs
# (seconds, microseconds).
my $TIMEVAL_T = "LL";
my $start = pack($TIMEVAL_T, ());
my $done  = pack($TIMEVAL_T, ());

syscall(&SYS_gettimeofday, $start, 0) != -1
    or die "gettimeofday: $!";
# ========================================================================
# Copy the request body from STDIN to a file, mirroring what the
# AOLserver test does.
open FH, "> /tmp/body" or die "couldn't open /tmp/body: $!";
while (<STDIN>) {
    print FH;
}
close FH;
# ========================================================================
syscall(&SYS_gettimeofday, $done, 0) != -1
    or die "gettimeofday: $!";

my @start = unpack($TIMEVAL_T, $start);
my @done  = unpack($TIMEVAL_T, $done);

# convert microseconds to fractional seconds
for ($done[1], $start[1]) { $_ /= 1_000_000 }

my $delta_time = sprintf "%.4f",
    ($done[0] + $done[1]) - ($start[0] + $start[1]);
print $delta_time, "\n";
------------------------------------------------------------------------
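The whole trick in that script is the gettimeofday() bracket around the
body write. For reference, here's the same idea as a hypothetical Python
port -- the function name, path default, and the 30KB demo body are mine,
not part of the original tests:

```python
import time

def time_body_write(body: bytes, path: str = "/tmp/body") -> float:
    """Write `body` to `path` and return elapsed wall-clock seconds,
    mirroring the gettimeofday() bracketing in the Perl CGI."""
    start = time.time()
    with open(path, "wb") as fh:
        fh.write(body)   # in a real CGI, body would come from stdin
    return time.time() - start

delta = time_body_write(b"\x00" * 30 * 1024)  # 30KB of zeros, like dd
print(f"{delta:.4f}")
```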
--
Dossy Shiobara mail: [EMAIL PROTECTED]
Panoptic Computer Network web: http://www.panoptic.com/
"He realized the fastest way to change is to laugh at your own
folly -- then you can let go and quickly move on." (p. 70)