Hi,

I'm trying to understand the benchmark.py output to identify an upload 
bottleneck in my S3QL deployment. I'd appreciate any pointers you may have.

I'm running benchmark on a c3.4xlarge EC2 instance, and seeing the 
following results.

Preparing test data...
Measuring throughput to cache...
Cache throughput with   4 KiB blocks: 22889 KiB/sec
Cache throughput with   8 KiB blocks: 27699 KiB/sec
Cache throughput with  16 KiB blocks: 30857 KiB/sec
Cache throughput with  32 KiB blocks: 31548 KiB/sec
Cache throughput with  64 KiB blocks: 33375 KiB/sec
Cache throughput with 128 KiB blocks: 36264 KiB/sec
Measuring raw backend throughput..
Backend throughput: 24697 KiB/sec
Test file size: 2.00 MiB
compressing with lzma-6...
lzma compression speed: 11741 KiB/sec per thread (in)
lzma compression speed: 2 KiB/sec per thread (out)
compressing with bzip2-6...
bzip2 compression speed: 51175 KiB/sec per thread (in)
bzip2 compression speed: 1 KiB/sec per thread (out)
compressing with zlib-6...
zlib compression speed: 129069 KiB/sec per thread (in)
zlib compression speed: 126 KiB/sec per thread (out)


With 128 KiB blocks, maximum performance for different compression
algorithms and thread counts is:


Threads:                               1            2            4            8           16
Max FS throughput (lzma):    11741 KiB/s  23483 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s
..limited by:                        CPU          CPU    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE
Max FS throughput (bzip2):   36264 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s
..limited by:                  S3QL/FUSE    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE
Max FS throughput (zlib):    36264 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s
..limited by:                  S3QL/FUSE    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE
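For what it's worth, my understanding of how this summary table is derived (my own reconstruction from the numbers above, not benchmark.py's actual code) is that each cell is the per-thread compression input speed times the thread count, capped by the 128 KiB cache throughput:

```python
# Sketch of how I read the summary table (assumption, not benchmark.py's
# actual implementation). Numbers are taken from the run above.
cache_limit = 36264  # KiB/s, cache throughput with 128 KiB blocks
compress_in = {'lzma': 11741, 'bzip2': 51175, 'zlib': 129069}  # KiB/s per thread

for algo, speed in compress_in.items():
    for threads in (1, 2, 4, 8, 16):
        fs_max = min(speed * threads, cache_limit)
        limit = 'CPU' if speed * threads < cache_limit else 'S3QL/FUSE'
        print(f"{algo:5} x{threads:2}: {fs_max:6d} KiB/s  (limited by {limit})")
```

(Small rounding differences aside, this reproduces the table, e.g. lzma with 2 threads saturates the CPU at ~23483 KiB/s, and everything else hits the 36264 KiB/s cache ceiling.)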

But when I run the mount with a 1 MiB cache, 8 upload threads, and 
bzip2-6 compression, I'm seeing much lower throughput than expected.

> mount.s3ql --threads 8 --nfs --allow-other --cachedir /var/lib/s3ql-cache 
--cachesize 1024 --compress bzip2-6 --backend-options no-ssl  s3://some-bucket 
/some/path
Autodetected 4040 file descriptors available for cache entries
Using cached metadata.
Creating NFS indices...
Mounting filesystem...
> dd if=/dev/zero of=/some/path/speed_test.dat bs=2M count=1
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 33.6993 s, 62.2 kB/s

Only 62 kB/s. Any ideas why it's so low, or where to look for the 
bottleneck?

The cache directory is on a volume with good throughput.

> dd if=/dev/zero of=/var/lib/mys3ql-cache/output.dat bs=2M count=1
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.00378582 s, 554 MB/s
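(I realize a plain dd like the one above mostly measures the page cache; repeating it with conv=fdatasync, which as far as I know is a GNU coreutils dd flag that flushes data to disk before reporting, still shows the volume is plenty fast. The path below is just illustrative.)

```shell
# Measure write throughput including a flush to stable storage
# (conv=fdatasync is GNU dd; path is an example, not my real cache dir)
dd if=/dev/zero of=/tmp/output.dat bs=2M count=1 conv=fdatasync
rm -f /tmp/output.dat
```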

Also, when I try the same thing from my home laptop, I see throughput of 
150 MB/s, which is my upload limit, as expected.

Thanks!
