On May 13, 2015, [email protected] wrote:
> Hi,
>
> I'm trying to understand the benchmark.py output to identify an upload
> bottleneck in my S3QL deployment. I'd appreciate any pointers you may have.
>
> [...]
>
> Threads:                    1            2            4            8
> Max FS throughput (bzip2):  36264 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s
> ..limited by:               S3QL/FUSE    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE
>
> But when I run the mount with a 1 MiB cache size, 8 upload threads,
> and bzip2-6 compression, I'm seeing much lower throughput than
> expected.
>
>> mount.s3ql --threads 8 --nfs --allow-other --cachedir /var/lib/s3ql-cache \
>>   --cachesize 1024 --compress bzip2-6 --backend-options no-ssl \
>>   s3://some-bucket /some/path
> Autodetected 4040 file descriptors available for cache entries
> Using cached metadata.
> Creating NFS indices...
> Mounting filesystem...
>
>> dd if=/dev/zero of=/some/path/speed_test.dat bs=2M count=1
> 1+0 records in
> 1+0 records out
> 2097152 bytes (2.1 MB) copied, 33.6993 s, 62.2 kB/s
>
> Only 62 kB/s. Any ideas why it's so low, or where to look for the
> bottleneck?
That's odd. What happens if you increase the cache size to 4 MB? What
happens if you decrease the blocksize (bs) to 128k?
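As a quick sanity check on the numbers quoted above (this arithmetic is not part of the original reply, just derived from the dd output and the benchmark table), the observed rate really is about 62.2 kB/s, roughly 600x below what benchmark.py predicted:

```python
# Values taken from the quoted dd output above.
bytes_copied = 2_097_152   # 2 MiB written by dd
elapsed = 33.6993          # seconds reported by dd

# dd reports decimal kB/s (1 kB = 1000 bytes).
rate_kb = bytes_copied / elapsed / 1000
print(f"observed: {rate_kb:.1f} kB/s")   # matches dd's reported 62.2 kB/s

# benchmark.py predicted ~36264 KiB/s (binary KiB = 1024 bytes) with bzip2.
expected_bps = 36264 * 1024
ratio = expected_bps / (bytes_copied / elapsed)
print(f"~{ratio:.0f}x below the benchmark prediction")
```

Note the unit mismatch: dd prints decimal kB/s while benchmark.py reports binary KiB/s, but that difference (2.4%) is negligible next to the ~600x gap.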
> Also, when I try the same thing from my home laptop, I see throughput of
> 150 MB/s, which is my upload limit, as expected.
I don't believe it. Please recheck.
Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«
--
You received this message because you are subscribed to the Google Groups
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
For more options, visit https://groups.google.com/d/optout.