On Dec 12 2015, Chris <[email protected]> wrote:
> Preparing test data...
> Measuring throughput to cache...
> Unable to execute ps, assuming process 8286 has terminated.
> Cache throughput with 4 KiB blocks: 2545 KiB/sec
> Cache throughput with 8 KiB blocks: 4503 KiB/sec
> Cache throughput with 16 KiB blocks: 4252 KiB/sec
> Cache throughput with 32 KiB blocks: 5343 KiB/sec
> Cache throughput with 64 KiB blocks: 5850 KiB/sec
> Cache throughput with 128 KiB blocks: 5865 KiB/sec
> Measuring raw backend throughput...
> Enter backend login:
> Enter backend passphrase:
> Backend throughput: 890 KiB/sec
So for some reason S3QL is only able to send data to the backend at
about 890 KiB/s. That is your bottleneck. Changing compression settings
or the number of threads won't change it.
Are you sure that a different application is able to do better? Did you
try the other application right before or after this test?
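If you want to cross-check the raw upload speed independently of S3QL,
a minimal timing sketch like the one below should do. This assumes an
S3 backend and the boto3 library; the bucket and key names are
hypothetical, and the data is random so compression cannot skew the
result:

    import os
    import time
    import boto3  # assumption: S3 backend, credentials already configured

    # 50 MiB of random (incompressible) test data
    data = os.urandom(50 * 1024 * 1024)

    s3 = boto3.client('s3')
    start = time.monotonic()
    s3.put_object(Bucket='my-test-bucket',      # hypothetical bucket name
                  Key='throughput-test',
                  Body=data)
    elapsed = time.monotonic() - start

    print('Raw upload throughput: %.0f KiB/sec' % (len(data) / 1024 / elapsed))

If that reports a number close to the 890 KiB/s above, the limit is the
network path or the backend itself, not S3QL.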
Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«