Sorry for spamming the list -- I probably have my answer -- benchmark.py to 
the rescue!
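
(For the archives: that's contrib/benchmark.py from the s3ql source tree. If 
memory serves, it takes the storage URL plus a local file to use as test 
data, so the invocation was roughly as below -- endpoint, bucket and file 
name are placeholders, not my real setup:

    python3 contrib/benchmark.py s3c://<endpoint>/<bucket> testfile.zip
)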

Preparing test data...
Measuring throughput to cache...
Cache throughput with   4 KiB blocks: 9283 KiB/sec
Cache throughput with   8 KiB blocks: 17435 KiB/sec
Cache throughput with  16 KiB blocks: 27689 KiB/sec
Cache throughput with  32 KiB blocks: 49371 KiB/sec
Cache throughput with  64 KiB blocks: 79283 KiB/sec
Cache throughput with 128 KiB blocks: 104054 KiB/sec
Measuring raw backend throughput...
Backend throughput: 48365 KiB/sec
Test file size: 0.55 MiB
compressing with lzma-6...
lzma compression speed: 1563 KiB/sec per thread (in)
lzma compression speed: 1381 KiB/sec per thread (out)
compressing with bzip2-6...
bzip2 compression speed: 2962 KiB/sec per thread (in)
bzip2 compression speed: 2909 KiB/sec per thread (out)
compressing with zlib-6...
zlib compression speed: 9719 KiB/sec per thread (in)
zlib compression speed: 9510 KiB/sec per thread (out)

With 128 KiB blocks, maximum performance for different compression
algorithms and thread counts is:

Threads:                              1           2           4           8
Max FS throughput (lzma):     1563 KiB/s   3126 KiB/s   6252 KiB/s  12504 KiB/s
..limited by:                       CPU         CPU         CPU         CPU
Max FS throughput (bzip2):    2962 KiB/s   5925 KiB/s  11851 KiB/s  23703 KiB/s
..limited by:                       CPU         CPU         CPU         CPU
Max FS throughput (zlib):     9719 KiB/s  19438 KiB/s  38876 KiB/s  49427 KiB/s
..limited by:                       CPU         CPU         CPU      uplink
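
(If I read the output right, the uplink-limited zlib figure is just the raw 
backend throughput scaled up by the compression ratio: 48365 KiB/s * 
9719/9510 ~= 49427 KiB/s of uncompressed data.)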

All numbers assume that the test file is representative and that
there are enough processor cores to run all active threads in parallel.
To compensate for network latency, you should use about twice as
many upload threads as indicated by the above table.
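
Given those numbers, zlib-6 looks like the sweet spot for a 2-core box. For 
anyone tuning this later: the knobs the table is exercising are, as far as I 
know, mount.s3ql's --compress and --threads options; the storage URL and 
mountpoint below are placeholders:

    mount.s3ql --compress zlib-6 --threads 4 s3c://<endpoint>/<bucket> /mnt/s3ql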

On Tuesday, October 2, 2018 at 8:43:59 PM UTC+1, Casey Stone wrote:
>
> Hello! 
>
> I probably know the answer and will experiment, but just to ask...
>
> I have a 2-core VPS running Nextcloud. The CPU is 2x the following, though 
> the cores are not dedicated:
>
> processor : 1
> vendor_id : GenuineIntel
> cpu family : 6
> model : 95
> model name : Intel(R) Atom(TM) CPU C3955 @ 2.10GHz
> stepping : 1
> microcode : 0x1
> cpu MHz : 2100.000
> cache size : 4096 KB
> physical id : 1
> siblings : 1
> core id : 0
> cpu cores : 1
> apicid : 1
> initial apicid : 1
> fpu : yes
> fpu_exception : yes
> cpuid level : 13
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
> pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm 
> constant_tsc arch_perfmon rep_good nopl cpuid pni pclmulqdq vmx ssse3 cx16 
> sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand 
> hypervisor lahf_lm 3dnowprefetch cpuid_fault pti ibrs ibpb tpr_shadow 
> vnmi flexpriority ept vpid fsgsbase tsc_adjust smep erms mpx rdseed smap 
> clflushopt xsaveopt xsavec xgetbv1 xsaves arat
> bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
> bogomips : 4200.00
> clflush size : 64
> cache_alignment : 64
> address sizes : 40 bits physical, 48 bits virtual
>
> Nextcloud chunks uploads into 10 MB files in an upload directory. In my 
> setup, that directory is NOT on the s3ql volume. When the upload is done, it 
> assembles the chunks and writes the file to the final directory -- that's 
> the one where I have mounted the s3ql volume. I uploaded a 1.5 GB zip file 
> (which is typical of the usage) and mount.s3ql ties up the CPU completely 
> for about 25 minutes.
>
>  ... it was almost done here:
> 9355 root      20   0 1771592 434140   5728 S 199.0 21.3  42:22.46 mount.s3ql 
>
>
> I'm using an S3-compatible object storage service offered by the VPS 
> provider, in the same datacentre, so latency is low and speed is good. The 
> encryption offered by s3ql is very valuable to me, and the compression 
> would on some occasions be useful.
>
> I mounted the volume with all standard options, except that I increased 
> --max-obj-size to 40960 because it just seemed like a good idea in this 
> situation (was I wrong?).
>
> Is it the compression that's making s3ql a CPU killer, or something else? 
> The CPU does support AES-NI. Is there a compression option that would 
> easily give up when it detects an already-compressed file but still gives 
> some benefit on 'easy' files?
>
> Thank you! This will be a great solution I think if I can get the CPU 
> usage under control.
>
