Hello,
And thanks for your reply!
I can't seem to run contrib/benchmark.py; I get a ZeroDivisionError when I do:
Traceback (most recent call last):
  File "/root/s3ql-1.18.1/contrib/benchmark.py", line 216, in <module>
    main(sys.argv[1:])
  File "/root/s3ql-1.18.1/contrib/benchmark.py", line 196, in main
    if speed / in_speed[alg] * out_speed[alg] > backend_speed:
ZeroDivisionError: float division by zero
This is the full output of the error:
Preparing test data...
Measuring throughput to cache...
Write took 0 seconds, retrying
Write took 2.28 seconds, retrying
Write took 2.79 seconds, retrying
Cache throughput: 72593 KiB/sec
Measuring raw backend throughput..
Connecting to jf-backup.s3.amazonaws.com...
_do_request(): start with parameters ('GET', '/s3ql_passphrase', None, None, None, None)
_send_request(): processing request for /s3ql_passphrase
_do_request(): request-id: F5129E17E9996ADD
Connecting to jf-backup.s3.amazonaws.com...
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 1048576}, <open file '<fdopen>', mode 'w+b' at 0x1a2d540>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: 974F0BB2D5CE56E2
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 2097152}, <open file '<fdopen>', mode 'w+b' at 0x1a2d5d0>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: F1DD2D9F91A530AE
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 4194304}, <open file '<fdopen>', mode 'w+b' at 0x1a2d540>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: 01B8D24AE0DE4F32
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 8388608}, <open file '<fdopen>', mode 'w+b' at 0x1a2d5d0>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: 09E1E29E055DDB92
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 16777216}, <open file '<fdopen>', mode 'w+b' at 0x1a2d540>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: AA53FCDC204694CA
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 33554432}, <open file '<fdopen>', mode 'w+b' at 0x1a2d5d0>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: C1FFB89E037195E4
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 67108864}, <open file '<fdopen>', mode 'w+b' at 0x1a2d540>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: 463DFBD5AD4D8C46
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 134217728}, <open file '<fdopen>', mode 'w+b' at 0x1a2d5d0>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: 41E37E95EE3669A6
open_write(s3ql_testdata): start
ObjectW(s3ql_testdata).close(): start
_do_request(): start with parameters ('PUT', '/s3ql_testdata', None, None, {'Content-Length': 268435456}, <open file '<fdopen>', mode 'w+b' at 0x1a2d540>)
_send_request(): sending request for /s3ql_testdata
_send_request(): Waiting for 100-cont..
Waiting for 100-continue...
_do_request(): request-id: C13046D35470E51C
Backend throughput: 16302 KiB/sec
delete(s3ql_testdata)
_do_request(): start with parameters ('DELETE', '/s3ql_testdata', None, None, None, None)
_send_request(): processing request for /s3ql_testdata
_do_request(): request-id: 81819FBD9EFB24B7
Test file size: 0.00 MiB
compressing with lzma...
lzma compression speed: 0 KiB/sec per thread (in)
lzma compression speed: 359 KiB/sec per thread (out)
compressing with bzip2...
bzip2 compression speed: 0 KiB/sec per thread (in)
bzip2 compression speed: 10 KiB/sec per thread (out)
compressing with zlib...
zlib compression speed: 0 KiB/sec per thread (in)
zlib compression speed: 130 KiB/sec per thread (out)
Uncaught top-level exception:
Traceback (most recent call last):
  File "/root/s3ql-1.18.1/contrib/benchmark.py", line 216, in <module>
    main(sys.argv[1:])
  File "/root/s3ql-1.18.1/contrib/benchmark.py", line 196, in main
    if speed / in_speed[alg] * out_speed[alg] > backend_speed:
ZeroDivisionError: float division by zero
Threads: 1 2 4 8
I've captured the output to a file, while logging into the S3 bucket in
another tab.
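For what it's worth, the output above seems to show where the division fails: the test file is reported as 0.00 MiB, so the measured compression input speeds (the `in_speed[alg]` values, all "0 KiB/sec per thread (in)") are zero, and line 196 of benchmark.py then divides by zero. A minimal sketch of the kind of guard that would avoid the crash (the function name and structure below are my own illustration, not benchmark.py's actual code):

```python
def effective_upload_speed(write_speed, in_speed, out_speed):
    """Estimate effective upload throughput (KiB/sec) when data is
    compressed before upload, by scaling the raw write speed by the
    compression ratio implied by in_speed and out_speed.

    Returns None instead of raising ZeroDivisionError when the
    compression measurement is unusable (zero input speed, e.g.
    because the test file was empty).
    """
    if in_speed <= 0:
        return None
    return write_speed / in_speed * out_speed
```

With a 0.00 MiB test file every algorithm's input speed is 0, so a guard like this would skip (or report) all three algorithms instead of crashing.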
On Monday, February 1, 2016 at 10:49:37 AM UTC-5, Tim Dunphy wrote:
>
> Hey guys,
>
> I'm trying to use S3QL as a back end for my bacula backups. What I'd like
> to do is use an S3 bucket to actually hold the tapes that bacula writes to.
> One thing that I notice is that it's really really slow. I started a backup
> last night around 10pm, and this morning it hadn't even finished writing to
> a 5GB virtual tape. That's just ridiculously slow. If that were writing to
> EBS instead, it could have filled up in about 1/2 hour. It would be great
> to be able to backup to S3 without the expense of a huge EBS volume!!
>
> Second thing I noticed is that when I went to list the directory I'm
> getting an error message:
>
> [root@ops:~] #ls -lh /backup/tapes/
> ls: /backup/tapes/: Transport endpoint is not connected
>
>
>
> This is the second time I've seen that happen!
>
> And these are the log entries I'm finding in syslog:
>
>
> [root@ops:~] #grep s3ql /var/log/messages |grep "Feb 1"
> Feb 1 01:57:00 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-4: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 04:21:32 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-4: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 04:23:10 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-4: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 05:08:18 ip-172-30-1-80 journal: mount.s3ql[11175] Dummy-18: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> Backend.open_read...
> Feb 1 05:08:25 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-6: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> Backend.delete...
> Feb 1 05:13:59 ip-172-30-1-80 journal: mount.s3ql[11175] Dummy-18: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> Backend.open_read...
> Feb 1 05:17:19 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-4: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 05:17:29 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-3: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 05:19:11 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-4: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 05:19:13 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-3: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 05:20:36 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-4: [
> backend] Encountered BadStatusLine exception (''), retrying call to
> ObjectW.close...
> Feb 1 05:20:37 ip-172-30-1-80 journal: mount.s3ql[11175] Thread-3: [
> backend]
> ...
> The most important information came after this - unfortunately you
> didn't include it.
Whoops! Here's that line:
OSError: [Errno 2] No such file or directory:
'/cache/s3:=2F=2Fjf-backup-cache/8-419'
>> I'm using S3QL 1.18.1.
>That's rather old and receives only critical security updates. If
>at all possible, switch to S3QL 2.x.
I thought I read that only the 1.x branch will run under my operating
system. I'm running CentOS 7. If the 2.x branch will work under that OS,
then yeah I'll absolutely give that a try!!
Thanks,
Tim
--
You received this message because you are subscribed to the Google Groups
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
For more options, visit https://groups.google.com/d/optout.