Hi

Has anyone used Advania's S3 storage,
http://www.advania.com/datacentres/solutions/advania-cloud-services/ ?
I'm not having any luck with it. Apologies for originally raising this
as a ticket rather than on this list:

- 
https://bitbucket.org/nikratio/s3ql/issues/151/problem-with-advania-s3-storage-space#issues-comments-container

Using a modified version of the s3_backup.sh script, I'm getting errors
like the following (unique strings have been replaced with XXX):

  Starting fsck of s3c://s.qstack.advania.com:443/crin4/
  Using cached metadata.
  Remote metadata is outdated.
  Checking DB integrity...
  Creating temporary extra indices...
  Checking lost+found...
  Checking cached objects...
  Checking names (refcounts)...
  Checking contents (names)...
  Checking contents (inodes)...
  Checking contents (parent inodes)...
  Checking objects (reference counts)...
  Checking objects (backend)...
  
  Checking objects (sizes)...
  Checking blocks (referenced objects)...
  Checking blocks (refcounts)...
  Checking blocks (checksums)...
  Checking inode-block mapping (blocks)...
  Checking inode-block mapping (inodes)...
  Checking inodes (refcounts)...
  Checking inodes (sizes)...
  Checking extended attributes (names)...
  Checking extended attributes (inodes)...
  Checking symlinks (inodes)...
  Checking directory reachability...
  Checking unix conventions...
  Checking referential integrity...
  Dropping temporary indices...
  Dumping metadata...
  ..objects..
  ..blocks..
  ..inodes..
  ..inode_blocks..
  ..symlink_targets..
  ..names..
  ..contents..
  ..ext_attributes..
  Compressing and uploading metadata...
  Wrote 217 bytes of compressed metadata.
  Cycling metadata backups...
  Backing up old metadata...
  Unexpected server reply: expected XML, got:
  200 OK
  x-amz-meta-006: 'object_id': 's3ql_metadata_bak_1',
  x-amz-meta-007: 'encryption': 'AES_v2',
  Content-Length: 0
  x-amz-meta-005: 'signature': b'XXX=',
  x-amz-meta-002: 'nonce': b'XXX=',
  x-amz-meta-003: 'compression': 'None',
  x-amz-meta-000: 'data': b'XXX
  x-amz-meta-001: Hqdkxrg8Pk5zw==',
  x-amz-id-2: XXX
  x-amz-meta-md5: XXX 
  x-amz-meta-004: 'format_version': 2,
  Last-Modified: Thu, 23 Jul 2015 11:09:01 GMT
  ETag: "XXX"
  x-amz-request-id: XXX 
  x-amz-meta-format: raw2
  Content-Type: text/html; charset="UTF-8"
  X-Trans-Id: XXX 
  Date: Thu, 23 Jul 2015 11:09:00 +0000
  Connection: keep-alive
  
  
  Uncaught top-level exception:
  Traceback (most recent call last):
    File "/usr/bin/fsck.s3ql", line 9, in <module>
      load_entry_point('s3ql==2.13', 'console_scripts', 'fsck.s3ql')()
    File "/usr/lib/s3ql/s3ql/fsck.py", line 1307, in main
      cycle_metadata(backend)
    File "/usr/lib/s3ql/s3ql/metadata.py", line 121, in cycle_metadata
      cycle_fn("s3ql_metadata_bak_%d" % i, "s3ql_metadata_bak_%d" % (i + 1))
    File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 291, in copy
      self._copy_or_rename(src, dest, rename=False, metadata=metadata)
    File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 325, in _copy_or_rename
      self.backend.copy(src, dest, metadata=meta_raw)
    File "/usr/lib/s3ql/s3ql/backends/common.py", line 52, in wrapped
      return method(*a, **kw)
    File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 404, in copy
      root = self._parse_xml_response(resp, body)
    File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 554, in _parse_xml_response
      raise RuntimeError('Unexpected server response')
    RuntimeError: Unexpected server response
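
In case it helps anyone see the mismatch at a glance: the reply quoted
above is HTTP 200 with Content-Type text/html and an empty body, whereas
S3QL's s3c backend expects the server-side COPY to return an XML
CopyObjectResult document. A rough, illustrative sketch of the kind of
check that fails (function names are my own, not S3QL's actual code):

```python
import xml.etree.ElementTree as ET

def parse_copy_response(content_type, body):
    """Roughly mimic the check S3QL's s3c backend performs on a COPY
    reply: the server must answer with an XML document (normally a
    CopyObjectResult element).  Illustrative only, not S3QL's code."""
    if not content_type.lower().startswith(('application/xml', 'text/xml')):
        # This is the path the traceback above ends up in.
        raise RuntimeError('Unexpected server response')
    return ET.fromstring(body)

# Feeding in what Advania's endpoint returned (text/html, empty body):
try:
    parse_copy_response('text/html; charset="UTF-8"', '')
except RuntimeError as exc:
    print(exc)
```

So it looks like the endpoint accepts the COPY request (200 OK, and it
even echoes the object metadata back as headers) but doesn't produce the
XML body that a standard S3 COPY reply carries.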

Does anyone have any suggestions for what I could try to get this
working? I'm unsure whether the problem lies with my setup, with
Advania, or with S3QL.

More details, including the script I'm using for the backup, can be
found here:

- https://trac.crin.org/trac/ticket/11#comment:30

All the best

Chris

-- 
Webarchitects Co-operative
http://webarchitects.coop/
+44 114 276 9709
@webarchcoop

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.