Sorry for the confusion; let me clarify.

"The Store" = the backend/remote storage.   In this case it's Amazon S3/USA.

With respect to fragility, I was simply referring to forgetting to umount 
the remote filesystem before a reboot (or a power failure, or... whatever). 
I see posts from people who leave remote filesystems mounted permanently 
(via upstart/systemd), and if a reboot without a clean umount can corrupt 
the filesystem to the point that it's unmountable and unfsckable -- that's 
an issue.
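
For the boot-time case, what I have in mind is a unit along these lines. 
This is only a sketch -- the paths, bucket URL, timeout, and Type= choice 
are placeholders and assumptions on my part, not a tested recipe:

    # Hypothetical systemd unit; paths and bucket URL are made up.
    # Type=forking assumes mount.s3ql's default daemonizing behaviour.
    [Unit]
    Description=S3QL backup filesystem
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=forking
    ExecStart=/usr/bin/mount.s3ql s3://mybucket/myprefix /mnt/backup
    # The important part: a clean umount.s3ql on shutdown/reboot.
    ExecStop=/usr/bin/umount.s3ql /mnt/backup
    TimeoutStopSec=900

    [Install]
    WantedBy=multi-user.target

With the ExecStop line in place, a normal reboot should flush the cache and 
upload metadata before the network goes down; only a hard power loss should 
leave the filesystem dirty.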

The log in the original post is a clip of the fsck.log file in the .s3ql 
directory, produced by running fsck.s3ql with the --debug option.
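
For reference, the invocation was along these lines (the bucket URL is a 
placeholder):

    fsck.s3ql --debug s3://mybucket/backup
    less ~/.s3ql/fsck.log    # the full debug log ends up here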

The part that worked was a fresh, new filesystem; roughly the sequence 
below. I wanted to ensure that my install still worked after upgrading. 
Upgrading Arch Linux can sometimes be... picky.
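
Roughly what the test looked like (bucket and mountpoint names are made up):

    mkfs.s3ql s3://mybucket/test              # brand-new filesystem
    mount.s3ql s3://mybucket/test /mnt/test
    cp -a ~/test-data /mnt/test/              # write something to it
    umount.s3ql /mnt/test
    mount.s3ql s3://mybucket/test /mnt/test   # remounts without any issues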

The main backup filesystem is still unfsckable. Here is the console output 
(rather than the log):


Using cached metadata.
Remote metadata is outdated.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 553000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking blocks (checksums)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 9, in <module>
    load_entry_point('s3ql==2.12', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/python3.4/site-packages/s3ql/fsck.py", line 1287, in main
    is_compressed=True)
  File "/usr/lib/python3.4/site-packages/s3ql/backends/common.py", line 46, 
in wrapped
    return method(*a, **kw)
  File "/usr/lib/python3.4/site-packages/s3ql/backends/common.py", line 
258, in perform_write
    return fn(fh)
  File "/usr/lib/python3.4/site-packages/s3ql/backends/comprenc.py", line 
642, in __exit__
    self.close()
  File "/usr/lib/python3.4/site-packages/s3ql/backends/comprenc.py", line 
636, in close
    self.fh.close()
  File "/usr/lib/python3.4/site-packages/s3ql/backends/common.py", line 46, 
in wrapped
    return method(*a, **kw)
  File "/usr/lib/python3.4/site-packages/s3ql/backends/s3c.py", line 844, 
in close
    headers=self.headers, body=self.fh)
  File "/usr/lib/python3.4/site-packages/s3ql/backends/s3c.py", line 407, 
in _do_request
    query_string=query_string, body=body)
  File "/usr/lib/python3.4/site-packages/s3ql/backends/s3c.py", line 649, 
in _send_request
    copyfileobj(body, self.conn, BUFSIZE)
  File "/usr/lib/python3.4/shutil.py", line 69, in copyfileobj
    fdst.write(buf)
  File "/usr/lib/python3.4/site-packages/dugong/__init__.py", line 653, in 
write
    eval_coroutine(self.co_write(buf), self.timeout)
  File "/usr/lib/python3.4/site-packages/dugong/__init__.py", line 1396, in 
eval_coroutine
    if not next(crt).poll(timeout=timeout):
  File "/usr/lib/python3.4/site-packages/dugong/__init__.py", line 679, in 
co_write
    yield from self._co_send(buf)
  File "/usr/lib/python3.4/site-packages/dugong/__init__.py", line 619, in 
_co_send
    len_ = self._sock.send(buf)
  File "/usr/lib/python3.4/ssl.py", line 679, in send
    v = self._sslobj.write(data)
OSError: [Errno 14] Bad address





On Wednesday, February 4, 2015 at 6:58:31 PM UTC-5, Nikolaus Rath wrote:
>
> Jeff Bogatay <[email protected]> writes: 
> > I am in the process of crafting a s3ql backed backup solution. During 
> > the development/testing I left the store mounted, installed some 
> > system updates and rebooted. 
> > 
> > Now I am unable to mount and/or check the store. Running 2.12 on 
> > ArchLinux. It has been several hours since I last wrote to the store. 
> > 
> > My last attempt was to delete the local metadata and have it rebuilt. 
> > Same error as below. 
> > 
> > Not sure what to do next or how to recover. Are these stores typically 
> > this fragile? 
>
> What do you mean by "the store"? Are you talking about a remote 
> storage server? In that case the fragility obviously depends on the 
> server. 
>
> > Also, as a test I created a fresh mount, wrote to it, unmounted it, 
> > and remounted it without any issues. 
> > 
> > 2015-02-04 16:56:35.635 9617:MainThread s3ql.deltadump.dump_metadata: 
> > dump_table(ext_attributes): writing 0 rows 
> [...] 
>
> I am not sure what I'm looking at here. First you say it works, but then 
> you quote an error message (and the formatting is pretty messed 
> up). Can you be more precise as to when exactly the error occurs (and is 
> it always the same)? 
>
>
> Best, 
> -Nikolaus 
>
> -- 
> GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F 
> Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F 
>
>              »Time flies like an arrow, fruit flies like a Banana.« 
>
