Hi, I am trying to automatically mount my S3QL file system (1.17-1~precise1) 
using the s3ql.conf upstart script from 
http://www.rath.org/s3ql-docs/mount.html#automatic-mounting
(adapted with my credentials, storage URL and mount point, of course).
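
For reference, my /etc/init/s3ql.conf is essentially the documented template; 
roughly the sketch below, with a placeholder bucket name and mount point in 
place of my real ones:


```
#!shell

# /etc/init/s3ql.conf -- bucket name and mount point are placeholders
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [016]

env STORAGEURL="s3://mybucket"
env MOUNTPOINT="/mnt/s3ql"

expect stop

script
    # Redirect stdout/stderr into syslog via a fifo
    # (this is where the "s3ql:" lines below come from)
    DIR=$(mktemp -d)
    mkfifo "$DIR/LOG_FIFO"
    logger -t s3ql -p local0.info < "$DIR/LOG_FIFO" &
    exec > "$DIR/LOG_FIFO"
    exec 2>&1
    rm -rf "$DIR"

    modprobe fuse
    # --batch: fsck must not ask questions at boot; it exits instead
    fsck.s3ql --batch "$STORAGEURL"
    exec mount.s3ql --upstart "$STORAGEURL" "$MOUNTPOINT"
end script

pre-stop script
    umount.s3ql "$MOUNTPOINT"
end script
```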

Anyhow, the file system isn't mounted when the machine (Ubuntu 12.04) reboots.


```
#!shell

grep s3ql /var/log/syslog
```

says:


```
#!text

Apr 18 20:12:29 ip-172-31-15-107 s3ql: Starting fsck of s3://mybucket
Apr 18 20:12:29 ip-172-31-15-107 s3ql: Ignoring locally cached metadata (outdated).
Apr 18 20:12:30 ip-172-31-15-107 s3ql: (in batch mode, exiting)
Apr 18 20:12:30 ip-172-31-15-107 s3ql: Backend reports that file system is still mounted elsewhere. Either
Apr 18 20:12:30 ip-172-31-15-107 s3ql: the file system has not been unmounted cleanly or the data has not yet
Apr 18 20:12:30 ip-172-31-15-107 s3ql: propagated through the backend. In the later case, waiting for a while
Apr 18 20:12:30 ip-172-31-15-107 s3ql: should fix the problem, in the former case you should try to run fsck
Apr 18 20:12:30 ip-172-31-15-107 s3ql: on the computer where the file system has been mounted most recently.
Apr 18 20:12:30 ip-172-31-15-107 s3ql: Enter "continue" to use the outdated data anyway:
Apr 18 20:12:30 ip-172-31-15-107 s3ql: > 
Apr 18 20:12:30 ip-172-31-15-107 kernel: [54426751.762698] init: s3ql main process (648) terminated with status 1
```


When I then try to mount it manually with mount.s3ql, the output is:


```
#!text

Using 4 upload threads.
Using cached metadata.
File system damaged or not unmounted cleanly, run fsck!
```


I can then run fsck.s3ql manually, after which mounting works.
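
Concretely, this is the manual sequence that recovers things (bucket name and 
mount point are placeholders for my real ones):


```
#!shell

fsck.s3ql s3://mybucket             # interactive fsck, marks the fs clean
mount.s3ql s3://mybucket /mnt/s3ql
```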

What can I do to make the automatic mount work reliably at boot?



By the way, this all worked properly when I was still using the S3QL version 
from the official Ubuntu 12.04 repositories, which is strange.
