Re: [s3ql] fsck crashes

2018-04-10 Thread Nikolaus Rath
On Apr 07 2018, Alexandre Gonçalves  wrote:
>> Also, could you try to open the db in the sqlite3 command line
>> utility and execute the VACUUM command there?
>
> It gave the same error. To do the vacuuming, sqlite needs a temp dir,
> which is selected in the order explained here
>
> In my case, /var/tmp didn't have the storage needed to do the
> vacuum, which led to the error. I freed up some space and then the
> command finished without errors.
>
> To avoid the issue, maybe it's a good idea to do PRAGMA
> temp_store_directory = <--cachedir>;

Glad to hear that you resolved this! Yes, I agree this pragma should be
used. Could you file a bug at
https://bitbucket.org/nikratio/s3ql/issues?


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [s3ql] fsck crashes

2018-04-07 Thread Alexandre Gonçalves

> > The --cachedir directory /home/s3qlcache has 283G of free storage, 
> > nevertheless it breaks. Is there a limit on the database size? 
>
> I don't think so. What's interesting is that the error happens during
> the VACUUM operation, which actually shrinks the database size by
> rewriting it completely. I believe it does so by writing into a
> completely fresh file.
>
 

>
> At this point fsck.s3ql has effectively completed, so if you run it 
> again it shouldn't pick up any errors. Is that correct? 
>

I only tried it now, and yes, that's correct:

alx@zen:~$ sudo fsck.s3ql  --authfile "/myetc/s3ql/auth/s3ql_authinfo" 
--cachedir "/home/s3qlcache/"  gs://ideiao
[sudo] password for alx:
Requesting new access token
Starting fsck of gs://ideiao/
Using cached metadata.
File system is marked as clean. Use --force to force checking.



> Also, could you try to open the db in the sqlite3 command line
> utility and execute the VACUUM command there?
>

It gave the same error. To do the vacuuming, sqlite needs a temp dir,
which is selected in the order explained here

In my case, /var/tmp didn't have the storage needed to do the
vacuum, which led to the error. I freed up some space and then the
command finished without errors.

To avoid the issue, maybe it's a good idea to do PRAGMA
temp_store_directory = <--cachedir>;
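
For illustration, a rough sketch of that idea using apsw directly (the
paths and the exact quoting are just assumptions on my part, not a tested
patch):

import apsw

# Sketch only: open the cache database and make SQLite put the temporary
# files it needs for VACUUM next to it, instead of under /var/tmp.
# The path below is the cache db from this thread; adjust as needed.
cachedir = '/home/s3qlcache'
conn = apsw.Connection(cachedir + '/gs:=2F=2Fideiao=2F.db')
cur = conn.cursor()
# PRAGMA statements do not accept bound parameters, so the path is
# interpolated into the statement.
cur.execute("PRAGMA temp_store_directory = '%s'" % cachedir)
cur.execute('VACUUM')
conn.close()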


Thanks.
Alexandre



Re: [s3ql] fsck crashes

2018-04-07 Thread Nikolaus Rath
On Apr 07 2018, Alexandre Gonçalves  wrote:
>> >> You need storage in the --cachedir directory.
>> >>
>> >
>> > I'm not sure what you mean by this. I didn't mention before, but I
>> > removed all the contents from the --cachedir to see if it worked.
>>
>> S3QL is trying to write data into the directory that you specify with 
>> --cachedir (defaults to ~/.s3ql/), and getting an out of space 
>> error. You need to provide more space in the filesystem that contains 
>> this directory. 
>>
>
> I moved the --cachedir to another filesystem.
>
> alx@zen:/home/s3qlcache$ sudo fsck.s3ql  --authfile 
> "/myetc/s3ql/auth/s3ql_authinfo" --cachedir "/home/s3qlcache/"  gs://ideiao
> Requesting new access token
[...]
> Traceback (most recent call last):
>   File "/usr/local/bin/fsck.s3ql", line 11, in <module>
>     load_entry_point('s3ql==2.22', 'console_scripts', 'fsck.s3ql')()
>   File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/fsck.py", line 1293, in main
>     db.execute('VACUUM')
>   File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/database.py", line 98, in execute
>     self.conn.cursor().execute(*a, **kw)
>   File "src/cursor.c", line 236, in resetcursor
> apsw.FullError: FullError: database or disk is full
>
> alx@zen:/home/s3qlcache$ df -h
> Filesystem Size  Used Avail Use% Mounted on
> [...]
> /dev/mapper/vg_data-homes  2.7T  2.3T  268G  90% /home
> [...]
>
>
> The --cachedir directory /home/s3qlcache has 283G of free storage, 
> nevertheless it breaks. Is there a limit on the database size?

I don't think so. What's interesting is that the error happens during
the VACUUM operation, which actually shrinks the database size by
rewriting it completely. I believe it does so by writing into a
completely fresh file.

At this point fsck.s3ql has effectively completed, so if you run it
again it shouldn't pick up any errors. Is that correct?

Also, could you try to open the db in the sqlite3 command line
utility and execute the VACUUM command there?
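
If you'd rather do it from Python than from the sqlite3 shell, something
along these lines should be equivalent (just a sketch; the db filename is
the one from your transcript):

import apsw

# Open the cache database directly, outside of fsck.s3ql, and VACUUM it.
conn = apsw.Connection('/home/s3qlcache/gs:=2F=2Fideiao=2F.db')
conn.cursor().execute('VACUUM')
conn.close()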

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck crashes

2018-04-07 Thread Alexandre Gonçalves
Hi,

Thanks for the reply.


> >> You need storage in the --cachedir directory.
> >>
> >
> > I'm not sure what you mean by this. I didn't mention before, but I
> > removed all the contents from the --cachedir to see if it worked.
>
> S3QL is trying to write data into the directory that you specify with 
> --cachedir (defaults to ~/.s3ql/), and getting an out of space 
> error. You need to provide more space in the filesystem that contains 
> this directory. 
>

I moved the --cachedir to another filesystem.

alx@zen:/home/s3qlcache$ sudo fsck.s3ql  --authfile 
"/myetc/s3ql/auth/s3ql_authinfo" --cachedir "/home/s3qlcache/"  gs://ideiao
Requesting new access token
Starting fsck of gs://ideiao/
Backend reports that file system is still mounted elsewhere. Either
the file system has not been unmounted cleanly or the data has not yet
propagated through the backend. In the later case, waiting for a while
should fix the problem, in the former case you should try to run fsck
on the computer where the file system has been mounted most recently.
You may also continue and use whatever metadata is available in the
backend. However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN
UPLOADED OR MODIFIED SINCE THE LAST SUCCESSFULL METADATA UPLOAD.
Moreover, files and directories that you have deleted since then MAY
REAPPEAR WITH SOME OF THEIR CONTENT LOST.
Enter "continue, I know what I am doing" to use the outdated data anyway:
continue, I know what I am doing
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
Requesting new access token
..processed 321000 objects so far..WARNING: Deleted spurious object 330154
WARNING: Deleted spurious object 330155
WARNING: Deleted spurious object 330156
WARNING: object 330145 only exists in table but not in backend, deleting
WARNING: File may lack data, moved to /lost+found: 
/homes/hourly.0/home/alx/.s3ql/mount.log
WARNING: object 330146 only exists in table but not in backend, deleting
WARNING: File may lack data, moved to /lost+found: 
/homes/hourly.0/home/samba/shares/dev/WORK_BKP/PCALX/sync.ffs_lock

Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...



Checking blocks (checksums)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
WARNING: Inode 21905216 (/homes/.sync/home/backups/ideiao.com/weekly.0/backup/site/html/joomla3/tmp/install_5433e6d66c7f1/html/com_easyblt, setting from 15 to 14
[... lots of warnings as above...]
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Requesting new access token
Wrote 543 MiB of compressed metadata.
Cycling metadata backups...
Backing up old metadata...
Cleaning up local metadata...
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/local/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.22', 'console_scripts', 'fsck.s3ql')()
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/fsck.py", line 1293, in main
    db.execute('VACUUM')
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/database.py", line 98, in execute
    self.conn.cursor().execute(*a, **kw)
  File "src/cursor.c", line 236, in resetcursor
apsw.FullError: FullError: database or disk is full



alx@zen:/home/s3qlcache$ df -h
Filesystem Size  Used Avail Use% Mounted on
[...]
/dev/mapper/vg_data-homes  2.7T  2.3T  268G  90% /home
[...]


The --cachedir directory /home/s3qlcache has 283G of free storage, 
nevertheless it breaks. Is there a limit on the database size?

To test the directory, I duplicated the db file, just to make sure that the 
problem was not local storage:



alx@zen:/home/s3qlcache$ ls -lah
total 15G
drwxr-xr-x  2 root root 4.0K Apr  7 04:22 .
drwxr-xr-x 12 2002 5001 4.0K Apr  7 03:25 ..
-rw---  1 root root  15G Apr  7 07:27 gs:=2F=2Fideiao=2F.db
-rw-r--r--  1 root root  202 Apr  7 07:25 gs:=2F=2Fideiao=2F.params


alx@zen:/home/s3qlcache$ sudo cp gs\:\=2F\=2Fideiao\=2F.db 
gs\:\=2F\=2Fideiao\=2F.db.test


alx@zen:/home/s3qlcache$ ls -lah
total 30G
drwxr-xr-x  2 root root 4.0K Apr  7 11:59 .
drwxr-xr-x 12 2002 5001 4.0K Apr  7 

Re: [s3ql] fsck crashes

2018-04-06 Thread Nikolaus Rath
Hi Alexandre,

A: Because it confuses the reader.
Q: Why?
A: No.
Q: Should I write my response above the quoted reply?

..so please quote properly, as I'm doing in the rest of this mail:

On Apr 06 2018, Alexandre Gonçalves  wrote:
>> On Apr 05 2018, Alexandre Gonçalves wrote:
>>> > s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:1820) 
>>> > apsw.FullError: database or disk is full 
>>> [..] 
>>> > 
>>> > There is plenty of storage available. Can you help me to solve this? 
>>>
>>> I would be very surprised if that was the case. Did you check the right 
>>> filesystem? You need storage in the --cachedir directory. 
>>
>> Did you check the right filesystem?
>>
>
> Yes, I checked the right filesystem. By the way, it's the only one I have.

That answer strongly suggests that you checked the wrong filesystem,
sorry :-).

>> You need storage in the --cachedir directory.
>>
>
> I'm not sure what you mean by this. I didn't mention before, but I removed 
> all the contents from the --cachedir to see if it worked.

S3QL is trying to write data into the directory that you specify with
--cachedir (defaults to ~/.s3ql/), and getting an out of space
error. You need to provide more space in the filesystem that contains
this directory.
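
If it helps, a quick way to double-check from Python how much free space
the filesystem containing the cache directory actually has (just a sketch;
replace the path with your --cachedir):

import os
import shutil

# Report the free space of the filesystem that contains the cache
# directory. The default location is used here as an example.
cachedir = os.path.expanduser('~/.s3ql')
usage = shutil.disk_usage(cachedir)
print('Free space for %s: %.1f GiB' % (cachedir, usage.free / 2**30))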

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck crashes

2018-04-06 Thread Alexandre Gonçalves
Thanks for your reply.

 

> Did you check the right filesystem?
>

Yes, I checked the right filesystem. By the way, it's the only one I have.



> You need storage in the --cachedir directory.
>

I'm not sure what you mean by this. I didn't mention before, but I removed 
all the contents from the --cachedir to see if it worked.  


Thanks.


On Thursday, 5 April 2018 at 20:49:11 UTC+1, Nikolaus Rath wrote:
>
> On Apr 05 2018, Alexandre Gonçalves wrote:
> > s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:1820) 
> > apsw.FullError: database or disk is full 
> [..] 
> > 
> > There is plenty of storage available. Can you help me to solve this? 
>
> I would be very surprised if that was the case. Did you check the right 
> filesystem? You need storage in the --cachedir directory. 
>
>
> Best, 
> -Nikolaus
>
> -- 
> GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F 
>
>  »Time flies like an arrow, fruit flies like a Banana.« 
>



Re: [s3ql] fsck crashes

2018-04-05 Thread Nikolaus Rath
On Apr 05 2018, Alexandre Gonçalves  wrote:
> s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:1820)
> apsw.FullError: database or disk is full
[..]
>
> There is plenty of storage available. Can you help me to solve this?

I would be very surprised if that was the case. Did you check the right
filesystem? You need storage in the --cachedir directory.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«
