On Thursday, June 22, 2017 at 8:24:38 AM UTC-7, [email protected] wrote:
>
>
>
> On Wednesday, June 21, 2017 at 11:30:45 AM UTC-7, [email protected] 
> wrote:
>>
>>
>>
>> On Wednesday, June 21, 2017 at 10:07:31 AM UTC-7, Nikolaus Rath wrote:
>>>
>>> On Jun 21 2017, joseph via s3ql <[email protected]> wrote: 
>>> > On Tuesday, June 20, 2017 at 7:55:58 PM UTC-7, Nikolaus Rath wrote: 
>>> >> 
>>> >> On Jun 20 2017, joseph via s3ql <[email protected]> wrote: 
>>> >> > jessie has s3ql 2.11.1, stretch has 2.21. 
>>> >> > 
>>> >> > NEWS.Debian.gz seems to indicate I need to upgrade via the 
>>> >> > intermediate version 2.14, is this correct? 
>>> >> 
>>> >> No, this should work out of the box. Debian ships a special patch to 
>>> >> enable backwards compatibility with jessie. 
>>> >> 
>>> >> > root@ns3022725:~# s3qladm upgrade s3://echo-maher-org-uk-backup/ 
>>> >> > Getting file system parameters.. 
>>> >> > ERROR: Uncaught top-level exception: 
>>> >> > Traceback (most recent call last): 
>>> >> >   File "/usr/lib/s3ql/s3ql/common.py", line 564, in thaw_basic_mapping 
>>> >> >     d = literal_eval(buf.decode('utf-8')) 
>>> >> > UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: 
>>> >> > invalid start byte 
>>> >> 
>>> >> Hmm. That looks like a bug. 
>>> >> 
>>> >> Are you confident that the filesystem was unmounted cleanly the last 
>>> >> time it was mounted? In that case you can try a workaround: temporarily 
>>> >> move the "s3:=2F=2Fecho-maher-org-uk-backup*" files from ~/.s3ql/ 
>>> >> somewhere else and re-try the upgrade. Does that help? 
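>>> >> Roughly something along these lines, assuming the default ~/.s3ql 
>>> >> cache location (the holding directory name is just an example): 
>>> >>
>>> >>   # park the cached metadata/params for this bucket somewhere safe 
>>> >>   mkdir ~/s3ql-cache-saved 
>>> >>   mv ~/.s3ql/s3:=2F=2Fecho-maher-org-uk-backup* ~/s3ql-cache-saved/ 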
>>> >> 
>>> >> 
>>> > mount.log seems to think so. 
>>>
>>> Yes, that looks good. If you still have a jessie box, you can also run 
>>> fsck.s3ql from there to be 100% sure. Then try the workaround. 
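>>> (From the jessie box that would be something like 
>>> "fsck.s3ql s3://echo-maher-org-uk-backup/", with the usual 
>>> ~/.s3ql/authinfo2 in place and the file system not mounted anywhere.) 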
>>>
>>> > Before I received your response I upgraded a jessie install to 2.14 
>>> > and ran: 
>>> > 
>>> > s3qladm upgrade s3://echo-maher-org-uk-backup/ 
>>> > 
>>> > but this failed with (I think) a timeout: 
>>> > 
>>> > Encountered ConnectionClosed (server closed connection), retrying 
>>> > Backend.lookup (attempt 4)... 
>>> > ..processed 901724/2215676 objects (40.7%, 0 bytes rewritten)..Encountered 
>>> > HTTPError (500 Internal Server Error), retrying Backend.lookup (attempt 5)... 
>>> [...] 
>>> > 
>>> > Is there any way to recover from this? 
>>>
>>> You should be able to just restart the upgrade. 
>>>
>>>
>>
>> Thanks! I've restarted the upgrade, which started without errors, but it 
>> will take some time to finish. 
>>  
>>
>>> >> Do you still have access to a jessie system? If so, can you reproduce 
>>> >> the problem with a freshly created file system? 
>>> >> 
>>> >> 
>>> > I can't reproduce - I just made a new filesystem on a jessie box and it 
>>> > upgraded just fine on the stretch box. 
>>>
>>> Hmm.. So the ~/.s3ql folder wasn't shared between jessie and stretch, 
>>> right? That gives hope for the workaround. 
>>>
>>>
>> Yes - they had different .s3ql folders.
>>
>> I have one remaining jessie box with an s3ql filesystem to upgrade to 
>> stretch, which I will do in the next few days. I will let you know if I 
>> have a similar issue with that one.
>>
>> Thanks!
>>
>> Joseph 
>>
>
>
> I upgraded the filesystem from 2.11 to 2.14 on jessie, and then from 2.14 
> to 2.21 on stretch - both upgrades completed without error:
>
> root@ns3022725:~# s3qladm --backend-options tcp-timeout=200 upgrade 
> s3://echo-maher-org-uk-backup/
> Getting file system parameters..
> Using cached metadata.
>
> I am about to update the file system to the newest revision.
> You will not be able to access the file system with any older version
> of S3QL after this operation.
>
> You should make very sure that this command is not interrupted and
> that no one else tries to mount, fsck or upgrade the file system at
> the same time.
>
>
> Please enter "yes" to continue.
> > yes
> Upgrading from revision 22 to 23...
> Dumping metadata...
> ..objects..
> ..blocks..
> ..inodes..
> ..inode_blocks..
> ..symlink_targets..
> ..names..
> ..contents..
> ..ext_attributes..
> Compressing and uploading metadata...
> Wrote 994 MiB of compressed metadata.
> Cycling metadata backups...
> Backing up old metadata...
> File system upgrade complete.
>
> However, I am now unable to mount or fsck the filesystem:
>
> root@ns3022725:~# mount.s3ql --authfile=/root/.s3ql/authinfo2 
> --allow-other --backend-options tcp-timeout=200 
> --cachedir=/mnt/backup/cache/s3ql/ s3://echo-maher-org-uk-backup/ /mnt/s3ql/
> Using 10 upload threads.
> Autodetected 65474 file descriptors available for cache entries
> ERROR: Uncaught top-level exception:
> Traceback (most recent call last):
>   File "/usr/lib/s3ql/s3ql/common.py", line 564, in thaw_basic_mapping
>     d = literal_eval(buf.decode('utf-8'))
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: 
> invalid start byte
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/usr/bin/mount.s3ql", line 11, in <module>
>     load_entry_point('s3ql==2.21', 'console_scripts', 'mount.s3ql')()
>   File "/usr/lib/s3ql/s3ql/mount.py", line 129, in main
>     (param, db) = get_metadata(backend, cachepath)
>   File "/usr/lib/s3ql/s3ql/mount.py", line 363, in get_metadata
>     param = load_params(cachepath)
>   File "/usr/lib/s3ql/s3ql/common.py", line 616, in load_params
>     return thaw_basic_mapping(fh.read())
>   File "/usr/lib/s3ql/s3ql/common.py", line 566, in thaw_basic_mapping
>     raise ThawError()
> s3ql.common.ThawError: Malformed serialization data
>
> Any advice appreciated!
>
> Joseph
>
>
Sorry - that command line pulled in the old cache. Clearing the old cache 
fixed this, and everything is working now:

root@ns3022725:~# mount.s3ql --authfile=/root/.s3ql/authinfo2 --allow-other 
--backend-options tcp-timeout=200 --cachedir=/mnt/backup/cache/s3ql/ 
s3://echo-maher-org-uk-backup/ /mnt/s3ql/
Using 10 upload threads.
Autodetected 65474 file descriptors available for cache entries
WARNING: Last file system check was more than 1 month ago, running 
fsck.s3ql is recommended.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Setting cache size to 722591 MB
Mounting s3://echo-maher-org-uk-backup/ at /mnt/s3ql...
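For the record, "clearing the old cache" just meant moving the stale files 
for this bucket out of the cache directory before mounting - roughly the 
following (the holding directory name is just an example, and the exact 
file names in the cache directory may differ):

  # file system not mounted at this point; park the stale 2.11-era cache 
  mkdir /mnt/backup/cache/s3ql-old 
  mv /mnt/backup/cache/s3ql/s3:=2F=2Fecho-maher-org-uk-backup* /mnt/backup/cache/s3ql-old/ 

I assume the stale .params file there was still in the old pickled format 
(0x80 looks like the pickle header byte), which the literal_eval-based 
thaw_basic_mapping in 2.21 can't read - hence the ThawError above.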


Thanks for your advice, and your work on s3ql!

Joseph



 
