Re: [s3ql] S3QL 5.2.0 - s3qlcp taking a long time

2024-05-15 Thread Nikolaus Rath
"'Shannon Dealy' via s3ql"  writes:
> I have been using S3QL 3.7.3 (and earlier versions) with Amazon S3 for many
> years. Due to a variety of circumstances I have just set up version 5.2.0 using
> the s3c4 backend with Backblaze. Part of my standard incremental backup script
> is to use s3qlcp to make a copy of the latest backup into a timestamped
> directory. The s3qlcp command has been running for around 2 hours now with a
> subset of the same data set in my old backups, and it previously never took
> even a quarter of this time to do the copy.
>
> Is anyone aware of problems like this?

This is the first report that I've seen.

> As I understand it, the s3qlcp copy is
> strictly a database operation so the backend shouldn't even be relevant to
> performance of this command other than the database upload after the copy is
> complete.

That's right.
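
For reference, the pattern described above boils down to something like this
(a minimal sketch, not the actual script; mount point, source directory and
snapshot naming are placeholders):

MNT=/mnt/s3ql-backup
NEW="$MNT/$(date +%Y-%m-%d_%H%M%S)"

# Update the working copy, then snapshot it with s3qlcp (a copy-on-write,
# database-only operation; the storage objects themselves are shared).
rsync -a --delete /home/ "$MNT/latest/"
s3qlcp "$MNT/latest" "$NEW"

# Optionally protect the snapshot against accidental modification.
s3qllock "$NEW"

Since s3qlcp only duplicates database entries, the copy itself should normally
be quick compared to the rsync step, regardless of the backend.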


Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/878r0b5yt0.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 5.2.0 has been released

2024-04-19 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 5.2.0.

From the changelog:

S3QL 5.2.0 (2024-04-19)
===

* S3QL now needs Python 3.8+. Python 3.7 reached end of life on 2023-06-27.

* S3QL no longer depends on the packaging module. It was an undocumented
  dependency used only for a simple version comparison in the Swift backend;
  that comparison is no longer necessary.

* There is a new s3c4 backend, suitable for storage providers offering an
  S3-compatible API with v4 signatures (see the sketch below).
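
As an illustration, only the URL scheme changes compared to the existing s3c
backend (a minimal sketch; endpoint, bucket, prefix and credentials are
placeholders, not a recommendation for any particular provider):

# ~/.s3ql/authinfo2
[backblaze]
storage-url: s3c4://s3.us-east-005.backblazeb2.com/my-bucket/my-prefix/
backend-login: <key id>
backend-password: <application key>

# then, for example:
mkfs.s3ql  s3c4://s3.us-east-005.backblazeb2.com/my-bucket/my-prefix/
mount.s3ql s3c4://s3.us-east-005.backblazeb2.com/my-bucket/my-prefix/ /mnt/s3ql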

The following people have contributed code to this release:

Daniel Jagszent 
Nikolaus Rath 
xeji <36407913+x...@users.noreply.github.com>


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87v84d674y.fsf%40vostro.rath.org.


Re: [s3ql] Enforce python3 version?

2024-04-18 Thread Nikolaus Rath
Hi, 

The files in bin/ are not built at all, they're static. Run setup.py install 
instead. 
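
For example, setup.py install rewrites the installed scripts' shebang lines to
the interpreter that ran it (a sketch; the prefix is a placeholder):

python3.11 setup.py install --prefix=/usr/local
head -n1 /usr/local/bin/mount.s3ql   # should now point at python3.11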

Best, 
Nikolaus

On Thu, 18 Apr 2024, at 10:00, rabidmuta...@gmail.com wrote:
> "python3.11 setup.py [...]“  is exactly what I do, but the built files in 
> bin/ have:
> ```
> ~/s3ql/s3ql-5.1.3 # egrep "^#!" bin/*
> bin/fsck.s3ql:#!/usr/bin/env python3
> bin/mkfs.s3ql:#!/usr/bin/env python3
> bin/mount.s3ql:#!/usr/bin/env python3
> bin/s3ql_oauth_client:#!/usr/bin/env python3
> bin/s3ql_verify:#!/usr/bin/env python3
> bin/s3qladm:#!/usr/bin/env python3
> bin/s3qlcp:#!/usr/bin/env python3
> bin/s3qlctrl:#!/usr/bin/env python3
> bin/s3qllock:#!/usr/bin/env python3
> bin/s3qlrm:#!/usr/bin/env python3
> bin/s3qlstat:#!/usr/bin/env python3
> bin/umount.s3ql:#!/usr/bin/env python3
> ```
> 
> On Thursday, April 18, 2024 at 6:36:31 PM UTC+10 Nikolaus Rath wrote:
>> Hello, 
>> 
>> Which "build scripts" do you mean? You can just call "python3.x setup.py 
>> [...]" explicitly, and the files that this installs should use the same 
>> interpreter automatically.
>> 
>> Best, 
>> Nikolaus
>> 
>> On Thu, 18 Apr 2024, at 07:42, rabidmuta...@gmail.com wrote:
>>> This is a little sad, but my host has python3.6 as the default python, and 
>>> I use python3.11 for the build.
>>> 
>>> However, all of the build scripts use `#! ... python3` which starts 3.6.
>>> 
>>> Is there any way that I can get the build process to either set the python 
>>> version in the "#!" for each file (3.11) or use the python version it was 
>>> built with?
>>> 
>>> Otherwise, I guess I just edit all the python files in bin/, but that seems 
>>> like a bad solution.
>>> 
>>> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/2f55aef1-a6e1-4a42-a9dd-dfe43354f28b%40app.fastmail.com.


Re: [s3ql] Enforce python3 version?

2024-04-18 Thread Nikolaus Rath
Hello, 

Which "build scripts" do you mean? You can just call "python3.x setup.py [...]“ 
explicitly, and the files that this installs should use the same interpreter 
automatically.

Best, 
Nikolaus

On Thu, 18 Apr 2024, at 07:42, rabidmuta...@gmail.com wrote:
> This is a little sad, but my host has python3.6 as the default python, and I 
> use python3.11 for the build.
> 
> However, all of the build scripts use `#! ... python3` which starts 3.6.
> 
> Is there any way that I can get the build process to either set the python 
> version in the "#!" for each file (3.11) or use the python version it was 
> built with?
> 
> Otherwise, I guess I just edit all the python files in bin/, but that seems 
> like a bad solution.
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/bbebae5d-e95e-4fb3-9833-edb0fed5f925%40app.fastmail.com.


Re: [s3ql] Cant mount or fsck filesystem

2024-02-10 Thread Nikolaus Rath
Hi Rob,

A: Because it confuses the reader.
Q: Why?
A: No.
Q: Should I write my response above the quoted reply?

...so please quote properly, as I'm doing in the rest of this mail:


> 
> On Saturday 10 February 2024 at 10:16:21 UTC Nikolaus Rath wrote:
>> Rob Shaw  writes:
>> 
>> > I want to mount the files on the host again.
>> >
>> > # mkfs.s3ql --authfile=/data/s3ql-authinfo --cachedir=/data/.s3ql
>> > s3c://xxx
>> 
>> 
>> mkfs.s3ql does not mount the filesystem, it creates it. Please take a
>> look at https://www.rath.org/s3ql-docs/mkfs.html and 
>> https://www.rath.org/s3ql-docs/mount.html.
> Thanks. I understand.  
> 
> However, when I try to mount I get:
> 
> ERROR: Backend reports that fs is still mounted elsewhere, aborting.
> 
> I'm 100% sure that is not the case, so that is why i'm trying to fsck.

Yep. And you still haven't provided the output that you get when you run 
fsck.s3ql. So there's not much anyone can do to help you.
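
Something along these lines captures everything that is needed (a sketch; the
options and the storage URL placeholder are the ones from your earlier mails):

fsck.s3ql --debug --authfile=/data/s3ql-authinfo --cachedir=/data/.s3ql \
    s3c://xxx 2>&1 | tee fsck.log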

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/446f46f0-39f0-43bb-8e96-05910eae55f1%40app.fastmail.com.


Re: [s3ql] Cant mount or fsck filesystem

2024-02-10 Thread Nikolaus Rath
Rob Shaw  writes:

> I want to mount the files on the host again.
>
> # mkfs.s3ql --authfile=/data/s3ql-authinfo --cachedir=/data/.s3ql 
> s3c://xxx


mkfs.s3ql does not mount the filesystem, it creates it. Please take a
look at https://www.rath.org/s3ql-docs/mkfs.html and 
https://www.rath.org/s3ql-docs/mount.html.



Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87h6igaakw.fsf%40vostro.rath.org.


Re: [s3ql] Cant mount or fsck filesystem

2024-02-09 Thread Nikolaus Rath
Rob Shaw  writes:

> Hello.
>
> I hope someone can help me.
>
> I have been using s3ql for quite a while without problems; however, after 
> an unexpected server reboot, I can no longer mount my filesystem.
>
> When I try to mount, I get:
> ERROR: Backend reports that fs is still mounted elsewhere, aborting.
>
> I have had this before, and an fsck normally fixes things.
>
> After running an fsck I get:

What was the output of fsck? Did it work?

>
> ERROR: Refusing to overwrite existing file system! (use `s3qladm clear` to 
> delete)

This sounds like you're trying to recreate the filesystem with
mkfs.s3ql. Are you sure this is what you want to do?


Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87jznd9ywt.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 5.1.3 has been released

2023-12-08 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 5.1.3.

From the changelog:

S3QL 5.1.3 (2023-12-08)
===

* fsck.s3ql no longer attempts to verify unclean metadata backups, which
  in the past led to spurious warnings and crashes.

* Fixed a crash in the b2 backend.

The following people have contributed code to this release:

Nikolaus Rath 
Paul Harris 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/c828fb7e-17a9-47dc-aeaa-70455e1f76d1%40app.fastmail.com.


Re: [s3ql] backblaze b2 backend

2023-12-03 Thread Nikolaus Rath
Hi,

The s3 backend uses v4 auth.

The s3c backend uses the original S3 specification (i.e. v2 signatures) from 
when it was first published.

Best,
-Nikolaus

On Fri, 1 Dec 2023, at 14:28, Paul Harris wrote:
> Sorry i think i hit the wrong button to reply...
> I thought s3 only supports V4 auth now?  I thought I read that somewhere.
> Or was it b2 only supports V4.
> 
> Why don't we use V4 auth?
> 
> 
> On Friday, December 1, 2023 at 4:24:00 PM UTC+8 Nikolaus Rath wrote:
>> Hi,
>> 
>> What this means is that Backblaze isn't as fully S3 compatible as it perhaps 
>> advertises. It seems what's missing in particular is support for "V2 
>> Signature auth".
>> 
>> Best,
>> -Nikolaus
>> 
>> On Wed, 29 Nov 2023, at 02:56, Paul Harris wrote:
>>> Doesn't work for me.
>>> 
>>> In authinfo2, I have
>>> [s3c-test]
>>> backend-login: 00etcetc
>>> backend-password: Ketcetc
>>> test-fs: s3c://s3.us-east-005.backblazeb2.com/bucketnameEtcEtc
>>> 
>>> I run
>>> python3 -m pytest tests/t1_backends.py::test_readinto_write_fh
>>> 
>>> and it says HTTPError: 400 (The V2 signature auth is not supported)
>>> 
>>> 
>>> On Wednesday, November 29, 2023 at 9:38:31 AM UTC+8 Daniel Jagszent wrote:
>>>> Hello Paul,
>>>> 
>>>>> [...] Question: how can I set up s3c backend for b2?
>>>>> 
>>>>> I thought I'd run a test for that.
>>>>> 
>>>> https://www.rath.org/s3ql-docs/backends.html#s3-compatible explains the 
>>>> connection string syntax.
>>>> 
>>>> The hostname to use (called endpoint) can be found here:
>>>> https://www.backblaze.com/docs/cloud-storage-call-the-s3-compatible-api
>>> 
>>> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/32a81980-5870-4971-8575-55efbab37b20%40app.fastmail.com.


Re: [s3ql] backblaze b2 backend

2023-12-01 Thread Nikolaus Rath
Hi,

What this means is that Backblaze isn't as fully S3 compatible as it perhaps 
advertises. It seems what's missing in particular is support for "V2 Signature 
auth".

Best,
-Nikolaus

On Wed, 29 Nov 2023, at 02:56, Paul Harris wrote:
> Doesn't work for me.
> 
> In authinfo2, I have
> [s3c-test]
> backend-login: 00etcetc
> backend-password: Ketcetc
> test-fs: s3c://s3.us-east-005.backblazeb2.com/bucketnameEtcEtc
> 
> I run
> python3 -m pytest tests/t1_backends.py::test_readinto_write_fh
> 
> and it says HTTPError: 400 (The V2 signature auth is not supported)
> 
> 
> On Wednesday, November 29, 2023 at 9:38:31 AM UTC+8 Daniel Jagszent wrote:
>> Hello Paul,
>> 
>>> [...] Question: how can I set up s3c backend for b2?
>>> 
>>> I thought I'd run a test for that.
>>> 
>> https://www.rath.org/s3ql-docs/backends.html#s3-compatible explains the 
>> connection string syntax.
>> 
>> The hostname to use (called endpoint) can be found here:
>> https://www.backblaze.com/docs/cloud-storage-call-the-s3-compatible-api
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/4db4657f-34e3-4ddd-941a-54e94f706223%40app.fastmail.com.


Re: [s3ql] ModuleNotFoundError: No module named 'packaging'

2023-10-25 Thread Nikolaus Rath
On Oct 23 2023, "nicol...@gmail.com"  wrote:
> Hi Everyone,
>
> I built s3ql 5.1.2 with those deps :
[...]
>
> Then when running s3qladm :
>
> Traceback (most recent call last):
>     from packaging.version import Version
> ModuleNotFoundError: No module named 'packaging'
>
> Any idea why ?

Seems like a pretty clear error message to me. You did not install the
"packaging" module.

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87lebrrs46.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 5.1.2 has been released

2023-09-27 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 5.1.2.

From the changelog:

S3QL 5.1.2 (2023-09-26)
===

* Various small bugfixes; the b2 backend should be working again.


The following people have contributed code to this release:

Aloxaf 
Nikolaus Rath 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87y1gsm6nr.fsf%40vostro.rath.org.


Re: [s3ql] Change metadata block size

2023-09-13 Thread Nikolaus Rath
Hi Antoine,

This would work if there were no existing metadata objects in the backend. If 
you create additional objects with a different blocksize, you'll probably run 
into trouble. Removing all the existing ones might work, but may be a bit risky 
if something goes wrong...

Best,
-Nikolaus

On Tue, 12 Sep 2023, at 09:23, Antoine Colombier wrote:
> Hi Nikolaus and thanks for your reply.
> 
> I have started looking at the code, and as I thought I understood how to 
> implement this, I came across the upgrade function in src/s3ql/adm.py. Am I 
> correct in thinking that adding a flag allowing one to shortcut the following 
> condition would be enough?
> 
> elif local_params['revision'] >= CURRENT_FS_REV:
>     print('File system already at most-recent revision')
>     return
> 
> If yes, is there any risk or potential side effect of upgrading a filesystem 
> again that I should be aware of and potentially guard against? At first glance, 
> this function looks idempotent, but I'm not yet familiar with all the helpers 
> it relies on.
> 
> Best,
> Antoine
> On Friday, 8 September 2023 at 13:14:28 UTC+1, Nikolaus Rath wrote:
>> On Sep 06 2023, Antoine Colombier  wrote: 
>> > Hi all, 
>> > 
>> > I have recently started a filesystem migration to a new S3QL filesystem 
>> > using the S3 backend and I went with the default setting for metadata 
>> > block 
>> > size. 
>> > 
>> > Unfortunately, the FS will be storing about 23 million files and the 
>> > database is growing fast (already 28268 blocks of 64k, and I've barely 
>> > uploaded a third of the total data). 
>> > 
>> > Since the copy process is due to take 10+ days (already 3 days in), I was 
>> > wondering if I could change the metadata block size after creation, since (if I 
>> > understand correctly) this is only used to segment the SQLite DB? 
>> 
>> In principle it's possible. In practice, no one has written the code to 
>> do this. 
>> 
>> 
>> Best, 
>> -Nikolaus
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/8f9c8cbb-4362-47a1-8898-998a79cf0dba%40app.fastmail.com.


Re: [s3ql] Re: Can you help with S3QL code reviews?

2023-09-08 Thread Nikolaus Rath
Great, thank you!

Best,
-Nikolaus

On Sep 06 2023, Antoine Colombier  wrote:
> Hi,
>
> I'd be happy to try and review things I feel comfortable with (usually more 
> Python).
>
> My handle is `acolombier`
>
> Best,
> Antoine
>
> On Sunday, 23 July 2023 at 19:28:01 UTC+1, Nikolaus Rath wrote:
>
>> Hi all,
>>
>> Could you help with S3QL development by doing code reviews? If so,
>> please let me know your GitHub handle and I'll CC when there are pull
>> requests to review.
>>
>>
>> (In the last weeks, I've somewhat randomly tagged Daniel Jagszent in
>> some pull requests because he's done reviews in the past, and he's
>> identified some bugs that eluded the unit tests. Therefore, I think it
>> would be great if we could find a larger pool of people who look at code
>> before it's submitted).
>>
>>
>> Best,
>> -Nikolaus
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/871qf8lv7k.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 5.1.1 has been released

2023-08-06 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 5.1.1.

From the changelog:

S3QL 5.1.1 (2023-08-06)
===

* Fixed a DATA LOSS issue: Metadata upload now works correctly if the cache
  directory contains a symlink in its path. In S3QL 5.0.0 and 5.1.0, this
  would result in metadata silently not being uploaded to the backend.

NOTE: Due to an unfortunate process failure, the secret key for signing
the S3QL 5.1.x release series was lost. This series will therefore be
signed with the same key as S3QL 5.0.x
(signify/s3ql-5.0.pub). Attempting to verify the signature against
signify/s3ql-5.1.pub will fail.

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87h6pcqy9l.fsf%40vostro.rath.org.


[s3ql] Can you help with S3QL code reviews?

2023-07-23 Thread Nikolaus Rath
Hi all,

Could you help with S3QL development by doing code reviews? If so,
please let me know your GitHub handle and I'll CC when there are pull
requests to review.


(In the last weeks, I've somewhat randomly tagged Daniel Jagszent in
some pull requests because he's done reviews in the past, and he's
identified some bugs that eluded the unit tests. Therefore, I think it
would be great if we could find a larger pool of people who look at code
before it's submitted).


Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87zg3mjxee.fsf%40vostro.rath.org.


Re: [s3ql] Problems mounting upgraded filesystem

2023-07-16 Thread Nikolaus Rath
Hi,

Thanks for the report! A few questions:

 • Does it work if you try with a different (empty) cache directory? (See the
   sketch below.)
 • Can you reproduce the problem with a new filesystem (created with a
   previous release)?
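
For the first point, a minimal sketch (paths and storage URL are placeholders):

mkdir -p /tmp/s3ql-fresh-cache
mount.s3ql --cachedir /tmp/s3ql-fresh-cache gs://my-bucket /mnt/s3ql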

Best,
Nikolaus

On Sun, 16 Jul 2023, at 05:13, Xomex wrote:
> I've happily been using s3ql for many years and across many versions. With 
> backends currently on Amazon, Wasabi, and Google, using s3://, s3c://, and 
> gs://.
> With the recent release of v5.0.0 I upgraded all the backend filesystems 
> according to the docs. This seemed to go smoothly although my internet has 
> been a bit flaky lately.
> 
> Now, I can't mount the upgraded filesystems, getting the dreaded:
> "ERROR: File system revision too old, please run `s3qladm upgrade` first."
> 
> fsck.s3ql --force works fine on all systems, takes a while but completes 
> without error.
> 
> What's the most likely problem here and how do I fix it? Is it possible that 
> something local is stale or didn't get completed in the filesystem upgrade?
> 
> Appreciate any help or suggestions. Thanks.
> 
> 
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/88c9b35f-4833-4bd8-a223-422297c1ecbb%40app.fastmail.com.


[s3ql] Anyone using S3QL with Backblaze B2?

2023-07-15 Thread Nikolaus Rath
Hello,

Is anyone using S3QL with Backblaze B2? If so, it would be great if you
could grab the most recent revision from the next branch and test it:

https://github.com/s3ql/s3ql/tree/next

There have been lots of changes throughout all the backends, but I do
not have a Backblaze account and there is no mock server available
either, so I've been unable to test any of them.

Alternatively, if someone could give me credentials for a test server,
that would also help.


Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87fs5p142d.fsf%40vostro.rath.org.


Re: [s3ql] longintrepr.h missing (s3ql 4.0.0 with Python 3.11)

2023-07-10 Thread Nikolaus Rath


On Fri, 7 Jul 2023, at 17:06, 'jos...@maher.org.uk' via s3ql wrote:
> 
> 
> In python3.11, longintrepr.h got moved; it's now in 
> /usr/include/python3.11/cpython/longintrepr.h
> 
> Is there an easy fix for this?

Running `setup.py build_cython` (with a recent Cython installed) should fix 
this.
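
Concretely, something along these lines (a sketch; use whichever interpreter
you are building with):

python3.11 -m pip install --upgrade cython
python3.11 setup.py build_cython
python3.11 setup.py build_ext --inplace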

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/4deb8114-bfc8-4c4e-a660-ecda202636e7%40app.fastmail.com.


[s3ql] [ANNOUNCE] S3QL 5.0.0 has been released

2023-07-08 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 5.0.0.

From the changelog:

S3QL 5.0.0 (2023-07-08)
===

* The internal file system revision has changed. File systems created with
  this version of S3QL are NOT COMPATIBLE with prior S3QL versions.

  Existing file systems must be upgraded before they can be used with current
  S3QL versions. This procedure is NOT REVERSIBLE.

  To update an existing file system, use the `s3qladm upgrade` command. This
  upgrade should not take longer than a regular mount + unmount sequence.

* S3QL no longer supports storage backends that do not provide immediate
  consistency.

* S3QL no longer maintains the entire filesystem metadata in a single storage
  object. Instead, the database file is distributed across multiple backend
  objects with a block size configured at mkfs time. This means that (1) S3QL
  also no longer needs to upload the entire metadata object on unmount; and
  (2) there is no longer a size limit on the metadata.

* The Google Storage backend now retries on network errors when doing the
  initial validation of the bucket.


The following people have contributed code to this release:

Daniel Jagszent 
halsbox 
Nikolaus Rath 
Viktor Szépe 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/875y6ulg45.fsf%40vostro.rath.org.


Re: [s3ql] aws s3 metadata

2023-06-21 Thread Nikolaus Rath
Hi,

That'd be another option.

In any case, someone would need to write the code and submit a pull
request for this to happen.

Best,
-Nikolaus

On Jun 20 2023, Peter Marshall  wrote:
> Or just disable etag checking based on a configuration option, and let the 
> user decide?
>
> The documentation can explain the consequences - i.e. none if encryption 
> used, and
> potential undetectable corruption if not.
>
> That way when all etags are md5 checksums, we don't lose anything. And when 
> they are not,
> we can mount what is currently a broken FS by disabling the check.
>
> On 19/06/2023 21:57, Nikolaus Rath wrote:
>> On Jun 19 2023, Peter Marshall  wrote:
>>> Could the md5 (or some other signature) of the data be stored in metadata, 
>>> and we check
>>> that instead of the Etag on reading? I've only briefly looked at the source 
>>> - maybe an
>>> existing header is suitable.
>> Yes, that is possible in principle but not currently done. We'd have to
>> extend the metadata format to store this checksum.
>>
>> I'm just not convinced that it's worth it, since this effectively
>> duplicates what's already done when using encryption.
>>
>> So perhaps the right answer is to disable ETag checking completely and
>> require encryption to be used? Or disable it when encryption is active,
>> so that it only affects the unencrypted case?
>>
>> Best,
>> -Nikolaus
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87ilbhjm0j.fsf%40vostro.rath.org.


Re: [s3ql] aws s3 metadata

2023-06-19 Thread Nikolaus Rath
On Jun 19 2023, Peter Marshall  wrote:
> Could the md5 (or some other signature) of the data be stored in metadata, 
> and we check
> that instead of the Etag on reading? I've only briefly looked at the source - 
> maybe an
> existing header is suitable.

Yes, that is possible in principle but not currently done. We'd have to
extend the metadata format to store this checksum.

I'm just not convinced that it's worth it, since this effectively
duplicates what's already done when using encryption.

So perhaps the right answer is to disable ETag checking completely and
require encryption to be used? Or disable it when encryption is active,
so that it only affects the unencrypted case?

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87fs6nmdhb.fsf%40vostro.rath.org.


Re: [s3ql] aws s3 metadata

2023-06-19 Thread Nikolaus Rath
Thanks for digging this up!

I do not think this is of much help, unfortunately. S3QL doesn't know
how the object was originally uploaded (let alone what the MD5 of the
parts was), so it still can't check the ETag on download.

Best,
-Nikolaus

On Jun 17 2023, "'r0ps3c' via s3ql"  wrote:
> In case it helps, 
> https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums
>  
> seems to document the algorithm. I'd had some similar issues to the OP a 
> while ago and did some testing that indicated the documentation was 
> accurate, but haven't checked recently.
>
> On Friday, June 16, 2023 at 5:29:46 AM UTC-4 Nikolaus Rath wrote:
>
>> Hi,
>>
>> I'm skeptical that this is a future proof solution, since
>> https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html
>> just says that "If an object is created by either the Multipart
>> Upload or Part Copy operation, the ETag is not an MD5 digest, regardless
>> of the method of encryption.". If there is no documentation from AWS
>> about how to compute the ETag for these cases, we should not attempt to
>> do so.
>>
>>
>> Best,
>> -Nikolaus
>>
>> On Jun 15 2023, Peter Marshall  wrote:
>> > I found this info on how to calculate Etags on a local file:
>> >
>> > https://teppen.io/2018/06/23/aws_s3_etags/ and the linked
>> > https://teppen.io/2018/10/23/aws_s3_verify_etags/
>> >

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87ilbjmdlc.fsf%40vostro.rath.org.


Re: [s3ql] aws s3 metadata

2023-06-16 Thread Nikolaus Rath
Hi,

I'm skeptical that this is a future proof solution, since
https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html
just says that "If an object is created by either the Multipart
Upload or Part Copy operation, the ETag is not an MD5 digest, regardless
of the method of encryption.". If there is no documentation from AWS
about how to compute the ETag for these cases, we should not attempt to
do so.


Best,
-Nikolaus

On Jun 15 2023, Peter Marshall  wrote:
> I found this info on how to calculate Etags on a local file:
>
> https://teppen.io/2018/06/23/aws_s3_etags/ and the linked
> https://teppen.io/2018/10/23/aws_s3_verify_etags/
>

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87legjn4ai.fsf%40vostro.rath.org.


Re: [s3ql] aws s3 metadata

2023-06-15 Thread Nikolaus Rath
Hi Peter,

Thanks for following up. Looking at
https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html, it
looks like there is a bug in S3QL: The S3 backend expects the ETag to
match the MD5 of the content.

This hasn't been a problem so far because when S3QL itself uploads the
objects, this is the case. But when you're modifying objects with an
external tool, this assumption no longer holds.

I'm not sure how to best fix it. One way would be to just not verify the
content. As long as encryption is being used, it will detect any
corruption. However, for un-encrypted buckets this could result in
undetected corruption.

The above page talks about the "algorithm that was used to create a
checksum of an object", which seems to be what we want. However, there
is no mention of an actual checksum other than the ETag (which seemingly
cannot be validated by the client). Does anyone know if Amazon provides
other checksums that could be used (e.g. Content-MD5)?
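
For reference, the AWS documentation and the articles cited elsewhere in this
thread describe the multipart ETag as the MD5 of the concatenated per-part MD5
digests, with the part count appended. A rough sketch (assuming an 8 MiB part
size, which is not recoverable from the ETag itself, and that md5sum and xxd
are available):

split -b 8M object.bin /tmp/part_
NPARTS=$(ls /tmp/part_* | wc -l)
ETAG=$(for p in /tmp/part_*; do md5sum "$p" | cut -d' ' -f1; done \
       | tr -d '\n' | xxd -r -p | md5sum | cut -d' ' -f1)
echo "${ETAG}-${NPARTS}"

This also illustrates the problem: without knowing the original part
boundaries, the client cannot recompute the ETag.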

Best,
-Nikolaus



On Jun 14 2023, Peter Marshall  wrote:
> I've experimented, and it's not due to encryption, so I presume it is related 
> to multipart
> during the AWS backup or restore process.
>
> e.g. original bucket, with mountable s3ql:
>
> object ncs3ql_data_1 has ETag 6af237cd6aa167ec276eb58f9f9a52c6
>
> Same object on restored bucket: ETag dc3d145a13fa955d024aaa4165826530-1
>
> If I download the file from the restored bucket, and run md5 on it, I get 
> back the
> original ETag, so it is the same data.
>
> If I try fsck on the restored bucket, it's very unhappy:
>
> WARNING: MD5 mismatch for s3ql_passphrase: b428f7203f2bfd8b547be1ade86a74a3-1 
> vs
> 0c8d069f75210a79332014f7cb38454a
> WARNING: MD5 mismatch for s3ql_passphrase: b428f7203f2bfd8b547be1ade86a74a3-1 
> vs
> 0c8d069f75210a79332014f7cb38454a
> WARNING: MD5 mismatch for s3ql_passphrase: b428f7203f2bfd8b547be1ade86a74a3-1 
> vs
> 0c8d069f75210a79332014f7cb38454a
> Encountered BadDigestError (BadDigest: ETag header does not agree with 
> calculated MD5),
> retrying Backend.perform_read (attempt 3)...
> WARNING: MD5 mismatch for s3ql_passphrase: b428f7203f2bfd8b547be1ade86a74a3-1 
> vs
> 0c8d069f75210a79332014f7cb38454a
> Encountered BadDigestError (BadDigest: ETag header does not agree with 
> calculated MD5),
> retrying Backend.perform_read (attempt 4)...
>

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87o7lgn1hw.fsf%40vostro.rath.org.


Re: [s3ql] aws s3 metadata

2023-06-14 Thread Nikolaus Rath
Hi,

The etags are checksums calculated over the contents of the data. If the etags 
change, then this means the content of the object has changed. So this should 
not happen on backup/restore.
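
A quick way to check this for a single object (a sketch, assuming the AWS CLI
is installed; bucket and key names are placeholders):

aws s3api head-object --bucket my-bucket --key s3ql_data_1 --query ETag --output text
aws s3 cp s3://my-bucket/s3ql_data_1 - | md5sum

For objects uploaded in a single part, the two values should match (the ETag
is returned with surrounding quotes).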

Best,
-Nikolaus

On Wed, 14 Jun 2023, at 07:49, Peter Marshall wrote:
> I'm impressed using s3ql (v 3.8.1) - it does (nearly) exactly what I want!
>
> I've run into two issues recently with s3 metadata - one I've managed to 
> fix, the other I'm looking for advice.
>
> When syncing an s3 bucket containing an s3ql file system to a new one, 
> and then trying to mount the new one, I was getting errors with the 
> metadata. The reason was that aws sync will lose metadata if it uses 
> multipart transfers.
>
> I fixed this using "aws configure set default.s3.multipart_threshold 
> 20MB" to increase the multipart threshold above the size of the s3ql 
> data blocks.
>
> I also used "aws s3 sync --metadata-directive COPY" but I believe that 
> is the default anyway, so it was probably superfluous.
>
> Then I tried to use AWS Backup on s3 buckets to do a similar job. This 
> is where the problem came in - the ETag is not always preserved on 
> restore, and is not an MD5 when multipart uploads get used. The decision 
> on whether AWS uses multi-part uploads on the restore seems to be out of 
> my control.
>
> I've not yet looked into the internals of s3ql - is it possible to fix 
> the ETag issue? I know ETags can't be changed, but could s3ql recover 
> from the ETags changing?
>
> Thanks,
>
> Pete.
>

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/0581d81d-e0d6-4725-b814-f972b556be67%40app.fastmail.com.


Re: [s3ql] S3QL 5: _sqlite3ext.cpp missing?

2023-05-24 Thread Nikolaus Rath
On Mon, 22 May 2023, at 19:58, Daniel Jagszent wrote:
> Hello Nikolaus,
>
> compiling the extension from the release tarball works now, thanks!
>
> It looks like the release tarball (
> https://github.com/s3ql/s3ql/releases/download/release-5.0.0-pre1/s3ql-5.0.0.tar.gz
> ) is from another commit than the corresponding tag (
> https://github.com/s3ql/s3ql/archive/refs/tags/release-5.0.0-pre1.tar.gz
> ). Have a look at the attached diff for the changes.

Whoops, looks like I accidentally created the tarball from the wrong branch. 
Should be fixed now.

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/51aabe50-7213-4521-bf3c-c6957751d94c%40app.fastmail.com.


[s3ql] [ANNOUNCE] New S3QL pre-release available

2023-05-17 Thread Nikolaus Rath
Hi all,

I have recently started working on S3QL again and am excited to announce
a pre-release of S3QL 5.0!

There's been a large number of internal cleanups, but the most important
change is that:

S3QL (finally!) no longer limits the compressed metadata size to 5 GB
because it no longer maintains entire filesystem metadata in a single
storage object.

Instead, the database file is distributed across multiple backend
objects with a block size configured at mkfs time. This means that S3QL
also no longer needs to upload the entire metadata object on unmount;
and there is no longer a size limit on the metadata.

It would be great if people could give this version a spin, but note
that there may still be bugs.

The pre-release is available for download from 
https://github.com/s3ql/s3ql/releases/tag/release-5.0.0-pre1


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87v8gric26.fsf%40vostro.rath.org.




Re: [s3ql] ERROR: File system revision needs upgrade (or backend data is corrupted)

2023-03-17 Thread Nikolaus Rath
On Fri, 17 Mar 2023, at 09:42, Alessandro Boem wrote:
> File system was created with version 3.3.2 and that's the version in use at 
> the time of crash.
[...]

> 
> Ok nice. I ran s3qladm download-metadata s3c://r1-it.storage.cloud.it/bdrive 
> --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
> but I got the same error: ERROR: File system revision needs upgrade (or 
> backend data is corrupted)

If you really did not change S3QL versions, then this sounds as if the 
s3ql_passphrase object in the cloud has somehow been corrupted.

I have no idea how this could possibly happen (since S3QL never writes to it 
after mkfs), nor how it could be related to a local system crash.

Maybe check if there's a backup of this object somewhere? Otherwise you can use 
's3qladm recover-key' to restore this object from your offline copy of the 
master key (which you hopefully created at mkfs time).

This object contains the master key, so without it you can't decrypt any of the 
other objects. 

Best,
-Nikolaus
--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«


-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/bf26b983-25d2-4bde-8fdb-599834b2816d%40app.fastmail.com.


Re: [s3ql] ERROR: File system revision needs upgrade (or backend data is corrupted)

2023-03-16 Thread Nikolaus Rath


On Thu, 16 Mar 2023, at 14:50, Alessandro Boem wrote:
> Hi Nikolaus,
> 
> 
> On Thursday, 16 March 2023 at 12:49:56 UTC+1, Nikolaus Rath wrote:
>> Hi Alessandro,
>> 
>> 
>> On Wed, 15 Mar 2023, at 14:57, Alessandro Boem wrote:
>>> We're trying to recover the consistency of the db from a machine power 
>>> outage.
>>> I know that the project is no longer developed, but we're looking for an 
>>> extra docs/help trying to recover the data.
>>> 
>>> Reading and following the available documentation, we have already tried 
>>> these:
>>> fsck.s3ql s3c://r1-it.storage.cloud.it/bdrive --authfile=/etc/s3ql.authinfo 
>>> --cachedir=/var/cache/s3ql/bdrive/
>>> s3ql_verify --authfile=/etc/s3ql.authinfo 
>>> --cachedir=/var/cache/s3ql/bdrive/ s3c://r1-it.storage.cloud.it/bdrive
>>> s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive 
>>> --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
>>> We always receive these message:
>>> ERROR: File system revision needs upgrade (or backend data is corrupted)
>>> We have a backup of the db file and we restored it without success (we 
>>> still receive the previous error)
>> 
>> Can you provide a bit more information? How exactly did you restore the 
>> metadata backup (full command and output)? Which backups did you try?
>> 
> First I ran all three cited commands in that order and all of them returned 
> the error.
> I did not restore the metadata from the backend, but I've tried to restore the 
> file with .db extension in /var/cache/s3ql/bdrive from the machine's last 
> backup before the crash (the backup was performed on the same day as the crash, 
> at midnight; the machine power outage was at 09:30).

The error message refers to what is stored in the cloud. It's quite possible 
that nothing at all is wrong with the files in /var/cache/s3ql.



--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«


> 
>  
>> 
>> Is it possible that you accidentally upgraded from an old S3QL version (so 
>> your data isn't corrupted at all, and you just have to upgrade through some 
>> not-quite-as-old S3QL version)?
>> 
> We're using s3ql version 3.3.2 on an Ubuntu 20.04 server release


That's what you're using now, right? Which version did you use before the crash?



> I also tried to upgrade the package to a later version (4.0.0), compiling and 
> installing it with setup.py, then I ran
> s3qladm upgrade s3c://r1-it.storage.cloud.it/bdrive 
> --authfile=/etc/s3ql.authinfo --cachedir=/var/cache/s3ql/bdrive/
> again with the 4.0.0 release, but it returned the same error: ERROR: File system 
> revision needs upgrade (or backend data is corrupted)

Of course. What I said is that you may need an *older* release (if you 
accidentally upgraded, that is).


>> 
>>> Taking a look at the backed data we can see that all the metadata copy have 
>>> the same datetime:
>> 
>> 
>> The metadata backups may be created either by copying or by moving 
>> operations: https://github.com/s3ql/s3ql/blob/master/src/s3ql/metadata.py
>> 
>> It is possible that your backend uses copy, and sets the modification date 
>> to the date of the copy (rather than the modification date of the source). 
>> Is that possible? Are the *contents* of the backups identical as well?
> I checked the metadata copies on the backend with a hex editor and they are 
> different.
> Can I use a backup copy of metadata to restore file system coherence?

Yes, that is what 's3qladm download-metadata' is intended for.

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/3905b2cf-6f82-4a7c-9a6b-73f870174123%40app.fastmail.com.


Re: [s3ql] longintrepr.h missing (s3ql 4.0.0 with Python 3.11)

2022-12-07 Thread Nikolaus Rath
On Wed, 7 Dec 2022, at 04:01, Brian Hill wrote:
> Cython is up to version 0.29.32 (at least for Python 3.11). That wasn't 
> working. I just tried downgrading to 0.29.25, but that yielded the same error.
> 
> On Monday, December 5, 2022 at 1:19:53 PM UTC-8 dan...@jagszent.de wrote:
>> Hello Brian,
>> 
>> 
>>> Is there a simple fix to this problem when bulding s3ql 4.0.0 with Python 
>>> 3.11? [...]
>> 
>> looks like you need at least Cython Version 0.29.25 for Python 3.11.
>> https://github.com/cython/cython/commit/0f7bd0d1b159d085f321cc32a3f6ade24844e545

Do you have Python development headers installed? That's where I get the file 
from:

nikratio@vostro ~> dpkg -S longintrepr.h
libpython3.9-dev:amd64: /usr/include/python3.9/longintrepr.h


Best,
-Nikolaus
--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/336be998-c705-4210-8dad-814458d63f5e%40app.fastmail.com.


[s3ql] ZFS-on-NBD - follow-up

2022-09-29 Thread Nikolaus Rath
Hi,

Following up on my last post on this topic (since there seemed to be at least 
some interest): after evaluating the setup for a few weeks, I have decided that 
this is not as good a solution as I had hoped. 

In particular, the expected benefits of splitting data between a special vdev 
backed by a bucket with small object size and a normal vdev backed by a bucket 
with much larger object size did not materialize.

More details (including histograms) are available at 
https://www.rath.org/zfs-on-nbd-my-verdict.html

Best,
-Nikolaus
--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/5275f1fe-8516-4f0d-8b47-62b9fec591ae%40app.fastmail.com.


[s3ql] S3QL vs ZFS-on-NBD

2022-09-12 Thread Nikolaus Rath
Hi all,

I've been experimenting with running ZFS on NBD as a potential alternative 
solution to the problem that I originally designed S3QL for.

In case someone is interested, here is the (rather long) write-up: 
https://www.rath.org/s3ql-vs-zfs-on-nbd.html

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/0f14139b-03b5-4e29-bc3a-785607fa71b1%40www.fastmail.com.


Re: [s3ql] Please test S3QL from master

2022-04-24 Thread Nikolaus Rath
Hi Henry,

Glad to hear that, thanks for reporting back!

Best
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



On Sun, 24 Apr 2022, at 06:33, Henry Wertz wrote:
> Just wanted to report on mine, so far so good!  I have (using local backend 
> on ext4 filesystem) a 4TB drive, 8TB drive, and a 1TB drive (in a portable 
> system), all running s3ql with a variety of workloads (some movie storage -- 
> I know s3ql won't do anything to shrink or dedup that but OK... rsync 
> backups, VirtualBox activities, both .ova files and running VMs out of there, 
> some source code, some games as well.)   Nothing to report!   Unmounted 
> filesystem(s), updated s3ql.  The s3qladm upgrade process was quick and 
> painless, on the 2GB database it took a minute or two, on the others it was 
> nearly instantaneous.  It shrunk the databases about 10-15%.  I decided to 
> fsck.s3ql --force the filesystems too, no problems. The 8TB has a lot of 
> duplication due to rsync backups on it, about 8.8TB data on the 8TB drive 
> using 5.23TB disk space.  This one the DB is now 1.9GB and shrunk about by 
> 200MB.  4TB, it's like 90MB now, it shrunk about 9MB.  It definitely mounts, 
> unmounts, and fscks faster (since there's 1 fewer tables for it to read in, 
> write back, or check.)  I didn't think of doing any benchmarks but "seat of 
> the pants" it does seem a bit faster in use too; not a surprise, when you're 
> pushing the IOPS on there one potential "choke point" is SQLite synchronous 
> writes, and removing one table means that much less stuff for SQLite to have 
> to write out.   
> 
> 
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/0fa66ff0-9a05-461c-a75d-36ef78706441%40www.fastmail.com.


[s3ql] Please test S3QL from master

2022-04-18 Thread Nikolaus Rath
Hi,

I have just merged a large patch into S3QL. It removes an internal
abstraction layer that was never used. It should make the code more
maintainable, performance better, and reduce database size.

Unfortunately, it also makes backwards incompatible changes to the
filesystem structure.

It would be great if some more people could grab the current version
from Git master and give it a spin before this makes it into the next
stable release.

This will most likely be my last contribution to S3QL for a long time
since other things have taken priority in my life. I'll continue to
apply pull requests and make releases as long and as regularly as I can,
but if someone wants to take a more active role in the project then I
would be delighted to step back even more.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87a6ciz7fk.fsf%40vostro.rath.org.


Re: [s3ql] tons of 'had to retry' messages and a crash

2022-03-17 Thread Nikolaus Rath
On Mar 16 2022, "Brian C. Hill"  wrote:
> Hello,
>
> I am using s3ql 3.8.1, Python 3.9.2 w/ pyfuse 3.2.1 on CentOS 7 (with 
> kernel-ml 15.16.12).
>
> I am seeing /tons/ of these during a copy (rdiff-backup, specifically):
>
>Mar 15 16:03:38 myhost mount.s3ql[1455]: Server did not provide
>Content-Type, assuming XML
>Mar 15 16:03:38 myhost mount.s3ql[1455]: Had to retry 470 times over
>the last 60 seconds, server or network problem?
>Mar 15 16:03:39 myhost mount.s3ql[1455]: Had to retry 471 times over
>the last 60 seconds, server or network problem?
>Mar 15 16:03:39 myhost mount.s3ql[1455]: Had to retry 472 times over
>the last 60 seconds, server or network problem?
>Mar 15 16:03:39 myhost mount.s3ql[1455]: Had to retry 466 times over
>the last 60 seconds, server or network problem?

This means that S3QL encountered network errors or temporary server
errors, so it had to resend requests many times before it got a
successful response from the server. 

> During one run, mount.3sql exited:
[...]
>43, in wrapper
>Mar 15 21:02:29 myhost mount.s3ql[1455]: await fn(*args, **kwargs)
>Mar 15 21:02:29 myhost mount.s3ql[1455]: File "src/pyfuse3.pyx",
>line 781, in main
>Mar 15 21:02:29 myhost mount.s3ql[1455]: File
>"/opt/python/3.9.2/lib/python3.9/site-packages/trio/_core/_run.py",
>line 813, in __aexit__
>Mar 15 21:02:29 myhost mount.s3ql[1455]: raise
>combined_error_from_nursery
>Mar 15 21:02:29 myhost mount.s3ql[1455]: File
>"/opt/python/3.9.2/lib/python3.9/site-packages/_pyfuse3.py", line
>43, in wrapper
>Mar 15 21:02:29 myhost mount.s3ql[1455]: await fn(*args, **kwargs)
>Mar 15 21:02:29 myhost mount.s3ql[1455]: File "src/internal.pxi",
>line 272, in _session_loop

This looks incomplete. Are you sure there weren't more lines of output?

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87tubxj6wp.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.8.1 has been released

2022-01-10 Thread Nikolaus Rath

Dear all,

I am pleased to announce a new release of S3QL, version 3.8.1.

>From the changelog:

2022-01-10, S3QL 3.8.1

  * Update fsck.s3ql to remove empty directories from the local backend's storage
directory.  As blocks are added subdirectories are created
on demand so there are not too many blocks in each directory, but as blocks
are removed from storage empty directories are not automatically removed.
The number of empty directories can become quite large over
time, slowing down mount.s3ql a little and fsck.s3ql considerably.

  * Fix for fsck.s3ql removing .tmp files in cache directory.

  * Fix bug that would cause incorrect size to be recorded for a block
if the file had zero-bytes added to the end by using truncate followed
by close.  (The recorded size does not count the zero bytes.)  Both rsync's
-S (sparse) option and VirtualBox do this for example. No data corruption,
but contrib/fixup_block_sizes.py should be run to fix this.

  * contrib/fix_block_sizes.py updated to check all block sizes, not just
    ones whose size is a multiple of 512, so it detects blocks affected by the above
bug.


The following people have contributed code to this release:

Beorn Facchini 
hwertz <52359623+hwe...@users.noreply.github.com>
Lorentz Kim 
Nikolaus Rath 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87sftvl7r2.fsf%40vostro.rath.org.




Re: [s3ql] fix_block_sizes.py usage

2021-12-27 Thread Nikolaus Rath
On Dec 21 2021, "Brian C. Hill"  wrote:
> Hello,
>
> I am using CentOS 7 and s3ql 3.8.0 (though my fs was origiinally created with 
> 3.7.1, I
> think).
>
> s3sql_verify reported this:
>
> WARNING: Object 2076394 is corrupted (expected size 258048, actual size 
> 256871)
>
> I assume that I need to use fix_block_sizes.py to fix that,

This is not what the tool was intended for. fix_block_sizes.py was
designed to fix one specific problem caused by a bug in previous S3QL
versions that resulted in files being null-padded to the next 512 byte
boundary (i.e., the metadata indicates a larger size than what is
physically stored).

In your case, the stored data seems to be 1177 bytes shorter than what the
metadata says. In other words, this is either a different S3QL bug, or
the block was corrupted on the remote server.

fix_block_sizes.py will simply update the metadata to match the physical
size and thereby get rid of the padding. I am not sure what this would
do in your case - you may end up appending bogus data to a file or
loosing valuable data. The safer choice would be to remove the damaged
object, after which fsck.s3ql will tell you what files may need to be
recovered from elsewhere.


> but I don't see any
> documentation for fix_block_sizes.py, and it doesn't provide a 'usage' 
> summary when run
> without arguments.

Just pass it the storage url, e.g. fix_block_sizes.py s3://foobar

> Can fsck.s3ql not fix that problem?

No, fsck.s3ql only checks for logical consistency, it does not attempt
to download the entire filesystem data (which is what s3ql_verify does).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87czliclms.fsf%40vostro.rath.org.


Re: [s3ql] Package for Debian Bullseye?

2021-11-19 Thread Nikolaus Rath
Hi Tor,

On Nov 18 2021, Tor Krill  wrote:
> What is the recommended way to run s3ql on a Debian Bullseye system? There 
> seems to be no package provided by Debian.

The Debian package is lacking a maintainer. I did this for a while but
no longer have the time. So it doesn't regularly get updated, and most
likely no one checks why it's not propagating from sid to stable.

A quick look at https://tracker.debian.org/pkg/s3ql reveals:

> Migration status for s3ql (- to 3.7.3+dfsg-1): BLOCKED: Rejected/violates 
> migration policy/introduces a regression
> Issues preventing migration:
> Updating s3ql introduces new bugs: #982381

Looking at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=982381, the
problem is seemingly just the absence of a proper versioned dependency
on python3-trio. This should be easy to fix.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87h7c8cv6s.fsf%40vostro.rath.org.


Re: [s3ql] How to auto mount in Ubuntu 20.04

2021-11-09 Thread Nikolaus Rath
On Nov 07 2021, Jaimala D  wrote:
> Hello,
> I am very new to linux(using for almost 15 days) and I am loving it.
> I am stuck I am not able to find a guide how to auto mount s3ql after 
> startup. I found few guides but they were of Upstart(which is discontinued).
> If someone can help I will be really thankful.

Since you're new to Linux, I'd like to point out that what Daniel
described is "automatic mounting on system start". That's one definition
of "automatic", but not the only one. So it's good to me more exact.

For example, you could also attempt to automatically mount the
filesystem on access through https://linux.die.net/man/8/automount

(No idea if that works for S3QL, just pointing it out as an option).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/874k8ledvk.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.8.0 has been released

2021-11-07 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.8.0.

>From the changelog:

2021-11-07, S3QL 3.8.0

  * The way to build the documentation has changed. Instead of running
`setup.py build_sphinx`, run `./build_docs.sh`. To generate PDF
documentation, follow this with `cd doc/pdf && make`.

  * The `s3ql_verify` tool is now able to detect the kind of corruption that
was introduced by the fsck.s3ql data corruption bug described below.

  * The new contrib/fixup_block_sizes.py tool is now available to fix most of
the issues caused by the fsck.s3ql bug described below.


The following people have contributed code to this release:

Nikolaus Rath 
r0ps3c 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/877ddkdxgj.fsf%40vostro.rath.org.


Re: [s3ql] Re: Crashing s3ql 3.7.3

2021-11-06 Thread Nikolaus Rath
On Nov 06 2021, Amos T  wrote:
> Such errors should not be in the logs and should be catched properly with 
> exception handling and
> clear description what does happen !!!

Right you are! My sincere apologies for the bad service. I will forward
your complaint to the customer service department, who will contact you
shortly about refunding your service fee for this month.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87ee7te71l.fsf%40vostro.rath.org.


Re: [s3ql] Crashing s3ql 3.7.3

2021-11-06 Thread Nikolaus Rath
On Nov 05 2021, Amos T  wrote:
> "/usr/local/lib/python3.9/dist-packages/s3ql-3.7.3-py3.9-linux-x86_64.egg/s3ql/backends/s3c.py",
>  
> line 565, in _parse_error_response
> raise get_S3Error(tree.findtext('Code'), tree.findtext('Message'), 
> resp.headers)
> s3ql.backends.s3c.S3Error: NoSuchEntity: The entity in your request cannot 
> be found.

Your server is emitting malformed error messages: they do not contain a
"Message" tag with information about what the problem is.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87h7cpe73n.fsf%40vostro.rath.org.


Re: [s3ql] Enable WAL?

2021-09-12 Thread Nikolaus Rath
Hi Henry,

A: Because it confuses the reader.
Q: Why?
A: No.
Q: Should I write my response above the quoted reply?

..so please quote properly, as I'm doing in the rest of this mail:


On Sep 11 2021, Henry Wertz  wrote:
>> > What do you think about enabling WAL (Write Ahead Logging)?
>> >
>> [...]
>> >
>> > I didn't benchmark anything, but rsync'ing in small files is visibly 
>> > faster, and the file system is better under load (i.e. I can copy stuff 
>> in 
>> > and any simultaneous directory lookups, copying stuff out, etc. is 
>> > noticeably faster and more responsive.)
>>
>> As I understand, WAL should not result in any speed-ups, it just
>> improves reliability in case of a crash. So I'd be very interested to
>> see actual benchmark data here rather than subjective impressions :-).
>
> I would think you're right -- you're writing your data into a log, then 
> writing into the DB, that seems like it'd be slower.  But...
>
> WAL docs (https://sqlite.org/wal.html)  say "WAL is significantly faster in 
> most scenarios." (... to be fair they're comparing it to the regular 
> journal mode, though, not journal_mode=off.)   They say WAL provides more 
> concurrency (readers and writers don't usually block each other... I'm
> [...]


That's the critical point. S3QL currently does not use a journal at
all. So enabling WAL just means that the data is additionally written to
a journal before the database is updated (the update itself happens just
like now). So it's not clear to me that this will result in a speedup.
Note also that S3QL currently disables fsync() calls on the journal.
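
For illustration, this is all that "enabling WAL" amounts to at the
SQLite level (plain sqlite3, not S3QL's actual database layer; whether
these pragmas actually help S3QL is exactly the open question here):

    # Minimal sketch: switch a SQLite database to WAL mode.
    import sqlite3

    conn = sqlite3.connect('metadata-test.db')
    print(conn.execute('PRAGMA journal_mode=WAL').fetchone())  # ('wal',)
    conn.execute('PRAGMA synchronous=NORMAL')  # fewer fsyncs than FULL
    conn.execute('CREATE TABLE IF NOT EXISTS t (x INTEGER)')
    conn.execute('INSERT INTO t VALUES (1)')
    conn.commit()
    conn.close()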


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87ee9ukmwt.fsf%40vostro.rath.org.


Re: [s3ql] Enable WAL?

2021-09-11 Thread Nikolaus Rath
On Sep 10 2021, Henry Wertz  wrote:
> What do you think about enabling WAL (Write Ahead Logging)?
>
[...]
>
> I didn't benchmark anything, but rsync'ing in small files is visibly 
> faster, and the file system is better under load (i.e. I can copy stuff in 
> and any simultaneous directory lookups, copying stuff out, etc. is 
> noticeably faster and more responsive.)

As I understand it, WAL should not result in any speed-ups; it just
improves reliability in case of a crash. So I'd be very interested to
see actual benchmark data here rather than subjective impressions :-).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/875yv7icjq.fsf%40vostro.rath.org.


Re: [s3ql] Busy to create the most exciting backup with s3ql, need help

2021-09-09 Thread Nikolaus Rath
On Sep 09 2021, Amos T  wrote:
> But now is my final question.  If a lot of files are NOT CACHED, so they 
> are in the data files of
> s3ql in the s3 bucket...  What happens if I do run borgbackup?  It will 
> check on the metadata
> ctime, size, inode but the actual data of the file is not stored 
> locally...
>
> So in that case may I safely assume that when borgbackup checks for data 
> modification, the
> file in question is not downloaded?  Because in that case, my backup will 
> generate a lot of traffic...

I do not know how borgbackup checks if a file has been modified. If it
just compares ctime, mtime and file size the file will not be downloaded
for that. If it tries to read the file contents, S3QL obviously has no
other choice than to download the data.
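
For illustration, a metadata-only change check typically looks like the
sketch below (generic code, not borgbackup's actual logic). On an S3QL
mount, stat() is answered from the local metadata database, so nothing
is downloaded as long as the file contents are never read:

    import os

    def has_changed(path, previous):
        # previous maps path -> (st_ino, st_size, st_mtime_ns, st_ctime_ns)
        # as recorded during the last backup run.
        st = os.stat(path)
        current = (st.st_ino, st.st_size, st.st_mtime_ns, st.st_ctime_ns)
        return previous.get(path) != current, current

    # Only when has_changed() reports True would the backup tool open the
    # file and read its data (triggering S3QL block downloads).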

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/874katel3c.fsf%40vostro.rath.org.


Re: [s3ql] Busy to create the most exciting backup with s3ql, need help

2021-09-09 Thread Nikolaus Rath
On Sep 09 2021, Amos T  wrote:
> But what about the metadata and data of s3ql?  Do you provide some parity 
> there?

No, S3QL relies completely on the filesystem where this data is being
stored.


> If not, can that be extended by s3qlcmd so that in a very worst-case
> scenario, when corruption of the metadata and data of the s3ql files happens, it can
> recover?

Well, in principle everything is possible. I don't think anyone is
planning to do that work though.

It's also not clear to me why people worried about filesystem corruption
can't just use a file system that provides the necessary guarantees
instead of relying on individual applications to make up for that...

Best,
Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/877dfpev1d.fsf%40vostro.rath.org.


Re: [s3ql] Busy to create the most exciting backup with s3ql, need help

2021-09-09 Thread Nikolaus Rath
On Sep 09 2021, Amos T  wrote:
> In fact what happens if the cache got corrupted?  How does s3ql detect 
> corrupted cached files? Suppose I put this on a single drive, not on raid, 
> and cache gets corrupted.

S3QL does not detect corruption on the cache. It would just upload/use
the corrupted data. You're expected to put the cache on a sufficiently
reliable filesystem.

> So are inodes stable in S3QL ?

Yes. A file's inode never changes, no matter whether it's cached or not, or
remounted.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87a6kmdth8.fsf%40vostro.rath.org.


Re: [s3ql] Directory structure of S3QL backend files

2021-09-06 Thread Nikolaus Rath
On Sep 06 2021, Moon silvery  wrote:
> Hello,
>
> I love the design of S3QL so much that I want to use it in my homelab to 
> replace my whole
> storage pool. But I have encountered a problem that when storing over 10 
> million files in
> a mount point, S3QL creates so many objects in a single bucket, without any 
> prefix to
> divide these files into sub-directories. This behavior is causing my object 
> storage
> backend (self-hosted MINIO) to create over 10 million files under a single 
> directory, which
> is very slow on any operation that needs to list the folder (no filesystem can 
> handle so
> many files in a single directory).
> I have checked the document carefully, but I could not find any instruction 
> on how could I
> change this behavior. What I want is let S3QL stores objects with prefix, 
> like when it
> want to store s3ql_data_1197116, the directory structure would be
> s3ql_data_1/s3ql_data_1197/s3ql_data_1197116, so that any single directory 
> would have no
> more than 1k files/sub-directories.
> Could you please help to take a look and share some idea? I know this could 
> be a backend
> specific problem, that AWS or other cloud provider do not have this issue. 
> But I indeed
> appreciate the design of S3QL and want to use it in every place I
> could. Thanks a lot!

Have you considered using the local:// S3QL backend instead of going
through Minio?

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87czpmt3l4.fsf%40vostro.rath.org.


[s3ql] Re: S3QL cache directory and SSD longevity

2021-06-22 Thread Nikolaus Rath
On Jun 22 2021, Ivan Shapovalov  wrote:
> I tried with 10 MiB block size and this made s3ql to re-upload 300 GiB
> when just 30 were changed this particular day.

Does "30" refer to "30 MB of changes"? If so, then this isn't saying
anything.

If you change one byte every 10 MB, then S3QL would have to upload every
block even though you only changed 30 MB of data. If, on the other hand,
you write 30 MB in sequence, then S3QL should only upload 3 blocks. If
anything else happens, there is a bug (and it would be great if you
could construct a small testcase that reproduces it).
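
For illustration, the arithmetic (this is not S3QL code, just a toy
model of which 10 MiB blocks a given set of writes dirties):

    BLOCK = 10 * 1024 ** 2  # 10 MiB blocks, as in this thread

    def dirty_blocks(extents, block=BLOCK):
        # extents: iterable of (offset, length) writes
        dirty = set()
        for off, length in extents:
            dirty.update(range(off // block, (off + length - 1) // block + 1))
        return dirty

    # one byte changed in each of 300 different blocks -> 300 uploads (3 GiB)
    scattered = [(i * BLOCK, 1) for i in range(300)]
    # 30 MiB written in one sequential run -> only 3 uploads (30 MiB)
    sequential = [(0, 30 * 1024 ** 2)]
    print(len(dirty_blocks(scattered)), len(dirty_blocks(sequential)))  # 300 3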


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87lf7243z6.fsf%40vostro.rath.org.


[s3ql] Re: S3QL cache directory and SSD longevity

2021-06-22 Thread Nikolaus Rath
On Jun 22 2021, Ivan Shapovalov  wrote:
> I'm also worried about the s3ql block cache. If I'm going to use s3ql
> in this setup at all, no matter the block size, it means I'll be
> writing 1 TB of data daily(!) to the SSD that holds the block cache.
>
> Is this solvable in s3ql somehow? I'd just put it in RAM (reducing the
> cache size to something like 1 GiB which I can spare), but the cache
> directory also holds the metadata, which needs to be non-volatile.

The cache is in a -cache subdirectory. You might be able to symlink that
/ mount over it. Note that the name depends on the storage URL though.
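
A rough sketch of the symlink approach (the paths are assumptions,
adjust them to your setup; only do this while the filesystem is
unmounted and all dirty data has been uploaded, since discarding the
block cache otherwise loses data):

    import glob, os, shutil

    cachedir = '/var/tmp/s3ql'        # whatever you pass as --cachedir
    tmpfs = '/dev/shm/s3ql-cache'     # volatile location for block data

    for path in glob.glob(os.path.join(cachedir, '*-cache')):
        target = os.path.join(tmpfs, os.path.basename(path))
        os.makedirs(target, exist_ok=True)
        if not os.path.islink(path):
            shutil.rmtree(path)       # discard the (clean) on-disk block cache
            os.symlink(target, path)

    # The metadata (*.db / *.params files) stays in cachedir and remains
    # persistent; only the block cache now lives on the tmpfs.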


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87o8by444n.fsf%40vostro.rath.org.


Re: [s3ql] I have two questions

2021-06-17 Thread Nikolaus Rath
On Jun 16 2021, Bob Chen  wrote:
> 1. Is this project a personal work?

Depends on your definition, I guess. S3QL is open source and contains
contributions from a number of people, some of whom have been paid for
working on S3QL.

> And why Rust?

Can you elaborate on the question? S3QL is written in C and Python.

> 2. Does it have cache mechanism, like kernel's page cache?

I think this is explained in the manual and manpage, 
http://www.rath.org/s3ql-docs/mount.html#notes-about-caching.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87tulx3p8l.fsf%40vostro.rath.org.


Re: [s3ql] FileNotFoundError: [Errno 2] fuse_lowlevel_notify_inval_entry returned: No such file or directory

2021-06-08 Thread Nikolaus Rath
> Thanks! Looks like https://github.com/s3ql/s3ql/issues/217  - I'm currently 
> not able to reproduce this, but I will add extra information if I can...

If you are seeing this issue then there's no need for more info. It is well 
understood what is happening here - someone just needs to figure out how 
this can be fixed (and then implement the fix).

Best,
-Nikolaus


--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/903a4e97-e53c-4250-824e-da4f75fc9069%40www.fastmail.com.


Re: [s3ql] FileNotFoundError: [Errno 2] fuse_lowlevel_notify_inval_entry returned: No such file or directory

2021-06-07 Thread Nikolaus Rath
On Jun 06 2021, "'Joseph Maher' via s3ql"  wrote:
> Just switched over to google as the backend and everything seems to be 
> working fine.
>
> I built 3.7.3 on debian testing/bullseye, though I needed to do a
>
> pip3 install trio
>
> to get a more recent version of trio, but then it built successfully and all 
> the tests it
> ran passed.  (It skipped some but I assume this is normal.)
>
> Everything seems to be working fine - I do get things like these in the mount 
> log, are
> they anything to be concerned about?
>
>
>
> Traceback (most recent call last):
>   File "src/internal.pxi", line 125, in pyfuse3._notify_loop
>   File "src/pyfuse3.pyx", line 918, in pyfuse3.invalidate_entry
> FileNotFoundError: [Errno 2] fuse_lowlevel_notify_inval_entry returned: No 
> such file or
> directory
> 2021-06-06 22:22:08.583 187962:Thread-2583 pyfuse3.run: Failed to submit 
> invalidate_entry
> request for parent inode 23973443, name b'onlisp.ps'
>
>
> I'll get a bunch of them together all referencing the same inode...

This is probably https://github.com/s3ql/s3ql/issues/217 or
https://github.com/s3ql/s3ql/issues/222 (though this one should be fixed
in 3.7.3 - please re-open if it's not).



Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87a6o26q3o.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.7.3 has been released

2021-06-03 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.7.3.

>From the changelog:

2021-06-03, S3QL 3.7.3

  * Fixed a DATA CORRUPTION bug in fsck.s3ql that caused the recorded size of 
uploaded
dirty blocks to be rounded up to the next multiple of 512 bytes, 
effectively appending
up to 512 zero-bytes to the end of affected files.

This problem was introduced in version 3.4.1 (released 2020-05-08) as part 
of a
seemingly very minor improvement to cache usage calculation.

You can tell that a file has (likely) been affected from fsck.s3ql messages 
of the
form:

WARNING: Writing dirty block  of inode 

followed later by:

WARNING: Size of inode  () does not agree with number of 
blocks, \
setting from  to 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87czt27mxu.fsf%40vostro.rath.org.




[s3ql] [ANNOUNCE] S3QL 3.7.2 has been released

2021-05-04 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.7.2.

>From the changelog:

2021-05-04, S3QL 3.7.2

  * Fixed a crash with `dugong.StateError` in the Google Storage backend when 
the
authentication token expires in the middle of an upload.

  * S3QL is now compatible with setuptools >= 47.


The following people have contributed code to this release:

Daniel Jagszent 
Nikolaus Rath 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87k0oeopo9.fsf%40vostro.rath.org.




Re: [s3ql] Re: S3QL 3.3 performance

2021-04-15 Thread Nikolaus Rath
On Apr 15 2021, Grunthos  wrote:
> OK...possibly made a silly mistake...I have the bucket mounted (for a copy) 
> elsewhere, while running clone-fs. While no writes are occurring, the 
> metadata does get uploaded periodically, I believe.
>
> Am I therefore likely now to have a corrupt clone? It doesn't look like 
> clone-fs has 'rsync-like' features, so I can't just unmount and re-do to 
> get updates AFAICT. Any thoughts?

If there were no changes, then the metadata has not changed and thus
will not be uploaded.

If there were minor changes (e.g. file atime but no file contents), then
running fsck.s3ql on the clone should clear the dirty flag without
finding any problems. After that, you're good to go.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87y2djkc1k.fsf%40vostro.rath.org.


Re: [s3ql] Re: S3QL 3.3 performance

2021-04-14 Thread Nikolaus Rath
On Apr 14 2021, Grunthos  wrote:
> That's unfortunate: I assume clone_fs.py will suffer the same performance 
> issues.

No, it should be as fast as rclone.

clone_fs.py uses multiple threads and downloads each block only
once. You can try to adjust the number of threads, but apart from that I
think you fundamentally cannot do any better than this (neither with
other tools nor with major code changes to S3QL).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/871rbcll71.fsf%40vostro.rath.org.


Re: [s3ql] Re: S3QL 3.3 performance

2021-04-14 Thread Nikolaus Rath
On Apr 14 2021, Grunthos  wrote:
> On Wednesday, April 14, 2021 at 5:36:30 PM UTC+10 niko...@rath.org wrote:
>
>>
>> Yes, all of these would be possible and probably be faster. I think 
>> option (2) would be the best one. 
>>
>> Pull requests are welcome :-). 
>>
>>
> I had a funny feeling that might be the answer...and in terms of utility 
> and design, ISTM that " add a special s3ql command to do a 'tree copy' -- 
> it would know exactly which blocks it needed and download them en-masse 
> while restoring files (and would need a lot of cache, possibly even a 
> temporary cache drive)" is a good plan.
>
> I am not at all sure I am up for the (probable) deep-dive required, but if 
> I were to look at this could you give some suggested starting points? My 
> very naive approach (not knowing the internals at all) would be to build a 
> list of all required blocks, do some kind of topo sort, then start multiple 
> download threads. As each block was downloaded, determine if a new file can 
> be copied yet, and if so, copy it, then release and blocks that are no 
> longer needed.
>
> ...like I said, naive, and highly dependent on internals...and maybe 
> should use some kind of private mount to avoid horror.

I think there's a simpler solution.

1. Add a new special xattr to trigger the functionality (look at
s3qlcp.py and copy_tree() in fs.py) 

2. Have fs.py write directly to the destination directory (which should
be outside the S3QL mountpoint)

3. Start a number of async workers (no need for threads) that, in a
loop, download blocks and write them to a given offset in a given fh.

4. Have the main thread recursively traverse the source and issue "copy"
requests to the workers (through a queue)

5. Wait for all workers to finish.

6. Profit.


I wouldn't even bother putting blocks in the cache - just download and
write to the destination on the fly. It may be worth checking if a block
is *already* in the cache and, if so, skip download though.


With this implementation, blocks referenced by multiple files will be
downloaded multiple times. I think this can be improved upon once the
minimum functionality is working.
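
To make the shape of that a bit more concrete, here is a very rough
sketch of the worker/queue part (plain asyncio with placeholder names;
the real thing would live in fs.py, be triggered via the xattr
interface, and use S3QL's backend objects for the actual downloads):

    import asyncio, os

    async def worker(queue, download_block):
        # download_block(block_id) -> bytes is a placeholder for the real
        # backend read. Each worker loops until it sees the None sentinel.
        while True:
            item = await queue.get()
            if item is None:
                return
            block_id, dest_path, offset = item
            data = await download_block(block_id)
            with open(dest_path, 'r+b') as fh:   # file created by copy_tree()
                fh.seek(offset)
                fh.write(data)

    async def copy_tree(blocks, download_block, n_workers=8):
        # blocks: iterable of (block_id, dest_path, offset) tuples produced
        # by recursively walking the source tree (step 4 above).
        queue = asyncio.Queue(maxsize=2 * n_workers)
        workers = [asyncio.create_task(worker(queue, download_block))
                   for _ in range(n_workers)]
        for block_id, dest_path, offset in blocks:
            os.makedirs(os.path.dirname(dest_path) or '.', exist_ok=True)
            open(dest_path, 'ab').close()        # make sure the file exists
            await queue.put((block_id, dest_path, offset))
        for _ in workers:
            await queue.put(None)                # tell every worker to stop
        await asyncio.gather(*workers)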


Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/874kg9l06t.fsf%40vostro.rath.org.


Re: [s3ql] Re: S3QL 3.3 performance

2021-04-14 Thread Nikolaus Rath
On Apr 13 2021, Grunthos  wrote:
> OK..I just found this http://www.rath.org/s3ql-docs/tips.html which 
> suggests exactly the solution I have used.
>
> I wonder if there *might* be a better option for full file-system restore, 
> one of:
>
>
>- copy the entire collection S3QL data from the remote to local first, 
>then do a restore-from-local
>- add a special s3ql command to do a 'tree copy' -- it would know 
>exactly which blocks it needed and download them en-masse while restoring 
>files (and would need a lot of cache, possibly even a temporary cache 
> drive)
>- a limited version of the above option to pre-fill the cache with all 
>remote data blocks needed for a particular part of the tree

Yes, all of these would be possible and probably be faster. I think
option (2) would be the best one.

Pull requests are welcome :-).


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/877dl5l3mg.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.7.1 has been released

2021-03-07 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.7.1.

>From the changelog:

2021-03-07, S3QL 3.7.1

  * The tcp-timeout backend option of the B2 Backend works now.

  * mount.s3ql no longer crashes with "No Upload Threads available" when not 
running in
foreground.


The following people have contributed code to this release:

Daniel Jagszent 
Iain Parris 
nand2 
Nikolaus Rath 
r0ps3c 
Valentin Kulesh 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87tupnvz90.fsf%40vostro.rath.org.




Re: [s3ql] Crashed filesystem after upgrade from s3ql 2.21 to 3.7.0

2021-02-13 Thread Nikolaus Rath
On Feb 11 2021, Ben Hymers  wrote:
> "/usr/local/lib/python3.8/dist-packages/s3ql-3.7.0-py3.8-linux-x86_64.egg/s3ql/block_cache.py",
>  
> line 536, in _queue_upload
> raise NoWorkerThreads('no upload threads')
> s3ql.block_cache.NoWorkerThreads: no upload threads

Does it help if you run with `--fg`? If so, you just need to wait for the
next release for a proper fix.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87tuqgduew.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.7.0 has been released

2021-01-03 Thread Nikolaus Rath

Dear all,

I am pleased to announce a new release of S3QL, version 3.7.0.

>From the changelog:

2021-01-03, S3QL 3.7.0

  * S3QL now supports newer AWS S3 regions like eu-south-1.

  * mount.s3ql now again includes debugging information in its log output when
encountering an unexpected exception. This was broken in version 3.4.0, 
resulting in
mount.s3ql seemingly terminating at random in such a situation.

  * mount.s3ql now properly handles SIGTERM (instead of crashing). This means 
it exits as
quickly as possible without data corruption. For a proper unmount, always 
use
`umount.s3ql`, `umount`, or `fusermount -u` and wait for the mount.s3ql 
process to
terminate.


The following people have contributed code to this release:

Charles Cooper 
Daniel Jagszent 
Ivan Shapovalov 
Nikolaus Rath 
Viktor Szépe 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87czym6vop.fsf%40vostro.rath.org.




Re: [s3ql] S3QL crashes when uploading large files with inconclusive logs (yet again)

2020-12-29 Thread Nikolaus Rath
On Dec 28 2020, Ivan Shapovalov  wrote:
> On 2020-12-28 at 13:41 +0000, Nikolaus Rath wrote:
>> On Dec 28 2020, Ivan Shapovalov  wrote:
>> > 2020-12-27 19:04:33.819 211867 DEBUG    Thread-1
>> > s3ql.backends.b2.b2_backend._do_request: RESPONSE: POST 400  97
>> > 2020-12-27 19:04:33.820 211867 DEBUG    MainThread
>> > s3ql.block_cache.with_event_loop: upload of 8652 failed
>> > NoneType: None
>> > 2020-12-27 19:04:33.827 211867 DEBUG Thread-1 s3ql.mount.exchook:
>> > recording exception 400
>> > : bad_request - Checksum did not match data received
>> > zsh: terminated  mount.s3ql b2:// /mnt/b2/files -o
>> > -- 8< --
>> > 
>> > Leaving out the question of why journald eats the last line, the
>> > situation is pretty clear. The backend (B2Backend._do_request)
>> > raises
>> > an exception (B2Error) which is not considered a "temporary
>> > failure".
>> > 
>> > I have just patched up error handling in the B2 backend to consider
>> > the
>> > checksum mismatch a transient failure (testing now).
>> 
>> Is B2 not using SSL for its data connection? That should make sure
>> that
>> there are no checksum errors
>
> Indeed it does. I have added some proper exception logging and found
> the actual problem, which is — unsurprisingly — combination of user
> error, unclear system requirements and broken logging.

Great, thanks for your help!

Could you tell what exactly you changed to make the exception
information appear?


>
> The B2 backend creates a temporary file for each object that is being
> uploaded. My s3ql instance has object size = 1 GiB, and with threads=8,
> that means at most 8 GiB worth of temporary files at once. Thing is,
> temporary files are created in /tmp, which is a tmpfs and has a size
> limit.
>
> -- 8< --
> 2020-12-28 16:39:12.924 340652 ERRORThread-3 s3ql.mount.exchook: 
> Unhandled exception in thread, terminating
> Traceback (most recent call last):
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/backends/common.py", line 279, 
> in perform_write
> return fn(fh) 
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/block_cache.py", line 334, in 
> do_write
> fh.write(buf) 
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/backends/b2/object_w.py", line 
> 36, in write
> self.fh.write(buf)
>  
> OSError: [Errno 28] No space left on device
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/mount.py", line 58, in 
> run_with_except_hook
> run_old(*args, **kw)  
>  
>   File "/usr/lib/python3.9/threading.py", line 892, in run
> self._target(*self._args, **self._kwargs)
>   File "/usr/lib/python3.9/site-packages/s3ql/block_cache.py", line 319, in 
> _upload_loop
> self._do_upload(*tmp) 
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/block_cache.py", line 376, in 
> _do_upload
> obj_size = backend.perform_write(do_write, 's3ql_data_%d'
>   File "/usr/lib/python3.9/site-packages/s3ql/backends/common.py", line 108, 
> in wrapped
> return method(*a, **kw)   
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/backends/common.py", line 279, 
> in perform_write
> return fn(fh) 
>  
>   File "/usr/lib/python3.9/site-packages/s3ql/backends/b2/object_w.py", line 
> 79, in __exit__
> self.close()


I consider this a bug in the B2 backend (and other backends may have the
same problem). If the backend returns an exception from write(), this
should not result in a second exception from close(). Either write()
should update the checksum to reflect the partial data that was written
(thus eliminating the checksum error on upload), or perhaps it should
set a flag that this object should not be uploaded at all on close.

https://github.com/s3ql/s3ql/issues/228
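
As an illustration of the second variant (a sketch of the idea only, not
the actual B2 backend code):

    class ObjectWriter:
        # File-like wrapper handed out by a backend for writing an object.
        # If any write() fails, the spooled data is incomplete, so close()
        # must not try to upload it (which would only produce a checksum
        # error on the server side).

        def __init__(self, tmpfh, upload_fn):
            self.fh = tmpfh              # e.g. a temporary spool file
            self.upload_fn = upload_fn   # callable that uploads the object
            self.write_failed = False

        def write(self, buf):
            try:
                self.fh.write(buf)
            except OSError:
                self.write_failed = True
                raise

        def close(self):
            try:
                if not self.write_failed:
                    self.upload_fn(self.fh)  # only upload complete objects
            finally:
                self.fh.close()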


>   File "/usr/lib/python3.9/site-packages/s3ql/backends/common.py", line 108, 
> in wrapped
> return method(*a, **kw)   
>  

Re: [s3ql] S3QL crashes when uploading large files with inconclusive logs (yet again)

2020-12-28 Thread Nikolaus Rath
On Dec 28 2020, Paul Tirk  wrote:
> Am Mo, 28. Dez, 2020 um 1:41 P. M. schrieb Nikolaus Rath :
>> On Dec 28 2020, Ivan Shapovalov  wrote:
>>>  2020-12-27 19:04:33.819 211867 DEBUGThread-1 
>>> s3ql.backends.b2.b2_backend._do_request: RESPONSE: POST 400  97
>>>  2020-12-27 19:04:33.820 211867 DEBUGMainThread 
>>> s3ql.block_cache.with_event_loop:
>>> upload of 8652 failed
>>>  NoneType: None
>>>  2020-12-27 19:04:33.827 211867 DEBUG Thread-1 s3ql.mount.exchook: 
>>> recording exception
>>> 400
>>>  : bad_request - Checksum did not match data received
>>>  zsh: terminated  mount.s3ql b2:// /mnt/b2/files -o
>>>  -- 8< --
>>>
>>>  Leaving out the question of why journald eats the last line, the
>>>  situation is pretty clear. The backend (B2Backend._do_request) raises
>>>  an exception (B2Error) which is not considered a "temporary failure".
>>>
>>>  I have just patched up error handling in the B2 backend to consider the
>>>  checksum mismatch a transient failure (testing now).
>>
>> Is B2 not using SSL for its data connection? That should make sure that
>> there are no checksum errors
>>
>
> I think this error refers to B2 not receiving the correct data. When an 
> object is
> uploaded, the checksum is provided through a header, B2 then checks the 
> received data if
> it has the same checksum as the one provided.
>
> Since this is a case which should not happen (and never happened to me) it 
> was not clear
> how to handle it. But probably it would be best to also retry it for some 
> time in case it
> is really just temporary..

I think if this is not a result of the network connection flipping some
bits, then it is either a bug in S3QL (perhaps the data is being
modified while uploaded), or a faulty machine on the local side, or a
faulty machine on the receiving side.

I think in all these cases retrying is likely to succeed and will thus
result in the problem not being surfaced. I would, personally, prefer to
know about such issues so that I can replace RAM/disks/CPU, switch
storage providers, or find the S3QL bug *before* something more serious
happens and results in un-fixable corruption.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87wnx1g9v7.fsf%40vostro.rath.org.


Re: [s3ql] S3QL crashes when uploading large files with inconclusive logs (yet again)

2020-12-28 Thread Nikolaus Rath
On Dec 28 2020, Nikolaus Rath  wrote:
> Hi Ivan,
>
> Please do not Cc me, I am reading the list.
>
>
> On Dec 28 2020, Ivan Shapovalov  wrote:
>> Finally, exchook() from mount.py:setup_exchook() gets called and sends
>> SIGTERM to the mount process (mount.py:687).
>>
>> Does that sound plausible?
>
> Ah, I completely forgot about this, good find!
>
> When S3QL was using llfuse rather than pyfuse3, sending SIGTERM to
> itself was the way to signal to libfuse to exit the main event loop. The
> signal handler is installed by llfuse, not S3QL, so that's not obvious
> from the code.
>
> I suspect that this part needed a change when switching from llfuse to
> pyfuse3 but was forgotten. I will try to take a look at it.

Ok, turns out we needed some extra building blocks in pyfuse3 for
this. I have just committed
https://github.com/libfuse/pyfuse3/commit/3b9c7dfd7f68b1dbac17b325deeb7d66a66a2b05
which adds a new pyfuse3.terminate() function that S3QL should use
instead of sending SIGTERM to itself.

As a first order solution this replacement should make things
better. However, we should keep track that we terminated due to an
unhandled exception and re-raise this after the main loop has
terminated. I filed https://github.com/s3ql/s3ql/issues/227 to track
this.
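
A rough sketch of what the replacement in mount.py could look like
(assuming a pyfuse3 version that has terminate(); the trio token is used
so that a failing worker thread can safely ask the event loop to stop):

    import pyfuse3, trio

    trio_token = None

    async def fuse_main():
        # pyfuse3.init(ops, mountpoint, options) is assumed to have been
        # called before this coroutine runs.
        global trio_token
        trio_token = trio.lowlevel.current_trio_token()
        await pyfuse3.main()

    def exchook(exc_type, exc_val, exc_tb):
        # Called from a failing worker thread. Instead of sending SIGTERM
        # to our own process, request a clean shutdown of the FUSE loop.
        if trio_token is not None:
            trio_token.run_sync_soon(pyfuse3.terminate)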




Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87zh1yexsz.fsf%40vostro.rath.org.


Re: [s3ql] S3QL crashes when uploading large files with inconclusive logs (yet again)

2020-12-28 Thread Nikolaus Rath
On Dec 28 2020, Ivan Shapovalov  wrote:
> 2020-12-27 19:04:33.819 211867 DEBUGThread-1 
> s3ql.backends.b2.b2_backend._do_request: RESPONSE: POST 400  97
> 2020-12-27 19:04:33.820 211867 DEBUGMainThread 
> s3ql.block_cache.with_event_loop: upload of 8652 failed
> NoneType: None
> 2020-12-27 19:04:33.827 211867 DEBUG Thread-1 s3ql.mount.exchook: recording 
> exception 400
> : bad_request - Checksum did not match data received
> zsh: terminated  mount.s3ql b2:// /mnt/b2/files -o
> -- 8< --
>
> Leaving out the question of why journald eats the last line, the
> situation is pretty clear. The backend (B2Backend._do_request) raises
> an exception (B2Error) which is not considered a "temporary failure".
>
> I have just patched up error handling in the B2 backend to consider the
> checksum mismatch a transient failure (testing now).

Is B2 not using SSL for its data connection? That should make sure that
there are no checksum errors


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/8735zqgi96.fsf%40vostro.rath.org.


Re: [s3ql] S3QL crashes when uploading large files with inconclusive logs (yet again)

2020-12-28 Thread Nikolaus Rath
Hi Ivan,

Please do not Cc me, I am reading the list.


On Dec 28 2020, Ivan Shapovalov  wrote:
> Finally, exchook() from mount.py:setup_exchook() gets called and sends
> SIGTERM to the mount process (mount.py:687).
>
> Does that sound plausible?

Ah, I completely forgot about this, good find!

When S3QL was using llfuse rather than pyfuse3, sending SIGTERM to
itself was the way to signal to libfuse to exit the main event loop. The
signal handler is installed by llfuse, not S3QL, so that's not obvious
from the code.

I suspect that this part needed a change when switching from llfuse to
pyfuse3 but was forgotten. I will try to take a look at it.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/875z4mgibk.fsf%40vostro.rath.org.


Re: [s3ql] S3QL crashes when uploading large files with inconclusive logs (yet again)

2020-12-27 Thread Nikolaus Rath
On Dec 27 2020, Ivan Shapovalov  wrote:
> Full mount.s3ql invocation is:
>
> -- 8< --
> /usr/bin/mount.s3ql b2:// /mnt/b2/files -o 
> systemd,log=none,authfile=/etc/s3ql/authinfo2,cachedir=/var/tmp/s3ql,debug,allow-other,compress=none,cachesize=10485760,threads=8,keep-cache,backend-options=disable-versions
> -- 8< --

Please also run this under gdb:

$ gdb python3
> run /usr/bin/mount.s3ql b2:// /mnt/b2/files -o 
> systemd,log=none,authfile=/etc/s3ql/authinfo2,cachedir=/var/tmp/s3ql,debug,allow-other,compress=none,cachesize=10485760,threads=8,keep-cache,backend-options=disable-versions


When the process terminates, run:

> thread apply all bt


>
> Full s3ql log (captured stdout/stderr) is available here: 
> https://intelfx.name/files/persistent/2020-12-27%20s3ql/s3ql.2.log.zst
> https://intelfx.name/files/persistent/2020-12-27%20s3ql/s3ql.3.log.zst
> (caution -- each log is ~150 MB unpacked, which is a separate concern
> of mine but irrelevant to this problem.)
>
> Manual excerpt from s3ql log relevant to the block that failed:
> -- 8< --
> mount.s3ql[117923]: 2020-12-27 13:02:03.135 117923 DEBUGMainThread
> s3ql.block_cache.upload_if_dirty: started with  inode=11545, blockno=109>

Debug logging won't help with this problem, you can safely disable that.

> Strangely, systemd tells me that mount.s3ql does not just exit
> normally, but is terminated by SIGTERM:
>
> -- 8< --
> $ systemctl status s3ql@mnt-b2-files.service  
> ● s3ql@mnt-b2-files.service - s3ql file system at /mnt/b2/files
>  Loaded: loaded (/etc/systemd/system/s3ql@.service; enabled; vendor 
> preset: disabled)
>  Active: inactive (dead) since Sun 2020-12-27 13:02:05 MSK; 1h 27min ago
> Process: 117923 ExecStart=/usr/bin/mount.s3ql ${What} /mnt/b2/files -o 
> systemd,log=none,authfile=/etc/s3ql/authinfo2,cachedir=/var/tmp/s3ql,${Options}
>  (code=killed, signal=TERM)
> Process: 135718 ExecStop=/usr/bin/umount /mnt/b2/files (code=exited, 
> status=32)
>Main PID: 117923 (code=killed, signal=TERM)
>  IP: 40.5M in, 8.5G out
> CPU: 6min 11.868s
> -- 8< --

What does your kernel log say at this time (dmesg)?

Could it be that you're running out of memory, and the OOM killer is
killing mount.s3ql to free up memory?


The TERM signal does not make sense to me, as this is a non-fatal signal
that should result in S3QL exiting gracefully.


Could you try what happens when you manually send SIGTERM to a running
mount.s3ql process? Does it terminate properly with full logging until
the end?

> mount.s3ql[117923]: NoneType: None
>
> I'd gladly go debug it myself, but I don't know where to start with
> this async stuff. Any pointers? How do I read this log? What is this
> strange "NoneType: None"?

This doesn't make any sense to me either.

Normally I would suggest adding e.g. line numbers and file names to
https://github.com/s3ql/s3ql/blob/master/src/s3ql/logging.py#L86 to see
where this message is generated, but looking at it, it seems that it
should already include the function name. So whatever logs this is
bypassing the regular logging code path.


So, in summary:

- Run standalone under gdb (and not as a systemd service)
- Check kernel logs
- Check memory usage
- Try to send SIGTERM to a non-problematic mount



Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87blefwk0s.fsf%40vostro.rath.org.


Re: [s3ql] S3QL crashing when backing up to cloud using WiFi (Google) but not when backing up locally or through cable

2020-12-27 Thread Nikolaus Rath
On Dec 23 2020, "Jules F."  wrote:
> Actually it now happens when using the network cable to connect to the 
> internet as well. I'm thinking it might be from one of the latest versions 
> then. I still get no errors in mount.log between the mount time and 
> crashing time. It works fine when backing up to a USB drive.

This is *very* unlikely. Can you try running mount.s3ql in foreground on
the console (--fg) and watch how it terminates?
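
Something like this (storage URL and mountpoint are placeholders, keep your
usual options):

$ mount.s3ql --fg gs://your-bucket/prefix/ /path/to/mountpoint

That way anything that goes wrong is printed straight to the terminal.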

The only way for it to terminate without writing details into mount.log
is for it to segfault (which should also be visible in your kernel
logs). And even in this case, you should see details in
~/.s3ql/mount.s3ql_crit.log.

(I am assuming you have ruled out permissions and disk full issues for
the log directory).

I would be very hesitant to use S3QL until you have figured this out -
no matter which backend or network connection you use. Something is
fundamentally wrong with your installation.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87eejbwles.fsf%40vostro.rath.org.


Re: [s3ql] compatibility problem with s3ql-3.6.0 and Python 3.9?

2020-12-07 Thread Nikolaus Rath
Hi Xoxex,

A: Because it confuses the reader.
Q: Why?
A: No.
Q: Should I write my response above the quoted reply?

..so please quote properly, as I'm doing in the rest of this mail:


On Dec 06 2020, Xomex  wrote:
>  ERROR at setup of test_read_write[local/aes] 
> _
> Traceback (most recent call last):
>   File "/usr/local/src/s3ql-3.6.0/tests/pytest_checklogs.py", line 137, in 
> pytest_runtest_setup
> check_output(item)
>   File "/usr/local/src/s3ql-3.6.0/tests/pytest_checklogs.py", line 133, in 
> check_output
> check_test_log(item.catch_log_handler)
> AttributeError: 'Function' object has no attribute 'catch_log_handler'
> ___ ERROR at teardown of test_read_write[local/aes] 
> ___
> Traceback (most recent call last):
>   File "/usr/local/src/s3ql-3.6.0/tests/pytest_checklogs.py", line 143, in 
> pytest_runtest_teardown
> check_output(item)
>   File "/usr/local/src/s3ql-3.6.0/tests/pytest_checklogs.py", line 133, in 
> check_output
> check_test_log(item.catch_log_handler)
> AttributeError: 'Function' object has no attribute 'catch_log_handler'

Should be fixed by
https://github.com/s3ql/s3ql/commit/5316c60b447dbf85a8b80de523fb1a570bf01c11.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87pn3lsy5p.fsf%40vostro.rath.org.


Re: [s3ql] compatibility problem with s3ql-3.6.0 and Python 3.9?

2020-12-06 Thread Nikolaus Rath
On Sun, 6 Dec 2020, at 05:06, Xomex wrote:
> I've just d/l and built s3ql-3.6.0 on a system newly updated to Fedora 33 
> which provides Python3.9. It seems to build OK but on running any s3ql 
> command or the tests I get a fatal error from defusedxml about cElementTree:
> "
> import defusedxml.cElementTree as ElementTree
> /usr/lib/python3.9/site-packages/defusedxml/cElementTree.py:13: in 
> raise ImportError("cElementTree has been removed from Python 3.9")
> E   ImportError: cElementTree has been removed from Python 3.9
> "
> Is this a new incompatibility? and what's the simplest work around?
> Thanks.

Hi,

You should be able to just change "cElementTree" to "ElementTree" in the source
code. cElementTree used to be a re-implementation of ElementTree in C (for
better performance).
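
In other words, wherever the failing import is, the change is just:

    # before (breaks on Python 3.9):
    import defusedxml.cElementTree as ElementTree
    # after:
    import defusedxml.ElementTree as ElementTree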

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/7667c54a-d259-4153-9e27-81d1998f49f3%40www.fastmail.com.


Re: [s3ql] "operation not supported" when trying to overwrite file

2020-12-03 Thread Nikolaus Rath
On Dec 03 2020, Johann Bauer  wrote:
> Hello,
>
> I'm seeing some strange issues when trying to overwrite files in my s3ql 
> file system:
>
> # echo "" > s3ql/data/test && echo OK
> zsh: operation not supported: 
> s3ql/data/appdata_ocp7izw7z2ll/css/icons/icons-vars.css

This is probably https://github.com/s3ql/s3ql/issues/182 (since the
first fix there's been a proper one too, https://github.com/s3ql/s3ql/pull/197).


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87zh2ud2i4.fsf%40vostro.rath.org.


Re: [s3ql] Rounding error in dates?

2020-11-24 Thread Nikolaus Rath
Hi,

Thanks for looking into this! Let me try to help by clarifying some of
the questions that you've raised:

 - I think it's fine to break pyfuse3 backwards compatibility, as long
   as we are explicit about it and increment the major version
   number. That is what it's there for.

 - I would rather not break backwards compatibility in S3QL, since this
   means that we have to write upgrade code and everyone has to upgrade
   their filesystem.

 - I do not think S3QL needs to support nanosecond resolution (the
   VFS does not expect this from filesystems in general
   either). However, we should do better than seconds.

 - Currently, S3QL stores data as an int64 of nanoseconds (in both
   Python and SQLite). I think this should be fine - in 64 bits we can
   store 9223372036854775808 ns which is more than 292 years (starting
   at 1970).

- pyfuse3 is not intended to use floating point values at all (see
  e.g. http://www.rath.org/pyfuse3-docs/data.html#pyfuse3.EntryAttributes). My
  best guess is that code that uses "1e9" should read "1 000 000 000"
  and that I (or whoever wrote the code) forgot that 1e9 gives a float
  rather than an integer.
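
A quick check of the last two points from a Python prompt:

>>> 2**63 // 10**9 // 86400 // 365   # an int64 worth of nanoseconds, in years
292
>>> type(1e9), type(10**9)
(<class 'float'>, <class 'int'>)

So the fix is probably just to use 10**9 (or 1_000_000_000) instead of 1e9 in
the conversion code, keeping everything in integer arithmetic.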


All the best,
-Nikolaus
 

On Nov 23 2020, Grunthos  wrote:
> I've added a bug to pyfuse3; the ability for me to consider a patch depends 
> in very large part on what level of API compatibility you want to break! 
>
> Since 64 bits is insufficient to represent a nano-second timestamp, 
> something has to become incompatible. I think the choice is down to:
>
>- add numpy (or similar) to the pyfuse dependencies
>- remove the *_ns properties (or at least make them return a 
>struct/object instead)
>
> On Monday, November 23, 2020 at 10:42:47 PM UTC+11 Grunthos wrote:
>
>> Further, if my analysis is correct, then it suggests anything that uses 
>> the pyfuse composite (double/float) field may have breakage.
>>
>> On Monday, November 23, 2020 at 10:30:52 PM UTC+11 Grunthos wrote:
>>
>>> *definitely* looks like a rounding problem with using a double-precision 
>>> value to represent seconds + ns since 1970. For a standard double, only 6 
>>> decimal digits work...for ns, you need 9 places.
>>>
>>> Try this in python:
>>>
>>> >>> 157788.999
>>>
>>> <- this is the source of the problem.
>>>
>>
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "s3ql" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to s3ql+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/s3ql/0a4a5cb4-bf43-4f72-bacd-69636d26d37en%40googlegroups.com.


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87h7pfds31.fsf%40vostro.rath.org.


Re: [s3ql] Rounding error in dates?

2020-11-23 Thread Nikolaus Rath
On Nov 22 2020, Grunthos  wrote:
> I have a local file with a date as reported by ```ls -l 
> --time-style=full-iso``` of "2020-05-09 00:36:07.99900 +".
>
> When I copy it locally using ```cp -a``` on a local btrfs file system with 
> Ubuntu 20.4.1, the date is preserved.
>
> When I copy it to a local S3QL file system, one second is added to the 
> date. and it is reported as "2020-05-09 00:36:*08*.99900".
>
> When I copy it back, the date retains the '08' seconds. If I copy it back 
> again to S3QL, it advances one more second.
>
> It looks like a rounding error, since dates ending in " 99000" work as 
> expected.

Thanks for the report! This is probably a bug in how pyfuse3 converts
timestamps from (or to) nanoseconds - see
https://github.com/libfuse/pyfuse3/blob/master/src/pyfuse3.pyx#L298 and 
https://github.com/libfuse/pyfuse3/blob/master/src/macros.c#L17.

Patches and unit tests welcome! :-)

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87lfesea4v.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.6.0 has been released

2020-11-09 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.6.0.

>From the changelog:

2020-11-09, S3QL 3.6.0

  * Added ability to specify domain-name, project-domain-name and tenant-name 
as options
for the OpenStack Swift (Keystone v3) backend for providers that prefer 
name to id.

  * The open() syscall supports the O_TRUNC flag now.

  * `mount.s3ql` now exits gracefully on CTRL-C (INT signal)

  * `mount.s3ql` now supports the `--dirty-block-upload-delay` option to 
influence the
time before dirty blocks are written from the cache to the storage backend.


The following people have contributed code to this release:

Aurelio <19254254+panslot...@users.noreply.github.com>
Daniel Jagszent 
greemo 
nand2 
Nikolaus Rath 
Panslothda <19254254+panslot...@users.noreply.github.com>
Sascha Falk 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/878sba7rxl.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] Error when compiling s3ql 3.5.0

2020-10-01 Thread Nikolaus Rath
On Thu, 1 Oct 2020, at 11:39, Chris Robinson wrote:
> I also get the error: *ImportError: cannot import name 'INTEGER*' when 
> running *'sudo s3qladm'* or any other s3qlxxx command globally.
> 
> Ubuntu 18.04, S3QL 3.5.1 installed with *python3 setup.py build_ext 
> --inplace* and then *sudo python3 setup.py install.*

This means that the path that the deltadump.so (filename approximate) was 
installed into is in your user's $PYTHONPATH, but not in root's PYTHONPATH.

Not sure why, but hopefully this will help you to find out and fix it :-).
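
A quick way to compare the two (just a guess at where the difference is):

$ python3 -c 'import sys, pprint; pprint.pprint(sys.path)'
$ sudo python3 -c 'import sys, pprint; pprint.pprint(sys.path)'

If the directory containing deltadump*.so only shows up in the first list,
that would explain it.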

-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«


-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/2f3e7294-170c-48b2-a313-d9f5f8015b48%40www.fastmail.com.


Re: [s3ql] BUG ALERT: Dirty inode was destroyed!

2020-09-17 Thread Nikolaus Rath
Great work, thank you for tracking this down Nicolas!

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



On Thu, 17 Sep 2020, at 17:26, Nicolas Deschildre wrote:
> Hello,
> 
> Ok I found it!! (Or at least, one of them).
> And a sure way to reproduce the crash.
> All this time I was looking at a async/await bug... but it was finally 
> another kind of concurrency issue :-)
> 
> The bug :
> If you move a file from a folder to another, and the inode cache happens to 
> be in a very specific condition (file to be copied uncached, destination 
> folder in cache about to be evicted on next inode cache miss, origin folder 
> in cache about to be evicted next after).
> --> fs.py:654 Reduce a lot the occurence of this bug by ensuring the origin 
> and destination folder are in inode cache
> --> fs.py:658 self._lookup() load the inode of the file to be copied, 
> evicting the destination folder inode.
> --> fs.py:719 self.inodes[id_p_old] Load the origin folder which was still in 
> cache
> --> fs.py:720 self.inodes[id_p_new] Load the destination folder, evicts the 
> origin folder inode.
> --> fs.py:721/723 Changes are made on the origin folder inode, which is no 
> longer handled by the inode cache. We have the bug.
> 
> Fix here : https://github.com/s3ql/s3ql/pull/199
> 
> Thanks for the pointers!
> 
> Nicolas Deschildre
> 
> On Friday, September 11, 2020 at 8:50:24 AM UTC+2 Nicolas Deschildre wrote:
>> Hello,
>> 
>> Thanks, Ok, I now understand the code better.
>> InodeCache holds 100 Inodes as cache in a ring. When a new inode not in 
>> cache is requested, an inode in the cache is flushed, and the new inode is 
>> stored in the cache instead.
>> The bug : Race condition : 2 inodes are requested from InodeCache at the 
>> same time. Thread 1 requesting inode 1 flush and remove inode 2 from cache. 
>> Thread 2 got inode 2 before it was removed from cache by Thread 1, but makes 
>> changes after it was removed and flushed by Thread 1. Thead 2 ends, there 
>> are no longer references to inode 2, python garbage collect it, and this 
>> trigger the bug.
>> 
>> I see 2 possibles solutions : 
>> 1/ https://github.com/s3ql/s3ql/pull/196 : The _Inode class keeps a 
>> reference to the InodeCache. On __del__, if we encouter the above bug, we 
>> flush ourselves. The problem (and I'm not familiar enough with python) : I 
>> guess garbage collection could happen at shutdown, when the InodeCache SQL 
>> connection is no longer valid. Do you see a way to make this approach work?
>> 2/ (Not coded yet) : The InodeCache is a ring, first in first out. What if 
>> we store access time on InodeCache.__getitem__, and the inode to be removed 
>> is the most old accessed one? This solution should reduce a lot (but not 
>> eliminate) the race condition. What do you think?
>> 
>> Finally : I tried to reproduce locally, with a unencrypted, uncompressed 
>> local backend, with mass parralel attribute changes, but I was not able to 
>> reproduce the bug. 
>> 
>> Thanks,
>> Nicolas Deschildre
>> 
>> On Wednesday, September 9, 2020 at 5:50:19 PM UTC+2 dan...@jagszent.de wrote:
>>> Hello Nicolas,
>>>  
>>> 
 
> S3QL somehow manages to delete/garbage collects an _Inode object that is 
> dirty (i.e. has an attribute modification that is not persisted in the 
> S3QL database) 
 
 So, if I understand correctly, since it is a pending modification on a now 
 deleted inode, this is not really a problem, right? Said otherwise, the 
 filesystem is not corrupted? [...]
>>> 
>>> No, the inode/file does not have to be deleted. There is a pending metadata 
>>> modification (access time, modification time, file size, file mode) that 
>>> should have been persisted (written in the Sqlite-Database) but it did not 
>>> made it in the database.
>>> File data is not corrupted, but some metadata of your files might not be in 
>>> the correct state.
>>> 
>>> 
> 

> --
> You received this message because you are subscribed to the Google Groups 
> "s3ql" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to s3ql+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/s3ql/e1882d14-f582-42ce-8468-1d07d0eada3cn%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/c482cb9e-a295-45d6-b4c8-8137029da6f3%40www.fastmail.com.


Re: [s3ql] Recovering Deleted Files After rynsc --delete

2020-09-10 Thread Nikolaus Rath
You might be interested in s3qlcp and the expire-backups script 

Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



On Thu, 10 Sep 2020, at 17:47, toffer...@gmail.com wrote:
> Okay. Thanks so much for the response. Looks like I need to do some more 
> research to adjust my future backup plans. I had only considered whole drive 
> failure rather than just sporadic filesystem error and user error.
> 
> On Wednesday, September 9, 2020 at 12:12:44 PM UTC-7 niko...@rath.org wrote:
>> On Sep 09 2020, "toffer...@gmail.com"  wrote: 
>> > I use rysnc to sync some local files with a S3QL mount on DreamObjects. On 
>> > the last run of rsync, it deleted two files from my S3QL mount that I 
>> > assume had been deleted from the local drive due to filesystem errors on 
>> > the local drive. 
>> > 
>> > I am using S3QL and have not unmounted the S3QL filesystem since this 
>> > occurred. Although there are some empty folders in /.Trash-1000 folder on 
>> > the S3QL these deleted files are not. lost+found on the S3QL mount is also 
>> > empty. 
>> 
>> Yeah, these directories are only used by applications. I don't think 
>> rsync uses either of them. 
>> 
>> > Is there any way to recover the deleted files? Possibly in the local 
>> > cache? 
>> 
>> Nope, sorry. 
>> 
>> Best, 
>> -Nikolaus 
>> 
>> -- 
>> GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F 
>> 
>> »Time flies like an arrow, fruit flies like a Banana.« 
> 

> --
> You received this message because you are subscribed to the Google Groups 
> "s3ql" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to s3ql+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/s3ql/279435f0-a1eb-4b58-99e4-5565cd2a1a0en%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/63b41527-667f-4322-8248-ce1d77bcad48%40www.fastmail.com.


Re: [s3ql] Recovering Deleted Files After rynsc --delete

2020-09-09 Thread Nikolaus Rath
On Sep 09 2020, "toffer...@gmail.com"  wrote:
> I use rysnc to sync some local files with a S3QL mount on DreamObjects. On 
> the last run of rsync, it deleted two files from my S3QL mount that I 
> assume had been deleted from the local drive due to filesystem errors on 
> the local drive.
>
> I am using S3QL and have not unmounted the S3QL filesystem since this 
> occurred. Although there are some empty folders in /.Trash-1000 folder on 
> the S3QL these deleted files are not. lost+found on the S3QL mount is also 
> empty.

Yeah, these directories are only used by applications. I don't think
rsync uses either of them.

> Is there any way to recover the deleted files? Possibly in the local
> cache?

Nope, sorry.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87sgbqbw7t.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.5.1 has been released

2020-09-04 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.5.1.

>From the changelog:

2020-09-04, S3QL 3.5.1

  * `s3qlctrl upload-meta` now works properly again (previously, only the first
invocation had an effect).

  * The O_TRUNC flag of the open() syscall is no longer silently ignored, but
the open() call fails with ENOTSUP. A proper implementation will hopefully
follow soon.


The following people have contributed code to this release:

Nikolaus Rath 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87imctn5vf.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] Errors during test, and operation

2020-09-03 Thread Nikolaus Rath
On Sep 03 2020, Esteban Fonseca  wrote:
> *1. Build*
>
> [root@hostname]# python3.6 setup.py build_ext --inplace
> running build_ext
> building 's3ql.deltadump' extension
> creating build
> creating build/temp.linux-x86_64-3.6
> creating build/temp.linux-x86_64-3.6/src
> creating build/temp.linux-x86_64-3.6/src/s3ql
> gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic
> -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python3.6m -c
> src/s3ql/deltadump.c -o build/temp.linux-x86_64-3.6/src/s3ql/deltadump.o
> -Wall -Wextra -Wconversion -Wsign-compare
> src/s3ql/deltadump.c: In function ‘__pyx_pw_4s3ql_9deltadump_5load_table’:
> src/s3ql/deltadump.c:7177:48: warning: ‘__pyx_v_int64’ may be used
> uninitialized in this function [-Wmaybe-uninitialized]
>  __pyx_v_int64 = (__pyx_v_int64 +
> ((__pyx_cur_scope->__pyx_v_col_args[__pyx_v_j]) +
> (__pyx_cur_scope->__pyx_v_int64_prev[__pyx_v_j])));
> ^
> src/s3ql/deltadump.c:5904:11: note: ‘__pyx_v_int64’ was declared here
>int64_t __pyx_v_int64;
>^
> src/s3ql/deltadump.c:11993:13: warning: ‘__pyx_v_row_count’ may be used
> uninitialized in this function [-Wmaybe-uninitialized]
>  return PyInt_FromLong((long) value);
>  ^
> src/s3ql/deltadump.c:5903:11: note: ‘__pyx_v_row_count’ was declared here
>int64_t __pyx_v_row_count;
>^
> creating build/lib.linux-x86_64-3.6
> creating build/lib.linux-x86_64-3.6/s3ql
> gcc -pthread -shared -Wl,-z,relro -g
> build/temp.linux-x86_64-3.6/src/s3ql/deltadump.o -L/usr/lib64 -lpython3.6m
> -o build/lib.linux-x86_64-3.6/s3ql/deltadump.cpython-36m-x86_64-linux-gnu.so
> -lsqlite3
> copying build/lib.linux-x86_64-3.6/s3ql/
> deltadump.cpython-36m-x86_64-linux-gnu.so -> src/s3ql

Nothing wrong here.


> *2. Test*
>
> [root@hostname]# python3.6 -m pytest tests/
[...]
> Traceback (most recent call last):
>   File "/usr/src/s3ql-3.5.0/tests/pytest_checklogs.py", line 137, in
> pytest_runtest_setup
> check_output(item)
>   File "/usr/src/s3ql-3.5.0/tests/pytest_checklogs.py", line 133, in
> check_output
> check_test_log(item.catch_log_handler)
> AttributeError: 'Function' object has no attribute 'catch_log_handler'


Maybe we are not compatible with pytest 6.x - can you try an earlier
version (say 5.x)?
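
For example (assuming a pip-based install):

$ python3.6 -m pip install 'pytest<6'
$ python3.6 -m pytest tests/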

> *3. Mount (or any other operation)*
>
> "/usr/local/lib64/python3.6/site-packages/s3ql-3.5.0-py3.6-linux-x86_64.egg/s3ql/mount.py",
> line 18, in 
> from .metadata import (download_metadata, upload_metadata,
> dump_and_upload_metadata,
>   File
> "/usr/local/lib64/python3.6/site-packages/s3ql-3.5.0-py3.6-linux-x86_64.egg/s3ql/metadata.py",
> line 13, in 
> from .deltadump import INTEGER, BLOB, dump_table, load_table
> ImportError: cannot import name 'INTEGER'

This one confuses me, but I'd focus on the tests first.


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87pn73uye7.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.5.0 has been released

2020-07-15 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.5.0.

>From the changelog:

2020-07-15, S3QL 3.5.0

  * S3QL now requires version 3.x rather than 1.x of the pyfuse3 module.

  * Triggering a metadata upload (either through ``s3qlctrl upload-meta`` or the
``--metadata-upload-interval``) works again (since release 3.4.0, this 
would continue
to upload metadata continuously after the first trigger).

  * There is new backend for accessing Backblaze B2.


The following people have contributed code to this release:

amvoegeli <55159422+amvoeg...@users.noreply.github.com>
Daniel Jagszent 
Nikolaus Rath 
Paul Tirk 
r0ps3c <15878019+r0p...@users.noreply.github.com>


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/871rlcg6qu.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] Re: Partial block caching implementation

2020-07-15 Thread Nikolaus Rath
On Jul 11 2020, Ivan Shapovalov  wrote:
> On 2020-07-11 at 12:13 +0100, Nikolaus Rath
> wrote:
>> On Jul 11 2020, Ivan Shapovalov  wrote:
>> > On 2020-07-10 at 19:54 +0100, Nikolaus Rath wrote:
>> > > On Jul 10 2020, Daniel Jagszent  wrote:
>> > > > > Ah yes, compression and probably encryption will indeed preclude any 
>> > > > > sort of
>> > > > > partial block caching. An implementation will have to be limited to 
>> > > > > plain
>> > > > > uncompressed blocks, which is okay for my use- case though (borg 
>> > > > > provides its
>> > > > > own encryption and compression anyway).  [...]
>> > > > Compression and encryption are integral parts of S3QL and I would 
>> > > > argue that
>> > > > disabling them is only an edge case.
>> > >  If I were to write S3QL from scratch, I would probably not support this 
>> > > at all,
>> > > right. However, since the feature is present, I think we ought to 
>> > > consider it fully
>> > > supported ("edge case" makes it sound as if this isn't the case).
>> > > 
>> > > 
>> > > > I might be wrong but I think Nikolaus (maintainer of S3QL) will not 
>> > > > accept such a
>> > > > huge change into S3QL that is only beneficial for an edge case.
>> > >  Never say never, but the bar is certainly high here. I think there are 
>> > > more
>> > > promising avenues to explore - eg. storing the compressed/uncompressed 
>> > > offset
>> > > mapping to make partial retrieval work for all cases.
>> >  Hmm, I'm not sure how's that supposed to work.
>> > 
>> > AFAICS, s3ql uses "solid compression", meaning that the entire block is 
>> > compressed at
>> > once. It is generally impossible to extract a specific range of 
>> > uncompressed data
>> > without decompressing the whole stream.[1]
>>
>>  At least bzip2 always works in blocks, IIRC blocks are at most 900 kB (for 
>> highest
>> compression settings). I wouldn't be surprised if the same holds for LZMA.
>
> True, I forgot that bzip2 is inherently block-based. Not sure about LZMA or 
> gzip, but
> there is still a significant obstacle: how would you extract this information 
> from the
> compression libraries?

No need to extract it, S3QL hands data to the compression library in
smaller chunks (IIRC 128 kB), so we just have to keep track of what goes
into and comes out of the compression library.
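
Roughly along these lines - just a sketch of the idea, not S3QL's actual code
(chunk size and data layout are assumptions):

    import bz2

    CHUNK = 128 * 1024  # assumed chunk size, see above

    def compress_with_offsets(data):
        # Compress chunk-wise and record, for each uncompressed chunk offset,
        # how much compressed output had been produced at that point. The
        # compressor may buffer internally, so these are markers rather than
        # exact block boundaries.
        comp = bz2.BZ2Compressor()
        mapping = []                 # (uncompressed_offset, compressed_offset)
        out = bytearray()
        for off in range(0, len(data), CHUNK):
            mapping.append((off, len(out)))
            out += comp.compress(data[off:off + CHUNK])
        out += comp.flush()
        return bytes(out), mapping

The mapping is just two integers per chunk, so it could be stored with each
object's metadata without blowing up the SQLite table.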


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/874kq9f5o7.fsf%40vostro.rath.org.


Re: [s3ql] Re: Partial block caching implementation

2020-07-11 Thread Nikolaus Rath
On Jul 11 2020, Ivan Shapovalov  wrote:
> On 2020-07-10 at 19:54 +0100, Nikolaus Rath wrote:
>> On Jul 10 2020, Daniel Jagszent  wrote:
>> > > Ah yes, compression and probably encryption will indeed preclude
>> > > any sort of partial block caching. An implementation will have to
>> > > be limited to plain uncompressed blocks, which is okay for my
>> > > use- case though (borg provides its own encryption and
>> > > compression anyway).  [...]
>> > Compression and encryption are integral parts of S3QL and I would
>> > argue that disabling them is only an edge case.
>> 
>> If I were to write S3QL from scratch, I would probably not support
>> this at all, right. However, since the feature is present, I think we
>> ought to consider it fully supported ("edge case" makes it sound as
>> if this isn't the case).
>> 
>> 
>> > I might be wrong but I think Nikolaus (maintainer of S3QL) will not
>> > accept such a huge change into S3QL that is only beneficial for an
>> > edge
>> > case.
>> 
>> Never say never, but the bar is certainly high here. I think there
>> are
>> more promising avenues to explore - eg. storing the
>> compressed/uncompressed offset mapping to make partial retrieval work
>> for all cases.
>
> Hmm, I'm not sure how's that supposed to work.
>
> AFAICS, s3ql uses "solid compression", meaning that the entire block is
> compressed at once. It is generally impossible to extract a specific
> range of uncompressed data without decompressing the whole stream.[1]

At least bzip2 always works in blocks, IIRC blocks are at most 900 kB
(for highest compression settings). I wouldn't be surprised if the same
holds for LZMA.

We could track the size of each compressed block, and store it as part
of the metadata of the object (so it doesn't blow-up the SQLite table).

> Encryption does not pose this kind of existential problem — AES is used
> in CTR mode, which theoretically permits random-access decryption — but
> the crypto library in use, python-cryptography, doesn't seem to permit
> this sort of trickery.

Worst case you can feed X bytes of garbage into the decrypter and then
start with the partial block - with CTR you should get the right
output.
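
A sketch of that trick with the cryptography library (the key/nonce handling
here is made up - S3QL's actual object format will differ):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def decrypt_from_offset(key, nonce, ciphertext_tail, offset):
        # 'offset' is the number of ciphertext bytes that were not downloaded.
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        dec.update(b'\0' * offset)           # discarded, just advances the keystream
        return dec.update(ciphertext_tail)   # plaintext of the partial block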

Best,
Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87pn92nw27.fsf%40thinkpad.rath.org.


Re: [s3ql] Re: Partial block caching implementation

2020-07-10 Thread Nikolaus Rath
On Jul 10 2020, Ivan “intelfx” Shapovalov  wrote:
>> 10 июля 2020 г., в 21:51, Nikolaus Rath  написал(а):
>> 
>> On Jul 10 2020, Ivan Shapovalov  wrote:
>>> (I will probably need a fork anyway, as Nikolaus has apparently
>>> rejected a specific optimization in the B2 backend, absence of which
>>> makes my s3ql hit a certain API rate limit very often.)
>> 
>> Hu? S3QL does not have a B2 backend at all, so I don't think I could
>> have rejected optimizations for it.
>
> Then how am I using it? :)
> https://github.com/s3ql/s3ql/pull/116

I stand corrected... I guess I didn't do a release since then, so the
documentation isn't updated yet. Apologies.

That said, my point about the optimization stands, I do not remember
rejecting anything here. Do you have a link for that too? :-)

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87sgdznq5f.fsf%40thinkpad.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.4.1 has been released

2020-05-08 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.4.1.

>From the changelog:

2020-05-08, S3QL 3.4.1

  * Fixed mount.s3ql "NoneType can't be used in 'await' expression" crash.

The following people have contributed code to this release:

Daniel Jagszent 
Nikolaus Rath 
vthriller 
vthriller 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/877dxm4lb8.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] [question] how to fix corrutped files returned by verify

2020-04-25 Thread Nikolaus Rath
On Sat, 25 Apr 2020, at 16:51, Marcin Ciesielski wrote:
> Hi,

> is there a way to fix corrupted files returned by s3ql_verify?
> I still might have the source files that were backed up to s3ql.

Did you read the documentation 
(http://www.rath.org/s3ql-docs/fsck.html#detecting-and-handling-backend-data-corruption
 in particular)?

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/85aa04e9-dbeb-4b30-b9da-fae76d85c13a%40www.fastmail.com.


Re: [s3ql] Re: Consistency failure during fsck after downloading old metadata

2020-04-10 Thread Nikolaus Rath
On Apr 09 2020, Rabid Mutant  wrote:
> On Friday, April 10, 2020 at 4:09:01 AM UTC+10, Nikolaus Rath wrote:
>>
>>
>> My guess is that it is simple corruption of the data. The original email 
>> said that the server crashed. This means that the SQLite database can 
>> get corrupted. 
>>
>
> Looking through snapshots of the database, it was corrupted 2 hours before 
> the server was shut down (didn't crash, but an s3ql session was in progress 
> at shutdown). The corruption was, I think, due to the device being full 
> during an unmount, and SQLite not being able to rollback properly. This 
> seems to fit with the history pretty well (mount.log showing device full 
> errors, database snapshots being good before those errors, corrupt after).
>
> Obviously I am rethinking the hourly mount/rsync/unmount cycle since it's 
> harder to ensure unmount at system shutdown, and have not moved everything 
> to a drive with more space.

What makes it hard? You probably just have to increase a timeout
somewhere...


> Unfortunately, I have a lot of s3ql snapshots, which means I have a huge DB 
> which takes about 5 minutes to sync to amazon; that time would be added to 
> any shutdown so permanently mounting it is not a clear win.

Shutdown only requires a metadata upload if the data has changed since
the last upload (which you can trigger with s3qlctrl without needing to
unmount), so it's not always slow.
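
i.e. something like this shortly before shutting down (the mountpoint is a
placeholder):

$ s3qlctrl upload-meta /path/to/mountpoint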


> If I do a lazy dismount, then shutdown, am I correct an fsck will still be 
> needed? I guess the solution there would to always do an fsck before mount 
> after system boot.

I'd very much advise against crashing the filesystem by default. I'd
just ensure that shutdown is blocked until the file system is unmounted,
no matter if you mount temporarily or permanently.
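
If the mount is managed by a systemd unit, that usually just means giving the
unit enough time to stop, e.g. with a drop-in like this (unit name and value
are placeholders):

# /etc/systemd/system/s3ql@.service.d/override.conf
[Service]
TimeoutStopSec=30min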


Best,
-Nikolaus
-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/871rovpeso.fsf%40vostro.rath.org.


Re: [s3ql] Re: Consistency failure during fsck after downloading old metadata

2020-04-09 Thread Nikolaus Rath
On Apr 09 2020, Daniel Jagszent  wrote:
> Hi,
>
>> [...] I am not particularly concerned with losing 2 hours, but I am
>> very interested to know if this is the correct approach to have used,
>> or if there are better approaches.
>
> I would probably have spend some time to try to repair the local
> metadata. It's a SQLite  database so
> installing the sqlite CLI and opening the database up with that tool can
> help most of the time. Downloading the metadata backups should be the
> very last resort.
> PS: do you upload metadata backups every hour (i.e. you changed
> --metadata-upload-interval)? Normally they get uploaded every 24 hours
> so you would loose up to two days, not two hours.
>
> The error from fsck (apsw.ConstraintError: UNIQUE constraint failed:
> contents.parent_inode, contents.name_id) means that you somehow had two
> files in a directory with the same name. I have no clue how that could
> have happened,

My guess is that it is simple corruption of the data. The original email
said that the server crashed. This means that the SQLite database can
get corrupted.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87imi8o7l6.fsf%40vostro.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.4.0 has been released

2020-03-19 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.4.0.

>From the changelog:

2020-03-19, S3QL 3.4.0

  * There have been significant changes in the internal implementation. 
Asynchronous I/O
is now used in favor of threads in many places, and libfuse 3.x is used 
instead of
libfuse 2.x.

  * S3QL now uses kernel-side writeback caching, which should significantly 
improve
write performance for small block sizes.

  * The dependency on the llfuse Python module has been dropped. A dependency 
on the `trio
<https://github.com/python-trio/trio>`_ and `pyfuse3
<https://github.com/libfuse/pyfuse3/>`_ modules has been added instead.

The following people have contributed code to this release:

Daniel Jagszent 
Ionuț Ciocîrlan 
Nicolas Gif 
Nikolaus Rath 
Tor Krill 
Viktor Szépe 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87mu8ckqqk.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] Problems upgrading an old file system

2020-03-08 Thread Nikolaus Rath
On Mar 08 2020, David Given  wrote:
> On Sat, 7 Mar 2020 at 13:54, Nikolaus Rath  wrote:
> [...]
>
>> > I'm not sure the bucket is locked down; later versions appear to read
>> files
>> > from the bucket fine (but then fail with object format mismatches).
>>
>> S3QL switched from using v3 authentication to using v4 authentication
>> for Amazon S3 at some point. Maybe AWS no longer supports v3?
>>
>
> I verified the credentials with both the aws tool and s3fs. I also tried
> creating brand new credentials, and they behave in precisely the same way.
> So I'm confident it's not a credential problem.

Right you are. There is nothing wrong with your credentials; what has changed
is the way in which S3QL uses them to authenticate you to AWS.

>
> s3fs was very unhappy about mounting a bucket with dots in the name. I know
> s3ql used to have this problem, but I thought it'd been fixed. If I rename
> my bucket, will s3ql still mount it, or are there internal references to
> the bucket name in the metadata?

I don't think there are, but I also don't think that renaming would help.

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/871rq3w2i9.fsf%40vostro.rath.org.


Re: [s3ql] Problems upgrading an old file system

2020-03-07 Thread Nikolaus Rath
On Mar 06 2020, David Given  wrote:
> s3ql.backends.s3c.S3Error: AllAccessDisabled: All access to this object has
> been disabled
> ---snip---
>
> I'm not sure the bucket is locked down; later versions appear to read files
> from the bucket fine (but then fail with object format mismatches).

S3QL switched from using v3 authentication to using v4 authentication
for Amazon S3 at some point. Maybe AWS no longer supports v3?


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/877dzwpbon.fsf%40vostro.rath.org.


Re: [s3ql] Keystone V3 on Python 3.4

2019-12-26 Thread Nikolaus Rath
On Dec 26 2019, Viktor Szépe  wrote:
> Found a link about this problem 
> https://github.com/agronholm/pythonfutures/issues/85

Well, yes, but all of this seems beyond S3QL's control, doesn't it?

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87h81naw32.fsf%40vostro.rath.org.


Re: [s3ql] Keystone V3 on Python 3.4

2019-12-26 Thread Nikolaus Rath
On Dec 24 2019, Viktor Szépe  wrote:
> Few servers run Debian Jessie with Python 3.4
>
> Do I have any chance to use Keystone V3 as OVH object storage will 
> deactivate V2 in March?
>
>   File 
> "/usr/local/lib/python3.4/dist-packages/concurrent/futures/_base.py", line 
> 357
> raise type(self._exception), self._exception, self._traceback
>^
> SyntaxError: invalid syntax

This is a syntax error in the concurrent.futures package, I don't think
there's anything that S3QL can do about it...

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87k16jax0f.fsf%40vostro.rath.org.


Re: [s3ql] Meta key data is missing

2019-11-12 Thread Nikolaus Rath
On Nov 11 2019, Brian Pribis  wrote:
> We had a very old version of s3ql installed.  1.17.  I've been able to 
> incrementally upgrade to 2.12 without any problems.  But trying to upgrade 
> to 2.13 fails.
> The output for upgrading looks like:
[..]
> Upgrading from revision 21 to 22...
> ..processed 146/7371 objects (2.0%, 0 bytes rewritten)..Uncaught top-level 
[...]
> meta_new['data'] = meta['data']
>   File 
> "/usr/local/lib/python3.4/dist-packages/dugong-3.7.5-py3.4.egg/dugong/__init__.py",
>  
> line 1637, in __getitem__
> return self._store[key.lower()][1]
> KeyError: 'data'

Looks like there is an object stored that doesn't have the expected
format. It's hard to say more than that. Maybe modify the code to print
the object name? If it's just one object, you may get away with just
removing that. If it affects multiple objects, then something else is
wrong..

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87zhh1i08i.fsf%40thinkpad.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.3.2 has been released

2019-10-20 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.3.2.

>From the changelog:

2019-10-20, S3QL 3.3.2

  * Fixed occasional crashes with "KeyError" when using Google Storage
backend.

  * Switched to semantic versioning. An increase in the major (first) version 
number will
indicate a backwards incompatible change (eg a change in the file system 
revision), a
change in the minor version (the second element) indicates new features, 
and a change
in the third element indicates bugfixes.


The following people have contributed code to this release:

Daniel Jagszent 
Nikolaus Rath 

(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases.

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87pnirn0u7.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] advice on stalled upgrade?

2019-09-18 Thread Nikolaus Rath
On Sep 15 2019, "'Joseph Maher' via s3ql"  wrote:
> Thanks for s3ql!
>
> Just upgraded to debian buster, and am running an upgrade on an s3ql 
> filesystem:
>
> root@nsxx:~# s3qladm --authfile=/root/.s3ql/authinfo2 --backend-options 
> tcp-timeout=200 --cachedir=/mnt/backup/cache/s3ql/ upgrade 
[...]
> Upgrading from revision 23 to 24...
> ..processed 6976175/6976177 objects (99%)..
>
>
> However, it's now been stuck on the last two objects for several hours, 
> strace gives me:
>
> root@nsxx:~# strace -p 16678
> strace: Process 16678 attached
> futex(0x7f7e6c000d50, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, 
> NULL, FUTEX_BITSET_MATCH_ANY

That is probably just looking at the main thread which is doing nothing
but waiting for workers to complete.

> Do I need to crtl-C it?  It's just that last time I did that it seemed to 
> download all the metadata again, and though it was a bit faster processing 
> the objects on the second run , it still takes days rather than
> hours...

What do you mean by "last time"? Did you attempt this specific upgrade
before, or was that from an older file system revision? Unless you
have tried this specific one (23->24), please try it one more time
before we attempt more debugging.



Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/878sqmja1g.fsf%40thinkpad.rath.org.


[s3ql] [ANNOUNCE] S3QL 3.3 has been released

2019-09-08 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 3.3.

>From the changelog:

2019-09-08, S3QL 3.3

  * Added support for Keystone V3 authentication.

The following people have contributed code to this release:

Erick Brown 
Erick Brown 
Nikolaus Rath 


(The full list of contributors is available in the AUTHORS file).

The release is available for download from
https://github.com/s3ql/s3ql/releases.

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://github.com/s3ql/s3ql/issues).

Best,
-Nikolaus


-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87woeij0ra.fsf%40vostro.rath.org.


signature.asc
Description: PGP signature


Re: [s3ql] clone_fs exception / old metadata files

2019-08-02 Thread Nikolaus Rath
On Aug 01 2019, Shannon Dealy  wrote:
> Hello Nikolaus,
>
> I am trying to copy a large S3QL file system from the "local" backend to the 
> "s3c" backend
> using "clone_fs.py". I made a small test file system to try it out which 
> worked fine, but
> with the real file system it crashes immediately
> with the following exception:
>
>s3ql.backends.common.CorruptedObjectError: Invalid object header:
>b'\x80\x02}q\x00(X\x0b\x00'
>
> The file name it was attempting to process was:
>
>s3ql_metadata_bak_5_pre21
>
> I assume from the "_pre21" suffix in the file name and the 2014 date stamp 
> that this is an
> old metadata file backup made while upgrading to version 21 of the file 
> system. If my
> assumption is correct, then presumably the exception
> was caused by the fact that this file is from an old/incompatible version of 
> the file
> system.

That's right.

> Looking at the top level directory for the S3QL file system, there are a 
> number of
> other files with not only the "_pre21" suffix, but with names that include
> "_pre2.13_metadata_" as well as some with "#" in the name such as these:
>
>s3ql_pre2.13_metadata
>s3ql_metadata_bak_2#23037-140201956722432
>
> This leads me to the following questions:
>
> 1 - Am I correct in assuming that all of these files can/should be deleted?
> Presumably this would fix my clone_fs problem.

Yes.


> 2 - Should there be (or is there already) some way of listing and/or cleaning
> up cruft files in the S3QL file system metadata directory.

There is no tool, since there should be almost none of these files.

The files with '#' in there should not exist unless mount.s3ql is
hard-crashed (SIGKILL or power cycle). The _pre files were only created
during one upgrade process. 

> 3 - Assuming I am correct and all of the above files should be discarded,
> perhaps some documentation should be added to the clone_fs program about
> this issue (not sure if this affects other programs). Ideally it would
> actually recognize that these files can be skipped, but I know that more
> work is the last thing you need.

Yeah, it'd be a good idea for clone_fs to just skip over the files. Pull
requests welcome :-).
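
For anyone who wants to pick this up: the check itself is trivial, something
like this (a rough sketch only - the patterns are guesses based on the file
names discussed above):

    def is_stale_metadata(key):
        # leftover metadata from old upgrades or hard-crashed mounts
        return '_pre' in key or '#' in key

clone_fs would then just skip keys for which this returns True.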


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/s3ql/87v9vfq4wr.fsf%40thinkpad.rath.org.

