Re: [s3ql] Upgrade from S3QL 1.15 to 2.7

2014-02-27 Thread Nikolaus Rath
On 02/27/2014 09:04 AM, Dan Johansson wrote:
 One question though, can I set the --no-ssl globally in some
 config-file? That way I do not have to update my scripts that implement
 s3ql.

No, that's not possible at this time. Sorry.

Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Mount.s3ql hangs after 403 responses

2014-03-01 Thread Nikolaus Rath
Nicola Cocchiaro nicola.cocchi...@gmail.com writes:
 Nikolaus,

 A while ago we had discussed an llfuse exception in a thread called 
 "strange llfuse exception" that was not being correctly formed due to a 
 Cython bug. With the workaround in llfuse 0.40 the issue with the exception 
 was resolved, but the same triggering event of Google Storage returning 403 
 intermittently still triggers an exception that results in bad behavior. 
 Specifically, after an AccessDenied exception (or two as in the logs 
 below), mount.s3ql hangs and with it all file system operations also hang 
 (e.g., accessing the file system extended attributes or running the 'ls' 
 and 'df' commands).
[...]

 This triggers another AccessDenied exception, but that's where the log 
 ends. At this point S3ql is no longer responsive, 'ls' and 'df' hang, and 
 any other file system operation also seems to hang. This also happened 
 consistently with all S3ql processes I had running on other boxes that 
 happened to be uploading data in the time window when Google intermittently 
 returned 403.

Thanks for the report. I'll try to reason out what's happening
here. Could you please report this issue at
https://bitbucket.org/nikratio/s3ql/issues to make sure it doesn't get
lost?

Also, if you can reproduce this issue (or encounter it again) and
mount.s3ql hangs, could you please try to obtain a Python stacktrace as
explained on
https://bitbucket.org/nikratio/s3ql/wiki/Providing%20Debugging%20Info?
That'd make it much easier to diagnose what's going on.


 Question is, is this due to the timing of those (repeated) exceptions and 
 how S3ql handles them, or a FUSE bug?

No, this is probably an S3QL bug.

Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Possible to run S3QL on Debian Wheezy (Python 3.2)?

2014-03-11 Thread Nikolaus Rath
Tor Krill tor.kr...@gmail.com writes:
 Hi all,

 I really would like to run S3Ql 2.x on Debian Wheezy. Is this
 possible?

Yes, you just need to install Python 3.3.

 Looking at the PPA for Ubuntu S3QL requires Python 3.3 but Wheezy only
 has Python 3.2. I investigated the possibilities to backport Python
 3.3 to Wheezy but that seems like quite an undertaking.

Hmm. I think the last time I checked, it was enough to just recompile
the jessie packages for wheezy:

apt-get source python3.3/testing
apt-get build-dep python3.3
(cd python3*; dpkg-buildpackage -us -uc)

Have you tried that?

 So to summarize, is it possible to run S3QL 2.x with Python 3.2?  If
 not, what is the problem?

S3QL 2.x won't run on Python 3.2 without changes.

There are some missing modules (faulthandler, lzma), some exceptions
don't have dedicated names yet
(http://docs.python.org/3/whatsnew/3.3.html#pep-3151), 'yield from' does
not exist, contextlib.ExitStack is missing, hmac.compare_digest is
missing, and probably some more things.
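
(For illustration, a quick probe for some of these features -- a sketch,
not S3QL code:)

import sys

try:
    import faulthandler, lzma             # stdlib modules new in 3.3
    from contextlib import ExitStack      # new in 3.3
    from hmac import compare_digest       # new in 3.3
    ConnectionError                       # PEP 3151 exception names, new in 3.3
except (ImportError, NameError):
    sys.exit("interpreter lacks Python 3.3 features that S3QL 2.x needs")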


Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: Private message regarding: [s3ql] HTTP timeouts

2014-03-12 Thread Nikolaus Rath
Hi Nicola,

Nicola Cocchiaro nicola.cocchi...@gmail.com writes:
 The reason I originally asked was due to seeing some outbound connections 
 not completing but just hanging, until using umount.s3ql would let them 
 return with a TimeoutError (no less than 15 minutes later in all cases 
 seen). I was not able to dig much deeper at the time, but to experiment 
 more I put together a simple patch to add a configurable socket timeout to 
 all S3ql tools that may make use of it. I've attached it if you'd like to 
 consider it.

Thanks for the patch! I'm generally rather reluctant to add new
command-line options unless they are absolutely crucial. The problem is
that the number of possible configurations (and potential interaction
bugs) goes up exponentially with every option.

In this case, I am not sure I fully understand in which situation this
option is intended to be used (so I'm pretty sure that a regular user
wouldn't know when to use it either, which is always a bad sign for a
command line argument). Could you give some additional details on that?

For example, if I'm not happy with the system timeout (which seems to be
15 minutes in your case), shouldn't this be adjusted on the OS level as
well? And if not, is there really a need to make the timeout
configurable rather than having S3QL simply use an internal default?
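
(For reference, an internal default would amount to something like the
following sketch; the 30 second value is made up:)

import socket

# Hypothetical default: applies to all sockets created afterwards.
socket.setdefaulttimeout(30)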

Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: Private message regarding: [s3ql] HTTP timeouts

2014-03-13 Thread Nikolaus Rath
Nicola Cocchiaro nicola.cocchi...@gmail.com writes:
 On Tuesday, March 11, 2014 8:06:42 PM UTC-7, Nikolaus Rath wrote:

 Hi Nicola, 

 Nicola Cocchiaro writes: 
  The reason I originally asked was due to seeing some outbound connections 
  not completing but just hanging, until using umount.s3ql would let them 
  return with a TimeoutError (no less than 15 minutes later in all cases 
  seen). I was not able to dig much deeper at the time, but to experiment 
  more I put together a simple patch to add a configurable socket timeout to 
  all S3ql tools that may make use of it. I've attached it if you'd like to 
  consider it. 

 Thanks for the patch! I'm generally rather reluctant to add new 
 command-line options unless they are absolutely crucial. The problem is 
 that the number of possible configurations (and potential interaction 
 bugs) goes up exponentially with every option. 

 In this case, I am not sure I fully understand in which situation this 
 option is intended to be used (so I'm pretty sure that a regular user 
 wouldn't know when to use it either, which is always a bad sign for a 
 command line argument). Could you give some additional details on that? 

 For example, if I'm not happy with the system timeout (which seems to be 
 15 minutes in your case), shouldn't this be adjusted on the OS level as 
 well? And if not, is there really a need to make the timeout 
 configurable rather than having S3QL simply use an internal default? 



 The problem is that there was no apparent system timeout, or those 
 connections did not seem to be timing out on their own in the cases 
 observed; I did not have the means at the time to figure out why exactly 
 this would happen, but the root cause of all this was a temporary 
 malfunction on the Google Storage side. The 15 minutes come from a 
 different process which had its own timeout (15 minutes in fact) for 
 allowing S3ql to unmount; in response to the timeout firing it called 
 umount.s3ql again, and that in turn seemed to allow the connections to be 
 recognized as timed out (possibly in response to sending SIGTERM to the 
 mount.s3ql process? This was my theory after looking at the timing from a 
 number of logs but again, unfortunately I do not have 100% solid evidence 
 that this was the reason).

Hmm. I don't think this is likely. umount.s3ql does not send SIGTERM. It
sets a termination flag that is checked in the main file system
loop. So just calling umount.s3ql would not cause any pending socket
operations to terminate.
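
Schematically (a sketch, not the actual mount.s3ql code):

import threading

exit_requested = threading.Event()  # set when umount.s3ql asks for shutdown

def main_loop(process_next_request):
    # process_next_request is a hypothetical stand-in for the FUSE request
    # dispatch; a socket operation blocked inside it never returns to this
    # loop, so the termination flag is not even checked.
    while not exit_requested.is_set():
        process_next_request()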

Is there any way to reproduce the problem you had? 

 A static, internal default may be enough and in fact it helped when I
 first tried it, but more advanced users may want to adapt the timeout
 to their own use case when relying on the OS doesn't seem to help like
 in the cases observed. I understand the reluctance to add more options
 and increase complexity for all users, but I thought I'd share this
 patch for consideration, perhaps as an extra option in a set of
 clearly marked advanced options.

Understood, thanks. I don't want to rule out such an option yet, but if
we add it, at the very least there should be some documentation
explaining what exactly the option does and when it should be used. At
the moment, this seems rather unclear to me (see above).


Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] encryption + compression + deduplication?

2014-03-14 Thread Nikolaus Rath
On 03/14/2014 08:11 AM, aweber1nj wrote:
 Do all the features in the subject work well together?  That is, can I
 enable all three options and assume:
 
  1. All my data will be stored encrypted.
  2. Data is compressed before storing.
  3. Identical blocks (of original content, pre-compression and
 encryption I would assume) are only stored once (correctly
 de-duplicated)?

Yes, they all work together. The order is: (1) deduplication, (2)
compression, (3) encryption.
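
Schematically, per block (a sketch with hypothetical compress/encrypt
helpers, not the actual S3QL code):

from hashlib import sha256

def store_block(block, stored, compress, encrypt):
    digest = sha256(block).digest()  # (1) dedup keys on the plaintext block
    if digest not in stored:         # only new, unique blocks are processed
        stored[digest] = encrypt(compress(block))  # (2) compress, (3) encrypt
    return digest                    # file metadata references the digest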

Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] limiting bandwidth?

2014-03-26 Thread Nikolaus Rath
Hi,

On 03/26/2014 06:23 AM, Balázs wrote:
 Hi Niko,
 
 I was wondering if there is any way to limit / control the available
 bandwidth that is available to the s3ql backend for its upload to the
 cloud. The amount of data I have to move on a daily basis extends into the
 production hours, and other services suffer. I was hoping I can somehow
 limit the upload total bandwidth, so that the syncs can run during the
 day and finish...

Not in S3QL itself, but the Linux IP stack is your friend :-).
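
For example (untested sketch, assuming the uplink is eth0 and a 2 mbit/s
cap is wanted):

# Shape all outgoing traffic on eth0 with a token bucket filter.
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms

Note that this shapes everything on the interface; limiting only S3QL's
traffic would need additional filter rules.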

As an alternative workaround, maybe it helps to use fewer compression
threads and lzma compression at a high level? That ought to slow things
down quite a bit...

Best,
-Nikolaus
-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] s3ql quotas

2014-03-27 Thread Nikolaus Rath
Hi PA,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

PA Nilsson p...@zid.nu writes:
 A follow up question on this.

 When mounting a file system using an s3c backend, running 'df' will
 report that the file system has 1TB size. There is no such information
 coming from the backend, but it is easily made available.

 Can information on this somehow be propagated?

As far as I know, there is no such information from most
backends. Google Storage, Amazon S3, OpenStack et al all have
effectively unlimited storage. I believe only the local backend could
effectively report a capacity. Which backend do you have in mind?

But even if we got a number from the backend, it's not clear what we
should report to df. How do we take into account compression and
deduplication?


Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] CentOS 6.5 installation with failures - NameError: name 'xrange' is not defined

2014-04-01 Thread Nikolaus Rath
On 04/01/2014 07:45 AM, Randy Black wrote:
[...]
  -- Download s3ql and install
 
 cd /tmp; curl -O http://s3ql.googlecode.com/files/s3ql-1.14.tar.bz2
 tar -jxvf s3ql-1.14.tar.bz2; cd s3ql-1.14
 python3.3 ./setup.py install

If you have Python 3.3, you don't need to use the (outdated) S3QL 1.x
branch. Use the up-to-date 2.x branch instead (current version is 2.8),
and everything should be fine.
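
Something along these lines should work (sketch; adjust the URL to the
current 2.8 tarball on https://bitbucket.org/nikratio/s3ql/downloads):

cd /tmp
wget https://bitbucket.org/nikratio/s3ql/downloads/s3ql-2.8.tar.bz2
tar xjf s3ql-2.8.tar.bz2; cd s3ql-2.8
python3.3 setup.py install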


Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Test errors s3ql-2.8.1 on Centos 6.5 64bit

2014-04-13 Thread Nikolaus Rath
Cristian Manoni nethc...@gmail.com writes:
 and now the test:
 # python3 runtests.py tests 

 after some successful tests:
 tests/t2_block_cache.py:114: cache_tests.test_destroy_deadlock FAILED

 ================================ FAILURES ================================
 _____________________ cache_tests.test_destroy_deadlock _____________________

 Traceback (most recent call last):
   File "/root/s3ql/s3ql-2.8.1/tests/t2_block_cache.py", line 163, in 
 test_destroy_deadlock
     self.cache.destroy()
   File "_pytest.python", line 1009, in __exit__
     pytest.fail("DID NOT RAISE")
   File "_pytest.runner", line 456, in fail
     raise Failed(msg=msg, pytrace=pytrace)
 Failed: DID NOT RAISE
 !!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!
 =========== 1 failed, 104 passed, 97 skipped in 5.15 seconds ===========

 What is missing or what is wrong?
 Can you help me?

Hmm. Does it by any chance help if you apply the following patch?

diff --git a/tests/t2_block_cache.py b/tests/t2_block_cache.py
--- a/tests/t2_block_cache.py
+++ b/tests/t2_block_cache.py
@@ -156,6 +156,7 @@
 
         # Shutdown threads
         llfuse.lock.release()
+        time.sleep(10)
         try:
             with catch_logmsg('Unable to flush cache, no upload threads left alive',
                               level=logging.ERROR):


Thanks,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] dugong.UnsupportedResponse: No content-length and no chunked encoding

2014-04-14 Thread Nikolaus Rath
On 04/14/2014 10:06 AM, Adam Watkins wrote:
 This comes from another server running Debian 7.4 and using the S3QL
 1.11 package:
 This server _isn't_ running on my home network, and I've no reason to
 believe that its connection is particularly unreliable.
 
[...]

Thanks! Could you please file this as a bug on
https://bitbucket.org/nikratio/s3ql/issues? I'll see what I can do.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL 1.18 has been released

2014-04-26 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new maintenance release of S3QL, version 1.18.

Please note that this is only a maintenance release. Development of S3QL
takes place in the 2.x series. The 1.x releases receive only selected
bugfixes and are only maintained for older systems that do not support
Python 3.3. For systems with Python 3.3 support, using the most recent
S3QL 2.x version is strongly recommended.

From the changelog:

2014-04-26, S3QL 1.18

  * Fixed a problem with mount.s3ql crashing with `KeyError` under
some circumstances.

  * Fixed a problem with mount.s3ql (incorrectly) reporting corrupted
data for compressed blocks of some specific sizes. Many thanks to
Balázs for extensive debugging of this problem.

  * s3qlrm now also works when running as root on a file system
mounted by a regular user.

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

  »Time flies like an arrow, fruit flies like a Banana.«





[s3ql] [ANNOUNCE] S3QL 1.18.1 has been released

2014-04-28 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new maintenance release of S3QL, version 1.18.1.

Please note that this is only a maintenance release. Development of S3QL
takes place in the 2.x series. The 1.x releases receive only selected
bugfixes and are only maintained for older systems that do not support
Python 3.3. For systems with Python 3.3 support, using the most recent
S3QL 2.x version is strongly recommended.

From the changelog:

2014-04-28, S3QL 1.18.1

  * No changes in S3QL itself.

  * The S3QL 1.18 tarball accidentally included a copy of the Python
dugong module. This has been fixed in the 1.18.1 release.

2014-04-26, S3QL 1.18

  * Fixed a problem with mount.s3ql crashing with `KeyError` under
some circumstances.

  * Fixed a problem with mount.s3ql (incorrectly) reporting corrupted
data for compressed blocks of some specific sizes. Many thanks to
Balázs for extensive debugging of this problem.

  * s3qlrm now also works when running as root on a file system
mounted by a regular user.


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

  »Time flies like an arrow, fruit flies like a Banana.«







Re: [s3ql] Detect that a bucket is mounted elsewhere?

2014-05-09 Thread Nikolaus Rath
On 05/09/2014 05:55 AM, andycr...@gmail.com wrote:
 Folks,
 
 I am using s3ql-1.17 (on CentOS 5 & 6) and I have a question.
 From each system, it does fsck.s3ql to see if the bucket was previously
 formatted, and if not it does mkfs.s3ql.

This sounds dangerous. Can't you create the file system once ahead of
time? If not, why (and how) do you use fsck.s3ql for this purpose?
Wouldn't it be enough to always call mkfs.s3ql without --force?

 The issue is that if the bucket happened to be mounted on another
 system, the second one blocks forever in fsck.s3ql.

What do you mean by that? Do you mean it's waiting for input? In that
case, just redirect from /dev/null.

 Is there a way to detect that the bucket is mounted elsewhere, so I can
 avoid this?

There is no standalone program, but you should be able to just try to
mount or fsck it. If the file system is mounted elsewhere, this should
fail with an error message. Whether this is 100% reliable depends on
your backend, e.g. with Amazon S3 you are not guaranteed to always get
current data, so you can never be absolutely sure that the file system
is not mounted elsewhere.
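
For example (untested sketch, with a hypothetical bucket name):

# fsck.s3ql should fail with an error if another client holds the mount.
# Redirect stdin from /dev/null so it can never block waiting for input.
if fsck.s3ql --batch s3://mybucket < /dev/null; then
    echo "file system is consistent and (probably) not mounted elsewhere"
fi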


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Force fsck to continue?

2014-06-20 Thread Nikolaus Rath
PA Nilsson p...@zid.nu writes:
 It's a directory, not a file, and it is created when mount.s3ql 
 starts. If this directory (with its contents) disappears if you reboot 
 the system several minutes later, you have a real problem. 

 If I reboot several minutes later, the directory is there and everything 
 works. I need to force the reboot within seconds after the mount process, 
 or maybe even during it while the metadata is read/written from the
 server.

Ah, you didn't say that before. In that case there might be an easy
fix that can be implemented in S3QL. Stay tuned, I'll send you a patch
soonish.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Force fsck to continue?

2014-06-21 Thread Nikolaus Rath
PA Nilsson p...@zid.nu writes:
 So my thinking is that this is a problem that we have with our flash based 
 file system. The file is simply not yet written to flash.
 This will be running on an non maintained system with no possibility for 
 user interaction.

If there is a high chance that the system will be power-cycled without a
proper shutdown, you may want to apply the following patch as well (in
addition to the patch from my other mail). It reduces the likelihood of
metadata corruption on power loss, but also reduces performance (so
this is not going to go into the official S3QL code):

diff --git a/src/s3ql/database.py b/src/s3ql/database.py
--- a/src/s3ql/database.py
+++ b/src/s3ql/database.py
@@ -32,7 +32,7 @@
# locking_mode to EXCLUSIVE, otherwise we can't switch the locking
# mode without first disabling WAL.
'PRAGMA synchronous = OFF',
-   'PRAGMA journal_mode = OFF',
+   'PRAGMA journal_mode = NORMAL',
#'PRAGMA synchronous = NORMAL',
#'PRAGMA journal_mode = WAL',
 

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-21 Thread Nikolaus Rath
On 06/21/2014 05:38 PM, Warren Daly wrote:
 
 Could you please zip all the mount.log* and fsck.log* files and put
 them
 somewhere on the web? I'd like to see when this problem started
 appearing.
 
 http://www.invisibleagent.com/20mntfsk.tar 

$ wget http://www.invisibleagent.com/20mntfsk.tar
--2014-06-21 17:44:26--  http://www.invisibleagent.com/20mntfsk.tar
Resolving www.invisibleagent.com (www.invisibleagent.com)...
108.162.196.30, 108.162.197.30, 2400:cb00:2048:1::6ca2:c41e, ...
Connecting to www.invisibleagent.com
(www.invisibleagent.com)|108.162.196.30|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2014-06-21 17:44:27 ERROR 403: Forbidden.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] memory usage and --compress options

2014-07-01 Thread Nikolaus Rath
On 06/30/2014 10:22 PM, Brice Burgess wrote:
 I went ahead and used the --compress none flag to disable compression. I
 had to re-create the fileksystem to get it working (as it was previously
 mounted with LZMA compression, and thus rightly retains this setting
 upon remounting it).

It should not. What makes you think it retained the setting? Mounting
the (existing) file system with '--compress <alg>' should change the
compression of all new objects. There is no need to worry about
decompressing the old LZMA compressed objects, that only takes a small
amount of memory.

 Can anyone foresee issues running with --compress none? Obviously traffic
 is increased; but memory (and CPU) is tight on these small VMs. Would
 bzip/zlib be a better option?

Yes. Both require orders of magnitude less memory than lzma, so I'd
definitely try one of those instead of not compressing at all.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] ports used by s3ql for firewall?

2014-07-11 Thread Nikolaus Rath
On 07/11/2014 09:36 AM, Andy Cress wrote:
 Which ports do I need to make sure are open in a firewall in order to
 use s3ql mount points?

S3QL uses only outgoing connections. By default, it uses only TCP ports
80 and 443 (for HTTP and HTTP via TLS). If you specify a different port
number in your https_proxy environment variable, or in the storage URL
for your backend, S3QL will obviously use that port as well.
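
With iptables, for example, outgoing rules like these should suffice
(sketch; adapt interface and policy to your setup):

iptables -A OUTPUT -p tcp --dport 80  -j ACCEPT   # HTTP
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT   # HTTP via TLS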


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] High CPU usage when writing large files

2014-07-16 Thread Nikolaus Rath
Jimmy Tanzil jimmy.tan...@ichthysmedia.com writes:
 I am having high CPU load issues when writing large files (above 5 GB) into 
 the mounted s3ql filesystem.

Could you quantify that? How do you measure it, and how does it depend
on file size? Is there really a jump at 5 GB, or is it rather a smooth
increase?


 My mount command: mount.s3ql --compress none --debug all --allow-other 
 s3://bucket /backup15

Using --debug all is certainly a way to eat up a lot of CPU time. But I
assume the problem also happens with debugging disabled?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] High CPU usage when writing large files

2014-07-16 Thread Nikolaus Rath
Hi Jimmy,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!


Jimmy Tanzil jimmy.tan...@ichthysmedia.com writes:
 I am having high CPU load issues when writing large files (above 5
 GB) into the mounted s3ql filesystem.

 Could you quantify that? How do you measure it, and how does it depend
 on file size? Is there really a jump at 5 GB, or is it rather a smooth
 increase?

 When writing less than 100 MB data, it takes only a few seconds and it
 finished very quick. But when the file size is larger, the CPU usage
 keeps climbing and stays there until the file write is complete.

 I created a 2 minute video to show you what I mean:
 http://youtu.be/-HMac-Hr3bw

Sorry, that's not helpful at all. Please provide quantitative data in
text form.

 The first file copied in the video is 80 MB in size
 The second file copied in the video is 5 GB in size

How long does each copy take? Do both files fit into the local cache?
Are you taking into account the time it takes to flush the cache?

 Is this normal?

Is what normal?


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Time Skewed error repeats forever

2014-07-16 Thread Nikolaus Rath
On 07/16/2014 01:33 PM, Andy Cress wrote:
 
 I encountered a client s3ql system with its system date/time set at
 about 4 hours before the  actual time.
 On that system, attempting to do fsck.s3ql to the cloud resulted in an
 infinite set of retries with these messages:
 
 Encountered RequestTimeTooSkewedError exception (RequestTimeTooSkewed: The
 difference between the request time and the current time is too large.),
 retrying call to Backend.open_read
 
 Is there a way to limit the retries on this?  If it returned, it would
 be easy to check for this condition.
 Or is there a different way to check for time skew before calling fsck.s3ql?

No, but I think S3QL should really not retry on this error at all.

Could you report a bug? https://bitbucket.org/nikratio/s3ql/issues/new


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] High CPU usage when writing large files

2014-07-16 Thread Nikolaus Rath
On 07/16/2014 01:27 PM, Jimmy Tanzil wrote:
 Is what normal?

 The CPU being utilized more than 100% during writing larger files.
 
 My concern is not the timing, the timing is excellent. 
 
 I am more concerned about the CPU showing above 100% when writing one 5
 GB file.
 
 Is there any other tricks that can be done to lower that?

I do not understand your concern. Why would you want to artificially slow
down S3QL by forcing it not to use all the available CPU cycles?

You can add a lot of sleep(1) statements to the write() operation in
fs.py. That should bring CPU utilization down to < 1%.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Enabling AWS server-side encryption in S3QL?

2014-07-20 Thread Nikolaus Rath
Mark Mielke mark.mie...@gmail.com writes:
 On Sat, Jul 19, 2014 at 11:19 PM, Nikolaus Rath nikol...@rath.org wrote:
 It has been requested
 (https://bitbucket.org/nikratio/s3ql/issue/62/add-support-for-aws-server-side-encryption)
 that I enable AWS server-side encryption in S3QL.

 I am ambivalent on the matter. On one side, there does not seem to be
 any technical drawback. On the other side, there does not seem to be
 any (significant) technical advantage either, so I'm still hesitant
 to enable this without a good reason.

 If anyone has some thoughts on the question, please chime in.

 Presuming I understand how it works... I think having AWS perform the
 encryption partially defeats the purpose of encryption.
[..]

Oh, absolutely. S3QL would continue to do client-side encryption as
before. The question is: is there any good reason to activate (or not
activate) server-side encryption *in addition* to that.

I am quite puzzled by this feature, for the same reasons you mentioned
above. It doesn't seem to add any value, but it doesn't seem to have any
drawbacks either. So why is Amazon even requiring a choice at all?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] High CPU usage when writing large files

2014-07-20 Thread Nikolaus Rath
Jimmy Tanzil jimmy.tan...@ichthysmedia.com writes:

 There is no indication of any crash (nor any other problem) in the log
 file at all. Where did you get the "File system appears to have crashed"
 message from?

 I see it when I try to do umount.s3ql /mountpoint
 Then I read on your documentation to just do umount /mountpoint which works

 I then did fsck.s3ql which completes under 1 minute, then I can mount it
 back successfully.

 I tried to unmount it in the first place because I cannot access the
 mounted file system. If I do ls or any command including s3qlstats it just
 sits there without any response.

Is the mount.s3ql process still running? (check with ps)

If so, can you obtain a stack trace
(https://bitbucket.org/nikratio/s3ql/wiki/Providing%20Debugging%20Info)?



Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] High CPU usage when writing large files

2014-07-20 Thread Nikolaus Rath
On 07/20/2014 04:06 PM, Jimmy Tanzil wrote:
 Is there an option on the mount command to utilize 80% of the available
 CPU cores?

No, and I don't think that introducing it would make sense.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Using folders in s3ql

2014-07-24 Thread Nikolaus Rath
On 07/24/2014 02:24 PM, Andy Cress wrote:
 After checking this out on both Amazon and Google, I can see that the 
 implementation is consistent, but not what I expected.
 
 A folder is created by the administrator (e.g. 'myfolder'), and the s3ql 
 
 data for mybucket/myfolder goes into blocks in the bucket root which have 
 the following naming convention:
   myfolders3ql_data_*
 
 (using s3ql-1.17)
 
 I'm curious. Why do those files go into the bucket root instead of into 
 myfolder?  

There is no such thing as folders in Amazon S3 or Google Storage.
The string "folder/file" is the name of an object, just like
"folder+file" or "folderfile". The "/" character has no special meaning
at all (see also
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html). You
can easily check that as follows:

1. Create an object "folder/file1".
2. Try to rename "folder" to "folder_new" - you will get an error, because
there is no object named "folder".

For convenience, the gsutil command and the AWS Management Console allow
you to pretend that '/' has a special meaning, but it really does not.

If you want s3ql to store its objects as "myfolder/s3ql_data_*", you need
to use "myfolder/" as the prefix, not "myfolder".
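
For example (hypothetical bucket name):

mkfs.s3ql s3://mybucket/myfolder/
mount.s3ql s3://mybucket/myfolder/ /mnt/s3ql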


Best,
Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL 2.10 has been released

2014-07-27 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 2.10.

From the changelog:

2014-07-27, S3QL 2.10

  * The internal file system revision has changed. File systems
created with S3QL 2.10 or newer are not compatible with prior S3QL
versions. To update an existing file system to the newest
revision, use the 's3qladm upgrade' command.

It is strongly recommended to run the (new) s3ql_verify command
with the --data option shortly after the upgrade. This is
necessary to ensure that the upgrade to the next (2.11) S3QL
release will run smoothly.

  * The User's Guide now contains a description of the possible
failure modes of mount.s3ql.

  * The --debug command line parameter now generates a bit less
output by default, and there is an additional --debug-module
parameter to activate additional messages.

  * When using encryption, S3QL now checks that a storage object's
key corresponds to the data stored under it.

The lack of this check in previous versions allowed an attacker
with control over the storage server to interchange blocks within
the same file system (which would have resulted in apparent data
corruption in the file system). Targeted modification of specific
files, however, would have been very unlikely, because the
interchange of blocks had to be done blindly (with the attacker
not knowing which file any block belongs to, nor what its contents
are).

Fixes https://bitbucket.org/nikratio/s3ql/issue/52/.

  * S3QL now aborts immediately instead of retrying if the
storage server reports that the local clock is skewed.

  * There is a new 's3ql_verify' command. This program retrieves and
checks every storage object to ensure that all objects are
available and have not been tampered with. In contrast to
fsck.s3ql, s3ql_verify does not trust the object list provided by
the storage server and actually attempts to download the objects
one by one.

  * S3QL now requires version 3.2 or newer of the dugong module.


As usual, the release is available for download from
https://bitbucket.org/nikratio/s3ql/downloads

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Starting with version 2.0, S3QL requires Python 3.3 or newer. For older
systems, the S3QL 1.x branch (which only requires Python 2.7) will
continue to be supported for the time being. However, development
concentrates on S3QL 2.x while the 1.x branch only receives selected
bugfixes. When possible, upgrading to S3QL 2.x is therefore strongly
recommended.


Enjoy,

   -Nikolaus


-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«











Re: [s3ql] possible to upgrade filesystem from v1.12 to 2.9

2014-07-30 Thread Nikolaus Rath
Nikolaus Rath nikol...@rath.org writes:
 This bucket was created using s3ql v1.12 so I was a little confused by the 
 'File system already at most-recent revision' output.
 Could this have anything to do with my failed attempt to upgrade it from
 1.2 to 2.9 already.

 (I assume you mean 1.12 in the last line).

 Outside of that question, I then attempted to upgrade again on my 2.9 host 
 and got the following output:

 s3qladm upgrade s3://s3ql-test2
 Enter backend login: 
 Enter backend passphrase: 
 Enter file system encryption passphrase: 
 Uncaught top-level exception:
 [..]
   File "/usr/lib/s3ql/s3ql/backends/common.py", line 612, in open_read
 metadata = self._unwrap_meta(fh.metadata)
   File "/usr/lib/s3ql/s3ql/backends/common.py", line 571, in _unwrap_meta
 raise ObjectNotEncrypted()
 s3ql.backends.common.ObjectNotEncrypted

 That is odd. I tried to reproduce this on my system, but it seems that
 S3QL 1.12 isn't compatible with the more recent rest of my system
 anymore.

Ok, I set up an old wheezy chroot, and when I try it there, upgrading
from 1.12 works fine:

(wheezy)root@vostro:~# mkfs.s3ql --version
S3QL 1.12
(wheezy)root@vostro:~# mkfs.s3ql s3://nikratio-test/upgrade/
Enter backend login: 
Enter backend password: 
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password: 
Confirm encryption password: 
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MiB of compressed metadata.
(wheezy)root@vostro:~# 

..and on the other system:

$ bin/s3qladm --version
S3QL 2.9

$ bin/s3qladm upgrade s3://nikratio-test/upgrade/
Enter file system encryption passphrase: 
Getting file system parameters..

I am about to update the file system to the newest revision.
You will not be able to access the file system with any older version
of S3QL after this operation.

You should make very sure that this command is not interrupted and
that no one else tries to mount, fsck or upgrade the file system at
the same time.


Please enter yes to continue.



Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Installation fails at self tests

2014-08-03 Thread Nikolaus Rath
Alexandre Goncalves alexandre@gmail.com writes:
 On Sunday, 3 August 2014 at 4:29:35 UTC+1, Nikolaus Rath wrote:

 On 08/02/2014 08:13 PM, Alexandre Goncalves wrote: 
  Hello, 
  
  
  I want to install S3QL in my CentOS 6.5 (64bit) box. 
  
  
  For that I did: 
  
  install Python3.4.1 (from source) 
  install setuptools (using pip3.4) 
  install PyCrypto (using pip3.4) 
  install defusedxml (using pip3.4) 
  
  install sqlite 3.8.5 from source 
  
  install apsw (using setup.py) 
 [...] 
  /root/temp/s3ql-2.10.1/src/s3ql/deltadump.cpython-34m.so: undefined 
  symbol: sqlite3_compileoption_get 

 Are you sure that you compiled both apsw and s3ql against the sqlite 
 3.8.5 library that you installed, rather than some other version that 
 may already be present on your system? 

 Try to run 'ldd 
 /root/temp/s3ql-2.10.1/src/s3ql/deltadump.cpython-34m.so' and 'ldd 
 /wherever/it/is/apsw.cpython-34m.so'. 

 The output of your commands:

 ldd /usr/local/lib/python3.4/site-packages/apsw.cpython-34m.so
 linux-vdso.so.1 => (0x7fffe33ff000)
 libpython3.4m.so.1.0 => /usr/local/lib/libpython3.4m.so.1.0 (0x7f9e8bd6c000)
 libpthread.so.0 => /lib64/libpthread.so.0 (0x7f9e8bb42000)
 libc.so.6 => /lib64/libc.so.6 (0x7f9e8b7ae000)
 libdl.so.2 => /lib64/libdl.so.2 (0x7f9e8b5aa000)
 libutil.so.1 => /lib64/libutil.so.1 (0x7f9e8b3a6000)
 libm.so.6 => /lib64/libm.so.6 (0x7f9e8b122000)
 /lib64/ld-linux-x86-64.so.2 (0x003aac40)

No mention of sqlite, so you have probably included sqlite directly in
apsw. Not elegant, but it works and ensures that you get the version you
want.

 I compiled apsw using sqlite 3.8.5, since I used:

 python3.4 setup.py fetch --all --sqlite --version=3.8.5  
 --missing-checksum-ok  build --enable-all-extensions  install test

Yeah, 'fetch' means no dynamic linking.

 BUT I suspect that I should have used the local compilation!... Correct?

Would have been nicer, but it should still work.

 ldd /root/temp/s3ql-2.10.1/src/s3ql/deltadump.cpython-34m.so
 linux-vdso.so.1 => (0x7fff88dff000)
 libpython3.4m.so.1.0 => /usr/local/lib/libpython3.4m.so.1.0 (0x7f3a798ac000)
 libsqlite3.so.0 => /usr/lib64/libsqlite3.so.0 (0x7f3a7961)
 libpthread.so.0 => /lib64/libpthread.so.0 (0x7f3a793f3000)
 libc.so.6 => /lib64/libc.so.6 (0x7f3a7905f000)
 libdl.so.2 => /lib64/libdl.so.2 (0x7f3a78e5a000)
 libutil.so.1 => /lib64/libutil.so.1 (0x7f3a78c57000)
 libm.so.6 => /lib64/libm.so.6 (0x7f3a789d3000)
 /lib64/ld-linux-x86-64.so.2 (0x003aac40)

 On s3ql, I don't know how to tell it the location of the SQLite 3.8.5, and I
 think it is picking the other version!

Yes, that's probably the problem.


You could try something like

CFLAGS="-I/sqlite_directory/include" \
LDFLAGS="-L/sqlite_directory/lib -Wl,-rpath=/sqlite_directory/lib" \
python3.4 setup.py build_ext


Best,
Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL and automount

2014-08-03 Thread Nikolaus Rath
On 08/03/2014 08:07 PM, Alexandre Goncalves wrote:
 Hello,
 
 
 I want to mount my bucket as needed, so I tried to configure autofs to
 mount the gs bucket.
 
 Tried this, without luck:
 
 auto.master:
 
 /gs-backup  /etc/auto.gs  --timeout=300
 
 
 auto.gs:
 * -fstype=s3ql,rw,allow_other gs://ideiao-bkp --authfile
 /root/scripts/.s3ql_authinfo2
 
 Note:
 
 Mounting manually, works perfectly:
 
 mount.s3ql gs://ideiao-bkp  /mnt/1 --authfile /root/scripts/.s3ql_authinfo2
 Using 10 upload threads.
 Autodetected 4034 file descriptors available for cache entries
 Using cached metadata.
 Setting cache size to 6935 MB
 Mounting filesystem...
 
 Any suggestions?

If you elaborate on what "without luck" means (maybe even include a
specific error message), someone might be able to help you.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Crash s3ql-2.9.1 in Fedora 20 packages with Swift

2014-08-06 Thread Nikolaus Rath
On 08/06/2014 04:22 PM, motomura wrote:
 I use s3ql-2.9.1 in Fedora 20 packages with OpenStack Swift (/v1) as
 backend storage.
 s3ql crash often.
 
 Below the log when s3ql is crashed.
 
 Is it a bug of s3ql?
 
 -
 2014-08-06 17:19:09.067 [pid=674, thread='Thread-6', module='root',
 fn='excepthook', line=163]: Uncaught top-level exception:
 Traceback (most recent call last):
[...]
 return method(*a, **kw)
   File "/usr/lib64/python3.3/site-packages/s3ql/backends/s3c.py", line
 758, in close
     headers=self.headers, body=self.fh)
   File "/usr/lib64/python3.3/site-packages/s3ql/backends/swift.py", line
 214, in _do_request
     shutil.copyfileobj(body, self.conn, BUFSIZE)
   File "/usr/lib64/python3.3/shutil.py", line 71, in copyfileobj
     fdst.write(buf)
   File "/usr/lib/python3.3/site-packages/dugong/__init__.py", line 592,
 in write
     eval_coroutine(self.co_write(buf))
   File "/usr/lib/python3.3/site-packages/dugong/__init__.py", line 1309,
 in eval_coroutine
     assert next(crt).poll()
   File "/usr/lib/python3.3/site-packages/dugong/__init__.py", line 604,
 in co_write
     raise StateError('No active request with pending body data')


Yes, that looks like a bug. Could you try to reproduce that with S3QL
2.10.1 and report it at https://bitbucket.org/nikratio/s3ql/?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Upgrading from revision 20 to 21...

2014-08-07 Thread Nikolaus Rath
Serge Victor sergi...@pawlowicz.name writes:
 The s3ql worked fine. After upgrading from 
 s3ql_2.9+dfsg-2~26~ubuntu14.04.1_amd64.deb to 
 s3ql_2.10.1+dfsg-1~27~ubuntu14.04.1_amd64.deb, I am trying to upgrade 
 revision, as required, which fails:
[...]
 Cycling metadata backups...
 Backing up old metadata...
 Encountered ConnectionClosed exception (connection closed unexpectedly), 
 retrying call to Backend.copy for the 3-th time...
[...]

Are you able to create, mount, modify and umount a new filesystem using
2.10.1? I'd like to narrow down if the problem is specific to the
upgrade procedure, or applies to regular usage as well.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Crash s3ql-2.9.1 in Fedora 20 packages with Swift

2014-08-09 Thread Nikolaus Rath
Hi motomura,


On 08/08/2014 01:03 AM, motomura wrote:
 s3ql crashed in version 2.10.1.
 Below the log when s3ql is crashed.
 
 -
 2014-08-08 16:55:01.509 [pid=8686, thread='Thread-6', module='root',
 fn='excepthook', line=163]: Uncaught top-level exception:
 Traceback (most recent call last):
   File
[...]


Thanks for reproducing. Could you report this at
https://bitbucket.org/nikratio/s3ql/issues so it doesn't get lost?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Unexpected server reply to copy operation (upgrading from 20 to 21)

2014-08-14 Thread Nikolaus Rath
On 08/14/2014 03:17 PM, Adam Watkins wrote:
 Upgrading from revision 20 to 21...
 Unexpected server reply to copy operation:
 200 OK
 Date: Thu, 14 Aug 2014 21:04:15 +
 Server: RestServer/1.0
 Content-Length: 189
 Content-Type: application/xml
 ETag: d7ed6a310a47b69a91e13e655245fac9
 Cache-Control: no-cache
 Connection: close
 
 <?xml version="1.0" encoding="UTF-8"?>
 <CopyObjectResult><LastModified>2014-08-14T21:04:15.000Z</LastModified>
 <ETag>&quot;d7ed6a310a47b69a91e13e655245fac9&quot;</ETag>
 </CopyObjectResult>
 
[..]
 
 I have now been able to get the upgrade to succeed (including a
 subsequent verify with --data) by patching line 359 of backends/s3c.py,
 to read:
 if root.tag == 'CopyObjectResult':
 
 The S3-compatible backend in this case is StorageQloud (
 https://www.greenqloud.com/storageqloud/ ).
 Does this look like a deviation from S3-compatibility, which I should
 report to GreenQloud?

Yes. They are not declaring the proper XML namespace (which would be
http://s3.amazonaws.com/doc/2006-03-01/) for the CopyObjectResult tag.
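
A compliant reply would declare it, i.e. look something like this (sketch
based on the values quoted above):

<CopyObjectResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <LastModified>2014-08-14T21:04:15.000Z</LastModified>
  <ETag>&quot;d7ed6a310a47b69a91e13e655245fac9&quot;</ETag>
</CopyObjectResult>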


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Problems after upgrading from 2.7 to 2.10

2014-08-24 Thread Nikolaus Rath
On 08/24/2014 07:25 AM, Dan Johansson wrote:
 Hi All,
 
 After upgrading to 2.10.1 I encounter the following error each time I
 try to do anything (fsck, s3qladm upgrade):
 
 # /usr/bin/fsck.s3ql --no-ssl --batch s3://abcdef.dmj.nu
 Using CONNECT proxy proxy.dmj.nu:8080
 Encountered ConnectionError exception (Tunnel connection failed: 403
 Forbidden), retrying call to Backend.open_read for the 3-th time...
 Encountered ConnectionError exception (Tunnel connection failed: 403
 Forbidden), retrying call to Backend.open_read for the 4-th time...
 
 Running the same command from a second machine (not yet upgraded) works
 fine:
 
 # /usr/bin/fsck.s3ql --no-ssl --batch s3://abcdef.dmj.nu
 Starting fsck of s3://test.dmj.nu
 Using cached metadata.
 Checking DB integrity...
 Creating temporary extra indices...
 --- 8< ---
 Completed fsck of s3://abcdef.dmj.nu
 
 Both machines are in the same network and use the same proxy.
 
 What's happening?

Looks like you can't connect to the proxy from the first machine. Are
you sure you are using this proxy on the second machine? Note the
conspicuous absence of the "Using CONNECT proxy proxy.dmj.nu:8080" line
in your second quoted output.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Problems after upgrading from 2.7 to 2.10

2014-08-25 Thread Nikolaus Rath
On 08/25/2014 12:58 PM, Dan Johansson wrote:
 On 24.08.2014 23:37, Nikolaus Rath wrote:
 On 08/24/2014 07:25 AM, Dan Johansson wrote:
 Hi All,

 After upgrading to 2.10.1 I encounter the following error each time I
 try to do anything (fsck, s3qladm upgrade):

 # /usr/bin/fsck.s3ql --no-ssl --batch s3://abcdef.dmj.nu
 Using CONNECT proxy proxy.dmj.nu:8080
 Encountered ConnectionError exception (Tunnel connection failed: 403
 Forbidden), retrying call to Backend.open_read for the 3-th time...
 Encountered ConnectionError exception (Tunnel connection failed: 403
 Forbidden), retrying call to Backend.open_read for the 4-th time...

 Running the same command from a second machine (not yet upgraded) works
 fine:

 # /usr/bin/fsck.s3ql --no-ssl --batch s3://abcdef.dmj.nu
 Starting fsck of s3://test.dmj.nu
 Using cached metadata.
 Checking DB integrity...
 Creating temporary extra indices...
 --- 8 ---
 Completed fsck of s3://abcdef.dmj.nu

 Both machines are in the same network and use the same proxy.

 What's happening?

 Looks like you can't connect to the proxy from the first machine. Are
 you sure you are using this proxy on the second machine? Note the
 conspicuous absence of the "Using CONNECT proxy proxy.dmj.nu:8080" line in
 your second quoted output.
 
 I can connect to the proxy allright:
 # wget www.google.com
 --2014-08-25 21:54:29--  http://www.google.com/
 Resolving proxy.dmj.nu (proxy.dmj.nu)... 192.168.1.1
 Connecting to proxy.dmj.nu (proxy.dmj.nu)|192.168.1.1|:8080... connected.
[...]

Not relevant, sorry. wget doesn't use CONNECT proxying.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] s3ql crash by read timeout of swift

2014-08-27 Thread Nikolaus Rath
On 08/27/2014 03:25 AM, motomura wrote:
 I use s3ql-2.10.1 with OpenStack Swift (/v1) as backend storage.
 s3ql crashed.
 
 Below are the logs (swift.log and mount.log) from when s3ql crashed.
 
 Could the reason for the crash be a read timeout in swift?
 
 Can I avoid this problem through a setting in s3ql or swift?
 
 s3ql server's ip address is 192.168.21.66.
 
[...]
   File
 /usr/lib64/python3.3/site-packages/s3ql-2.10.1-py3.3-linux-x86_64.egg/s3ql/backends/swift.py,
 line 232, in _do_request
 raise HTTPError(resp.status, resp.reason, resp.headers)
 s3ql.backends.s3c.HTTPError: 408 Request Timeout

The problem is that the swift server returned a 408 HTTP code, and S3QL
doesn't know what to do with it.

Probably retrying would be the right choice, but to be sure I'd really
need to have some documentation of the status codes and their meaning
that Swift may return.

For example, S3 has a comprehensive listing of status codes and the
expected response of the client at
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html. But
there is nothing comparable in the Swift documentation at
http://docs.openstack.org/api/openstack-object-storage/1.0/content/PUT_createOrReplaceObject__v1__account___container___object__storage_object_services.html.
This is why, in general terms, S3QL's Google Storage and Amazon S3
support is more robust than OpenStack/swift support.
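
As a rough illustration, a client-side retry for such temporary errors
could look like the sketch below (the set of retryable codes is an
assumption for illustration, not S3QL's actual classification):

import time

TEMPORARY = {408, 429, 500, 503}  # assumed retryable status codes

def with_retries(do_request, max_tries=5):
    # Retry temporary errors with exponential backoff.
    for attempt in range(max_tries):
        status = do_request()
        if status not in TEMPORARY:
            return status
        time.sleep(2 ** attempt)
    raise RuntimeError('giving up after %d attempts' % max_tries)

assert with_retries(lambda: 200) == 200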


If you report this issue at https://bitbucket.org/nikratio/s3ql/issues I
can add support for 408 error to S3QL. However, in order to fix this
properly it'd be great if you could contact your storage provider and
ask them if they have information about the error codes that the swift
server may return, and how the client is supposed to react to them.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL 1.19 has been released

2014-08-27 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new maintenance release of S3QL, version 1.19.

Please note that this is only a maintenance release. Development of S3QL
takes place in the 2.x series. The 1.x releases receive only selected
bugfixes and are only maintained for older systems that do not support
Python 3.3. For systems with Python 3.3 support, using the most recent
S3QL 2.x version is strongly recommended.

From the changelog:

2014-08-25, S3QL 1.19

  * SECURITY UPDATE. Fixed a remote code execution vulnerability.

For non-encrypted file systems, an attacker with control over the
communication with the storage backend or the ability to
manipulate the data stored in the backend was able to trigger
execution of arbitrary code by mount.s3ql.

Encrypted file systems were protected against this if the attacker
did not know the file system passphrase. Mounting an encrypted
file system prepared by an attacker (which is possible if the
attacker shares the file system passphrase) thus allowed the
attacker to execute arbitrary code even when using encryption.


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

  »Time flies like an arrow, fruit flies like a Banana.«





Re: [s3ql] Unexpected server reply to copy operation (upgrading from 20 to 21)

2014-09-01 Thread Nikolaus Rath
On 09/01/2014 07:54 AM, Rich B wrote:
 On Thursday, August 14, 2014 9:02:40 PM UTC-4, Nikolaus Rath wrote:
 
 On 08/14/2014 03:17 PM, Adam Watkins wrote:
  Upgrading from revision 20 to 21...
  Unexpected server reply to copy operation:
  200 OK
  Date: Thu, 14 Aug 2014 21:04:15 +
  Server: RestServer/1.0
  Content-Length: 189
  Content-Type: application/xml
  ETag: d7ed6a310a47b69a91e13e655245fac9
  Cache-Control: no-cache
  Connection: close
 
  <?xml version="1.0" encoding="UTF-8"?>
 
 <CopyObjectResult><LastModified>2014-08-14T21:04:15.000Z</LastModified>
  <ETag>"d7ed6a310a47b69a91e13e655245fac9"</ETag>
  </CopyObjectResult>
 
 
 [...]
 
  Does this look like a deviation from S3-compatibility, which I should
  report to GreenQloud?
 
 Yes. They are not declaring the proper XML namespace (which would be
  http://s3.amazonaws.com/doc/2006-03-01/)
  for the CopyObjectResult tag.
 
 Best,
 -Nikolaus
 
  
 Sorry to re-open an old thread, But I've run into a similar problem with
 Dream Host's S3 Compatible DreamObjects. When I create a filesystem,
 touch a file, and then umount it I get the following:
 
[...]
 Backing up old metadata...
 Unexpected server reply: expected XML, got:
 200 OK
 Date: Mon, 01 Sep 2014 14:45:08 +
 Server: Apache
 Transfer-Encoding: chunked
 Content-Type: binary/octet-stream
 
 
 Is this the same problem?

Almost. The Dreamhost response is different from the GreenQloud
response, but they're both not S3-like.

 If so, I'd like to report the bug to Dream Host, but the link Nikolaus
 provided above does not work, so I don't know how to describe the bug to
 them. 

That wasn't a link; "http://s3.amazonaws.com/doc/2006-03-01/" is the XML
namespace (which happens to look like a URL) that was not correctly
declared by GreenQloud.

 Could someone provide a link to (or description of) the S3 API call
 that I can point to when submitting a bug report to DH?

Here's an example for a proper response:

200 OK
Date: Mon, 01 Sep 2014 14:45:08 +
Server: Apache
Transfer-Encoding: chunked
Content-Type: text/xml; charset=utf-8

<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <LastModified>2009-10-28T22:32:00</LastModified>
   <ETag>9b2cf535f27731c974343645a3985328</ETag>
</CopyObjectResult>

Note the difference in Content-Type, and the presence of a proper
response body.

You can find more information at
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html, but
note that the examples at the end are actually incomplete (or outdated)
and do not correspond to what S3 is actually returning.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] mount.s3ql hangs on version 2.11

2014-09-03 Thread Nikolaus Rath
On 09/03/2014 01:25 PM, Ido wrote:
 Hello,
 
 I've been using s3ql for about a year now, and I'm very happy / excited
 about it!  Unfortunately, the recent upgrade seems to hang it on my system.
 
 This is what I'm getting:
 sudo mount.s3ql --cachesize=500 --authfile=[...] [s3_dest]
 [mount_root]
 Using 8 upload threads.
 Autodetected 499944 file descriptors available for cache entries
 Using cached metadata.
 Mounting filesystem...
 
 And then it just hangs there...
 
 Any ideas?

This was a deliberate change. From the changelog:

  * mount.s3ql no longer daemonizes on its own. With a modern init
system this should no longer be necessary, and when running
mount.s3ql from the command line the shell can be used to put the
process into background.


I have since learned that many people want this feature though, so this
change will be reverted in the next release.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Pros/Cons of using --nfs option

2014-09-04 Thread Nikolaus Rath
On 09/04/2014 07:31 AM, Nick Carboni wrote:
 I am using s3ql for an application that may require the user to read
 large amounts of data over an NFS (or CIFS) export of the s3ql file
 system.  I came across the --nfs option for mount.s3ql which is
 described as enabling some optimizations for exporting the file system
 over NFS.  I'm assuming there is some downside to specifying this
 option as the default is false.  Is there any documentation on what this
 option does and how it could hurt or help performance?

It increases the size of the local metadata database a bit, and it may
reduce overall performance of metadata operations a bit (because S3QL
has to keep an additional index up-to-date).

Conversely, it should significantly increase the performance of some NFS
requests, but I don't know how frequent this specific operation is
(lookup of . or .. by name).

As far as I know, no one has yet measured how big any of these effects
actually are.
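
To make the trade-off concrete, here is a toy measurement with a
deliberately simplified schema (an assumption for illustration; these
are not S3QL's real metadata tables):

import sqlite3
import time

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE contents (name TEXT, inode INT, parent_inode INT)')

def insert_rows(n):
    # Time n row insertions; an extra index makes each insert costlier.
    start = time.perf_counter()
    db.executemany('INSERT INTO contents VALUES (?, ?, ?)',
                   ((str(i), i, i // 100) for i in range(n)))
    return time.perf_counter() - start

t_plain = insert_rows(100000)
db.execute('CREATE INDEX ix_inode ON contents(inode)')  # an extra index, as --nfs adds one
t_indexed = insert_rows(100000)
print('inserts without extra index: %.3fs, with: %.3fs' % (t_plain, t_indexed))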

Best,
-Niko

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: Unexpected server reply to copy operation (upgrading from 20 to 21)

2014-09-09 Thread Nikolaus Rath
On 09/09/2014 12:20 PM, Adam Watkins wrote:
 
 Regarding my original issue with StorageQloud, GreenQloud have now
 responded after 25 days with the following:
 
 Part of our object storage solution involves third party software,
 so we will have to ask them to fix the namespace declaration.
  Unfortunately this is not going to happen soon, but it will probably
  be fixed when we upgrade the version.

If GreenQloud can guarantee that copies always succeed if the status is
200 (in contrast to S3, where error may only be indicated in the
response body), this will probably help you:

https://bitbucket.org/nikratio/s3ql/issue/86/add-copy-always-succeeds-backend-option
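
To spell out the distinction, here is a simplified sketch (assumed
interface, not the actual backend code) of what such an option would
change:

import xml.etree.ElementTree as ElementTree

def copy_succeeded(status, body, dumb=False):
    # S3 may return '200 OK' and still report an error in the body,
    # so a careful client has to parse the response.
    if status != 200:
        return False
    if dumb:
        return True  # trust the status line alone (GreenQloud guarantee)
    root = ElementTree.fromstring(body)
    return root.tag.endswith('CopyObjectResult')  # tolerates a missing namespace

ok = '<CopyObjectResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>'
assert copy_succeeded(200, ok)
assert copy_succeeded(200, '<Error/>', dumb=True)  # the risk: errors go unnoticed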


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] clone_fs.py ChecksumError: Invalid metadata

2014-09-22 Thread Nikolaus Rath
On 09/22/2014 12:17 PM, Nick Carboni wrote:
 Here is everything I've done.  The commands are actually entered from a
 C program using fork(2) and execl(3), but should be functionally
 equivalent to running them from the command line which I've done below:
 
 $ mkfs.s3ql local:///mnt/bds/7625eec6-843c-4a4d-80fe-e0718dbab53b
 
[...]
 
 I also tried the clone_fs.py command without the folder name on the
 target location.
 I'm running version 1.19 on Ubuntu 12.04 :

I'm baffled. Could you try to run my sequence of commands (including the
wget download)?


Thanks,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] clone_fs.py ChecksumError: Invalid metadata

2014-09-24 Thread Nikolaus Rath
On 09/23/2014 06:16 AM, Nick Carboni wrote:
 I did some additional testing and eventually got it to work with other
 directories.
 
 When I create the backend mountpoint through my C program I drop some
 metadata in the directory in a file named meta and a lost+found
 directory also gets created.
 After removing these the clone_fs.py script ran just fine.
 Is it possible that either of those could be what was tripping up the
 clone script?

Yes. Don't do that. The directory you pass to s3ql is for s3ql's
exclusive use, no other files allowed.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Broken pipe error

2014-09-30 Thread Nikolaus Rath
Hi Brian,

On 09/30/2014 06:36 AM, Brian Pribis wrote:
 Running tests for s3ql-2.11.1  on test: 
 test_put_s3error_med[plain-mock_gs]  I get the following error:
 
 Traceback (most recent call last):
   File
 /usr/local/lib/python3.4/dist-packages/dugong-3.0-py3.4.egg/dugong/__init__.py,
 line 557, in _co_send
 len_ = self._sock.send(buf)
 BrokenPipeError: [Errno 32] Broken pipe

Thanks for reporting. Upgrading to python-dugong 3.3 should fix this
problem.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Unclear on zero deduplication and cache for local backend

2014-09-30 Thread Nikolaus Rath
p6su5t8...@snkmail.com writes:
 Hi,

 I have recently tried out s3ql on Debian testing, and I have a few
 questions.

 I'm using s3ql with local storage, without encryption or compression.
 I set threads to 1 as a baseline
[...] .
 I find when I specify cachesize manually to be small or zero that my
 write throughput goes down by several orders of magnitude.  Is using
 no cache unsupported?

Yes, this is not supported. You are right that if the backend storage is
a local disk, this could be made to work. However, S3QL was designed for
network storage, and the local storage backend was added for use
with a network file system (like sshfs) and testing, and not as an
efficient method to utilize your local disk.

In theory, there are several optimizations one could implement with the
local backend (not requiring a cache being one of them). However, I
don't think this is worth it. I don't think that even with additional
optimizations, there'd be little reason not to use e.g. dm-crypt with
btrfs to get very similar features with orders of magnitude better
performance.

 I don't mind a small performance loss but when I use a zero cache size
 I get throughput of around 50 kilobytes per second, which suggests
 that I'm running up against an unexpected code path.  Read performance
 is okay even in that case.

I think with zero cache, S3QL probably downloads, updates, uploads and
removes a cache entry for every single read() call.

 The next thing I'm wondering a lot about is the deduplication.  In my
 test, I'm writing all zeroes.  I write a megabyte using one block of a
 1MB block size using dd, and then I write 1024 blocks of a kilobyte
 each.  I then also write 2MB or 4MB at a time.  I'd expect that
 deduplication would catch these very trivial cases and that I'd only
 see one entry of at most 2^n bytes, where 2^n represents the
 approximate block size of the deduplication.

Yes, this is what should happen.

 I'd also expect 2^n to be smaller than a megabyte (maybe like a single
 64k block).

That's probably not the case. S3QL de-duplicates on the level of storage
objects. You specify the maximum storage object size at mkfs.s3ql time
with the --blocksize option, and the default is 10 MB.

To see de-duplication in action, you either need to write more data, or
you need to write smaller, but identical files:

$ echo "hello, world" > foo
$ echo "hello, world" > bar

...in this case s3ql will store only one storage object (containing
"hello, world") in the backend.
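
Conceptually, the de-duplication works roughly like this toy sketch
(illustrative only; the real implementation differs in many details):

import hashlib

objects = {}  # content hash -> stored data (the "storage objects")
blocks = []   # per-file block references

def store_block(data):
    key = hashlib.sha256(data).hexdigest()
    if key not in objects:  # upload only previously unseen content
        objects[key] = data
    blocks.append(key)

store_block(b'hello, world\n')  # foo
store_block(b'hello, world\n')  # bar - de-duplicated, no new object
assert len(objects) == 1 and len(blocks) == 2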


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL backup restore

2014-10-07 Thread Nikolaus Rath
On 10/07/2014 04:41 AM, Sven Martin wrote:
 I have an Amazon S3 bucket mounted on an Ubuntu 14.04 server, which
 serves as my ownCloud server (EFSS). This bucket is configured as the
 data location for ownCloud. So all the users' files end up here.
 
 What is the best way to:
 - backup this mounted S3 bucket

I'd suggest following the steps described at
http://www.rath.org/s3ql-docs/contrib.html#s3-backup-sh.

 - restore this backup in the event of data loss

You can just mount the file system and copy everything back using cp,
rsync or (for slightly better performance), pcp.py
(http://www.rath.org/s3ql-docs/contrib.html#pcp-py).

 Remark:
 I have installed S3QL using these instructions
 https://bitbucket.org/nikratio/s3ql/wiki/installation_ubuntu. I do not
 see the /contrib folder anywhere.

Did you look at /usr/share/doc/s3ql/README.Debian?

Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL backup restore

2014-10-17 Thread Nikolaus Rath
Sven Martin sproeg...@gmail.com writes:
 My only concern is: Will I always be able to mount the s3ql file system? 
 Can this get corrupted somehow?

I'm pretty sure there are still a number of bugs in S3QL. And even if there
were none, there's always the danger of broken hardware on your computer
or the remote backend. So yeah, things can get corrupted.

That said, there are a number of people using S3QL regularly without having
their file system corrupted.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] I need some help S3 + s3ql +ec2

2014-10-17 Thread Nikolaus Rath
Raquel Carrillo raquel.carrillo.solorz...@gmail.com writes:
When I list (ls) the contents of the mounted directory using the
command line in EC2, all I see is lost+found. But when I view the
contents of the bucket using the S3 web console, I see files like
s3ql_metadata and s3ql_seq_no_1. Why can't I see these files in my
mounted directory, and what do they represent?

There is no 1:1 mapping between files in S3QL and storage objects in
S3. A single file can be spread between multiple objects, and an object
can hold data for multiple files. You should treat the contents of the
bucket as a black-box.

To give an analogy: if you open a .jpg file in a text editor, you
wouldn't expect to see the picture (even if it contained some text). So
if you open an S3QL file system in some generic S3 browser, you're not
going to see your file system contents.

2. 

I made a new directory and a file in the s3ql directory using the
command line: sudo mkdir test-dir; sudo touch testfile.txt

 But I can't see these files in the web console of my bucket in s3.

Same reason.

 3- This one is conceptual: if, for example, I save my_photo.jpg in the 
 bucket using s3ql, will I be able to see it 
 in the web console as my_photo.jpg, or will it be saved in another 
 format?

You should be able to answer that question yourself now :-).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL on local storage backend provided by NFS - suspected incompatibility?

2014-10-17 Thread Nikolaus Rath
Super Strobi strobis...@gmail.com writes:
 **In summary**: is this setup supported by S3QL: a local storage backend 
 provided by an NFS share?

Yes, that should work.

Not sure why you're having performance issues. I'd try to isolate the
problem. For example, does the same thing happen if you use the local
backend? Or use the NFS backend, but don't export the mountpoint via
CIFS or NFS?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL on local storage backend provided by NFS - suspected incompatibility?

2014-10-18 Thread Nikolaus Rath
On 10/18/2014 12:00 AM, Super Strobi wrote:
 PS. as per documentation uncertainty: if the local ~/.s3ql is lost, is
 the s3ql filesystem still mountable?

If the file system was unmounted properly, yes. Otherwise you may lose
all data that was modified since the last metadata upload.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL on local storage backend provided by NFS - suspected incompatibility?

2014-10-18 Thread Nikolaus Rath
On 10/18/2014 01:48 PM, Super Strobi wrote:
 [root@strobiserver .s3ql]# cat /etc/mtab
[...]
 local:///mnt/local-nfs/myfsdata-ttt /mnt/s3ql-ttt-local fuse.s3ql
 rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other
 0 0
[...]
 
 ls -alR /mnt/s3ql-ttt-local does show the content, but files cannot be
 opened (hangs)
 
 ls -alR /mnt/local-nfs hangs too.

The former is caused by the latter. If requests to the backend directory
hang, S3QL is bound to hang as well once the cache is full. Fix your NFS
mount, and the S3QL mountpoint will become accessible again as well.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL on local storage backend provided by NFS - suspected incompatibility?

2014-10-19 Thread Nikolaus Rath
On 10/19/2014 05:50 AM, Super Strobi wrote:
 Dumping metadata...
 ..objects..
 ..blocks..
 Uncaught top-level exception:
 Traceback (most recent call last):
   File /bin/fsck.s3ql, line 9, in module
 load_entry_point('s3ql==2.9', 'console_scripts', 'fsck.s3ql')()
   File /usr/lib/python3.3/site-packages/s3ql/fsck.py, line 1217, in main
 dump_metadata(db, fh)
   File /usr/lib/python3.3/site-packages/s3ql/metadata.py, line 137, in
 dump_metadata
 dump_table(table, order, columns, db=db, fh=fh)
   File deltadump.pyx, line 317, in s3ql.deltadump.dump_table
 (src/s3ql/deltadump.c:4096)
   File deltadump.pyx, line 364, in s3ql.deltadump.dump_table
 (src/s3ql/deltadump.c:3746)
 ValueError: Can't dump NULL values


This looks like a bug. Could you report it
at https://bitbucket.org/nikratio/s3ql/issues? Thanks!


 Mounting is not possible:
 
 [root@strobiserver ~]# mount.s3ql local:///mnt/local-nfs/myfsdata-ttt
 /mnt/s3ql-ttt-local --allow-other --nfs
 Using 4 upload threads.
 Autodetected 4052 file descriptors available for cache entries
 Enter file system encryption passphrase:
 Using cached metadata.
 File system damaged or not unmounted cleanly, run fsck!
 
 Rerunning fsck gives same output as above.
 
 Suggestions to get out of this?

Does the file system metadata (i.e., file names, sizes, permissions)
contain confidential information? If not, can you upload it somewhere
and send me the link? Then I'll take a look.

Otherwise I can give you some commands to execute, but it'll take
several emails back and forth...

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Cache is not being uploaded to the backend

2014-10-19 Thread Nikolaus Rath
On 10/19/2014 08:00 AM, Rich B wrote:
 When I run s3qlstat it will sometimes hang and sometimes crash:
 
 $ s3qlstat --debug /mnt/MyBucket/
 2014-10-19 10:08:09.369 4344 MainThread root.excepthook: Uncaught
 top-level exception:
 Traceback (most recent call last):
   File /usr/bin/s3qlstat, line 9, in module
 load_entry_point('s3ql==2.11.1', 'console_scripts', 's3qlstat')()
   File /usr/lib/s3ql/s3ql/statfs.py, line 49, in main
 buf = llfuse.getxattr(ctrlfile, 's3qlstat', size_guess=256)
   File fuse_api.pxi, line 206, in llfuse.capi.getxattr
 (src/llfuse/capi_linux.c:21045)
 OSError: [Errno 5] Input/output error: '/mnt/MyBucket/.__s3ql__ctrl__'

This is probably not related to your other problems, but it might be
easier to debug.

What does ~/.s3ql/mount.log and ~/.s3ql/mount.s3ql_crit.log contain
after such a crash?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Cache is not being uploaded to the backend

2014-10-19 Thread Nikolaus Rath
On 10/19/2014 08:00 AM, Rich B wrote:
 Hello,

 I'm trying to use S3QL with Google Storage, but I'm consistently
 running into a problem where blocks from the cache are not making it
 to the backend. I'm running on Ubuntu 14.04 and using S3QL from the
 PPA (version  2.11.1+dfsg-1~trusty1). 

 My mount command looks like this:

 $ mount.s3ql --threads 4 --metadata-upload-interval 7200 --allow-other
 --debug --cachedir /data/s3ql-cache --cachesize 1048567
 gs://MyBucket/MyPrefix /mnt/MyBucket

 Here is a sample from the mount log sometime after copying 200+ files
 (0.5-3MB each) into the mount point:

 2014-10-19 09:57:42.902 28314:Dummy-23 (name)s.statfs: statfs(): start
 2014-10-19 09:57:44.365 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:57:44.365 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:57:49.382 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:57:49.382 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:57:52.902 28314:Dummy-20 (name)s.statfs: statfs(): start
 2014-10-19 09:57:53.237 28314:Thread-6 (name)s.close:
 ObjectW(s3ql_data_4).close(): start
 2014-10-19 09:57:53.238 28314:Thread-6 (name)s._do_request: preparing
 PUT /MyPrefixs3ql_data_4?None, qs=None
 2014-10-19 09:57:53.239 28314:Thread-6 (name)s._send_request:
 _send_request(): PUT /MyPrefixs3ql_data_4
 2014-10-19 09:57:53.351 28314:Thread-5 (name)s.close:
 ObjectW(s3ql_data_3).close(): start
 2014-10-19 09:57:53.352 28314:Thread-5 (name)s._do_request: preparing
 PUT /MyPrefixs3ql_data_3?None, qs=None
 2014-10-19 09:57:53.353 28314:Thread-5 (name)s._send_request:
 _send_request(): PUT /MyPrefixs3ql_data_3
 2014-10-19 09:57:54.383 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:57:54.383 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:57:54.702 28314:Thread-3 (name)s.close:
 ObjectW(s3ql_data_1).close(): start
 2014-10-19 09:57:54.702 28314:Thread-3 (name)s._do_request: preparing
 PUT /MyPrefixs3ql_data_1?None, qs=None
 2014-10-19 09:57:54.703 28314:Thread-3 (name)s._send_request:
 _send_request(): PUT /MyPrefixs3ql_data_1
 2014-10-19 09:57:59.384 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:57:59.384 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:57:59.793 28314:Thread-4 (name)s.close:
 ObjectW(s3ql_data_2).close(): start
 2014-10-19 09:57:59.794 28314:Thread-4 (name)s._do_request: preparing
 PUT /MyPrefixs3ql_data_2?None, qs=None
 2014-10-19 09:57:59.795 28314:Thread-4 (name)s._send_request:
 _send_request(): PUT /MyPrefixs3ql_data_2
 2014-10-19 09:58:02.902 28314:Dummy-24 (name)s.statfs: statfs(): start
 2014-10-19 09:58:03.506 28314:Thread-6 (name)s.wrapped: Encountered
 ConnectionTimedOut exception (send/recv timeout exceeded), retrying
 call to ObjectW.close for the 33-th time...
 2014-10-19 09:58:03.631 28314:Thread-5 (name)s.wrapped: Encountered
 ConnectionTimedOut exception (send/recv timeout exceeded), retrying
 call to ObjectW.close for the 33-th time...
 2014-10-19 09:58:04.385 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:58:04.385 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:58:05.282 28314:Thread-3 (name)s.wrapped: Encountered
 ConnectionTimedOut exception (send/recv timeout exceeded), retrying
 call to ObjectW.close for the 33-th time...
 2014-10-19 09:58:09.386 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:58:09.386 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:58:10.943 28314:Thread-4 (name)s.wrapped: Encountered
 ConnectionTimedOut exception (send/recv timeout exceeded), retrying
 call to ObjectW.close for the 33-th time...
 2014-10-19 09:58:12.902 28314:Dummy-19 (name)s.statfs: statfs(): start
 2014-10-19 09:58:14.387 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:58:14.387 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:58:19.388 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:58:19.388 28314:CommitThread (name)s.put: waiting for
 reader..
 2014-10-19 09:58:22.902 28314:Dummy-18 (name)s.statfs: statfs(): start
 2014-10-19 09:58:24.389 28314:CommitThread (name)s.put: timeout, returning
 2014-10-19 09:58:24.389 28314:CommitThread (name)s.put: waiting for
 reader..
Could you give some more context? In particular, I'd like to see the
logs from the 32nd and the 1st retry.

 If I manually kill mount.s3ql and umount the filesystem, then run
 fsck.s3ql, the cache does get uploaded (with some difficulty):

 $ fsck.s3ql --cachedir /data/s3ql-cache 
 gs://MyBucket/MyPrefix
  

 Starting fsck of gs://MyBucket/MyPrefix
 Using cached metadata.
 Remote metadata is outdated.
 Checking DB integrity...
 Creating temporary extra indices...
 Checking lost+found...
 Checking cached objects...
 Committing block 0 of inode 143 to backend
 Committing block 0 of inode 289 to backend
 

Re: [s3ql] S3QL on local storage backend provided by NFS - suspected incompatibility?

2014-10-19 Thread Nikolaus Rath
On 10/19/2014 11:05 AM, Super Strobi wrote:
 Does the file system metadata (i.e., file names, sizes, permissions)
 contain confidential information? If not, can you upload it somewhere
 and send me the link? Then I'll take a look.
 
 
 I can share if it, however I run into this:
 
 [root@strobiserver local-nfs]# s3qladm download-metadata
 local:///mnt/local-nfs/myfsdata-ttt/
[...]

Sorry, I should have been more clear. Just upload the .db file from
~/.s3ql/. No need to download anything.

Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] benchmark.py AttributeError: 'ArgumentParser' object has no attribute 'add_ssl'

2014-10-20 Thread Nikolaus Rath
good4...@gmail.com writes:
 Debian Wheezy. Python 3.3.0 running in pyvenv.

 (virtualenv-3.3.0) nick@host:~/build/s3ql-2.11/contrib$ python -V
 Python 3.3.0
 (virtualenv-3.3.0) nick@host:~/build/s3ql-2.11/contrib$ python benchmark.py 
 Traceback (most recent call last):
   File benchmark.py, line 225, in module
 main(sys.argv[1:])
   File benchmark.py, line 71, in main
 options = parse_args(args)
   File benchmark.py, line 55, in parse_args
 parser.add_ssl()
 AttributeError: 'ArgumentParser' object has no attribute 'add_ssl'

 Can anyone help?

If you make a small change to benchmark.py it should work:

diff --git a/contrib/benchmark.py b/contrib/benchmark.py
--- a/contrib/benchmark.py
+++ b/contrib/benchmark.py
@@ -52,7 +52,7 @@
 parser.add_authfile()
 parser.add_quiet()
 parser.add_debug()
-parser.add_ssl()
+parser.add_backend_options()
 parser.add_version()
 parser.add_storage_url()
 parser.add_argument('file', metavar='file', type=argparse.FileType(mode='rb'),


Thansk for the report, I'll fix this in the next release.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Cache is not being uploaded to the backend

2014-10-20 Thread Nikolaus Rath
Rich B rich...@gmail.com writes:
 On Sunday, October 19, 2014 1:45:54 PM UTC-4, Nikolaus Rath wrote:

 On 10/19/2014 08:00 AM, Rich B wrote: 
  When I run s3qlstat it will sometimes hang and sometimes crash: 

 This is probably not related to your other problems, but it might be 
 easier to debug. 

 What does ~/.s3ql/mount.log and ~/.s3ql/mount.s3ql_crit.log contain 
 after such a crash? 

  It took a couple tries to recreate the problem:

 $ date;s3qlstat /mnt/gl-s3ql
 Sun Oct 19 15:26:44 EDT 2014
 Uncaught top-level exception:
 Traceback (most recent call last):
   File /usr/bin/s3qlstat, line 9, in module
 load_entry_point('s3ql==2.11.1', 'console_scripts', 's3qlstat')()
   File /usr/lib/s3ql/s3ql/statfs.py, line 49, in main
 buf = llfuse.getxattr(ctrlfile, 's3qlstat', size_guess=256)
   File fuse_api.pxi, line 206, in llfuse.capi.getxattr 
 (src/llfuse/capi_linux.c:21045)
 OSError: [Errno 5] Input/output error: '/mnt/gl-s3ql/.__s3ql__ctrl__'

 mount.log:
[...]
 2014-10-19 15:26:44.990 18447:Dummy-18 (name)s.getxattr: getxattr(2, 
 b's3qlstat'): start
 2014-10-19 15:26:44.991 18447:Dummy-18 (name)s.extstat: extstat(%d):
 start
[ Nothing more from Dummy-18 ]

I'm afraid at the moment it beats me completely how this can possibly
happen.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Cache is not being uploaded to the backend

2014-10-20 Thread Nikolaus Rath
Rich B rich...@gmail.com writes:
 Could you give some more context? In particular, I'd like to see the logs 
 from the 32nd and 1st retry.

 Because of log rotations, I had to re-run the test to get the 1st and 32nd 
 retry. This time more of the blocks managed to get uploaded. Some of the 
 threads seemed to have better luck than others:
[...]

  If I manually kill mount.s3ql and umount the filesystem, then run 
 fsck.s3ql, the cache does get uploaded (with some difficulty):

 $ fsck.s3ql --cachedir /data/s3ql-cache  
 gs://MyBucket/MyPrefix   


 Starting fsck of gs://MyBucket/MyPrefix
 Using cached metadata.
 Remote metadata is outdated.
 Checking DB integrity...
 Creating temporary extra indices...
 Checking lost+found...
 Checking cached objects...
 Committing block 0 of inode 143 to backend
 Committing block 0 of inode 289 to backend
 Committing block 0 of inode 212 to backend
 Committing block 0 of inode 338 to backend
 Committing block 0 of inode 334 to backend
 Committing block 0 of inode 154 to backend
 Committing block 0 of inode 350 to backend
 Committing block 0 of inode 354 to backend
 Committing block 0 of inode 224 to backend
 Committing block 0 of inode 215 to backend
 Committing block 0 of inode 221 to backend
 Committing block 0 of inode 339 to backend
 Committing block 0 of inode 278 to backend
 Committing block 0 of inode 178 to backend
 Encountered ConnectionTimedOut exception (send/recv timeout exceeded), 
 retrying call to ObjectW.close for the 3-th time...
 Encountered ConnectionTimedOut exception (send/recv timeout exceeded), 
 retrying call to ObjectW.close for the 4-th time...
 Encountered ConnectionTimedOut exception (send/recv timeout exceeded), 
 retrying call to ObjectW.close for the 5-th time...
 Encountered ConnectionTimedOut exception (send/recv timeout exceeded), 
 retrying call to ObjectW.close for the 6-th time...
 Encountered ConnectionTimedOut exception (send/recv timeout exceeded), 
 retrying call to ObjectW.close for the 7-th time...
 Encountered ConnectionTimedOut exception (send/recv timeout
 exceeded),
[...]


To me this looks just like a crappy network connection. You could try to
increase the send/recv timeout (using the --backend-options argument,
see manual).


 I'm stumped. Has anyone got an idea of what might be causing my problem?
  

 Do you have the same problem when you store data in Amazon S3 from the 
 same system?

 Yes, I just tried Amazon and I get similar results.

This is consistent with a bad connection.

 If you have the necessary SSL-fu, you could also do a traffic capture using 
 Wireshark. That's gonna be a bit tricky, but it should tell us a lot more.

 I've worked with tcpdump/wireshark before, but my SSL-fu is weak; can
 you tell me what I should look for?

I'm afraid not, I never got it to work myself. But you're lucky, if you
have the same problem with S3, just use --backend-options no-ssl, and
capture that traffic.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Benchmarks and Clustering.

2014-10-21 Thread Nikolaus Rath
gi...@liquidhealthcare.com writes:
 Hi Guys

 I've been looking through the wiki and I've noticed that this file system 
 doesn't support mounting on multiple machines. I wanted to check whether 
 there have been any benchmarks for this software.

I don't think anything has been published. You can always run
benchmark.py on your system to get some rough numbers.

 Do we know how many uploads this can take before it maxes
 out?

I don't understand the question. There's no limit on uploads. Do you
mean the total number of files? That should scale n * log(n), and
I have tested with around 30 million files. You can easily go above
that, but the database size scales with the number of files.

 We'd normally be looking for something which supports clustering and I 
 wanted to ask what would be involved in making this support
 clustering?

What do you mean with clustering?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] ghost folder on nfs client

2014-10-25 Thread Nikolaus Rath
On 10/21/2014 09:33 PM, Maxim Kostrikin wrote:
 Suddenly I found a ghost folder on an NFS client of s3ql.
 Its name mysteriously appears twice in the ls output.
 On the NFS server running s3ql there is no such behavior.


 user@server://nfs/sharedfs/logs$ ls -la
 ls: reading directory .: Too many levels of symbolic links
 total 0
 drwxr-xr-x. 1 sfinx0 sfinx 0 Oct 21 23:23 .
 drwxr-xr-x. 1 root   root  0 Oct 21 01:47 ..
 drwxr-xr-x. 1 sfinx0 sfinx 0 Oct 21 23:23 fenix
 drwxr-xr-x. 1 sfinx0 sfinx 0 Oct 21 23:23 fenix
 [...]


 
 on nfs server:
 mount.s3ql --cachedir /nfsexport/.s3cache/ --cachesize 33554432
 --max-cache-entries 1048576  --threads 16 --nfs s3://bucker
 /nfsexport/sharedfs/
 root@nfs:/nfsexport/sharedfs/logs# ls -la
 total 0
 drwxr-xr-x 1 1100 1100 0 Oct 21 23:24 fenix
 root@nfs:/nfsexport/sharedfs/logs# ls *
 test



Thanks for the report! This looks like a bug in S3QL when being exported
over NFS. Could you report it at
https://bitbucket.org/nikratio/s3ql/issues? I'll look into it as soon as
possible.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Cache is not being uploaded to the backend

2014-10-25 Thread Nikolaus Rath
On 10/22/2014 07:25 AM, Rich B wrote:
 In the mean time, I have some questions about the tcp-timeout backend
 option. The docs don't say what the units are, what the default value
 is, or how to use the option.
  From the code, it looks like the units are seconds, and the syntax is
 --backend-options tcp-timeout=integer, however when I try this
 syntax, python barfs:

 $ /usr/bin/mount.s3ql --debug --cachedir /data/s3ql-cache --cachesize
 1048567 --backend-options no-ssl,tcp-timeout=12 s3://myBucket/myPrefix
 /mnt/myMount
 Using 8 upload threads.
 Autodetected 4040 file descriptors available for cache entries
 Uncaught top-level exception:
 Traceback (most recent call last):
   File /usr/bin/mount.s3ql, line 9, in module
 load_entry_point('s3ql==2.11.1', 'console_scripts', 'mount.s3ql')()
   File /usr/lib/s3ql/s3ql/mount.py, line 121, in main
 [...]
 return eval_coroutine(self.co_read_response(), self.timeout)
   File /usr/lib/python3/dist-packages/dugong/__init__.py, line 1361,
 in eval_coroutine
 if not next(crt).poll(timeout=timeout):
   File /usr/lib/python3/dist-packages/dugong/__init__.py, line 115,
 in poll
 return bool(poll.poll(timeout*1000)) # convert to ms
 TypeError: timeout must be an integer or None

 Am I using tcp-timeout incorrectly? What is the default timeout?

Yes, this looks like a bug in S3QL. Could you report it at
https://bitbucket.org/nikratio/s3ql/issues?
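
For the record, the traceback is consistent with the option value never
being converted from a string; a sketch of the suspected cause (an
assumption, with deliberately simplified parsing):

raw = 'no-ssl,tcp-timeout=12'
options = dict(opt.split('=', 1) if '=' in opt else (opt, 'true')
               for opt in raw.split(','))

timeout = options['tcp-timeout']  # '12' (str), not 12 (int)
# poll.poll(timeout*1000) then fails: '12' * 1000 is string repetition,
# and poll() accepts only an integer or None.
timeout = int(timeout)            # likely fix: convert explicitly
assert timeout * 1000 == 12000    # now integer milliseconds, as poll() needs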

Thanks!
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL backup restore

2014-11-04 Thread Nikolaus Rath
Hi Sven,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!


Sven Martin sproeg...@gmail.com writes:
 Hi Nikolaus,

 The server I mounted a S3QL file system on is still in test. I installed
 owncloud and configured its data directory to be on a mounted S3 volume
 using S3QL. Rebooting automounts the volume. I was on holiday for a week
 and when I came back the system had no free disk space. It turned out the
 volume was not mounted and the mountpoint got filled up with owncloud data.
 I don't know what caused the volume to unmount. I deleted the data (user
 data, but it's test so no issue) to free up space, rebooted the server. Now
 I am unable to mount the S3 filesystem. It keeps saying "Mounting
 filesystem..." and does not return to the prompt. Running fsck.s3ql --force
 works but does not help.

 Any tips  tricks?

What S3QL version are you using? Note the following entries in the
Changelog:

2014-09-04, S3QL 2.11.1

  * By popular demand, mount.s3ql is now able to daemonize again
and also does so by default.

2014-08-27, S3QL 2.11

  * mount.s3ql no longer daemonizes on its own. With a modern init
system this should no longer be necessary, and when running
mount.s3ql from the command line the shell can be used to put the
process into background.


Best,
Nikolaus

PS: Please remember the first paragraph when replying.
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Does it make sense to rsync raw s3ql data?

2014-11-05 Thread Nikolaus Rath
Super Strobi strobis...@gmail.com writes:
 Hello,

 I'm currently investigating the possibility to synchronize 2 s3ql 
 filesystems, but without mounting them.

I assume you mean without mounting them at the same time? Otherwise I
don't understand what you're doing.


 What I'm trying to achieve:


- NAS1: contains the s3ql filesystem (which is s3ql-mounted by a linux 
server, through sshfs - but not 24/7)

NAS1 is a computer, or a directory? What does it mean for the s3ql
filesystem to be s3ql-mounted by a linux server? Is NAS1 that Linux
server? Is sshfs the S3QL backend, or are you accessing S3QL via sshfs?

- NAS2: contains a scp -pr of the NAS1 contents as a baseline.

Of what exactly? The s3ql file system? Or the backend of the s3ql file
system?

 I use the following command on NAS1 to rsync the deltas made by the linux 
 server, after which (of course) the s3ql filesystem got unmounted properly 
 - the delta should be around 2GB.

I don't understand. What deltas? How are they made?

2. Is there a smarter way to synchronise *offline* s3ql file systems 
(so *without* having to mount them)?

Yes, but the procedure depends on the backend. With the local backend,
use rsync. With the remote storage backend, you need to use a tool
specific to that service.


Best,
-Nikolaus


-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Does it make sense to rsync raw s3ql data?

2014-11-05 Thread Nikolaus Rath
On 11/05/2014 02:37 PM, Super Strobi wrote:
 Let me rephrase the question differently:
 
  If I watch the original myfsdata directory, looking at nodes like:
 
 myfsdata/s3ql_data_/188/s3ql_data_188998
 
  Is there any reason why rsync *thinks* they have changed when I have
  added only 2GB of data inside the file system?

I can't speak for rsync, but as far as S3QL is concerned, these files
are write-once. In other words, they are created with their final
content and may be deleted at some point, but in regular operation
(i.e., excluding fsck.s3ql after a crash) they are never modified or
recreated, no matter what you do inside the S3QL mountpoint.

Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL 2.12 has been released

2014-11-09 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 2.12.

From the changelog:

2014-11-09, S3QL 2.12

  * The s3c backend has a new 'dumb-copy' option. If this option
is enabled, copy operations are assumed to have succeeded
as soon as the server returns a '200 OK' status, without
checking the contents of the response body.

  * The swift and s3c backends have a new 'disable-expect100' option to
disable support for the 'Expect: 100-continue' header. Using this
option allows S3QL to work with storage servers without proper
HTTP 1.1 support, but may decrease performance as object data will
be transmitted to the server more than once in some circumstances.

  * contrib/benchmark.py is now working again.

  * The `tcp-timeout` backend option is now actually working
instead of resulting in a TypeError.

  * Fixed a problem where saving metadata would fail with
ValueError: Can't dump NULL values.

  * s3qlstat now also gives information about cache usage.


As usual, the release is available for download from
https://bitbucket.org/nikratio/s3ql/downloads

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Starting with version 2.0, S3QL requires Python 3.3 or newer. For older
systems, the S3QL 1.x branch (which only requires Python 2.7) will
continue to be supported for the time being. However, development
concentrates on S3QL 2.x while the 1.x branch only receives selected
bugfixes. When possible, upgrading to S3QL 2.x is therefore strongly
recommended.


Enjoy,

   -Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL 2.12 loops fsck

2014-11-10 Thread Nikolaus Rath
Hi Maxim,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

Maxim Kostrikin mkostri...@gmail.com writes:
 On Tuesday, November 11, 2014 at 1:13:49 UTC+6, Nikolaus Rath wrote:
 Maxim Kostrikin mkost...@gmail.com writes: 
  2014-11-10 05:11:33.867 4539:MainThread s3ql.fsck.check: Creating 
  temporary extra indices... 
  2014-11-10 05:11:35.765 4539:MainThread s3ql.fsck.check_lof: Checking 
  lost+found... 
  2014-11-10 05:11:35.765 4539:MainThread s3ql.fsck.check_cache: Checking 
  cached objects... 
  2014-11-10 05:11:39.559 4539:MainThread s3ql.fsck.check_names_refcount: 
  Checking names (refcounts)... 
  
  Last line stick for hours with no progress. 
  fsck.s3ql eat on core 100% constantly. 
  
  Please advise. 

 Is this a new problem with 2.12, i.e. was it working with 2.11? 

 What is the output if you press Ctrl-C? 

 I hoped that 2.12 was the cause.
 2.11.1 same result.
 Ctrl-C just echoes ^C.
 Killing the PID interrupts the fsck process.

You mean, you cannot abort the process with Ctrl-C?

In that case, please apply the attached patch and send SIGUSR1 when it's
hanging, this should give a proper stack trace.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«
--- a/src/s3ql/fsck.py
+++ b/src/s3ql/fsck.py
@@ -30,6 +30,7 @@
 import textwrap
 import time
 import atexit
+import faulthandler
 
 log = logging.getLogger(__name__)
 
@@ -1139,6 +1140,9 @@
 options = parse_args(args)
 setup_logging(options)
 
+faulthandler.enable()
+faulthandler.register(signal.SIGUSR1)
+
 # Check if fs is mounted on this computer
 # This is not foolproof but should prevent common mistakes
 if is_mounted(options.storage_url):
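
(Usage sketch: with this patch applied and fsck.s3ql hanging, running

    kill -USR1 $(pgrep -f fsck.s3ql)

from a second terminal makes faulthandler print the stack traces of all
threads to stderr.)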


[s3ql] [ANNOUNCE] New Dugong Version Released

2014-11-29 Thread Nikolaus Rath
Dear all,

I have just released a new version of python-dugong (3.4). This release
adds proper HTTP (i.e., non HTTPS) proxy support. If you would like to
use this feature, you can update python-dugong and S3QL will use it
automatically.

Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] mkfs/mount 403 Forbidden error

2014-12-03 Thread Nikolaus Rath
On 12/03/2014 10:50 AM, Andy Cress wrote:
 Using s3ql-1.17 and fuse-2.8.7-2.el6.x86_64
 All other account/buckets on S3 work fine from the same system.
 Only this account/bucket gets this error, what might be wrong with it?
 This bucket gets the same error from multiple systems. 
 Below is the sequence:
mkfs 403 error
clear
download-metadata ok, empty
mkfs works
mount gets 403 error
download-metadata error (with debug)
 
 I'm confused about what might be wrong with this particular bucket (or
 account).
 Is there something with AWS IAM that might cause this?

That's quite possible, I never tried IAM.

 Other ideas?

Disable IAM and see if it works?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] s3ql permissions for read-only mount (without s3ql knowing that it is read-only)

2014-12-08 Thread Nikolaus Rath
Peter Schüller schuelle...@gmail.com writes:
 Is there a way to mount s3ql read-only by setting bucket permissions in a 
 way that
 * mounting is still possible, but
 * changing the file system is not possible?

 If I disable all Put/Delete actions of S3 then mounting fails, probably 
 because the fact that the FS is mounted is stored in some special file in 
 the bucket. I can understand this, but what happens if I allow writing just 
 to that file and not to the rest of the bucket? Is it even one file?

Well, it's just one file, but a different one on every mount. On every
mount, S3QL creates a file s3ql_seq_no_XX, where XX is an increasing
number, and deletes the n-th most recent s3ql_seq_no_YY file.

If you can configure AWS to allow that, then it should work.
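
To illustrate the mechanism (a simplified sketch only, not S3QL's actual
code; the helper name and the retention count are made up):

    # Each mount PUTs one new marker object and DELETEs an old one, so
    # the 's3ql_seq_no_*' keys are the only ones a mount must write.
    def bump_seq_no(backend, keep=10):
        nums = sorted(int(key[len('s3ql_seq_no_'):])
                      for key in backend.list('s3ql_seq_no_'))
        backend['s3ql_seq_no_%d' % (nums[-1] + 1)] = b'Empty'   # PUT
        for old in nums[:-keep]:
            del backend['s3ql_seq_no_%d' % old]                 # DELETE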

 To clarify: I do not want to mount in multiple locations, I just want to 
 have a user that has no possibility to destroy the data but can still read 
 it.

That user will likely get warnings and/or crashes because of apparent
file system corruption if he tries to access data that has been modified
by the other active mount.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Connexion Cloudwatt

2014-12-10 Thread Nikolaus Rath
Hi netplus,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

netplus.r...@gmail.com writes:
 Hello,

 Thank you for your help.
 It seems that it's not the right URL.
 But the URL is not a classic one, and instead I get the message
 Invalid storage URL (because it's not parsed correctly by the python
 script, I guess).

 This is an example of URL :
 
 swiftks://storage.fr1.cloudwatt.com/v1/AUTH_abcd1234efgh5678/region:container


 As you can see, the hostname is
 storage.fr1.cloudwatt.com/v1/AUTH_abcd1234efgh5678 and that's a
 problem.

No, that's not the hostname. The hostname is
storage.fr1.cloudwatt.com. You are not supposed to put the v1/... part
into the storage URL, S3QL autodetects that.



Best,
-Nikolaus

PS: Please remember the first paragraph when replying :-).
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Connection closed unexpectedly, fuse,s3ql

2014-12-23 Thread Nikolaus Rath

On 12/23/2014 04:17 AM, michael karotsieris wrote:

Hello,

I get an HTTPError() and a connection closed unexpectedly error when I
try to mount an s3ql container on a fuse fs on my machine. What happens
next is that whenever I try to access the fuse fs, the process I am
using goes into uninterruptible sleep and the only thing I can do is
reboot. Does this sound like a known issue yet? Thanks in advance.


No, I haven't had any such reports before.

Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] increasing upload performance

2015-01-26 Thread Nikolaus Rath

On 01/25/2015 09:55 PM, Di Pe wrote:

We have our local swift cluster and we have a few use cases for s3ql.
One of them is keeping a mirror of a posix file system. For this we want
to maximize write throughput per process. I am aware that each copy
process  is limited by the write throughput of the swift cluster which
is between 50 and 60 MB/s. I thought that compression would be able to
improve upload performance but no matter which compression type I use my
1GB test file always needs 24-28 sec to upload and rsync shows between
30 and 40 MB/s throughput. Is zlib perhaps too slow to do real
on-the-fly compression and has to do most of the work after the file is
uploaded?


contrib/benchmark.py should be able to answer that question.

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] OSError: [Errno 14] Bad address

2015-02-04 Thread Nikolaus Rath
Jeff Bogatay j...@bogatay.com writes:
 I am in the process of crafting a s3ql backed backup solution. During
 the development/testing I left the store mounted, installed some
 system updates and rebooted.

 Now I am unable to mount and/or check the store. Running 2.12 on
 ArchLinux. It has been several hours since I last wrote to the store.

 My last attempt was to delete the local metadata and have it rebuilt.
 Same error as below.

 Not sure what to do next or how to recover. Are these stores typically
 this fragile?

What do you mean with the store? Are you talking about a remote
storage server? In that case the fragility obviously depends on the
server.

 Also, as a test I created a fresh mount, wrote to it, unmounted it,
 and remounted it without any issues.

 2015-02-04 16:56:35.635 9617:MainThread s3ql.deltadump.dump_metadata:
 dump_table(ext_attributes): writing 0 rows
[...]

I am not sure what I'm looking at here. First you say it works, but then
you quote an error message (and the formatting is pretty messed
up). Can you be more precise as to when exactly the error occurs (and is
it always the same)?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Looking for best practices and some beginners questions: Using s3ql as backup target

2015-02-04 Thread Nikolaus Rath
Nikolaus Rath nikol...@rath.org writes:
 4) How can I realize an additional backup of (the current state of) an
 s3ql file system e.g. to an external drive?

 Not easily. Instead I'd suggest to run an rsync job from your data
 directory to an S3QL mount point, and to your external drive.

See
https://bitbucket.org/nikratio/s3ql/wiki/FAQ#!is-there-a-way-to-make-the-cache-persistent-access-the-file-system-offline
for details.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Anyone depend on API authentication for Google Storage?

2015-02-02 Thread Nikolaus Rath
Hi Andy,

[ When replying to emails on this list, please do not put your reply
  above the quoted text, and do not quote the entire message you're
  answering to. This makes it unnecessarily hard for other readers to
  understand the context of your email. Instead, please cut quoted parts
  that are not relevant to your reply, and insert your responses right
  after the points you're replying to (as I have done below). Thanks! ]

Andy Cress andycr...@gmail.com writes:
 On Sun, Feb 1, 2015 at 4:43 PM, Nikolaus Rath nikol...@rath.org wrote:
 Hello,

 Does anyone depend on using API / old-style authentication for accessing
 Google Storage with S3QL?

 I am considering to make OAuth2 the only supported authentication method
 for Google Storage.

 I believe that our usage could leverage whatever S3QL supports, but we
 do currently use the gs legacy API keys with s3ql v1.17 (i.e.
 https://code.google.com/apis/console/#project:467632721924:storage:legacy).
 The issue  for us would be migrating existing users from the legacy
 API to OAuth2 without disrupting their access to a given bucket.  If
 that is clearly documented and not too difficult, we could manage the
 transition.

Thanks for your reply! This change would affect only S3QL 2.x. So the
bigger disruption for you would probably be the upgrade of S3QL, not the
change of the authentication method. As a matter of fact, you can use
the legacy and OAuth2 API at the same time, so I think the transition
should be very smooth.

 For s3ql usage, would OAuth2 change the tag from 'gs' to 'gs2', as was
 done with swift?  I would be a little concerned if the logic changed
 for 'gs' to mean something else from version to version.

No, there probably would not be a change of the tag. You can already use
OAuth2 authentication with S3QL at the moment if you specify oauth2
as the login and your refresh token as the password. The change would
(probably) be that S3QL just aborts if you specify anything but 'oauth2'
as the login.
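
For reference, a matching ~/.s3ql/authinfo2 entry would then look roughly
like this (bucket name made up; the refresh token is the one obtained
during the OAuth2 setup):

    [gs]
    storage-url: gs://mybucket
    backend-login: oauth2
    backend-password: <your OAuth2 refresh token>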


Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] s3ql.backends.s3c.S3Error: CannotPutData: (upgrading from 21 to 22)

2015-02-02 Thread Nikolaus Rath
Adam Watkins adam.gc.watk...@gmail.com writes:
 Encountered ConnectionClosed exception (connection closed
 unexpectedly), retrying call to Backend.lookup for the 3-th time...
 ..processed 147/16379 objects (0.9%, 0 bytes rewritten)..Uncaught
 top-level exception:

 Original/inner traceback (most recent call last):
 Traceback (most recent call last):
   File "/usr/local/lib/python3.4/site-packages/s3ql-2.13-py3.4-linux-i686.egg/s3ql/common.py", line 482, in run

Are you sure that you did not omit something here? This does not make
sense, it should look sort-of like this:

..processed 147/16379 objects (0.9%, 0 bytes rewritten)..Uncaught top-level exception:
Traceback (most recent call last):
  File "/home/nikratio/in-progress/s3ql/bin/s3qladm", line 26, in <module>
    s3ql.adm.main(sys.argv[1:])
  File "/home/nikratio/in-progress/s3ql/src/s3ql/adm.py", line 94, in main
    return upgrade(options)
  File "/home/nikratio/in-progress/s3ql/src/s3ql/common.py", line 549, in wrapper
    return fn(*a, **kw)
  File "/home/nikratio/in-progress/s3ql/src/s3ql/adm.py", line 339, in upgrade
    update_obj_metadata(backend, backend_factory, db, options.threads)
  File "/home/nikratio/in-progress/s3ql/src/s3ql/common.py", line 549, in wrapper
    return fn(*a, **kw)
  File "/home/nikratio/in-progress/s3ql/src/s3ql/adm.py", line 452, in update_obj_metadata
    t.join_and_raise()
  File "/home/nikratio/in-progress/s3ql/src/s3ql/common.py", line 503, in join_and_raise
    raise EmbeddedException(exc_info, self.name)
s3ql.common.EmbeddedException: caused by an exception in thread Thread-19.
Original/inner traceback (most recent call last): 
Traceback (most recent call last):
  File "/home/nikratio/in-progress/s3ql/src/s3ql/common.py", line 482, in run
    self.run_protected()


It'd be great if you could post the *full* traceback again (ideally also
while preserving indentation and newlines, if your mailer insists on
rewrapping you can attach the traceback as a separate file).


Thanks,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] OSError: [Errno 14] Bad address

2015-02-11 Thread Nikolaus Rath
Jeff Bogatay j...@bogatay.com writes:
 I was running 1.0.2.

 I still had 1.0.1 in my archlinux pkg cache, so I downgraded and
 everything worked fine. Fsck completed without any issues.

 For anybody else out there experiencing this issue -- it's clearly an
 openssl problem.

It'd be great if you could file an OpenSSL bug then. You probably don't
want to stay with version 1.0.1 forever :-).

Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3QL 2.13 upgrade broken (runtests.py fails on HostnameNotResolvable)

2015-02-11 Thread Nikolaus Rath
Strobi strobis...@gmail.com writes:
 S3QL needs at least dugong 3. Are you sure that you've got that
 installed? Try to run

That should have been dugong 3.4, sorry. But it's correct in the
user's guide :-P.

 # python3 -c 'import dugong; print(dugong.__version__)'
 
 

 # python3 -c 'import dugong; print(dugong.__version__)'
 3.2

Well, there's your problem.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck 2.12 hangs processing objects

2015-02-17 Thread Nikolaus Rath
On Feb 16 2015, Guilherme Barile guig...@gmail.com wrote:
 with high I/O scenarios in the past (using the 1.x series), where the 
 mount daemon would die under high load, but managed to solve them all 
 with fsck.s3ql after rebooting. 

 Actually that doesn't sound like a solution at all. mount.s3ql should 
  never crash, no matter how high the load is. Can you still reproduce 
 that? If so, it'd be great if you could post the backtrace. 

 This time it happened while I was performing 2 full backups with lots of 
 small files, from my mounted s3qlfs to another s3 bucket via duplicity. 
 I've experienced this when apache was under high load. When mount.s3ql 
 stops, all other processes start to wait for io, increasing the load 
 constantly, I can try to force this behaviour on another bucket after I 
 restore this one.

Please do!

 Btw, which version is it now? Your subject says 2.12. 


 Filesystem was created on 2.11, I tried upgrading to 2.13 but couldn't due 
 to db revision upgrade, so I had to compile 2.12 to try fsck/upgrade my 
 volume - /usr/local/bin/fsck.s3ql --debug s3://my-bucket/home


  Now fsck.s3ql is hanging at ..processed 99500 objects so far.. 
   fsck.log (with --debug) shows a lot of HEAD requests; it seems to be 
  running a verify for every block (about 140). 

 That should not happen. Please post more context for the logfile. What's 
 the last message before the HEAD requests are starting? 


 As this should not happen, I stopped fsck and ran it again to check the 
 logs, the output doesn't show any errors

 https://gist.github.com/guigouz/7a6a624d97d12918b3f6

I think you may be running into a bug that has been fixed in S3QL
2.13. Could you try the attached patch? It backports the relevant change
to 2.12.

 I also tried 
 downloading a metadata backup, but it hangs at the same point, here's 
 an excerpt of the ongoing log: 

 What do you mean with at the same point? Downloading metadata should 
 not issue any requests for data objects at all. 

 fsck hangs at ..processed 99500 objects so far.., no matter which db backup 
 I use

Ah, so there is no problem with downloading the metadata backup.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

diff --git a/src/s3ql/backends/s3c.py b/src/s3ql/backends/s3c.py
--- a/src/s3ql/backends/s3c.py
+++ b/src/s3ql/backends/s3c.py
@@ -204,7 +204,7 @@
 log.debug('list(%s, %s): start', prefix, start_after)
 
 keys_remaining = True
-marker = start_after
+marker = self.prefix + start_after
 prefix = self.prefix + prefix
 ns_p = self.xml_ns_prefix
 
diff --git a/tests/mock_server.py b/tests/mock_server.py
--- a/tests/mock_server.py
+++ b/tests/mock_server.py
@@ -222,7 +222,7 @@
 '<IsTruncated>false</IsTruncated>' ]
 
 count = 0
-for key in self.server.data:
+for key in sorted(self.server.data):
 if not key.startswith(prefix):
 continue
 if marker and key <= marker:


Re: [s3ql] fsck 2.12 hangs processing objects

2015-02-17 Thread Nikolaus Rath
On Feb 17 2015, Guilherme Barile guig...@gmail.com wrote:
 Have you seen s3ql running on similar setups ? Serving webpages /
 files directly ?

No.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck 2.12 hangs processing objects

2015-02-16 Thread Nikolaus Rath
On Feb 16 2015, Guilherme Barile guig...@gmail.com wrote:
 Hello,

 I use s3ql for quite some time as my apache webroot, which serves
 about 300gb of files and some php applications. I had few problems
^^^

you already said that. SCNR.

 with high I/O scenarios in the past (using the 1.x series), where the
 mount daemon would die under high load, but managed to solve them all
 with fsck.s3ql after rebooting.

Actually that doesn't sound like a solution at all. mount.s3ql should
never crash, no matter how high the load is. Can you still reproduce
that? If so, it'd be great if you could post the backtrace.

 My current server was running s3ql 2.11 on ubuntu 14.04. Earlier
 today, I was performing full backups from that node to another s3
 bucket using duplicity, and after lots of gbs transferred, s3ql failed
 and I had to reboot the server.

 Now fsck.s3ql is hanging at ..processed 99500 objects so far..
 fsck.log (with --debug) shows a lot of HEAD requests; it seems to be
 running a verify for every block (about 140).

That should not happen. Please post more context for the logfile. What's
the last message before the HEAD requests are starting?

 I also tried
 downloading a metadata backup, but it hangs at the same point, here's
 an excerpt of the ongoing log:

What do you mean with at the same point? Downloading metadata should
not issue any requests for data objects at all.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck 2.12 hangs processing objects

2015-02-16 Thread Nikolaus Rath
On Feb 16 2015, Guilherme Barile guig...@gmail.com wrote:
 Hello,

 I use s3ql for quite some time as my apache webroot, which serves
 about 300gb of files and some php applications. I had few problems
 with high I/O scenarios in the past (using the 1.x series), where the
 mount daemon would die under high load, but managed to solve them all
 with fsck.s3ql after rebooting.

 My current server was running s3ql 2.11 on ubuntu 14.04. Earlier
[...]

Btw, which version is it now? Your subject says 2.12.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] fsck 2.12 hangs processing objects

2015-02-18 Thread Nikolaus Rath
On Feb 18 2015, Guilherme Barile guig...@gmail.com wrote:
 Nikolaus,

 After fscking and upgrading my volume to the new db revision, I was
 able to mount the filesystem and access all the files without problems
 (even though I stress the filesystem a lot, I never lost data using
 s3ql).

 With the volume back online, I started to rsync about 280gb of files
 *from* S3QL to an EBS volume (everything running inside AWS, on the
 same region). I had 3 jobs in parallel - one syncing small videos
 (15000 ~100mb files), one syncing photos (15000 ~5mb files) and
 another one syncing a 3GB volume with about 100.000 small (10-500kb)
 files.

 The high io wait situation occurred after about 3 hours of processing.
 I use newrelic to monitor the server, so I could notice a spike of
 writes on my local cache disk, which I documented here -
 https://docs.google.com/document/d/1S927JPyMG4SCxkiIcReQDHWACM1qozYOMTa86pGQ1k4/edit?usp=sharing

What does ~/.s3ql/mount.log say?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Reduced file handles?

2015-01-23 Thread Nikolaus Rath
Rabid Mutant rabidmutantstarg...@gmail.com writes:
 On Saturday, January 24, 2015 at 3:00:20 AM UTC+11, Nikolaus Rath wrote:

 On 01/21/2015 08:16 PM, Rabid Mutant wrote:
  
  Does S3QL really need 1 file handle per cache entry?

 In principle, no. The way it's currently programmed, yes.

 Looking briefly at the code, it seems I might be able to replace
 access to the file handle with a call to a cache manager, and
 everything should just work...but that's based on one quick
 look. Would that be a correct assessment?

It sounds right, but I haven't looked in detail either.

  I could use rsync to compare files and update only the new & changed 
 files without any unnecessary network I/O. It would also allow for
 the possibility of offline use.

 rsync by default uses file name, modification time, and size to check if 
 a file has changed, so it won't incur any network IO apart from what's 
 necessary to transfer new and changed files. 

 This changes if you use the -c option, but I'd be rather curious why 
 you'd need that. 



 Some applications (notably PostgreSQL) do not update inode dates when they 
 update files, specifically to reduce IO load. ie. the data is changed, but 
 the modification dates (and quite probably size) are not.

Are you sure that's correct? Updating the inode dates happens in the
kernel, and as far as I know there is no way for an userspace
application to prevent this. You can of course reset the times to the
original values, but this would increase the IO load rather than reduce
it.

 So the -c option becomes important, at least in this case.

 I also (sometimes) change the file modification dates on photos to the
 original photo date after trivial edits: eg changing EXIF data. In this
 case the date and size remain the same.

That might cause problems indeed. But seriously, I'd simply not do that
instead of trying to patch S3QL to support a bigger cache.

 Another factor, and I agree it's probably minor, but decryption is usually 
 considerably faster than compression, and my expectation was that using 
 'rsync -c' on a fully cached file system (thereby comparing uncompressed 
 data) would be faster than compressing the data and comparing the 
 checksums.

The checksum is calculated before compression. Compression and
encryption only happens after the checksum has been calculated and no
matching existing block has been found.

 Since the following would occur:

 Normal Copy:

 1. compress chunk
 2. compare hash in DB
 3. If different:
   a. send compressed from step 1

No. It's

1. Calculate checksum
2. If different:
  a. compress
  b. encrypt
  c. upload

 rsync -c copy with complete cache:

 1. decompress chunk from cache
 2. compare data
 3. if different:
   a. Compress chunk
   b. send

No, that's wrong as well. The cache is stored uncompressed.
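
Schematically, the write path described above looks like this (a
simplified, illustrative sketch, not the actual S3QL code; here 'db' is
just a mapping from checksum to object id, and encryption is omitted):

    import hashlib, zlib

    def store_block(db, backend, data):
        digest = hashlib.sha256(data).digest()
        if digest in db:            # 1. dedup on the *uncompressed* data:
            return db[digest]       #    nothing is compressed or uploaded
        obj_id = len(db)            # 2. only new content is compressed
        backend['s3ql_data_%d' % obj_id] = zlib.compress(data)
        db[digest] = obj_id
        return obj_id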


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Uncaught top-level exception - ver 1.17 on Ubuntu 12.04LTS

2015-01-27 Thread Nikolaus Rath
Hi Warren,

Warren Daly invisibleagentreco...@gmail.com writes:
[...]
   File "/usr/local/lib/python2.7/dist-packages/s3ql-1.17-py2.7-linux-x86_64.egg/s3ql/backends/s3c.py", line 281, in open_read
     resp = self._do_request('GET', '/%s%s' % (self.prefix, key))
   File "/usr/local/lib/python2.7/dist-packages/s3ql-1.17-py2.7-linux-x86_64.egg/s3ql/backends/s3c.py", line 405, in _do_request
     tree = ElementTree.parse(resp).getroot()
   File "<string>", line 62, in parse
   File "<string>", line 38, in parse
 ParseError: no element found: line 1, column 0

This looks as if Amazon send you a malformed XML error message. If you
use a current S3QL version, it'd give more debugging information so that
we could find out if it's truly an Amazon S3 problem (S3QL development
has uncovered quite a few of those so far) and if there might be a
workaround.

As long as you stay with 1.x, though, the chances of doing anything
about this are pretty slim. I don't run that version on any of my
systems anymore, the required changes are probably non-trivial, and this
seems like a pretty rare occurrence, so I'm unlikely to spend much time
on it.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Corrupt metadata

2015-01-06 Thread Nikolaus Rath
Hi Eric,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

Eric Eijkelenboom eric.eijkelenb...@gmail.com writes:
 Hi Nikolaus 

 Thanks a lot for your advice. I will definitely refrain from messing
 with the metadata in the future :)

 I renamed s3ql_metadata to something not starting with 's3ql', but
 running:

 fsck.s3ql s3://mybucket 

 results in this exception: 

 Starting fsck of s3://mybucket
 Uncaught top-level exception:
 Traceback (most recent call last):
 File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 253, in lookup
   resp = self._do_request('HEAD', '/%s%s' % (self.prefix, key))
 File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 477, in _do_request
   self._parse_error_response(resp)
 File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 494, in _parse_error_response
   raise HTTPError(resp.status, resp.reason, resp.headers)
 s3ql.backends.s3c.HTTPError: 404 Not Found

Aeh, yeah, I forgot about that. fsck.s3ql doesn't cope well with
completely absent metadata either...

Can you temporarily apply the attached patch? Hopefully that will fix
your problem. Just make sure to use it only once, and then revert to the
vanilla 2.12 version.

Best,
Nikolaus

PS: Please remember the first paragraph when replying :-).

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

diff --git a/src/s3ql/fsck.py b/src/s3ql/fsck.py
--- a/src/s3ql/fsck.py
+++ b/src/s3ql/fsck.py
@@ -1159,7 +1159,7 @@
 param = pickle.load(fh)
 if param['seq_no'] < seq_no:
 log.info('Ignoring locally cached metadata (outdated).')
-param = backend.lookup('s3ql_metadata')
+param = backend.lookup('s3ql_metadata_bak_0')
 else:
 log.info('Using cached metadata.')
 db = Connection(cachepath + '.db')
@@ -1169,12 +1169,12 @@
 log.warning('File system has not been unmounted cleanly.')
 param['needs_fsck'] = True
 
-elif backend.lookup('s3ql_metadata')['seq_no'] != param['seq_no']:
+elif backend.lookup('s3ql_metadata_bak_0')['seq_no'] != param['seq_no']:
 log.warning('Remote metadata is outdated.')
 param['needs_fsck'] = True
 
 else:
-param = backend.lookup('s3ql_metadata')
+param = backend.lookup('s3ql_metadata_bak_0')
 assert not os.path.exists(cachepath + '-cache')
 # .db might exist if mount.s3ql is killed at exactly the right instant
 # and should just be ignored.
@@ -1287,6 +1287,7 @@
   is_compressed=True)
 log.info('Wrote %s of compressed metadata.', pretty_print_size(obj_fh.get_obj_size()))
 log.info('Cycling metadata backups...')
+backend.copy('s3ql_metadata_new', 's3ql_metadata')
 cycle_metadata(backend)
 with open(cachepath + '.params', 'wb') as fh:
 pickle.dump(param, fh, PICKLE_PROTOCOL)


Re: [s3ql] Reduced file handles?

2015-02-04 Thread Nikolaus Rath
Rabid Mutant rabidmutantstarg...@gmail.com writes:
 I must admit I see the 'rsync -c' option as a much stronger guarantee
 than a copy based on file dates & sizes, and this could be irrational.
 But, if 100% cache can be achieved at nearly zero performance impact
 *and* it supports rsync -c as well as an offline mode, this seems to
 me like a big gain at minimal cost.

I think what you want in this case is use something like lsyncd. It uses
inotify to determine changed files. So it can't miss anything even if
you mess with file time stamps, and it does not require you to read the
entire file:

See also: 
https://bitbucket.org/nikratio/s3ql/wiki/FAQ#!is-there-a-way-to-make-the-cache-persistent-access-the-file-system-offline


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Corrupt metadata

2015-01-05 Thread Nikolaus Rath
Eric Eijkelenboom eric.eijkelenb...@gmail.com writes:
 Hi guys

 Here’s what happened:

 1. Due to a server shutdown, the filesystem was not cleanly umount’ed.
 Hereafter, fsck resulted in ‘fs still mounted elsewhere, aborting’
 errors (obviously, the backend was not mounted anywhere else).

Did you run with --batch? Normally, fsck will prompt you and you can
tell it to continue running anyway. No need to mess with the metadata at
all.

 2. I tried restoring from backup by renaming
 s3://mybucket/s3ql_metadata_bak_0 to to s3://mybucket/s3ql_metadata,
 hereby overwriting s3ql_metadata (yes, big screw-up).

Yep. That breaks things, because it looks as if some malicious other guy
is trying to get you to use outdated metadata.

 After discovering s3qladm download-metadata, I tried to:

 1. s3qladm download-metadata s3://mybucket
 2. Choose a backup
 3. Move the generated files to ~/.s3ql
 4. Run fsck. 

 fsck does it’s thing, but crashes in the end with the same exception
 as above:

 ...
 Backing up old metadata...
 Uncaught top-level exception:
 Traceback (most recent call last):
 File "/usr/bin/fsck.s3ql", line 9, in <module>
   load_entry_point('s3ql==2.12', 'console_scripts', 'fsck.s3ql')()
 File "/usr/lib/s3ql/s3ql/fsck.py", line 1291, in main
   cycle_metadata(backend)
 File "/usr/lib/s3ql/s3ql/metadata.py", line 127, in cycle_metadata
   backend.copy('s3ql_metadata', 's3ql_metadata_bak_0')
 File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 290, in copy
   self._copy_or_rename(src, dest, rename=False, metadata=metadata)
 File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 302, in _copy_or_rename
   (nonce, meta_old) = self._verify_meta(src, meta_raw)
 File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 139, in _verify_meta
   % (stored_key, key))
 s3ql.backends.common.CorruptedObjectError: Object content does not
 match its key (s3ql_metadata_new vs s3ql_metadata)

 Questions: 
 1. What can I do from here?

Delete the s3ql_metadata object from the bucket (or rename it to
something that does not start with s3ql_).

 2. How to avoid this in the future?

* Never, ever, modify the bucket contents with anything but the S3QL
  tools.

* Don't use s3qladm download-metadata before you've asked on this list,
  fsck.s3ql should be able to handle 99% of all problems.

* Don't shut down the server while the file system is still mounted :-)

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Trying to use s3ql to attach to Swift

2015-03-31 Thread Nikolaus Rath
On Mar 31 2015, Shane Wilson coswort...@gmail.com wrote:
 First, I'm very new to Swift, so I'm sure I'm doing something wrong.

 I'm trying to mount swift on Ubuntu 14.04 linux using s3ql, but I'm getting 
 an error during the mkfs.s3ql.

 Here is the command line:
 root@owncloud:~/.s3ql# mkfs.s3ql --plain --backend-options no-ssl 
 swiftks://192.102.218.230:5000/regionOne:s3ql
 Enter backend login:
 Enter backend passphrase:
 No accessible object storage service found in region regionOne (available 
 regions: )
 root@owncloud:~/.s3ql#

 Here is my authinfo2
 [swift]
 backend-login: tenant:demo
 backend-password: 19monza67
 storage-url: swiftks://192.102.218.230:5000/regionOne:s3ql

 The s3ql container is there when I do a swift list.  It's as if it doesn't 
 know about the region.  There should only be one region; my swift install 
 is from the Juno install docs and only creates regionOne.  I hope you can 
 point me in the right direction, I'm sure it's something simple I'm missing.  
 I've read everything and I'm at a loss for direction.

Apply the attached patch and post the contents of the response.txt
file that it should create.

Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

diff --git a/src/s3ql/adm.py b/src/s3ql/adm.py
--- a/src/s3ql/adm.py
+++ b/src/s3ql/adm.py
@@ -398,6 +398,8 @@
 
 def yield_objects():
 for (id_,) in db.query('SELECT id FROM objects'):
+if id_ < my_starting_id:
+continue
 yield 's3ql_data_%d' % id_
 for obj_id in extra_objects:
 yield obj_id
diff --git a/src/s3ql/backends/swiftks.py b/src/s3ql/backends/swiftks.py
--- a/src/s3ql/backends/swiftks.py
+++ b/src/s3ql/backends/swiftks.py
@@ -97,7 +97,10 @@
 elif resp.status > 299 or resp.status < 200:
 raise HTTPError(resp.status, resp.reason, resp.headers)
 
-cat = json.loads(conn.read().decode('utf-8'))
+body = conn.read()
+with open('response.txt', 'wb') as fh:
+fh.write(body)
+cat = json.loads(body.decode('utf-8'))
 self.auth_token = cat['access']['token']['id']
 
 avail_regions = []


Re: [s3ql] Re: Preventing huge network traffic

2015-04-20 Thread Nikolaus Rath
[Quoting repaired]

On Apr 20 2015, Viktor Szépe szepe.vik...@gmail.com wrote:
 On Saturday, April 18, 2015 at 10:53:05 AM UTC+2, Viktor Szépe wrote:

 Is there a way to upload only changed files, thus doing incremental backup?
 My use case is making a backup of a website that has one or two new files
 per backup.

 Excuse me, that was the wrong question.
 The right one is: will s3ql read the file's contents to determine whether
 it has changed, or does it read only the file's metadata (mtime, size etc.)?

I don't think that's the right question either.

S3QL is a file system. When a client application (like your backup
program) tells it to write data to a file, it checks if it already has
identical data stored and (if so) saves only one copy.

There is no way for S3QL itself to compare mtime and size. S3QL does not
know when a file has reached its final size or mtime, and the mtime is
the time of last write, i.e. when S3QL receives a write request it is
always the current time. Finally, S3QL also has no idea which other file
to compare any of this against.

What you want is a feature of the backup program. For example, if you
use rsync it will only copy a file if its attributes have changed. Look
at http://www.rath.org/s3ql-docs/contrib.html#s3-backup-sh for an example.
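
(For the record, the rsync call in that script is roughly of the form

    rsync -aHAX --delete-during --delete-excluded --partial -v \
        /home/my_user ./new-backup/

i.e. without '-c', so unchanged files cost no network traffic.)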


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Storage backend hung

2015-04-28 Thread Nikolaus Rath
On Apr 28 2015, netplus.r...@gmail.com wrote:
 Hmm. After a 502 error S3QL tries to re-establish the connection
 completely. The only difference is that it does not attempt to contact
 the auth server again. Could it be that the auth server sends you to a
 different endpoint when you restart S3QL?

 If you run a traffic dump tool like Wireshark, you should be able to see
 if S3QL connects to the same server when you restart.

 Yes, there is indeed two VIP for the auth part and the storage part.
 Is there any workaround to force the connection to contact the auth server ?

No, but it might be worth implementing.

Could you file an issue at https://bitbucket.org/nikratio/s3ql/issues?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3 performance on lots small files

2015-04-28 Thread Nikolaus Rath
On Apr 28 2015, Maxim Kostrikin mkostri...@gmail.com wrote:
 Hello,
   I am faced with a performance issue. My s3ql mount is attached to an s3 
 backend with options: --cachesize 2000 --max-cache-entries 100 --threads 16 
 --allow-other --backend-options no-ssl,tcp-timeout=30 --nfs
   I copy many small files (100 - 1000 bytes) into the s3ql mount. With time, 
 the copy speed slows down and almost stalls.

Please be more precise. How did you measure the speed? How fast was it
initially, how fast at the end?

   I see almost 16 connections with s3 servers, but tcpdump shows small rate 
 of uploading PUTs.

What do you mean with small? What do you expect it to be, and what did
you find instead?

Generally every file needs one network request, which means tens to
hundreds of milliseconds of network latency per file.
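
As a rough back-of-the-envelope illustration (assumed numbers, not
measurements):

    1 file = 1 PUT = ~100 ms round trip
    => ~10 files/s per thread, ~160 files/s with 16 threads
    at ~500 bytes per file that is well under 100 kB/s, no matter
    how much bandwidth is available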

   Ok, I remounted with --threads 1024 (I bet it was a bad idea), but 
 connections with s3 were 24 at most, with lots of cross locks between 
 threads.

How did you determine that? 

   1. How can I increase parallel requests to s3? Or increase upload 
 performance?

In theory more threads should give you more simultaneous connections -
unless you're limited by something else.

   2. What limits are reasonable for the --threads parameter? Why? Is SQLite 
 the bottleneck?

Difficult to say without further information.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3 performance on lots small files

2015-05-04 Thread Nikolaus Rath
On May 04 2015, Maxim Kostrikin mkostri...@gmail.com wrote:
I see almost 16 connections with s3 servers, but tcpdump shows small 
  rate 
   of uploading PUTs. 
  
  What do you mean with small? What do you expect it to be, and what did 
  you find instead? 
  
  200 bytes - 1k  per file. 
  The dirty size was big (1-2GiB). 

 No, I was talking about the small rate of PUTs that you saw in tcpdump. 

 If we assume about 100 ms network latency, then with 200 byte files 
 you'd be able to transfer at most 200 bytes / 100 ms = 2 kB/s for each 
 active thread. So your 5 kb/s isn't completely the wrong order of 
 magnitude. 

  I would expect that the initial fast period is just when you fill up 
 your cache. Am I right that, if you set the cache to something small 
 (say 2 * number of threads entries), you're getting a constant 
 transfer rate from the very beginning? 

 In tcpdump, how many PUTs do you see per second? Is that number constant? 

 All right, but why were only 16 threads transferring?

I could try to determine that, if you'd answer my question...


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] S3 performance on lots small files

2015-04-30 Thread Nikolaus Rath
On Apr 30 2015, Maxim Kostrikin mkostri...@gmail.com wrote:
 On Wednesday, April 29, 2015 at 2:55:05 UTC+6, Nikolaus Rath wrote:
 On Apr 28 2015, Maxim Kostrikin mkost...@gmail.com wrote: 
I am faced with performance issue. My s3ql mount attached to s3 backend 
  with options: --cachesize 2000 --max-cache-entries 100 --threads 
  16 
  --allow-other --backend-options no-ssl,tcp-timeout=30 --nfs 
I copy into s3ql mount many small files 100 - 1000 bytes. With time copy 
  speed slowdown almost stall. 

 Please be more precise. How did you measure the speed? How fast was it 
 initially, how fast at the end? 

 I was transferring files with 'tar c | dd | ssh s3qluser@s3ql_host tar xC
 /s3ql_mount'. 'kill -USR1 $dd_pid' shows the speed. It starts at 5M/sec
 and falls to 5k/s or even freezes.

Well, did it fall to 5 k/s or did it freeze?

 Now I tar-pipe into temp directory and from the temp directory move into 
 /s3ql_mount

   I see almost 16 connections with s3 servers, but tcpdump shows small 
 rate 
  of uploading PUTs. 

 What do you mean with small? What do you expect it to be, and what did 
 you find instead? 

 200 bytes - 1k  per file.
 The dirty size was big (1-2GiB).

No, I was talking about the small rate of PUTs that you saw in tcpdump.

If we assume about 100 ms network latency, then with 200 byte files
you'd be able to transfer at most 200 bytes / 100 ms = 2 kB/s for each
active thread. So your 5 kb/s isn't completely the wrong order of
magnitude.

I would expect that the initial fast period is just when you fill up
your cache. Am I right that, if you set the cache to something small
(say 2 * number of threads entries), you're getting a constant
transfer rate from the very beginning?

In tcpdump, how many PUTs do you see per second? Is that number constant?
 
   Ok, I remounted with --threads 1024 ( I bet it was a bad idea), but 
 connections with s3 was 24 max and lots of cross locks between 
 threads. 

 How did you determine that? 

 strace -fp $mount.s3ql_pid showed many  FUTEX messages.

I think that doesn't mean that there's lock contention, just that locks
are being used.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Not detecting mounted elsewhere

2015-05-08 Thread Nikolaus Rath
On May 08 2015, Andy Cress andycr...@gmail.com wrote:
 I’m using s3ql-1.17, and this pertains to Amazon S3.

 I configured an S3QL mount point with mkfs.s3ql and mount.s3ql on one
 system, then I did fsck.s3ql and mount.s3ql to the same bucket (same
 credentials) on a second system, but it worked and did not give me the
 ‘still mounted elsewhere’ message that I expected.

 What functions should I use instead to detect this case?   Or should
 this work in fsck.s3ql / mount.s3ql in a later version?

What region is your bucket in?

S3QL tries to detect double mounts as best as it can, but there are
fundamental limits imposed by the storage backend. For the detection to
work reliably, the backend must guarantee either immediate get-after-put
or list-after-put consistency. The us-standard storage region, for
example, offers neither.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Understanding benchmark.py output

2015-05-13 Thread Nikolaus Rath
On May 13 2015, joshua.phill...@semanticbits.com wrote:
 Hi,

 I'm trying to understand the benchmark.py output to identify an upload 
 bottleneck in my S3QL deployment. I'd appreciate any pointers you may have.

[...]

 Threads:                          1            2            4            8
 Max FS throughput (bzip2):   36264 KiB/s  36264 KiB/s  36264 KiB/s  36264 KiB/s
 ..limited by:                  S3QL/FUSE    S3QL/FUSE    S3QL/FUSE    S3QL/FUSE

 But when I run the mount with a 1MB cache size, 8 upload threads, 
 and using bzip2-6 compression, I'm seeing a much lower throughput than 
 expected.

 mount.s3ql --threads 8 --nfs --allow-other --cachedir /var/lib/s3ql-cache 
 --cachesize 1024 --compress bzip2-6 --backend-options no-ssl  
 s3://some-bucket 
 /some/path
 Autodetected 4040 file descriptors available for cache entries
 Using cached metadata.
 Creating NFS indices...
 Mounting filesystem...

 dd if=/dev/zero of=/some/path/speed_test.dat bs=2M count=1
 1+0 records in
 1+0 records out
 2097152 bytes (2.1 MB) copied, 33.6993 s, 62.2 kB/s

 Only, 62 KB/s. Any ideas why it's so low, or where to look for the 
 bottleneck?

That's odd. What happens if you increase the cache size to 4 MB? What
happens if you decrease the blocksize (bs) to 128k?
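
For example, keeping the total amount of data the same:

    dd if=/dev/zero of=/some/path/speed_test.dat bs=128k count=16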

 Also, when I try the same thing from my home laptop, I see throughput of 
 150 MB/s, which is my upload limit, as expected.

I don't believe it. Please recheck.



Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] load balancing/fail over for NFS with s3ql on swift

2015-05-15 Thread Nikolaus Rath
On May 15 2015, Jeb Baxley jeb.bax...@gmail.com wrote:
 So I'm looking at using S3QL on swift, and was wondering if it's possible 
 to provide a NFS cluster using s3ql on swift?

Depends what you mean by NFS cluster. You can export an S3QL mountpoint
over NFS.

 Thinking about it, I'm not sure it would be given how both of these
 technologies work - but was wondering if it's possible to provide a
 customer/client with an nfs target that is serviced by more than one
 s3ql mount/server instance.

I've never heard of having one NFS file system served by multiple
servers. Are you sure that's possible?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] max-obj-size with OpenStack / Swift

2015-05-13 Thread Nikolaus Rath
On May 13 2015, Maxim Kostrikin mkostri...@gmail.com wrote:
 I think the only reason for a bigger max-obj-size is the db size. But 62.8
 MiB is considerably small. I have a setup with a 10Gb+ sqlite DB and
 it works. I guess the con of a bigger max-obj-size is multicore
 processing: if a block is big, then s3ql has to wait until the next big
 block is copied before it can start using the next core. Or not?

That's correct.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«


