Re: [s3ql] Any way to stabilize S3QL in my convoluted Amazon Cloud Drive setup?
I must recommend: *do not use s3ql on top of acd_cli*. *But I have a workaround.* I'm still using s3ql on Amazon Cloud Drive. It's a bit complicated, but it works; if people are interested I can write a tutorial on how to set it up. At the moment I have 2-3 TB in Amazon Drive via s3ql, and I can stream full HD or even a 4K movie with a bitrate over 5k. It's very fast in use. It can be done, but a lot of caution is needed (watch out for wipes when mounting, etc.).

How a movie stream travels: amazon_acd_cli -> s3ql -> server running owncloud and s3ql -> WebDAV -> Kodi at home connects to the server via WebDAV -> 4K TV -> enjoy.

What I have done, in short: a combination of unionfs, s3ql with encryption, owncloud as a frontend, and a lot of scripts to automate everything. It's dirty, but it works, and it is fast as hell :D

The only downside of this setup: if the server crashes, fsck.s3ql takes more than 2 hours on a single file system. I have 3 file systems — you always want to split your file systems, so that if one breaks you don't lose everything.

-- You received this message because you are subscribed to the Google Groups "s3ql" group. To unsubscribe from this group and stop receiving emails from it, send an email to s3ql+unsubscr...@googlegroups.com. For more options, visit https://groups.google.com/d/optout.
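For anyone curious, the layering the poster describes (acd_cli underneath, an s3ql file system stored inside the ACD mount via S3QL's local backend) might look roughly like this. This is a sketch only: every path and mountpoint is invented, acd_cli must already be configured, and I'm assuming the command-line tool is invoked as `acdcli` (the executable name acd_cli installs).

```shell
# Sketch of the layered setup -- all paths here are illustrative,
# not taken from the original post.
ACD_MNT=/mnt/acd
FS_DIR="$ACD_MNT/s3ql-movies"

# 1) FUSE-mount Amazon Cloud Drive with acd_cli.
command -v acdcli >/dev/null && acdcli mount "$ACD_MNT"

# 2) Mount an S3QL file system whose objects live *inside* the ACD
#    mount, using S3QL's "local" backend (the same local.py backend
#    that appears in the tracebacks later in this thread).
STORAGE_URL="local://$FS_DIR"
command -v mount.s3ql >/dev/null && mount.s3ql "$STORAGE_URL" /mnt/movies

echo "$STORAGE_URL"
```

One directory per file system gives the "split your file systems" safety margin: a corrupted fs only takes its own directory with it.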
Re: [s3ql] Any way to stabilize S3QL in my convoluted Amazon Cloud Drive setup?
Well, I thought I would respond back to this so people don't waste their time. I'd recommend not using s3ql on top of acd_cli; it's too unstable. I found myself with a crashed file system and having to run fsck every day. My reasons for wanting to use s3ql were:

1. I thought the chunked storage format would be better for things like resuming movies from the middle on my Plex server.
2. The encryption it provides makes me feel like a sneaky secret agent.

Instead, I recommend you just upload your movies normally and mount using acd_cli. For #1, chunking the files doesn't seem necessary. I don't know what black magic the acd_cli mount is doing, but my movies resume from the middle just fine, and I'd actually say they play a little faster; s3ql seems to add some overhead.

I do miss out on the encryption, but I really don't think Amazon cares about what I upload. The other thing to consider with encryption, and s3ql in general, is that you're locking yourself into a particular technology. Right now I'm running Plex on Linux, but if I ever decide to buy a Windows license so I can watch Netflix on it, I'd have to re-download all my movies and then re-upload them in some new format. Finally, if encryption is really important to you, there's a pull request for supporting encryptfs with acd_cli that seems a lot more mature than the pull request here for supporting s3ql.

Some more details on what I ended up doing:

1. I am still using unionfs. I keep everything on the local disk until I need to make some room, then I archive stuff to ACD. It's sort of a poor man's version of the disk cache s3ql provides.
2. rclone is amazing. Instead of using acd_cli to upload, which I find a little slow, I use rclone. I actually got banned from my VPS for a little while until I throttled it, because I was uploading at 50-80 MBps (capital B is intentional!), which was most of their 1 Gbps pipe. Now I upload at 20 MBps consistently.
Rclone also seems a little more stable than acd_cli, although its mount support is not currently as good. Anyways, the caching + encryption that s3ql does is cool, and I might revisit it if ACD ever gets official support. It would be nice if it did, because then I could remove all my unionfs and acd_cli stuff and just use s3ql. But what I have right now is good enough for my needs. I've had one file system crash in the last two weeks, and recovery from that is a lot faster since I don't have to run fsck. I know a lot of people are trying to get this combo working, so hopefully my post helps.

On Wednesday, October 26, 2016 at 3:04:25 PM UTC-4, Nikolaus Rath wrote:
> On Oct 26 2016, Jonas Lippuner wrote:
> > I'm using a similar setup. Mount ACD locally with acd_cli and then use an
> > s3ql volume inside the mounted ACD path. The reason for using ACD rather
> > than S3 is very simple: cost.
>
> Well, that explains it. You get what you pay for :-).
>
> [...]
>
> Yes, that's ACD being buggy.
>
> > I wonder how hard it would be to add a native ACD interface to S3QL...
>
> Some people are trying, see https://github.com/s3ql/s3ql/pull/7
>
> Best,
> -Nikolaus
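For reference, a throttled upload along those lines might look as follows. The remote name and paths are invented, but `--bwlimit` is a real rclone flag and takes *bytes* per second, which is why `20M` matches the 20 MBps figure above.

```shell
# Hypothetical rclone upload, throttled to 20 MBytes/s (not Mbit/s).
# "acd:" is a made-up remote name; set up your own with `rclone config`.
command -v rclone >/dev/null && \
    rclone copy /data/archive acd:archive --bwlimit 20M

# Why the capital B matters: 80 MBytes/s is 640 Mbit/s, i.e. most of
# a 1 Gbps pipe -- enough to get you throttled by a VPS provider.
mbit=$((80 * 8))
echo "$mbit Mbit/s"
```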
Re: [s3ql] Any way to stabilize S3QL in my convoluted Amazon Cloud Drive setup?
On Oct 26 2016, Jonas Lippuner wrote:
> I'm using a similar setup. Mount ACD locally with acd_cli and then use an
> s3ql volume inside the mounted ACD path. The reason for using ACD rather
> than S3 is very simple: cost.

Well, that explains it. You get what you pay for :-).

[...]

> Here's the backtrace (from mount.log) of the error I usually see:
> [...]
>   File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/backends/local.py", line 304, in __init__
>     self.fh = open(name, 'wb', buffering=0)
> OSError: [Errno 14] Bad address:
> '/home/jlippuner/cloud_storage/ACD/remote/s3ql_backup/s3ql_data_/698/s3ql_data_69888#7693-140501390976768.tmp'

Yes, that's ACD being buggy.

> I wonder how hard it would be to add a native ACD interface to S3QL...

Some people are trying, see https://github.com/s3ql/s3ql/pull/7

Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«
Re: [s3ql] Any way to stabilize S3QL in my convoluted Amazon Cloud Drive setup?
I'm using a similar setup: mount ACD locally with acd_cli and then use an s3ql volume inside the mounted ACD path. The reason for using ACD rather than S3 is very simple: cost. Currently I'm using 166 GB on ACD, and I expect this to grow by a factor of 2 or 3 within the next weeks. One gets unlimited storage on ACD for $60 per year, i.e. $5 per month. $5/mo buys about 167 GB of standard S3 storage (infrequent access or Glacier would be cheaper, but still not more than about 700 GB), and that's not counting any request costs. So ACD is a lot cheaper once one has more than a few hundred GB.

Here's what I've found with acd_cli + s3ql:

1) Stability is markedly improved by running acd_cli in single-threaded mode. Without this, there were times when even the metadata backup rotation would fail every time. Unfortunately, this cuts my bandwidth by a factor of 2 or 3.

2) S3QL is actually very good with data integrity, even when it crashes. Multiple times I copied tens of GBs to my S3QL volume (it all goes into the cache) and the S3QL file system crashed after uploading a few GB (see one backtrace below). However, the mount.s3ql process keeps going until all the data is uploaded (I can monitor the network traffic and the ACD web interface). Once everything is uploaded, I can kill mount.s3ql and run fsck.s3ql, which cleans everything up. Sometimes a few data blocks get corrupted, so some files get moved to lost+found. I can then copy those files again into the mounted S3QL volume, which is very fast because, thanks to deduplication, only the missing data blocks need to be uploaded. Then I delete the files in lost+found.

3) If I run acd_cli in multi-threaded mode (the default), S3QL crashes very quickly, but it still keeps uploading all the data as described above. However, this upload after the crash seems to be done in a single-threaded manner, which again reduces the bandwidth.
So I've decided to run acd_cli in single-threaded mode: the upload will be single-threaded after a crash anyway, this way I can at least delay the crash for a substantial amount of time, and metadata backups only work in single-threaded mode.

Here's the backtrace (from mount.log) of the error I usually see:

2016-10-24 17:58:46.920 7693:Thread-5 root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/mount.py", line 64, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/block_cache.py", line 405, in _upload_loop
    self._do_upload(*tmp)
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/block_cache.py", line 432, in _do_upload
    % obj_id).get_obj_size()
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/backends/common.py", line 339, in perform_write
    with self.open_write(key, metadata, is_compressed) as fh:
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/backends/comprenc.py", line 228, in open_write
    fh = self.backend.open_write(key, meta_raw)
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/backends/local.py", line 102, in open_write
    dest = ObjectW(tmpname)
  File "/usr/local/lib/python3.5/dist-packages/s3ql-2.20-py3.5-linux-x86_64.egg/s3ql/backends/local.py", line 304, in __init__
    self.fh = open(name, 'wb', buffering=0)
OSError: [Errno 14] Bad address: '/home/jlippuner/cloud_storage/ACD/remote/s3ql_backup/s3ql_data_/698/s3ql_data_69888#7693-140501390976768.tmp'

I wonder how hard it would be to add a native ACD interface to S3QL...

Best,
Jonas

On Saturday, October 15, 2016 at 2:16:42 PM UTC-7, Nikolaus Rath wrote:
> On Oct 15 2016, Mike Beaubien wrote:
> > I'm doing the usual workaround to get s3ql on top of acd using unionfs to
> > merge a local s3ql file system in RW mode with some additional data files
> > stored in ACD and mounted through acd_cli in RO.
>
> I very much hope that this is not truly a "usual" configuration. Why
> don't you use the S3 backend?
>
> [...]
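The crash-recovery cycle described in point 2 can be sketched roughly as follows. `fsck.s3ql` and `mount.s3ql` are the standard S3QL tools; the storage URL, paths, file names, and the way the crashed process is stopped are all assumptions for illustration.

```shell
# Hypothetical recovery after a mount.s3ql crash (point 2 above).
# Skip the whole sketch if S3QL is not installed.
if command -v fsck.s3ql >/dev/null; then
    # 1) Let the dangling mount.s3ql finish flushing its cache (watch
    #    the network traffic and the ACD web interface), then stop it.
    pkill -f mount.s3ql

    # 2) Check the file system; damaged files end up in lost+found.
    fsck.s3ql local:///mnt/acd/s3ql-movies

    # 3) Remount and copy the damaged files back in. Thanks to
    #    deduplication only the missing blocks are re-uploaded, so
    #    this is fast; then drop the lost+found copies.
    mount.s3ql local:///mnt/acd/s3ql-movies /mnt/movies
    cp /data/movies/damaged.mkv /mnt/movies/
    rm /mnt/movies/lost+found/damaged.mkv
fi
```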
Re: [s3ql] Any way to stabilize S3QL in my convoluted Amazon Cloud Drive setup?
On Oct 15 2016, Mike Beaubien wrote:
> Hi,
>
> I'm doing the usual workaround to get s3ql on top of acd using unionfs to
> merge a local s3ql file system in RW mode with some additional data files
> stored in ACD and mounted through acd_cli in RO.

I very much hope that this is not truly a "usual" configuration. Why don't you use the S3 backend?

> It works well when it works, but occasionally s3ql will crash with an error
> like:
> [..]
> buf = fh.read(9)
> OSError: [Errno 70] Communication error on send
>
> It's probably just some temporary error for whatever network reason. Is
> there any way to get s3ql to ignore and retry on these errors?

Not in practice, no. If S3QL attempts to read data from a file and the operating system returns an error, this typically means that something is seriously wrong and needs immediate attention. Crashing is the best course of action to make sure the problem is noticed. There is no way for S3QL to determine that in this case the problem is caused by an apparently buggy acd_cli. And even if there were a way, it would not be feasible for S3QL to anticipate and handle such errors for all the system calls that could possibly fail.

Best,
-Nikolaus