heya!

Thanks for the script! This helped me a lot!
There are a few bugs, though: I am using s3ql with a local:/// backend, which means that after datafile 999 comes 1000, and that file goes into a 100/ folder, and so on. The script does not handle that layout yet. I think I can fix it myself; it is a bit more complicated now that directories are involved, but I can manage.
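
For reference, a minimal Python sketch of how I picture that path layout (the rule that the folder name is the datafile number with its last digit dropped is only my guess from what I see on disk, not taken from the s3ql source):

    import os

    def datafile_path(storage_dir, number):
        # Guessed layout of the local:/// backend: files 0-999 sit directly in
        # the storage dir; from 1000 on, the file lives in a subfolder named
        # after the number with its last digit dropped (1000 -> 100/).
        name = "s3ql_data_%d" % number          # assumed datafile naming
        if number < 1000:
            return os.path.join(storage_dir, name)
        return os.path.join(storage_dir, str(number)[:-1], name)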

My goal is to use Amazon Cloud Drive for unlimited file storage, and it is now about 90% ready. Only deletion is not working yet, but after I tune this script it will. Here is how I set it up:

1. Mount Amazon Cloud Drive read-only via acd_cli mount.
2. Join that read-only filesystem with a read-write unionfs filesystem.
3. Keep the metadata on a local filesystem.
4. Run a script that loops and sends datafiles older than 30 minutes to the cloud (via acd_cli). It also re-uploads a datafile if it has been modified, and of course deletes the local copy after a successful upload, to conserve the scarce local disk space (see the sketch after this list).
5. After the upload the file is visible on the read-only mount, and therefore on the rw unionfs.
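
As mentioned in step 4, here is a rough Python sketch of that upload loop. The directory names, the bookkeeping of what was already sent, and the exact acd_cli calls are simplified assumptions; the real script is longer:

    import os
    import subprocess
    import time

    LOCAL_DIR = "/srv/s3ql-local"   # hypothetical: rw branch where s3ql writes its datafiles
    REMOTE_DIR = "/s3ql"            # hypothetical: target folder on Amazon Cloud Drive
    MIN_AGE = 30 * 60               # only touch datafiles older than 30 minutes
    sent = {}                       # path -> mtime at the time of the last upload

    while True:
        now = time.time()
        # note: the nested 100/ style subfolders are flattened here for brevity
        for root, _dirs, files in os.walk(LOCAL_DIR):
            for name in files:
                path = os.path.join(root, name)
                mtime = os.path.getmtime(path)
                if now - mtime < MIN_AGE:
                    continue        # too fresh, leave it alone
                if path not in sent:
                    cmd = ["acd_cli", "upload", path, REMOTE_DIR]
                elif mtime > sent[path]:
                    # file was modified since the last upload; 'overwrite'
                    # replaces the remote node with the local content
                    cmd = ["acd_cli", "overwrite", REMOTE_DIR + "/" + name, path]
                else:
                    continue        # unchanged since the last upload
                if subprocess.call(cmd) == 0:
                    sent[path] = mtime
                    os.remove(path) # free the scarce local disk space
        time.sleep(60)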

The deletion process should go like this: run the cleanup script manually every now and then on the unmounted filesystem. I think it will also work on a mounted filesystem, as long as no new data is being written at that moment; there is always the option of remounting s3ql read-only while the script runs. It is also possible to only look at files of a certain age, e.g. 30+ minutes old, to avoid touching new datafiles.
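
That age filter could be as simple as something like this (the storage path is hypothetical; the resulting list would then be handed to the cleanup pass):

    import os
    import time

    def old_datafiles(storage_dir, min_age=30 * 60):
        # Collect datafile paths that were last modified more than min_age
        # seconds ago, so the cleanup never touches freshly written objects.
        cutoff = time.time() - min_age
        old = []
        for root, _dirs, files in os.walk(storage_dir):
            for name in files:
                path = os.path.join(root, name)
                if os.path.getmtime(path) < cutoff:
                    old.append(path)
        return old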

I know Amazon Cloud Drive is not supported by s3ql, but it works this way. Since nobody has written a backend for it, this is currently the only way to use it with s3ql. I think it should be possible to write a backend that just calls the acd_cli command for put/get/delete/list, but my coding skills are very novice :D and after looking at the code, it was not so simple :D Another option would be to build a Swift gateway between Swift <-> acd_cli...
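
Just to illustrate the idea: such a backend would basically sit on top of a thin wrapper around the acd_cli command line, something like the sketch below. This is not the real s3ql backend interface, only the acd_cli side of it:

    import subprocess

    class AcdStore:
        # Thin wrapper around acd_cli; an s3ql backend could call something
        # like this for its storage operations. Not the s3ql backend API.
        def __init__(self, remote_dir):
            self.remote_dir = remote_dir.rstrip("/")

        def put(self, local_path):
            subprocess.check_call(["acd_cli", "upload", local_path, self.remote_dir])

        def get(self, name, local_dir):
            subprocess.check_call(
                ["acd_cli", "download", self.remote_dir + "/" + name, local_dir])

        def delete(self, name):
            # acd_cli 'rm' moves the node to the Amazon Cloud Drive trash
            subprocess.check_call(["acd_cli", "rm", self.remote_dir + "/" + name])

        def list(self):
            # raw 'ls' output lines (each line still carries the node id etc.)
            out = subprocess.check_output(["acd_cli", "ls", self.remote_dir])
            return out.decode().splitlines()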

