Actually, ignore the max object size question. I read your response to my 
other question about billing, and I see how the (file size)/(object size) 
ratio determines the number of GET/PUT requests. I will adjust the maximum 
object size for the average size of the files I store, to keep object 
counts lower.
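
For example, the rough shell arithmetic I am working from (the sizes below 
are made-up examples, and the one-PUT-per-object assumption is just my 
understanding of how S3QL splits files into storage objects):

    # Back-of-the-envelope object/request count for an initial upload.
    # Sizes are illustrative; each storage object upload is (at least) one PUT.
    avg_file_mib=1024      # average file size, ~1 GiB
    num_files=500          # number of files being copied in
    max_obj_mib=10         # mkfs default maximum object size, 10 MiB

    # Each file is split into roughly ceil(file_size / max_obj_size) objects.
    objs_per_file=$(( (avg_file_mib + max_obj_mib - 1) / max_obj_mib ))
    total_objects=$(( objs_per_file * num_files ))
    echo "objects per file: $objs_per_file"            # 103 with these numbers
    echo "PUT requests for the copy: ~$total_objects"  # ~51,500

For files much bigger than the object size, raising the maximum object size 
by 10x cuts the object count, and hence the request count, by roughly 10x.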

Thanks, and very good software you've created here; it has basically met my 
needs for encrypted cloud backup and real-time file storage/access.

Brandon
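
P.S. For the file systems that will hold mostly multi-GB files (the object 
size question quoted below), the adjustment I have in mind looks roughly 
like this. I am assuming the mkfs.s3ql option is --max-obj-size with a value 
in KiB (worth double-checking against the 2.18 man page), and the bucket 
name is just a placeholder:

    # Create a file system whose storage objects can grow to ~100 MiB,
    # so a 1 GiB file becomes ~10 objects instead of ~100.
    mkfs.s3ql --max-obj-size 102400 s3://my-backup-bucket/big-files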

On Sunday, June 19, 2016 at 6:47:35 AM UTC-5, Brandon wrote:
>
> Hello,
>
> Per the error: I realized that I was using an older version of S3QL, the 
> one distributed in Ubuntu and installed through its aptitude package 
> manager. I have now upgraded to the latest release (2.18) and will use it 
> to see whether I hit the same issues. I had just removed the directory 
> structure and re-copied it, with threading enabled per the default, and it 
> successfully copied everything (for the first time!) without giving me the 
> transport error. So, we'll see what happens.
>
> Also, another question: what is the benefit of changing the maximum object 
> size option during mkfs from the default 10 MB? In one case my files are, 
> on average, considerably larger than 10 MB. Would I see some kind of 
> benefit from changing this option to the average file size? Does the 
> option exist to keep the number of storage objects lower, so as to 
> increase read speed for the filesystem or something? For instance, if a 
> particular S3QL filesystem is going to store almost nothing but large 
> files, in excess of 1 GB each on average, should I set that size to 100 MB 
> to keep the number of storage objects low and get a performance increase 
> of some kind?
>
>
> Thank you!
>
> Brandon
>
> On Friday, June 17, 2016 at 4:10:55 PM UTC-5, Nikolaus Rath wrote:
>>
>> On Jun 17 2016, Brandon Orwell <[email protected]> wrote: 
>> > I've been using S3QL for a few days now, and whenever I am copying over 
>> > large amounts of data the mount point seems to 'lock up' for a period of 
>> > time (as well as anything else trying to access it), and then I start 
>> > getting 'transport endpoint not connected' errors. I "umount" the mount 
>> > point, run fsck on it, and then continue archiving until the problem 
>> > happens again. 
>>
>> I've heard this kind of story before, but it still amazes me. What train 
>> of thought led you to this procedure, instead of reporting the problem? 
>>
>> > Does anyone know what would cause these problems? 
>>
>> Where did you look for the answer? 
>>
>> https://bitbucket.org/nikratio/s3ql/wiki/FAQ#!what-does-the-transport-endpoint-not-connected-error-mean
>>  
>> says: 
>>
>> ,---- 
>> | What does the "Transport endpoint not connected" error mean? 
>> | 
>> | It means that the file system has crashed. Please check mount.log for a 
>> | more useful error message and report a bug if appropriate. If you can't 
>> | find any errors in mount.log, the mount process may have 
>> | "segfaulted". To confirm this, look for a corresponding message in the 
>> | dmesg output. If the mount process segfaulted, please try to obtain a C 
>> | backtrace (see Providing Debugging Info) of the crash and file a bug 
>> | report. 
>> | 
>> | To make the mountpoint available again (i.e., unmount the crashed file 
>> | system), use the fusermount -u command. 
>> | 
>> | Before reporting a bug, please make sure that you're using not just the 
>> | most recent S3QL version, but also the most recent versions of the most 
>> | important dependencies (python-llfuse, python-apsw, python-lzma). 
>> `---- 
>>
>>
>> Best, 
>> -Nikolaus 
>>
>> -- 
>> GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F 
>> Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F 
>>
>>              »Time flies like an arrow, fruit flies like a Banana.« 
>>
>

