Hi, I have the same problem. I'm using hubiC; I wrote a similar piece of 
software to S3QL, but simpler and without its own filesystem like S3QL has.

hubiC is the cheapest OpenStack storage on the market, I think, so you guys 
should add a directory feature to the filesystem backend files. At least 
add a parameter to --backend-options, specific to OpenStack, that allows 
creating directories automatically, just to work around the 50K files limit.

I ran some numbers: using a 50 MiB chunk size you get a minimum of 262,144 
chunks for the 12.5 TiB of space on hubiC (if the space is full, of course).
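A quick sanity check on that figure (assuming binary units, i.e. 12.5 TiB of space and 50 MiB chunks; with decimal TB/MB the count would be 250,000 instead):

```python
# Sanity check: how many 50 MiB chunks fit in 12.5 TiB of hubiC space?
CHUNK = 50 * 2**20           # 50 MiB in bytes
SPACE = 12.5 * 2**40         # 12.5 TiB in bytes
chunks = int(SPACE // CHUNK)
print(chunks)                # 262144
```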

If you create, I don't know, a random directory structure (because I 
understand the backend isn't required to have any particular logic), that 
could fix the problem.
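A minimal sketch of what I mean, assuming the backend derives the directory from a hash of the object key (the function name and key format here are made up, not real S3QL code):

```python
import hashlib

def shard_key(key: str, levels: int = 2) -> str:
    """Map a flat object key to a sharded path like 'ab/cd/key'.

    The directory names come from the key's MD5 hex digest, so the
    layout is deterministic: the same key always lands in the same
    directory, and the backend needs no extra bookkeeping.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    dirs = [digest[i * 2:(i + 1) * 2] for i in range(levels)]
    return "/".join(dirs + [key])

p = shard_key("s3ql_data_12345")  # hypothetical chunk name
print(p)
```

With two levels of 256 directories each (65,536 in total), even 262,144 chunks average only a handful of objects per directory, far below the 50K limit.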
It could also improve the speed of getting directory info, because in 
OpenStack you can limit the length of the file list the server sends back 
by using delimiters.

For example, you can use a delimiter so that you only get the files under 
/folder1/random2/.
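For instance, a Swift container listing accepts `prefix` and `delimiter` query parameters; with `delimiter=/` the server collapses everything below a sub-"directory" into a single entry instead of returning every object. A sketch of the request URL (the storage endpoint, account, and container names are placeholders):

```python
from urllib.parse import urlencode

# Placeholder Swift endpoint; a real one comes from your auth token response.
base = "https://storage.example/v1/AUTH_account/container"
query = urlencode({
    "prefix": "folder1/random2/",  # only objects under this pseudo-directory
    "delimiter": "/",              # collapse deeper levels into subdir entries
    "format": "json",              # ask for a JSON listing
})
url = f"{base}?{query}"
print(url)
```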
If you have 50K chunks and you want to list them all with the OpenStack 
API, and you assume the JSON for each chunk is about 500 characters, that 
means 500 × 50,000 = 25,000,000 bytes / 1024 ≈ 24,414 KB. That's too much 
for a single GET (list) request; if you split the chunks into 1,000 per 
folder, it's 500 × 1,000 = 500,000 bytes ≈ 488 KB, much faster.

Sorry for my bad english :P

And thank you for reading about our problems.

Best!


On Tuesday, December 8, 2015 at 20:35:01 (UTC+1), Riku Bister wrote:
>
> I don't see any reason to change provider: this one is cheap, unmetered, 
> has a lot of disk space, and works as fast as the internet does... in 
> Europe. I don't see any other provider with the same throughput in Europe 
> for that price, or even one that works this fast.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
