+Krutika for any other inputs you may need.

On Sat, Apr 22, 2017 at 12:21 PM, Pranith Kumar Karampuri <
[email protected]> wrote:

> Sorry for the delay. The only internal process we know of that would take
> more time is self-heal, and we implemented a feature called granular entry
> self-heal, which should be enabled on sharded volumes to get the benefit.
> So when a brick goes down and, say, only 1 of those million entries is
> created or deleted, self-heal is done only for that file; it won't crawl
> the entire directory.
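>
> As a rough sketch, enabling both on an existing replica volume could look
> like the following (the volume name "myvol" is only a placeholder, and the
> exact option names can vary between GlusterFS releases, so please check
> the documentation for your version):
>
>   # turn on sharding for the volume
>   gluster volume set myvol features.shard on
>
>   # optionally choose a shard size, for example 64MB
>   gluster volume set myvol features.shard-block-size 64MB
>
>   # enable granular entry self-heal on the replica volume
>   gluster volume heal myvol granular-entry-heal enable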
>
>
>
> On Wed, Apr 12, 2017 at 8:11 PM, David Spisla <[email protected]>
> wrote:
>
>> Dear Gluster-Community,
>>
>>
>>
>> If I use the shard feature, it may happen that I will have a huge number
>> of shard chunks in the hidden folder .shard.
>>
>> Does anybody have experience with the maximum number of files in one
>> .shard folder?
>>
>> If I have 1 million files in such a folder, some operations like
>> self-healing or other internal operations would need a lot of time, I
>> guess.
>>
>>
>>
>> Sincerely
>>
>>
>>
>>
>>
>> David Spisla
>>
>> Software Developer
>>
>> [email protected]
>>
>> www.iTernity.com
>>
>> Tel: +49 761-590 34 841
>>
>>
>>
>>
>>
>>
>> iTernity GmbH
>> Heinrich-von-Stephan-Str. 21
>> 79100 Freiburg – Germany
>> ---
>> You can reach our technical support at +49 761-387 36 66
>> ---
>>
>> Managing Director: Ralf Steinemann
>> Registered at the Amtsgericht Freiburg: HRB no. 701332
>> VAT ID: DE-24266431
>>
>>
>>
>>
>>
>>
>
>
>
> --
> Pranith
>



-- 
Pranith
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
