On Tue, May 19, 2015 at 05:12:35AM -0400, Krutika Dhananjay wrote:
> Hi, 
> 
> The following patch fixes an issue with readdir(p) in shard xlator: 
> http://review.gluster.org/#/c/10809/ whose details can be found in the commit 
> message. 
> 
> One side effect of this is that the size of the dirents list shard 
> xlator returns to the translators above it could be greater than the 
> size requested in the wind path (thanks to Pranith for pointing this out 
> during the review of this patch), with the worst case returning 
> (2 * requested_size) worth of entries. 
> For example, if fuse requests readdirp with 128k as the size, in the worst 
> case, 256k worth of entries could be unwound in return. 
> How important is it to strictly adhere to this size limit in each iteration 
> of readdir(p)? And what are the repercussions of such behavior? 
> 
> Note: 
> I tried my hand at simulating this issue on my volume but I have so far been 
> unsuccessful at hitting this test case. 
> Creating a large number of files on the root of a sharded volume, 
> triggering readdirp on it until ".shard" becomes the last entry read in 
> a given iteration, winding the next readdirp from shard xlator, and then 
> concatenating the results of the two readdirps into one is proving to be 
> an exercise in futility. 
> Therefore, I am asking this question here to know what could happen "in 
> theory" in such situations. 

How about modifying xlators/mount/fuse/src/fuse-bridge.c to increase the
size argument of the readdir(p) FOPs it winds? The FUSE kernel side would
not know about the increased size, but the xlators below would see a
bigger requested size in subsequent readdir(p) FOPs.
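
Something like this could work (just a sketch from memory; the exact
shape of fuse_readdirp() and the resolve/resume helpers in fuse-bridge.c
should be checked against the source, and the factor of 2 is only for
illustration):

    static void
    fuse_readdirp (xlator_t *this, fuse_in_header_t *finh, void *msg)
    {
            struct fuse_read_in *fri   = msg;
            fuse_state_t        *state = NULL;

            GET_STATE (this, finh, state);
            state->fd   = FH_TO_FD (fri->fh);
            /* ask the xlators below for more than the kernel requested;
             * the kernel itself still only expects fri->size bytes */
            state->size = 2 * fri->size;
            state->off  = fri->offset;

            fuse_resolve_fd_init (state, &state->resolve, state->fd);
            fuse_resolve_and_resume (state, fuse_readdirp_resume);
    }

That should let you trigger the "more entries than requested" case on
every readdirp, instead of depending on where ".shard" lands in the
listing.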

I suspect that the additional dentries in readdir(p) get dropped, and
that the next getdents() call from the kernel continues at the offset of
the last dentry that was returned by the previous readdir(p).
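
In other words, I would expect the readdir(p) callback in fuse-bridge.c
to pack only as many dentries as fit in the size the kernel asked for,
roughly like this (a sketch; dirent_size() and pack_dirent() are made-up
helpers standing in for the real fuse_direntplus packing code):

    size_t       used     = 0;
    off_t        last_off = 0;
    gf_dirent_t *entry    = NULL;

    list_for_each_entry (entry, &entries->list, list) {
            size_t entsize = dirent_size (entry);  /* made-up helper */

            /* stop once the buffer the kernel asked for is full;
             * any remaining dentries are silently dropped */
            if (used + entsize > kernel_size)
                    break;

            pack_dirent (buf + used, entry);       /* made-up helper */
            used     += entsize;
            last_off  = entry->d_off;  /* next getdents() resumes here */
    }

If that is indeed what happens, the only cost of unwinding up to
(2 * requested_size) worth of entries should be the wasted work of
reading dentries that get thrown away and fetched again.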

Niels