On Mon, Jul 24, 2017 at 5:11 PM, Raghavendra G <[email protected]> wrote:
> On Fri, Jul 21, 2017 at 6:39 PM, Vijay Bellur <[email protected]> wrote:
>> On Fri, Jul 21, 2017 at 3:26 AM, Raghavendra Gowdappa
>> <[email protected]> wrote:
>>> Hi all,
>>>
>>> We have a bug [1] due to which read-ahead is completely disabled when
>>> the workload is read-only. One easy fix is to make read-ahead an
>>> ancestor of open-behind in the xlator graph (currently it is a
>>> descendant). Rafi has sent out a patch [2] to do the same. As noted in
>>> one of the comments, a flip side of this solution is that small files
>>> (which are eligible to be cached by quick-read) are cached twice - once
>>> each in read-ahead and quick-read - wasting precious memory. However,
>>> there are no other simple solutions for this issue. If you have
>>> concerns about the approach followed by [2] or have other suggestions,
>>> please voice them. Otherwise, I am planning to merge [2] for lack of
>>> better alternatives.
>>
>> Since the maximum size of files cached by quick-read is 64KB, can we
>> have read-ahead kick in for offsets greater than 64KB?
>>
>> Thanks,
>> Vijay
>
> I got your point. We can enable read-ahead only for files whose size is
> greater than the size eligible for caching by quick-read. IOW, read-ahead
> gets disabled if the file size is less than 64KB. Thanks for the
> suggestion.
>
> --
> Raghavendra G

I added a comment on the patch to move the xlators in the reverse order to
the way the patch currently does it. Milind, I think, has implemented it.
Will that lead to any problem?

--
Pranith
_______________________________________________
Gluster-devel mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-devel
