Re: [Gluster-devel] Status update : Brick Mux threads reduction

2018-10-03 Thread Atin Mukherjee
I have rebased [1] and triggered the brick-mux regression, as we fixed one
genuine snapshot test failure in brick mux through
https://review.gluster.org/#/c/glusterfs/+/21314/ which was merged today.

On Thu, Oct 4, 2018 at 10:39 AM Poornima Gurusiddaiah wrote:

> Hi,
>
> For each brick, we create at least 20 threads. Hence, in a brick-mux use
> case, where we load multiple bricks into the same process, there will be
> hundreds of threads, resulting in performance issues and increased memory
> usage.
>
> IO-threads : Make the io-threads pool global to the process, and
> ref-count the resource. Patch [1] has failures in brick-mux regression,
> likely not related to the patch; we need to get it passing.
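The ref-counting idea above can be sketched roughly as follows. This is a minimal illustration, not the actual GlusterFS code: `iot_pool_t`, `iot_pool_get`, and `iot_pool_put` are hypothetical names, and real worker creation/teardown is elided into comments.

```c
/* Hedged sketch: a process-wide io-threads pool shared by all bricks,
 * with a reference count so workers are created on first attach and
 * torn down on last detach. All names here are illustrative. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    int             refcount;   /* number of bricks using the pool */
    int             nthreads;   /* worker threads currently running */
} iot_pool_t;

static iot_pool_t global_pool = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

/* Called when a brick graph loads io-threads: only the first user
 * actually spawns workers; later bricks just share them. */
int iot_pool_get(iot_pool_t *pool, int workers)
{
    pthread_mutex_lock(&pool->lock);
    if (pool->refcount++ == 0)
        pool->nthreads = workers;   /* pthread_create(...) in real code */
    pthread_mutex_unlock(&pool->lock);
    return pool->nthreads;
}

/* Called on brick detach: only the last user tears the workers down. */
int iot_pool_put(iot_pool_t *pool)
{
    pthread_mutex_lock(&pool->lock);
    if (--pool->refcount == 0)
        pool->nthreads = 0;         /* join/stop workers in real code */
    pthread_mutex_unlock(&pool->lock);
    return pool->nthreads;
}
```

With this shape, attaching N bricks keeps the worker count constant instead of multiplying it by N.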
>
> Posix threads : Janitor, Helper, Fsyncer - instead of using one thread
> per task, use the synctask framework; in the future, use the thread pool
> from patch [2]. Patches are posted [1], fixing some regression failures.
>
> Posix, bitrot aio-thread : This thread cannot simply be replaced with
> synctask/thread pool, as there cannot be a delay in receiving
> notifications and acting on them. Hence, create a global AIO event
> receiver thread for the process. This is WIP and is not yet posted
> upstream.
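The global-receiver idea can be sketched as below. This is a simplified stand-in: `mock_event_t` models the kernel's `io_event` (whose `data` field carries per-request context, here the owning brick), and the real `io_getevents()` batching loop is elided; `dispatch_events` is a hypothetical name.

```c
/* Hedged sketch: one process-wide receiver collects completed AIO
 * events and routes each to the brick that issued it, instead of one
 * receiver thread per brick. Dispatch is immediate, so notification
 * latency does not grow with brick count. Names illustrative. */
typedef struct brick {
    int completed;   /* completions delivered to this brick */
} brick_t;

typedef struct mock_event {
    brick_t *owner;  /* stands in for io_event.data */
} mock_event_t;

/* Body of the receiver loop: in the real thread this runs on each
 * batch returned by io_getevents(); here it routes a prefilled batch. */
int dispatch_events(mock_event_t *evs, int n)
{
    for (int i = 0; i < n; i++)
        evs[i].owner->completed++;  /* notify owning brick right away */
    return n;
}
```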
>
> Threads in changelog/bitrot xlators : Mohit posted a patch where the
> default xlator does not start a thread if the xlator is not enabled:
> https://review.gluster.org/#/c/glusterfs/+/21304/ (it can save 6 threads
> per brick with the default options).
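The idea behind that patch reduces to a guard at init time, sketched below. This is not the actual changelog/bitrot code; `xlator_cfg_t` and `xlator_init` are illustrative names, and thread creation is elided into a comment.

```c
/* Hedged sketch: an xlator only spawns its background thread when the
 * feature is actually enabled, instead of unconditionally at init.
 * With several such xlators in the default graph, each skipped thread
 * is saved per brick. Names illustrative. */
typedef struct {
    int enabled;          /* is this feature turned on for the volume? */
    int threads_started;  /* how many background threads we spawned    */
} xlator_cfg_t;

/* Returns the number of threads started (0 when the feature is off). */
int xlator_init(xlator_cfg_t *xl)
{
    if (!xl->enabled)
        return 0;             /* feature off: save the thread entirely */
    xl->threads_started = 1;  /* pthread_create(...) in real code */
    return 1;
}
```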
>
> Pending : Create a build with these patches, run perf tests against it,
> and analyze the results.
>
>
> [1] https://review.gluster.org/#/c/glusterfs/+/20761/
> [2] https://review.gluster.org/#/c/glusterfs/+/20636/
>
> Regards,
> Poornima
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
