ASF GitHub Bot commented on DRILL-6125:

Github user arina-ielchiieva commented on a diff in the pull request:

    --- Diff: 
    @@ -348,9 +348,12 @@ public void close() throws Exception {
         logger.debug("Partition sender stopping.");
         ok = false;
    -    if (partitioner != null) {
    -      updateAggregateStats();
    -      partitioner.clear();
    +    synchronized (this) {
    --- End diff ---
    1. Should `partitioner` be volatile?
    2. Should we also check that `partitioner` is not null before 
synchronizing (double-checked locking, DCL)?
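
For context, the pattern the two questions point at would look roughly like this. This is a minimal sketch, not Drill's actual code; the class name, field type, and methods are simplified stand-ins:

```java
// Minimal sketch of volatile + double-checked locking (DCL) for a field
// that one thread sets and another thread clears. SenderSketch and its
// members are illustrative stand-ins, not Drill's real types.
class SenderSketch implements AutoCloseable {
  // volatile: the unlocked first check must observe the latest reference
  private volatile AutoCloseable partitioner;

  void createPartitioner(AutoCloseable p) {
    synchronized (this) {
      partitioner = p;
    }
  }

  @Override
  public void close() throws Exception {
    if (partitioner != null) {        // first check: skip the lock when there is nothing to clear
      synchronized (this) {
        if (partitioner != null) {    // second check: another thread may have cleared it meanwhile
          partitioner.close();
          partitioner = null;         // drop the reference so a later close is a no-op
        }
      }
    }
  }
}
```

With the second check under the lock, concurrent callers clear the partitioner exactly once, and the volatile read makes the cheap first check safe.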

> PartitionSenderRootExec can leak memory because close method is not 
> synchronized
> --------------------------------------------------------------------------------
>                 Key: DRILL-6125
>                 URL: https://issues.apache.org/jira/browse/DRILL-6125
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.13.0
>            Reporter: Timothy Farkas
>            Assignee: Timothy Farkas
>            Priority: Minor
>             Fix For: 1.13.0
> PartitionSenderRootExec creates a PartitionerDecorator and saves it in the 
> *partitioner* field. The partitioner is created in the createPartitioner 
> method, which is called by the main fragment thread. The partitioner field 
> is accessed by the fragment thread during normal execution, but it can also 
> be accessed by the receivingFragmentFinished method, a callback executed by 
> the event processor thread. Because multiple threads can access the 
> partitioner field, synchronization is done on creation and in 
> receivingFragmentFinished. However, the close method can also be called by 
> the event processor thread, and it does not synchronize before accessing 
> the partitioner field. Without synchronization, the event processor thread 
> may hold a stale reference to the partitioner when a query is cancelled; 
> the current partitioner may then never be cleared, and memory may leak.
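
The race described above can be reduced to the following runnable stand-in (names are illustrative, not Drill's): both the fragment thread and the event processor thread reach close(), and only the synchronized null check guarantees the clear happens exactly once instead of twice or not at all.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for the reported scenario: two threads may both call
// close(). The lock plus the null check under it ensure a single clear.
class CloseRace {
  private final AtomicInteger clears = new AtomicInteger();
  private Object partitioner = new Object();

  void close() {
    synchronized (this) {           // the fix: take the lock before touching the field
      if (partitioner != null) {
        clears.incrementAndGet();   // stands in for partitioner.clear()
        partitioner = null;
      }
    }
  }

  int clearCount() {
    return clears.get();
  }
}
```

Without the synchronized block, both threads could observe a non-null partitioner and clear it twice, or one could read a stale reference and miss the current partitioner entirely.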

This message was sent by Atlassian JIRA
