[ https://issues.apache.org/jira/browse/FLINK-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhu.qing updated FLINK-8534:
----------------------------
    Description: 
When too many entries are inserted into one bucket, insertBucketEntry() triggers spillPartition(), which opens a spill writer for the build side:

this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);

Then prepareNextPartition() in ReOpenableMutableHashTable sets

furtherPartitioning = true;

so finalizeProbePhase() in HashPartition takes the branch that returns the partition's memory and deletes its spill files:

freeMemory.add(this.probeSideBuffer.getCurrentSegment());

// delete the spill files
this.probeSideChannel.close();
System.out.println("HashPartition probeSideRecordCounter Delete"); // debug print added while tracing
this.buildSideChannel.deleteChannel();
this.probeSideChannel.deleteChannel();

After deleteChannel() the next iteration fails with java.io.FileNotFoundException, because the re-openable hash table still expects to re-read those spill files.
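
This failure mode is easy to demonstrate in isolation: once a spill file has been deleted, any later attempt to reopen it fails. A minimal stand-alone illustration (plain java.io, hypothetical names, not Flink code):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DeleteThenReopen {
    public static void main(String[] args) throws IOException {
        File spill = File.createTempFile("hash-partition", ".spill");
        try (FileOutputStream out = new FileOutputStream(spill)) {
            out.write(new byte[]{1, 2, 3});  // iteration N writes the spill file
        }
        spill.delete();                      // finalizeProbePhase() deletes the channel here
        new FileInputStream(spill).close();  // iteration N+1 reopens it -> FileNotFoundException
    }
}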

 

I used the web-Google dataset from SNAP. I could not reproduce the bug on a 16 GB laptop; the key is that insertBucketEntry() must receive enough entries in a single bucket (up to 256) so that spillPartition() is triggered. A sketch of a job with that shape follows.
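
For reference, a job of this shape should hit the code path (a hypothetical sketch only, not the attached T2AdjSetBfs.java; the file path, iteration count, and key choice are illustrative): a bulk iteration that joins against a loop-invariant build side, which is what exercises ReOpenableMutableHashTable.

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;
import org.apache.flink.api.java.operators.IterativeDataSet;
import org.apache.flink.api.java.tuple.Tuple2;

public class Flink8534Repro {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // SNAP web-Google edge list: "srcId<TAB>dstId", '#' comment lines
        DataSet<Tuple2<Long, Long>> edges = env.readCsvFile("web-Google.txt")
                .fieldDelimiter("\t")
                .ignoreComments("#")
                .types(Long.class, Long.class);

        // the frontier is iterated while 'edges' stays loop-invariant, so the
        // join build side is cached and re-opened across supersteps
        IterativeDataSet<Tuple2<Long, Long>> frontier = edges.iterate(10);

        DataSet<Tuple2<Long, Long>> next = frontier
                .join(edges)
                .where(1).equalTo(0)  // skewed keys can overflow a bucket -> spillPartition()
                .with(new JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public Tuple2<Long, Long> join(Tuple2<Long, Long> a, Tuple2<Long, Long> b) {
                        return new Tuple2<>(a.f0, b.f1);
                    }
                });

        frontier.closeWith(next).output(new DiscardingOutputFormat<Tuple2<Long, Long>>());
        env.execute("FLINK-8534 repro sketch");
    }
}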



> Inserting too many bucket entries into one bucket in a join inside an iteration 
> causes an error (Caused by: java.io.FileNotFoundException, release file error)
> --------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-8534
>                 URL: https://issues.apache.org/jira/browse/FLINK-8534
>             Project: Flink
>          Issue Type: Bug
>         Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU. Flink 1.4.0
>            Reporter: zhu.qing
>            Priority: Major
>         Attachments: T2AdjSetBfs.java
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
