[
https://issues.apache.org/jira/browse/TAJO-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049903#comment-15049903
]
ASF GitHub Bot commented on TAJO-1950:
--------------------------------------
Github user jinossy commented on a diff in the pull request:
https://github.com/apache/tajo/pull/884#discussion_r47183366
--- Diff: tajo-core/src/main/java/org/apache/tajo/worker/ExecutionBlockContext.java ---
@@ -191,10 +197,59 @@ public void stop(){
}
tasks.clear();
taskHistories.clear();
+
+ // Clear index cache in pull server
+ clearIndexCache();
+
resource.release();
RpcClientManager.cleanup(queryMasterClient);
}
+  private void clearIndexCache() {
+    if (executionBlockId.getId() > 1) {
+      Bootstrap bootstrap = new Bootstrap()
+          .group(NettyUtils.getSharedEventLoopGroup(NettyUtils.GROUP.FETCHER, 1))
+          .channel(NioSocketChannel.class)
+          .option(ChannelOption.ALLOCATOR, NettyUtils.ALLOCATOR)
+          .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10 * 1000)
+          .option(ChannelOption.TCP_NODELAY, true);
+      ChannelInitializer<Channel> initializer = new ChannelInitializer<Channel>() {
+        @Override
+        protected void initChannel(Channel channel) throws Exception {
+          ChannelPipeline pipeline = channel.pipeline();
+          pipeline.addLast("codec", new HttpClientCodec());
+        }
+      };
+      bootstrap.handler(initializer);
+
+      WorkerConnectionInfo connInfo = workerContext.getConnectionInfo();
+      ChannelFuture future = bootstrap.connect(
+          new InetSocketAddress(connInfo.getHost(), connInfo.getPullServerPort()))
+          .addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
+
+      try {
+        Channel channel = future.await().channel();
+        if (!future.isSuccess()) {
+          // Upon failure to connect to pull server, cache clear message is just ignored.
+          LOG.warn(future.cause());
+          return;
+        }
+
+        ExecutionBlockId clearEbId =
+            new ExecutionBlockId(executionBlockId.getQueryId(), executionBlockId.getId() - 1);
+        HttpRequest request =
+            new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.DELETE, clearEbId.toString());
+        request.headers().set(Names.HOST, connInfo.getHost());
+        request.headers().set(Names.CONNECTION, Values.CLOSE);
+        channel.writeAndFlush(request);
+      } catch (InterruptedException e) {
+        throw new RuntimeException(e);
+      } finally {
+        if (future != null && future.channel().isOpen()) {
+          // Close the channel to exit.
+          future.channel().closeFuture();
--- End diff ---
It should be `future.channel().close()`. `closeFuture()` only returns the `ChannelFuture` that will be notified when the channel is closed; it does not close the channel, so this `finally` block leaks the connection.
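The distinction the reviewer points at can be shown with a small stand-alone sketch. This is not Netty itself; `StubChannel` is a hypothetical stand-in that mirrors the semantics of Netty's `Channel`, where `closeFuture()` merely returns a future that completes once the channel closes, while `close()` actually initiates the close.

```java
// Sketch only: StubChannel is an illustrative stand-in for Netty's Channel.
public class CloseSketch {
  static class StubChannel {
    private boolean open = true;

    boolean isOpen() { return open; }

    // Like Netty's closeFuture(): returns something to wait on,
    // but does NOT change the channel's state.
    StubChannel closeFuture() { return this; }

    // Like Netty's close(): actually initiates closing the channel.
    void close() { open = false; }
  }

  public static void main(String[] args) {
    StubChannel ch = new StubChannel();
    ch.closeFuture();  // no effect on channel state
    System.out.println("after closeFuture(): open=" + ch.isOpen());
    ch.close();        // channel is now closed
    System.out.println("after close(): open=" + ch.isOpen());
  }
}
```

In the patch above, calling `closeFuture()` in the `finally` block therefore leaves the connection open until the pull server closes it; `close()` releases it immediately.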
> Query master uses too much memory during range shuffle
> ------------------------------------------------------
>
> Key: TAJO-1950
> URL: https://issues.apache.org/jira/browse/TAJO-1950
> Project: Tajo
> Issue Type: Improvement
> Components: distributed query plan
> Reporter: Jihoon Son
> Assignee: Jihoon Son
> Priority: Critical
> Fix For: 0.11.1
>
> Attachments: TAJO-1950proposal.pdf
>
>
> I ran a simple sort query on an 8TB table as follows.
> {noformat}
> tpch10tb> select * from lineitem order by l_orderkey;
> {noformat}
> After the first stage completes, the query master divides the range of the
> sort key (l_orderkey) into multiple partitions for range shuffle. Here, the
> partitioning alone took about 9 minutes.
> Here is the log.
> {noformat}
> ...
> 2015-10-26 14:23:10,782 INFO
> org.apache.tajo.engine.planner.global.ParallelExecutionQueue: Next executable
> block eb_1445835438802_0004_000002
> 2015-10-26 14:23:10,782 INFO org.apache.tajo.querymaster.Query: Scheduling
> Stage:eb_1445835438802_0004_000002
> 2015-10-26 14:23:10,796 INFO org.apache.tajo.querymaster.Stage:
> org.apache.tajo.querymaster.DefaultTaskScheduler is chosen for the task
> scheduling for eb_1445835438802_0004_000002
> 2015-10-26 14:23:10,796 INFO org.apache.tajo.querymaster.Stage:
> eb_1445835438802_0004_000002, Table's volume is approximately 663647 MB
> 2015-10-26 14:23:10,796 INFO org.apache.tajo.querymaster.Stage:
> eb_1445835438802_0004_000002, The determined number of non-leaf tasks is 10370
> 2015-10-26 14:23:10,816 INFO org.apache.tajo.querymaster.Repartitioner:
> eb_1445835438802_0004_000002, Try to divide [(6000000000), (1)) into 10370
> sub ranges (total units: 10370)
> 2015-10-26 14:24:58,996 INFO org.apache.tajo.util.JvmPauseMonitor: Detected
> pause in JVM or host machine (eg GC): pause of approximately 2440ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=2214ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=622ms
> 2015-10-26 14:27:24,040 WARN org.apache.tajo.util.JvmPauseMonitor: Detected
> pause in JVM or host machine (eg GC): pause of approximately 13237ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=12635ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=674ms
> 2015-10-26 14:28:51,914 WARN org.apache.tajo.util.JvmPauseMonitor: Detected
> pause in JVM or host machine (eg GC): pause of approximately 20873ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=20486ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=644ms
> 2015-10-26 14:30:52,392 WARN org.apache.tajo.util.JvmPauseMonitor: Detected
> pause in JVM or host machine (eg GC): pause of approximately 30986ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=30546ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=696ms
> 2015-10-26 14:32:07,550 WARN org.apache.tajo.util.JvmPauseMonitor: Detected
> pause in JVM or host machine (eg GC): pause of approximately 15449ms
> GC pool 'PS MarkSweep' had collection(s): count=1 time=14593ms
> GC pool 'PS Scavenge' had collection(s): count=1 time=1148ms
> 2015-10-26 14:32:15,807 INFO org.apache.tajo.querymaster.Stage: 10370 objects
> are scheduled
> ...
> {noformat}
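The log above shows the query master dividing a sort-key range into 10370 sub-ranges. As a minimal sketch of that idea (not Tajo's actual `Repartitioner` logic, which handles multi-column and descending ranges; all names here are illustrative), splitting a numeric key range into N contiguous, near-equal sub-ranges looks like this:

```java
import java.util.ArrayList;
import java.util.List;

public class RangeSplitSketch {
  /** Splits the inclusive range [start, end] into n contiguous sub-ranges
   *  whose widths differ by at most one. Each result is {lo, hi}. */
  static List<long[]> split(long start, long end, int n) {
    List<long[]> ranges = new ArrayList<>();
    long total = end - start + 1;
    long base = total / n;   // minimum width of each sub-range
    long rem = total % n;    // the first `rem` sub-ranges get one extra key
    long lo = start;
    for (int i = 0; i < n; i++) {
      long width = base + (i < rem ? 1 : 0);
      long hi = lo + width - 1;
      ranges.add(new long[]{lo, hi});
      lo = hi + 1;
    }
    return ranges;
  }

  public static void main(String[] args) {
    // Roughly the case in the log: l_orderkey over 6000000000 keys,
    // divided into 10370 sub-ranges.
    List<long[]> parts = split(1L, 6_000_000_000L, 10_370);
    System.out.println(parts.size());                      // 10370
    System.out.println(parts.get(parts.size() - 1)[1]);    // 6000000000
  }
}
```

The 9-minute partitioning time and the GC pauses in the log suggest the cost was not in arithmetic like this but in materializing and scheduling the 10370 range descriptors, which is what the attached proposal addresses.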
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)