[jira] [Commented] (DRILL-5153) RESOURCE ERROR: Waited for 15000ms, but tasks for 'Get block maps' are not complete
[ https://issues.apache.org/jira/browse/DRILL-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167416#comment-16167416 ] Robert Hou commented on DRILL-5153: --- We do not have this table any more, so we cannot reproduce the problem. Closing it for now. > RESOURCE ERROR: Waited for 15000ms, but tasks for 'Get block maps' are not > complete > > > Key: DRILL-5153 > URL: https://issues.apache.org/jira/browse/DRILL-5153 > Project: Apache Drill > Issue Type: Bug > Components: Execution - RPC, Query Planning & Optimization >Reporter: Rahul Challapalli > Attachments: tera.log > > > git.commit.id.abbrev=cf2b7c7 > The below query consistently fails on my 2 node cluster. I used the data set > from the terasort benchmark > {code} > select * from dfs.`/drill/testdata/resource-manager/terasort-data` limit 1; > Error: RESOURCE ERROR: Waited for 15000ms, but tasks for 'Get block maps' are > not complete. Total runnable size 2, parallelism 2. > [Error Id: 580e6c04-7096-4c09-9c7a-63e70c71d574 on qa-node182.qa.lab:31010] > (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
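The "Waited for 15000ms, but tasks ... are not complete" failure corresponds to a common pattern: fan a set of tasks out in parallel, block for a fixed timeout, and fail if any task is unfinished at the deadline. A minimal sketch of that pattern, not Drill's actual implementation (the 15 s timeout and the wording are taken from the error message above; `run_with_timeout` is a hypothetical name):

```python
import concurrent.futures

def run_with_timeout(tasks, timeout_secs=15.0):
    """Fan tasks out to a thread pool, then block for a fixed timeout;
    fail if any task is still unfinished at the deadline."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max(1, len(tasks))) as pool:
        futures = [pool.submit(task) for task in tasks]
        done, not_done = concurrent.futures.wait(futures, timeout=timeout_secs)
        if not_done:
            # Mirrors the error text: total runnable size vs. unfinished tasks.
            raise TimeoutError(
                "Waited for %dms, but %d of %d tasks are not complete"
                % (timeout_secs * 1000, len(not_done), len(futures)))
        return [f.result() for f in done]
```

Under this pattern, "Total runnable size 2, parallelism 2" suggests both block-map tasks ran concurrently and still missed the deadline, i.e. the work itself was slow rather than queued behind other tasks.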
[jira] [Commented] (DRILL-5478) Spill file size parameter is not honored by the managed external sort
[ https://issues.apache.org/jira/browse/DRILL-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167463#comment-16167463 ] Robert Hou commented on DRILL-5478: --- How is the config option set? I checked the sys.boot values: {noformat}
0: jdbc:drill:drillbit=10.10.100.190> select * from sys.boot where name like '%spill%';
| name | kind | type | status | num_val | string_val | bool_val | float_val |
| drill.exec.hashagg.spill.directories | STRING | BOOT | BOOT | null | [ # jar:file:/opt/drill/jars/drill-java-exec-1.12.0-SNAPSHOT.jar!/drill-module.conf: 228 "/tmp/drill/spill" ] | null | null |
| drill.exec.hashagg.spill.fs | STRING | BOOT | BOOT | null | "file:///" | null | null |
| drill.exec.sort.external.spill.directories | STRING | BOOT | BOOT | null | [ # drill-override.conf: 27 "/tmp/drill" ] | null | null |
| drill.exec.sort.external.spill.file_size | STRING | BOOT | BOOT | null | "256M" | null | null |
| drill.exec.sort.external.spill.fs | STRING | BOOT | BOOT | null | "maprfs:///" | null | null |
| drill.exec.sort.external.spill.group.size | LONG | BOOT | BOOT | 4 | null | null | null |
| drill.exec.sort.external.spill.merge_batch_size | STRING | BOOT | BOOT | null | "16M" | null | null |
| drill.exec.sort.external.spill.spill_batch_size | STRING | BOOT | BOOT | null | "1M" | null | null |
| drill.exec.sort.external.spill.threshold | LONG | BOOT | BOOT | 4 | null | null | null |
| drill.exec.spill.directories | STRING | BOOT | BOOT | null | [ # jar:file:/opt/drill/jars/drill-java-exec-1.12.0-SNAPSHOT.jar!/drill-module.conf: 228 "/tmp/drill/spill" ] | null | null |
| drill.exec.spill.fs | STRING | BOOT | BOOT | null | "file:///" | null | null |
{noformat}
And I see spill files that are about 38 MB in size: {noformat}
-rwxr-xr-x 3 root root 38067297 2017-09-15 00:22 /tmp/drill/qa-node190.qa.lab-31010_2644807c-be7b-8e98-b6fb-027bd156719e_Sort_0-5-0/spill1
-rwxr-xr-x 3 root root 38067297 2017-09-15 00:23 /tmp/drill/qa-node190.qa.lab-31010_2644807c-be7b-8e98-b6fb-027bd156719e_Sort_0-5-0/spill2
-rwxr-xr-x 3 root root 38067297 2017-09-15 00:23 /tmp/drill/qa-node190.qa.lab-31010_2644807c-be7b-8e98-b6fb-027bd156719e_Sort_0-5-0/spill3
-rwxr-xr-x 3 root root 38067297 2017-09-15 00:24 /tmp/drill/qa-node190.qa.lab-31010_2644807c-be7b-8e98-b6fb-027bd156719e_Sort_0-5-0/spill4
-rwxr-xr-x 3 root root 38067297 2017-09-15 00:25 /tmp/drill/qa-node190.qa.lab-31010_2644807c-be7b-8e98-b6fb-027bd156719e_Sort_0-5-0/spill5
-rwxr-xr-x 3 root root 38067297 2017-09-15 00:25 /tmp/drill/qa-node190.qa.lab-31010_2644807c-be7b-8e98-b6fb-027bd156719e_Sort_0-5-0/spill6
-rwxr-xr-x 3 root root 10027008 2017-09-15 00:26
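The mismatch the comment points out can be checked mechanically against the listing above: parse the configured size ("256M") and compare it with the observed file sizes (~38 MB each). A small sketch; `parse_size` is our own illustrative helper, not a Drill API:

```python
def parse_size(text):
    """Parse boot-config sizes like '256M' or '16M' into bytes."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    text = text.strip().strip('"')
    if text[-1].upper() in units:
        return int(text[:-1]) * units[text[-1].upper()]
    return int(text)

# Configured vs. observed values taken from the comment above.
configured = parse_size("256M")          # drill.exec.sort.external.spill.file_size
observed = [38067297] * 6 + [10027008]   # spill1..spill6 plus the last, smaller file
assert all(size < configured for size in observed)  # every file is well under 256M
```

Every spill file is far below the configured 268435456 bytes, consistent with the report that `file_size` is not being honored.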
[jira] [Assigned] (DRILL-5493) Managed External Sort + CTAS creates batches too large for sort
[ https://issues.apache.org/jira/browse/DRILL-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou reassigned DRILL-5493: - Assignee: Paul Rogers > Managed External Sort + CTAS creates batches too large for sort > --- > > Key: DRILL-5493 > URL: https://issues.apache.org/jira/browse/DRILL-5493 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Attachments: 26ee07bb-81ff-1c10-9003-90510f4b8e1d.sys.drill, > drillbit.log > > > Config : > {code} > git.commit.id.abbrev=1e0a14c > No of nodes : 1 > DRILL_MAX_DIRECT_MEMORY="32G" > DRILL_MAX_HEAP="4G" > Assertions Enabled : true > {code} > The below query fails during the CTAS phase (the explicit order by in the > query runs fine) > {code} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_query` = 17; > create table dfs.drillTestDir.xsort_ctas4 partition by (col1) as select > columns[0] as col1 from (select * from > dfs.`/drill/testdata/resource-manager/wide-to-zero` order by columns[0]); > Error: RESOURCE ERROR: Unable to allocate sv2 buffer > Fragment 0:0 > [Error Id: 24ae2ec8-ac2a-45c3-b550-43c12764165d on qa-node190.qa.lab:31010] > (state=,code=0) > {code} > I attached the logs and profiles. The data is too large to attach to a jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5786) External Sort encounters Exception in RPC communication during Sort
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5786: -- Summary: External Sort encounters Exception in RPC communication during Sort (was: Query encounters Exception in RPC communication during Sort) > External Sort encounters Exception in RPC communication during Sort > --- > > Key: DRILL-5786 > URL: https://issues.apache.org/jira/browse/DRILL-5786 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2647d2b0-69bf-5a2b-0e23-81e8d49e464e.sys.drill, > drillbit.log > > > Query is: > {noformat} > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf' > {noformat} > This is the same query as DRILL-5670 but no session variables are set. > Here is the stack trace: > {noformat} > 2017-09-12 13:14:57,584 [BitServer-5] ERROR > o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. > Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). > Closing connection. > io.netty.handler.codec.DecoderException: > org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating > buffer. 
> at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) > ~[netty-codec-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > [netty-common-4.0.27.Final.jar:4.0.27.Final] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] > Caused by: 
org.apache.drill.exec.exception.OutOfMemoryException: Failure > allocating buffer. > at > io.netty.buffer.PooledByteBufAllocatorL.allocate(PooledByteBufAllocatorL.java:64) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final] > at > org.apache.drill.exec.memory.AllocationManager.<init>(AllocationManager.java:81) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:260) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:243) >
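The stack trace shows an allocation request being rejected by a bounded allocator while the RPC layer decodes an incoming batch. The general pattern behind such a "Failure allocating buffer" error can be sketched as follows; this is an illustration of a limit-checked allocator, not Drill's actual `BaseAllocator`:

```python
class BoundedAllocator:
    """Grant allocations against a fixed byte limit and fail once the
    limit would be exceeded, the general failure mode behind a
    'Failure allocating buffer' error under memory pressure."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0

    def buffer(self, nbytes):
        if self.used + nbytes > self.limit:
            raise MemoryError(
                "Failure allocating buffer: %d bytes requested, %d available"
                % (nbytes, self.limit - self.used))
        self.used += nbytes
        return bytearray(nbytes)

    def release(self, nbytes):
        self.used = max(0, self.used - nbytes)
```

The point of the sketch is that the failure depends on cumulative usage, not the single request: a modest incoming batch can still fail if the sort has already consumed most of the budget.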
[jira] [Updated] (DRILL-5805) External Sort runs out of memory
[ https://issues.apache.org/jira/browse/DRILL-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5805: -- Attachment: 2645d135-4222-d752-2609-c95568ff6e93.sys.drill > External Sort runs out of memory > > > Key: DRILL-5805 > URL: https://issues.apache.org/jira/browse/DRILL-5805 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2645d135-4222-d752-2609-c95568ff6e93.sys.drill > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 5; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 100; > select count(*) from (select * from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by > d.str) d1 where d1.id=0; > {noformat} > Plan is: > {noformat} > | 00-00Screen > 00-01 Project(EXPR$0=[$0]) > 00-02StreamAgg(group=[{}], EXPR$0=[COUNT()]) > 00-03 Project($f0=[0]) > 00-04SelectionVectorRemover > 00-05 Filter(condition=[=($0, 0)]) > 00-06SelectionVectorRemover > 00-07 Sort(sort0=[$1], dir0=[ASC]) > 00-08Flatten(flattenField=[$1]) > 00-09 Project(id=[$0], str=[$1]) > 00-10Scan(groupscan=[EasyGroupScan > [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, > numFiles=1, columns=[`id`, `str_list`], > files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]]) > {noformat} > sys.version is: > {noformat} > | 1.12.0-SNAPSHOT | c4211d3b545b0d1996b096a8e1ace35376a63977 | Fix for > DRILL-5670 | 09.09.2017 @ 14:38:25 PDT | r...@qa-node190.qa.lab | > 11.09.2017 @ 14:27:16 PDT | > {noformat} > mult drill5447_1 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5804) Query times out, may be infinite loop
Robert Hou created DRILL-5804: - Summary: Query times out, may be infinite loop Key: DRILL-5804 URL: https://issues.apache.org/jira/browse/DRILL-5804 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.11.0 Reporter: Robert Hou Assignee: Paul Rogers Fix For: 1.12.0 Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; select count(*) from ( select * from ( select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid from ( select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested_large` d order by d.uid ) s1 ) s2 order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist ); {noformat} Plan is: {noformat} | 00-00Screen 00-01 Project(EXPR$0=[$0]) 00-02StreamAgg(group=[{}], EXPR$0=[$SUM0($0)]) 00-03 UnionExchange 01-01StreamAgg(group=[{}], EXPR$0=[COUNT()]) 01-02 Project($f0=[0]) 01-03SingleMergeExchange(sort0=[4 ASC], sort1=[5 ASC], sort2=[6 ASC]) 02-01 SelectionVectorRemover 02-02Sort(sort0=[$4], sort1=[$5], sort2=[$6], dir0=[ASC], dir1=[ASC], dir2=[ASC]) 02-03 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6]) 02-04HashToRandomExchange(dist0=[[$4]], dist1=[[$5]], dist2=[[$6]]) 03-01 UnorderedMuxExchange 04-01Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($6, hash32AsDouble($5, hash32AsDouble($4, 1301011)))]) 04-02 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], EXPR$4=[ITEM($2, 'mapid')], EXPR$5=[ITEM($1, 'a')], EXPR$6=[ITEM($1, 'do_not_exist')]) 04-03Flatten(flattenField=[$1]) 04-04 Project(type=[$0], rptds=[ITEM($2, 'rptd')], rms=[$2], uid=[$1]) 04-05SingleMergeExchange(sort0=[1 ASC]) 05-01 SelectionVectorRemover 05-02Sort(sort0=[$1], dir0=[ASC]) 05-03 Project(type=[$0], uid=[$1], rms=[$2]) 05-04 HashToRandomExchange(dist0=[[$1]]) 06-01 UnorderedMuxExchange 07-01Project(type=[$0], uid=[$1], rms=[$2], 
E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)]) 07-02 Flatten(flattenField=[$2]) 07-03Project(type=[$0], uid=[$1], rms=[ITEM($2, 'rm')]) 07-04 Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/resource-manager/nested_large]], selectionRoot=maprfs:/drill/testdata/resource-manager/nested_large, numFiles=1, usedMetadataFile=false, columns=[`type`, `uid`, `map`.`rm`]]]) {noformat} Here is a segment of the drillbit.log, starting at line 55890: {noformat} 2017-09-19 04:22:56,258 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 142 us to sort 1023 records 2017-09-19 04:22:56,265 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 105 us to sort 1023 records 2017-09-19 04:22:56,268 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record batch with status OK 2017-09-19 04:22:56,275 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 145 us to sort 1023 records 2017-09-19 04:22:56,354 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record batch with status OK 2017-09-19 04:22:56,357 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 143 us to sort 1023 records 2017-09-19 04:22:56,361 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG o.a.d.exec.compile.ClassTransformer - Compiled and merged PriorityQueueCopierGen50: bytecode size = 11.0 KiB, time = 124 ms. 
2017-09-19 04:22:56,365 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 108 us to sort 1023 records 2017-09-19 04:22:56,367 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG o.a.d.e.p.i.x.m.PriorityQueueCopierWrapper - Copier setup complete 2017-09-19 04:22:56,375 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took
[jira] [Updated] (DRILL-5804) External Sort times out, may be infinite loop
[ https://issues.apache.org/jira/browse/DRILL-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5804: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; select count(*) from ( select * from ( select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid from ( select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested_large` d order by d.uid ) s1 ) s2 order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist ); {noformat} Plan is: {noformat} | 00-00Screen 00-01 Project(EXPR$0=[$0]) 00-02StreamAgg(group=[{}], EXPR$0=[$SUM0($0)]) 00-03 UnionExchange 01-01StreamAgg(group=[{}], EXPR$0=[COUNT()]) 01-02 Project($f0=[0]) 01-03SingleMergeExchange(sort0=[4 ASC], sort1=[5 ASC], sort2=[6 ASC]) 02-01 SelectionVectorRemover 02-02Sort(sort0=[$4], sort1=[$5], sort2=[$6], dir0=[ASC], dir1=[ASC], dir2=[ASC]) 02-03 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6]) 02-04HashToRandomExchange(dist0=[[$4]], dist1=[[$5]], dist2=[[$6]]) 03-01 UnorderedMuxExchange 04-01Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($6, hash32AsDouble($5, hash32AsDouble($4, 1301011)))]) 04-02 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], EXPR$4=[ITEM($2, 'mapid')], EXPR$5=[ITEM($1, 'a')], EXPR$6=[ITEM($1, 'do_not_exist')]) 04-03Flatten(flattenField=[$1]) 04-04 Project(type=[$0], rptds=[ITEM($2, 'rptd')], rms=[$2], uid=[$1]) 04-05SingleMergeExchange(sort0=[1 ASC]) 05-01 SelectionVectorRemover 05-02Sort(sort0=[$1], dir0=[ASC]) 05-03 Project(type=[$0], uid=[$1], rms=[$2]) 05-04 HashToRandomExchange(dist0=[[$1]]) 06-01 UnorderedMuxExchange 07-01Project(type=[$0], uid=[$1], rms=[$2], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)]) 07-02 Flatten(flattenField=[$2]) 07-03Project(type=[$0], uid=[$1], rms=[ITEM($2, 'rm')]) 07-04 
Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/resource-manager/nested_large]], selectionRoot=maprfs:/drill/testdata/resource-manager/nested_large, numFiles=1, usedMetadataFile=false, columns=[`type`, `uid`, `map`.`rm`]]]) {noformat} Here is a segment of the drillbit.log, starting at line 55890: {noformat} 2017-09-19 04:22:56,258 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 142 us to sort 1023 records 2017-09-19 04:22:56,265 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 105 us to sort 1023 records 2017-09-19 04:22:56,268 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record batch with status OK 2017-09-19 04:22:56,275 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 145 us to sort 1023 records 2017-09-19 04:22:56,354 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record batch with status OK 2017-09-19 04:22:56,357 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 143 us to sort 1023 records 2017-09-19 04:22:56,361 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG o.a.d.exec.compile.ClassTransformer - Compiled and merged PriorityQueueCopierGen50: bytecode size = 11.0 KiB, time = 124 ms. 
2017-09-19 04:22:56,365 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 108 us to sort 1023 records 2017-09-19 04:22:56,367 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG o.a.d.e.p.i.x.m.PriorityQueueCopierWrapper - Copier setup complete 2017-09-19 04:22:56,375 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG o.a.d.e.t.g.SingleBatchSorterGen44 - Took 144 us to sort 1023 records 2017-09-19 04:22:56,396 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG o.a.drill.exec.vector.BigIntVector - Reallocating vector [$data$(BIGINT:REQUIRED)]. # of bytes: [0] -> [0] 2017-09-19 04:22:56,396
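The last log entry above ("Reallocating vector ... # of bytes: [0] -> [0]") is the suggestive part: a grow-by-doubling reallocation that starts from zero capacity never makes progress, which would loop forever. A minimal sketch of that suspected failure mode (assumed behavior for illustration, not Drill's actual vector code; the iteration cap exists only so the sketch terminates):

```python
def grow_until(capacity_bytes, needed_bytes, max_iterations=64):
    """Double capacity until it covers the request; a zero starting
    capacity doubles to zero forever, so cap the iterations."""
    iterations = 0
    while capacity_bytes < needed_bytes:
        if iterations >= max_iterations:
            raise RuntimeError("no progress: capacity stuck at %d" % capacity_bytes)
        capacity_bytes *= 2   # 0 -> 0: the suspected infinite loop
        iterations += 1
    return capacity_bytes
```

With a sane starting capacity the loop converges (e.g. 256 bytes doubling to cover a 1000-byte request), but from `[0] -> [0]` it can never satisfy any request, matching the query that never completes.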
[jira] [Created] (DRILL-5805) External Sort runs out of memory
Robert Hou created DRILL-5805: - Summary: External Sort runs out of memory Key: DRILL-5805 URL: https://issues.apache.org/jira/browse/DRILL-5805 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.11.0 Reporter: Robert Hou Assignee: Paul Rogers Fix For: 1.12.0 Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.width.max_per_node` = 5; alter session set `planner.disable_exchanges` = true; alter session set `planner.width.max_per_query` = 100; select count(*) from (select * from (select id, flatten(str_list) str from dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by d.str) d1 where d1.id=0; {noformat} Plan is: {noformat} | 00-00Screen 00-01 Project(EXPR$0=[$0]) 00-02StreamAgg(group=[{}], EXPR$0=[COUNT()]) 00-03 Project($f0=[0]) 00-04SelectionVectorRemover 00-05 Filter(condition=[=($0, 0)]) 00-06SelectionVectorRemover 00-07 Sort(sort0=[$1], dir0=[ASC]) 00-08Flatten(flattenField=[$1]) 00-09 Project(id=[$0], str=[$1]) 00-10Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, numFiles=1, columns=[`id`, `str_list`], files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]]) {noformat} sys.version is: {noformat} | 1.12.0-SNAPSHOT | c4211d3b545b0d1996b096a8e1ace35376a63977 | Fix for DRILL-5670 | 09.09.2017 @ 14:38:25 PDT | r...@qa-node190.qa.lab | 11.09.2017 @ 14:27:16 PDT | {noformat} mult drill5447_1 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5804) External Sort times out, may be infinite loop
[ https://issues.apache.org/jira/browse/DRILL-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16172274#comment-16172274 ] Robert Hou commented on DRILL-5804: --- The profile is missing. I suspect it was not created. > External Sort times out, may be infinite loop > - > > Key: DRILL-5804 > URL: https://issues.apache.org/jira/browse/DRILL-5804 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested_large` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > {noformat} > Plan is: > {noformat} > | 00-00Screen > 00-01 Project(EXPR$0=[$0]) > 00-02StreamAgg(group=[{}], EXPR$0=[$SUM0($0)]) > 00-03 UnionExchange > 01-01StreamAgg(group=[{}], EXPR$0=[COUNT()]) > 01-02 Project($f0=[0]) > 01-03SingleMergeExchange(sort0=[4 ASC], sort1=[5 ASC], > sort2=[6 ASC]) > 02-01 SelectionVectorRemover > 02-02Sort(sort0=[$4], sort1=[$5], sort2=[$6], dir0=[ASC], > dir1=[ASC], dir2=[ASC]) > 02-03 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], > EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6]) > 02-04HashToRandomExchange(dist0=[[$4]], dist1=[[$5]], > dist2=[[$6]]) > 03-01 UnorderedMuxExchange > 04-01Project(type=[$0], rptds=[$1], rms=[$2], > uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], > E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($6, hash32AsDouble($5, > hash32AsDouble($4, 1301011)))]) > 04-02 Project(type=[$0], rptds=[$1], rms=[$2], > uid=[$3], EXPR$4=[ITEM($2, 'mapid')], EXPR$5=[ITEM($1, 'a')], > EXPR$6=[ITEM($1, 'do_not_exist')]) > 04-03Flatten(flattenField=[$1]) > 
04-04 Project(type=[$0], rptds=[ITEM($2, > 'rptd')], rms=[$2], uid=[$1]) > 04-05SingleMergeExchange(sort0=[1 ASC]) > 05-01 SelectionVectorRemover > 05-02Sort(sort0=[$1], dir0=[ASC]) > 05-03 Project(type=[$0], uid=[$1], > rms=[$2]) > 05-04 > HashToRandomExchange(dist0=[[$1]]) > 06-01 UnorderedMuxExchange > 07-01Project(type=[$0], > uid=[$1], rms=[$2], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)]) > 07-02 > Flatten(flattenField=[$2]) > 07-03Project(type=[$0], > uid=[$1], rms=[ITEM($2, 'rm')]) > 07-04 > Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath > [path=maprfs:///drill/testdata/resource-manager/nested_large]], > selectionRoot=maprfs:/drill/testdata/resource-manager/nested_large, > numFiles=1, usedMetadataFile=false, columns=[`type`, `uid`, `map`.`rm`]]]) > {noformat} > Here is a segment of the drillbit.log, starting at line 55890: > {noformat} > 2017-09-19 04:22:56,258 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 142 us to sort 1023 records > 2017-09-19 04:22:56,265 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 105 us to sort 1023 records > 2017-09-19 04:22:56,268 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG > o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record > batch with status OK > 2017-09-19 04:22:56,275 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 145 us to sort 1023 records > 2017-09-19 04:22:56,354 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG > o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record > batch with status OK > 2017-09-19 04:22:56,357 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 143 us to sort 1023 records > 2017-09-19 04:22:56,361 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG > o.a.d.exec.compile.ClassTransformer - Compiled and merged >
[jira] [Updated] (DRILL-5804) External Sort times out, may be infinite loop
[ https://issues.apache.org/jira/browse/DRILL-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5804: -- Attachment: drillbit.log > External Sort times out, may be infinite loop > - > > Key: DRILL-5804 > URL: https://issues.apache.org/jira/browse/DRILL-5804 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested_large` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > {noformat} > Plan is: > {noformat} > | 00-00Screen > 00-01 Project(EXPR$0=[$0]) > 00-02StreamAgg(group=[{}], EXPR$0=[$SUM0($0)]) > 00-03 UnionExchange > 01-01StreamAgg(group=[{}], EXPR$0=[COUNT()]) > 01-02 Project($f0=[0]) > 01-03SingleMergeExchange(sort0=[4 ASC], sort1=[5 ASC], > sort2=[6 ASC]) > 02-01 SelectionVectorRemover > 02-02Sort(sort0=[$4], sort1=[$5], sort2=[$6], dir0=[ASC], > dir1=[ASC], dir2=[ASC]) > 02-03 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], > EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6]) > 02-04HashToRandomExchange(dist0=[[$4]], dist1=[[$5]], > dist2=[[$6]]) > 03-01 UnorderedMuxExchange > 04-01Project(type=[$0], rptds=[$1], rms=[$2], > uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], > E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($6, hash32AsDouble($5, > hash32AsDouble($4, 1301011)))]) > 04-02 Project(type=[$0], rptds=[$1], rms=[$2], > uid=[$3], EXPR$4=[ITEM($2, 'mapid')], EXPR$5=[ITEM($1, 'a')], > EXPR$6=[ITEM($1, 'do_not_exist')]) > 04-03Flatten(flattenField=[$1]) > 04-04 Project(type=[$0], rptds=[ITEM($2, > 'rptd')], rms=[$2], 
uid=[$1]) > 04-05SingleMergeExchange(sort0=[1 ASC]) > 05-01 SelectionVectorRemover > 05-02Sort(sort0=[$1], dir0=[ASC]) > 05-03 Project(type=[$0], uid=[$1], > rms=[$2]) > 05-04 > HashToRandomExchange(dist0=[[$1]]) > 06-01 UnorderedMuxExchange > 07-01Project(type=[$0], > uid=[$1], rms=[$2], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)]) > 07-02 > Flatten(flattenField=[$2]) > 07-03Project(type=[$0], > uid=[$1], rms=[ITEM($2, 'rm')]) > 07-04 > Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath > [path=maprfs:///drill/testdata/resource-manager/nested_large]], > selectionRoot=maprfs:/drill/testdata/resource-manager/nested_large, > numFiles=1, usedMetadataFile=false, columns=[`type`, `uid`, `map`.`rm`]]]) > {noformat} > Here is a segment of the drillbit.log, starting at line 55890: > {noformat} > 2017-09-19 04:22:56,258 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 142 us to sort 1023 records > 2017-09-19 04:22:56,265 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 105 us to sort 1023 records > 2017-09-19 04:22:56,268 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG > o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record > batch with status OK > 2017-09-19 04:22:56,275 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 145 us to sort 1023 records > 2017-09-19 04:22:56,354 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG > o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record > batch with status OK > 2017-09-19 04:22:56,357 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 143 us to sort 1023 records > 2017-09-19 04:22:56,361 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG > o.a.d.exec.compile.ClassTransformer - Compiled and merged > PriorityQueueCopierGen50: bytecode size = 11.0 KiB, time = 124 ms. 
> 2017-09-19
[jira] [Commented] (DRILL-5786) A query that includes sort encounters Exception in RPC communication
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16172375#comment-16172375 ] Robert Hou commented on DRILL-5786: --- Updated title. > A query that includes sort encounters Exception in RPC communication > > > Key: DRILL-5786 > URL: https://issues.apache.org/jira/browse/DRILL-5786 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2647d2b0-69bf-5a2b-0e23-81e8d49e464e.sys.drill, > drillbit.log > > > Query is: > {noformat} > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf' > {noformat} > This is the same query as DRILL-5670 but no session variables are set. > Here is the stack trace: > {noformat} > 2017-09-12 13:14:57,584 [BitServer-5] ERROR > o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. > Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). > Closing connection. > io.netty.handler.codec.DecoderException: > org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating > buffer. 
> at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) > ~[netty-codec-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > [netty-common-4.0.27.Final.jar:4.0.27.Final] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] > Caused by: 
org.apache.drill.exec.exception.OutOfMemoryException: Failure > allocating buffer. > at > io.netty.buffer.PooledByteBufAllocatorL.allocate(PooledByteBufAllocatorL.java:64) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final] > at > org.apache.drill.exec.memory.AllocationManager.(AllocationManager.java:81) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:260) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:243) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Updated] (DRILL-5804) External Sort times out, may be infinite loop
[ https://issues.apache.org/jira/browse/DRILL-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5804: -- Summary: External Sort times out, may be infinite loop (was: Query times out, may be infinite loop) > External Sort times out, may be infinite loop > - > > Key: DRILL-5804 > URL: https://issues.apache.org/jira/browse/DRILL-5804 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested_large` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > {noformat} > Plan is: > {noformat} > | 00-00Screen > 00-01 Project(EXPR$0=[$0]) > 00-02StreamAgg(group=[{}], EXPR$0=[$SUM0($0)]) > 00-03 UnionExchange > 01-01StreamAgg(group=[{}], EXPR$0=[COUNT()]) > 01-02 Project($f0=[0]) > 01-03SingleMergeExchange(sort0=[4 ASC], sort1=[5 ASC], > sort2=[6 ASC]) > 02-01 SelectionVectorRemover > 02-02Sort(sort0=[$4], sort1=[$5], sort2=[$6], dir0=[ASC], > dir1=[ASC], dir2=[ASC]) > 02-03 Project(type=[$0], rptds=[$1], rms=[$2], uid=[$3], > EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6]) > 02-04HashToRandomExchange(dist0=[[$4]], dist1=[[$5]], > dist2=[[$6]]) > 03-01 UnorderedMuxExchange > 04-01Project(type=[$0], rptds=[$1], rms=[$2], > uid=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], > E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($6, hash32AsDouble($5, > hash32AsDouble($4, 1301011)))]) > 04-02 Project(type=[$0], rptds=[$1], rms=[$2], > uid=[$3], EXPR$4=[ITEM($2, 'mapid')], EXPR$5=[ITEM($1, 'a')], > EXPR$6=[ITEM($1, 'do_not_exist')]) > 04-03Flatten(flattenField=[$1]) > 04-04 Project(type=[$0], 
rptds=[ITEM($2, > 'rptd')], rms=[$2], uid=[$1]) > 04-05SingleMergeExchange(sort0=[1 ASC]) > 05-01 SelectionVectorRemover > 05-02Sort(sort0=[$1], dir0=[ASC]) > 05-03 Project(type=[$0], uid=[$1], > rms=[$2]) > 05-04 > HashToRandomExchange(dist0=[[$1]]) > 06-01 UnorderedMuxExchange > 07-01Project(type=[$0], > uid=[$1], rms=[$2], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)]) > 07-02 > Flatten(flattenField=[$2]) > 07-03Project(type=[$0], > uid=[$1], rms=[ITEM($2, 'rm')]) > 07-04 > Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath > [path=maprfs:///drill/testdata/resource-manager/nested_large]], > selectionRoot=maprfs:/drill/testdata/resource-manager/nested_large, > numFiles=1, usedMetadataFile=false, columns=[`type`, `uid`, `map`.`rm`]]]) > {noformat} > Here is a segment of the drillbit.log, starting at line 55890: > {noformat} > 2017-09-19 04:22:56,258 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 142 us to sort 1023 records > 2017-09-19 04:22:56,265 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:4] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 105 us to sort 1023 records > 2017-09-19 04:22:56,268 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG > o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record > batch with status OK > 2017-09-19 04:22:56,275 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:7] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 145 us to sort 1023 records > 2017-09-19 04:22:56,354 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:3:0] DEBUG > o.a.d.e.p.i.p.PartitionSenderRootExec - Partitioner.next(): got next record > batch with status OK > 2017-09-19 04:22:56,357 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:2] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen44 - Took 143 us to sort 1023 records > 2017-09-19 04:22:56,361 [263f0252-fc60-7f8d-a1b1-c075876d1bd2:frag:2:0] DEBUG > o.a.d.exec.compile.ClassTransformer - Compiled and merged > PriorityQueueCopierGen50: 
bytecode size =
[jira] [Updated] (DRILL-5786) A query that includes sort encounters Exception in RPC communication
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5786: -- Summary: A query that includes sort encounters Exception in RPC communication (was: External Sort encounters Exception in RPC communication during Sort) > A query that includes sort encounters Exception in RPC communication > > > Key: DRILL-5786 > URL: https://issues.apache.org/jira/browse/DRILL-5786 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2647d2b0-69bf-5a2b-0e23-81e8d49e464e.sys.drill, > drillbit.log > > > Query is: > {noformat} > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf' > {noformat} > This is the same query as DRILL-5670 but no session variables are set. > Here is the stack trace: > {noformat} > 2017-09-12 13:14:57,584 [BitServer-5] ERROR > o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. > Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). > Closing connection. > io.netty.handler.codec.DecoderException: > org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating > buffer. 
> at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) > ~[netty-codec-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > [netty-common-4.0.27.Final.jar:4.0.27.Final] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] > Caused by: 
org.apache.drill.exec.exception.OutOfMemoryException: Failure > allocating buffer. > at > io.netty.buffer.PooledByteBufAllocatorL.allocate(PooledByteBufAllocatorL.java:64) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final] > at > org.apache.drill.exec.memory.AllocationManager.(AllocationManager.java:81) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:260) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:243)
[jira] [Updated] (DRILL-5805) External Sort runs out of memory
[ https://issues.apache.org/jira/browse/DRILL-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5805: -- Attachment: drillbit.log.gz > External Sort runs out of memory > > > Key: DRILL-5805 > URL: https://issues.apache.org/jira/browse/DRILL-5805 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2645d135-4222-d752-2609-c95568ff6e93.sys.drill, > drillbit.log.gz > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 5; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 100; > select count(*) from (select * from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by > d.str) d1 where d1.id=0; > {noformat} > Plan is: > {noformat} > | 00-00Screen > 00-01 Project(EXPR$0=[$0]) > 00-02StreamAgg(group=[{}], EXPR$0=[COUNT()]) > 00-03 Project($f0=[0]) > 00-04SelectionVectorRemover > 00-05 Filter(condition=[=($0, 0)]) > 00-06SelectionVectorRemover > 00-07 Sort(sort0=[$1], dir0=[ASC]) > 00-08Flatten(flattenField=[$1]) > 00-09 Project(id=[$0], str=[$1]) > 00-10Scan(groupscan=[EasyGroupScan > [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, > numFiles=1, columns=[`id`, `str_list`], > files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]]) > {noformat} > sys.version is: > {noformat} > | 1.12.0-SNAPSHOT | c4211d3b545b0d1996b096a8e1ace35376a63977 | Fix for > DRILL-5670 | 09.09.2017 @ 14:38:25 PDT | r...@qa-node190.qa.lab | > 11.09.2017 @ 14:27:16 PDT | > {noformat}
[jira] [Commented] (DRILL-5478) Spill file size parameter is not honored by the managed external sort
[ https://issues.apache.org/jira/browse/DRILL-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16172380#comment-16172380 ] Robert Hou commented on DRILL-5478: --- If this is exposed to Support, then it would be good to have some documentation about how this parameter works. Maybe even just a comment that the parameter is not precise, and depends on the memory. > Spill file size parameter is not honored by the managed external sort > - > > Key: DRILL-5478 > URL: https://issues.apache.org/jira/browse/DRILL-5478 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > > git.commit.id.abbrev=1e0a14c > Query: > {code} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 1052428800; > alter session set `planner.enable_decimal_data_type` = true; > select count(*) from ( > select * from dfs.`/drill/testdata/resource-manager/all_types_large` d1 > order by d1.map.missing > ) d; > {code} > Boot Options (spill file size is set to 256MB) > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> select * from sys.boot where name like > '%spill%'; > +--+-+---+-+--++---++ > | name | kind | type | status > | num_val | string_val | bool_val > | float_val | > +--+-+---+-+--++---++ > | drill.exec.sort.external.spill.directories | STRING | BOOT | BOOT > | null | [ > # drill-override.conf: 26 > "/tmp/test" > ] | null | null | > | drill.exec.sort.external.spill.file_size | STRING | BOOT | BOOT > | null | "256M" | null > | null | > | drill.exec.sort.external.spill.fs| STRING | BOOT | BOOT > | null | "maprfs:///" | null > | null | > | drill.exec.sort.external.spill.group.size| LONG| BOOT | BOOT > | 4| null | null > 
| null | > | drill.exec.sort.external.spill.merge_batch_size | STRING | BOOT | BOOT > | null | "16M" | null > | null | > | drill.exec.sort.external.spill.spill_batch_size | STRING | BOOT | BOOT > | null | "8M" | null > | null | > | drill.exec.sort.external.spill.threshold | LONG| BOOT | BOOT > | 4| null | null > | null | > +--+-+---+-+--++---++ > {code} > Below are the spill files while the query is still executing. The size of the > spill files is ~34MB > {code} > -rwxr-xr-x 3 root root 34957815 2017-05-05 11:26 > /tmp/test/26f33c36-4235-3531-aeaa-2c73dc4ddeb5_major0_minor0_op5_sort/run1 > -rwxr-xr-x 3 root root 34957815 2017-05-05 11:27 > /tmp/test/26f33c36-4235-3531-aeaa-2c73dc4ddeb5_major0_minor0_op5_sort/run2 > -rwxr-xr-x 3 root root 0 2017-05-05 11:27 > /tmp/test/26f33c36-4235-3531-aeaa-2c73dc4ddeb5_major0_minor0_op5_sort/run3 > {code} > The data set is too large to attach here. Reach out to me if you need anything
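For context on where the boot options in the sys.boot output above come from: Drill reads them from drill-override.conf, which uses HOCON syntax. A sketch of the relevant fragment, using the directory, filesystem, and file-size values reported above (illustrative values from this reproduction, not a recommendation):
{code}
# drill-override.conf (HOCON) -- sketch based on the sys.boot output above
drill.exec.sort.external.spill: {
  directories: [ "/tmp/test" ],  # drill.exec.sort.external.spill.directories
  fs: "maprfs:///",              # drill.exec.sort.external.spill.fs
  file_size: "256M"              # target spill file size; per this bug, not strictly honored
}
{code}
A restart of the drillbits is required for boot-time options to take effect; the resulting values can then be checked with the sys.boot query shown above.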
[jira] [Created] (DRILL-5813) A query that includes sort encounters Exception occurred with closed channel
Robert Hou created DRILL-5813: - Summary: A query that includes sort encounters Exception occurred with closed channel Key: DRILL-5813 URL: https://issues.apache.org/jira/browse/DRILL-5813 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.11.0 Reporter: Robert Hou Assignee: Paul Rogers Fix For: 1.12.0 Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.enable_decimal_data_type` = true; select count(*) from (select * from dfs.`/drill/testdata/resource-manager/all_types_large` order by missing11) d where d.missing3 is false; {noformat} This query has passed before when the number of threads and amount of memory is restricted. With more threads and memory, the query does not complete execution. Here is the stack trace: {noformat} Exception occurred with closed channel. Connection: /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) at oadd.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) at oadd.io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407) at oadd.io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:32) at oadd.io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:792) at oadd.io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280) at oadd.io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26) at oadd.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) at oadd.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241) at 
oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) User Error Occurred: Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) closed un expectedly. Drillbit down? [Error Id: b97704a4-b8f0-4cd0-b428-2cf1bcf39a1d ] at oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) at oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373) at oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) at oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) at oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) at oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) at oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) at oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) at oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) at oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71) at 
oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89) at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at
[jira] [Updated] (DRILL-5805) External Sort runs out of memory
[ https://issues.apache.org/jira/browse/DRILL-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5805: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.width.max_per_node` = 5; alter session set `planner.disable_exchanges` = true; alter session set `planner.width.max_per_query` = 100; select count(*) from (select * from (select id, flatten(str_list) str from dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by d.str) d1 where d1.id=0; {noformat} Error is: {noformat} java.sql.SQLException: RESOURCE ERROR: Unable to allocate sv2 buffer Fragment 0:0 [Error Id: d67e087f-30e3-4861-8d3a-ddd952ddacc1 on atsqa6c83.qa.lab:31010] (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate sv2 buffer org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.newSV2():157 org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.makeSelectionVector():142 org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add():97 org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch():265 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch():422 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():358 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():151 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.physical.impl.BaseRootExec.next():105 org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 org.apache.drill.exec.physical.impl.BaseRootExec.next():95 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():415 org.apache.hadoop.security.UserGroupInformation.doAs():1595 org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 org.apache.drill.common.SelfCleaningRunnable.run():38 
java.util.concurrent.ThreadPoolExecutor.runWorker():1145 java.util.concurrent.ThreadPoolExecutor$Worker.run():615 java.lang.Thread.run():744 at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489) at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:593) at oadd.org.apache.calcite.avatica.AvaticaResultSet.next(AvaticaResultSet.java:215) at org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:140) at org.apache.drill.test.framework.DrillTestJdbc.executeSetupQuery(DrillTestJdbc.java:193) at org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:111) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262)
[jira] [Updated] (DRILL-5813) A query that includes sort loses Drill connection. Drill sometimes crashes.
[ https://issues.apache.org/jira/browse/DRILL-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5813: -- Attachment: drillbit.log > A query that includes sort loses Drill connection. Drill sometimes crashes. > > > Key: DRILL-5813 > URL: https://issues.apache.org/jira/browse/DRILL-5813 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.enable_decimal_data_type` = true; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/all_types_large` order by missing11) d > where d.missing3 is false; > {noformat} > This query has passed before when the number of threads and amount of memory > is restricted. With more threads and memory, the query does not complete > execution. > Here is the stack trace: > {noformat} > Exception occurred with closed channel. 
Connection: /10.10.100.190:59281 > <--> /10.10.100.190:31010 (user client) > java.io.IOException: Connection reset by peer > at sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) > at sun.nio.ch.IOUtil.read(IOUtil.java:192) > at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) > at > oadd.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) > at oadd.io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407) > at > oadd.io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:32) > at oadd.io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:792) > at > oadd.io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280) > at > oadd.io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26) > at > oadd.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) > at > oadd.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > at > oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at java.lang.Thread.run(Thread.java:745) > User Error Occurred: Connection /10.10.100.190:59281 <--> > /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? 
> oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: > Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) > closed un > expectedly. Drillbit down? > [Error Id: b97704a4-b8f0-4cd0-b428-2cf1bcf39a1d ] > at > oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > at > oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) > at > oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) > at > oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) > at > oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162) > at >
[jira] [Commented] (DRILL-5813) A query that includes sort loses Drill connection. Drill sometimes crashes.
[ https://issues.apache.org/jira/browse/DRILL-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177685#comment-16177685 ] Robert Hou commented on DRILL-5813: --- I cannot find the profile for this query. I suspect it was not created, even though the query was executing. > A query that includes sort loses Drill connection. Drill sometimes crashes. > > > Key: DRILL-5813 > URL: https://issues.apache.org/jira/browse/DRILL-5813 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.enable_decimal_data_type` = true; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/all_types_large` order by missing11) d > where d.missing3 is false; > {noformat} > This query has passed before when the number of threads and amount of memory > is restricted. With more threads and memory, the query does not complete > execution. > Here is the stack trace: > {noformat} > Exception occurred with closed channel. 
Connection: /10.10.100.190:59281 > <--> /10.10.100.190:31010 (user client) > java.io.IOException: Connection reset by peer > at sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) > at sun.nio.ch.IOUtil.read(IOUtil.java:192) > at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) > at > oadd.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) > at oadd.io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407) > at > oadd.io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:32) > at oadd.io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:792) > at > oadd.io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280) > at > oadd.io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26) > at > oadd.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) > at > oadd.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > at > oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at java.lang.Thread.run(Thread.java:745) > User Error Occurred: Connection /10.10.100.190:59281 <--> > /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? 
> oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: > Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) > closed un > expectedly. Drillbit down? > [Error Id: b97704a4-b8f0-4cd0-b428-2cf1bcf39a1d ] > at > oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > at > oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) > at > oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) > at > oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) > at > oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89) > at >
[jira] [Updated] (DRILL-5813) A query that includes sort loses Drill connection. Drill sometimes crashes.
[ https://issues.apache.org/jira/browse/DRILL-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5813: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.enable_decimal_data_type` = true; select count(*) from (select * from dfs.`/drill/testdata/resource-manager/all_types_large` order by missing11) d where d.missing3 is false; {noformat} This query has passed before when the number of threads and amount of memory is restricted. With more threads and memory, the query does not complete execution. Here is the stack trace: {noformat} Exception occurred with closed channel. Connection: /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:192) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) at oadd.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) at oadd.io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407) at oadd.io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:32) at oadd.io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:792) at oadd.io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280) at oadd.io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26) at oadd.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) at oadd.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241) at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at 
oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) User Error Occurred: Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) closed un expectedly. Drillbit down? [Error Id: b97704a4-b8f0-4cd0-b428-2cf1bcf39a1d ] at oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) at oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373) at oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) at oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) at oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) at oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) at oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) at oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) at oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) at oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71) at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89) at 
oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) [#14] Query failed: oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) closed un
[jira] [Updated] (DRILL-5813) A query that includes sort loses Drill connection. Drill sometimes crashes.
[ https://issues.apache.org/jira/browse/DRILL-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5813: -- Summary: A query that includes sort loses Drill connection. Drill sometimes crashes. (was: A query that includes sort encounters Exception occurred with closed channel) > A query that includes sort loses Drill connection. Drill sometimes crashes. > > > Key: DRILL-5813 > URL: https://issues.apache.org/jira/browse/DRILL-5813 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.enable_decimal_data_type` = true; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/all_types_large` order by missing11) d > where d.missing3 is false; > {noformat} > This query has passed before when the number of threads and amount of memory > is restricted. With more threads and memory, the query does not complete > execution. > Here is the stack trace: > {noformat} > Exception occurred with closed channel. 
Connection: /10.10.100.190:59281 > <--> /10.10.100.190:31010 (user client) > java.io.IOException: Connection reset by peer > at sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) > at sun.nio.ch.IOUtil.read(IOUtil.java:192) > at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) > at > oadd.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) > at oadd.io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407) > at > oadd.io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:32) > at oadd.io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:792) > at > oadd.io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280) > at > oadd.io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26) > at > oadd.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) > at > oadd.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > at > oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at java.lang.Thread.run(Thread.java:745) > User Error Occurred: Connection /10.10.100.190:59281 <--> > /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? 
> oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: > Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) > closed un > expectedly. Drillbit down? > [Error Id: b97704a4-b8f0-4cd0-b428-2cf1bcf39a1d ] > at > oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > at > oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) > at > oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) > at > oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) > at > oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89) > at >
[jira] [Commented] (DRILL-5478) Spill file size parameter is not honored by the managed external sort
[ https://issues.apache.org/jira/browse/DRILL-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168488#comment-16168488 ] Robert Hou commented on DRILL-5478: --- I think the default memory setting is 10GB. The spill file size is set to 256 MB. I got one spill file with 193 MB. -rwxr-xr-x 3 root root 193233725 2017-09-15 13:22 /tmp/drill/qa-node190.qa.lab-31010_2643ca63-6496-2290-2659-8951a257c740_Sort_0-5-0/spill1 In the test above, memory is restricted to 1 GB. The spill files end up being 38 MB. > Spill file size parameter is not honored by the managed external sort > - > > Key: DRILL-5478 > URL: https://issues.apache.org/jira/browse/DRILL-5478 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > > git.commit.id.abbrev=1e0a14c > Query: > {code} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 1052428800; > alter session set `planner.enable_decimal_data_type` = true; > select count(*) from ( > select * from dfs.`/drill/testdata/resource-manager/all_types_large` d1 > order by d1.map.missing > ) d; > {code} > Boot Options (spill file size is set to 256MB) > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> select * from sys.boot where name like > '%spill%'; > +--+-+---+-+--++---++ > | name | kind | type | status > | num_val | string_val | bool_val > | float_val | > +--+-+---+-+--++---++ > | drill.exec.sort.external.spill.directories | STRING | BOOT | BOOT > | null | [ > # drill-override.conf: 26 > "/tmp/test" > ] | null | null | > | drill.exec.sort.external.spill.file_size | STRING | BOOT | BOOT > | null | "256M" | null > | null | > | drill.exec.sort.external.spill.fs| STRING 
| BOOT | BOOT > | null | "maprfs:///" | null > | null | > | drill.exec.sort.external.spill.group.size| LONG| BOOT | BOOT > | 4| null | null > | null | > | drill.exec.sort.external.spill.merge_batch_size | STRING | BOOT | BOOT > | null | "16M" | null > | null | > | drill.exec.sort.external.spill.spill_batch_size | STRING | BOOT | BOOT > | null | "8M" | null > | null | > | drill.exec.sort.external.spill.threshold | LONG| BOOT | BOOT > | 4| null | null > | null | > +--+-+---+-+--++---++ > {code} > Below are the spill files while the query is still executing. The size of the > spill files is ~34MB > {code} > -rwxr-xr-x 3 root root 34957815 2017-05-05 11:26 > /tmp/test/26f33c36-4235-3531-aeaa-2c73dc4ddeb5_major0_minor0_op5_sort/run1 > -rwxr-xr-x 3 root root 34957815 2017-05-05 11:27 > /tmp/test/26f33c36-4235-3531-aeaa-2c73dc4ddeb5_major0_minor0_op5_sort/run2 > -rwxr-xr-x 3 root root 0 2017-05-05 11:27 > /tmp/test/26f33c36-4235-3531-aeaa-2c73dc4ddeb5_major0_minor0_op5_sort/run3 > {code} > The data set is too large to attach here. Reach out to me if you need anything -- This message was sent by Atlassian JIRA (v6.4.14#64029)
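On the question of where these boot options are set: sys.boot reads them from drill-override.conf, falling back to the drill-module.conf bundled in the jar (the string_val column above shows which file supplied each value). A minimal sketch of setting the sort spill options in HOCON, using the values from the sys.boot output above (the layout is an illustration, not this cluster's actual file):

```
# drill-override.conf -- sketch only; values copied from the sys.boot output above
drill.exec.sort.external.spill: {
  directories: [ "/tmp/test" ],
  fs: "maprfs:///",
  file_size: "256M"
}
```

A drillbit restart is required for boot-time options to take effect, unlike the session options set with ALTER SESSION in the repro query.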
[jira] [Closed] (DRILL-5813) A query that includes sort loses Drill connection. Drill sometimes crashes.
[ https://issues.apache.org/jira/browse/DRILL-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou closed DRILL-5813. - I have verified this. > A query that includes sort loses Drill connection. Drill sometimes crashes. > > > Key: DRILL-5813 > URL: https://issues.apache.org/jira/browse/DRILL-5813 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Vlad Rozov > Attachments: drillbit.log, drill.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.enable_decimal_data_type` = true; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/all_types_large` order by missing11) d > where d.missing3 is false; > {noformat} > This query has passed before when the number of threads and amount of memory > is restricted. With more threads and memory, the query does not complete > execution. > Here is the stack trace: > {noformat} > Exception occurred with closed channel. 
Connection: /10.10.100.190:59281 > <--> /10.10.100.190:31010 (user client) > java.io.IOException: Connection reset by peer > at sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) > at sun.nio.ch.IOUtil.read(IOUtil.java:192) > at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384) > at > oadd.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) > at oadd.io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407) > at > oadd.io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:32) > at oadd.io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:792) > at > oadd.io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280) > at > oadd.io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26) > at > oadd.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) > at > oadd.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > at > oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > at > oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at java.lang.Thread.run(Thread.java:745) > User Error Occurred: Connection /10.10.100.190:59281 <--> > /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? 
> oadd.org.apache.drill.common.exceptions.UserException: CONNECTION ERROR: > Connection /10.10.100.190:59281 <--> /10.10.100.190:31010 (user client) > closed un > expectedly. Drillbit down? > [Error Id: b97704a4-b8f0-4cd0-b428-2cf1bcf39a1d ] > at > oadd.org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > at > oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler$1.operationComplete(QueryResultHandler.java:373) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) > at > oadd.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) > at > oadd.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406) > at > oadd.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82) > at > oadd.io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592) > at > oadd.io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:71) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:89) > at > oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162) > at >
[jira] [Resolved] (DRILL-5840) A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper
[ https://issues.apache.org/jira/browse/DRILL-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou resolved DRILL-5840. --- Resolution: Not A Problem > A query that includes sort completes, and then loses Drill connection. Drill > becomes unresponsive, and cannot restart because it cannot communicate with > Zookeeper > -- > > Key: DRILL-5840 > URL: https://issues.apache.org/jira/browse/DRILL-5840 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/250wide.tbl` order by columns[0])d > where d.columns[0] = 'ljdfhwuehnoiueyf'; > {noformat} > Query tries to complete, but cannot. It takes 20 hours from the time the > query tries to complete, to the time Drill finally loses its connection. > From the drillbit.log: > {noformat} > 2017-10-03 16:28:14,892 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] DEBUG > o.a.drill.exec.work.foreman.Foreman - 262bec7f-3539-0dd7-6fea-f2959f9df3b6: > State change requested RUNNING --> COMPLETED > 2017-10-04 01:47:27,698 [UserServer-1] DEBUG > o.a.d.e.r.u.UserServerRequestHandler - Received query to run. Returning > query handle. > 2017-10-04 03:30:02,916 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] WARN > o.a.d.exec.work.foreman.QueryManager - Failure while trying to delete the > estore profile for this query. 
> org.apache.drill.common.exceptions.DrillRuntimeException: unable to delete > node at /running/262bec7f-3539-0dd7-6fea-f2959f9df3b6 > at > org.apache.drill.exec.coord.zk.ZookeeperClient.delete(ZookeeperClient.java:343) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.coord.zk.ZkEphemeralStore.remove(ZkEphemeralStore.java:108) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.updateEphemeralState(QueryManager.java:293) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.recordNewState(Foreman.java:1043) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:964) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:113) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1025) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1018) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1020) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1038) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at 
> org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.access$400(QueryManager.java:66) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:525) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Commented] (DRILL-5840) A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper
[ https://issues.apache.org/jira/browse/DRILL-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16197476#comment-16197476 ] Robert Hou commented on DRILL-5840: --- The cluster ran out of disk space and caused problems for Zookeeper. From /opt/mapr/zookeeper/zookeeper-3.4.5/logs: 2017-10-04 16:22:54,475 [myid:] - ERROR [SyncThread:0:SyncRequestProcessor@151] - Severe unrecoverable error, exiting java.io.IOException: No space left on device at jav... Stopping zookeeper ... STOPPED Starting zookeeper ... 2017-10-05 15:07:54,495 [myid:] - INFO [main:QuorumPeerConfig@101] - Reading configuration from: /opt/mapr/zookeeper/zookeeper-3.4.5/conf/zoo.cfg > A query that includes sort completes, and then loses Drill connection. Drill > becomes unresponsive, and cannot restart because it cannot communicate with > Zookeeper > -- > > Key: DRILL-5840 > URL: https://issues.apache.org/jira/browse/DRILL-5840 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/250wide.tbl` order by columns[0])d > where d.columns[0] = 'ljdfhwuehnoiueyf'; > {noformat} > Query tries to complete, but cannot. It takes 20 hours from the time the > query tries to complete, to the time Drill finally loses its connection. > From the drillbit.log: > {noformat} > 2017-10-03 16:28:14,892 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] DEBUG > o.a.drill.exec.work.foreman.Foreman - 262bec7f-3539-0dd7-6fea-f2959f9df3b6: > State change requested RUNNING --> COMPLETED > 2017-10-04 01:47:27,698 [UserServer-1] DEBUG > o.a.d.e.r.u.UserServerRequestHandler - Received query to run. Returning > query handle. 
> 2017-10-04 03:30:02,916 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] WARN > o.a.d.exec.work.foreman.QueryManager - Failure while trying to delete the > estore profile for this query. > org.apache.drill.common.exceptions.DrillRuntimeException: unable to delete > node at /running/262bec7f-3539-0dd7-6fea-f2959f9df3b6 > at > org.apache.drill.exec.coord.zk.ZookeeperClient.delete(ZookeeperClient.java:343) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.coord.zk.ZkEphemeralStore.remove(ZkEphemeralStore.java:108) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.updateEphemeralState(QueryManager.java:293) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.recordNewState(Foreman.java:1043) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:964) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:113) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1025) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1018) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1020) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1038) > 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at
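As a side note on diagnosing this failure mode: a "No space left on device" condition on the ZooKeeper node can be confirmed before restarting the service. The sketch below reads dataDir from zoo.cfg and reports free space on its filesystem; the zoo.cfg path is taken from the log above, and the script falls back to / when the file is not readable (an assumption for illustration, not part of the original report):

```shell
#!/bin/sh
# Report free space on the filesystem holding the ZooKeeper data directory.
# The zoo.cfg path matches the ZooKeeper install in the log above; adjust as needed.
ZK_CFG=/opt/mapr/zookeeper/zookeeper-3.4.5/conf/zoo.cfg
ZK_DATA=/
[ -r "$ZK_CFG" ] && ZK_DATA=$(awk -F= '/^dataDir=/ {print $2}' "$ZK_CFG")
# df -P gives POSIX-format output: column 4 is available 1K-blocks, column 6 the mount point
df -P "${ZK_DATA:-/}" | awk 'NR==2 {printf "%s KB free on %s\n", $4, $6}'
```

Running this periodically (or alerting on low free space) would have caught the disk-full state before ZooKeeper's SyncThread hit the unrecoverable error.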
[jira] [Commented] (DRILL-5840) A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper
[ https://issues.apache.org/jira/browse/DRILL-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16197477#comment-16197477 ] Robert Hou commented on DRILL-5840: --- Closing this as not a bug. > A query that includes sort completes, and then loses Drill connection. Drill > becomes unresponsive, and cannot restart because it cannot communicate with > Zookeeper > -- > > Key: DRILL-5840 > URL: https://issues.apache.org/jira/browse/DRILL-5840 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/250wide.tbl` order by columns[0])d > where d.columns[0] = 'ljdfhwuehnoiueyf'; > {noformat} > Query tries to complete, but cannot. It takes 20 hours from the time the > query tries to complete, to the time Drill finally loses its connection. > From the drillbit.log: > {noformat} > 2017-10-03 16:28:14,892 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] DEBUG > o.a.drill.exec.work.foreman.Foreman - 262bec7f-3539-0dd7-6fea-f2959f9df3b6: > State change requested RUNNING --> COMPLETED > 2017-10-04 01:47:27,698 [UserServer-1] DEBUG > o.a.d.e.r.u.UserServerRequestHandler - Received query to run. Returning > query handle. > 2017-10-04 03:30:02,916 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] WARN > o.a.d.exec.work.foreman.QueryManager - Failure while trying to delete the > estore profile for this query. 
> org.apache.drill.common.exceptions.DrillRuntimeException: unable to delete > node at /running/262bec7f-3539-0dd7-6fea-f2959f9df3b6 > at > org.apache.drill.exec.coord.zk.ZookeeperClient.delete(ZookeeperClient.java:343) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.coord.zk.ZkEphemeralStore.remove(ZkEphemeralStore.java:108) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.updateEphemeralState(QueryManager.java:293) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.recordNewState(Foreman.java:1043) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:964) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:113) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1025) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1018) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1020) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1038) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at 
> org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager.access$400(QueryManager.java:66) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:525) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154349#comment-16154349 ] Robert Hou commented on DRILL-5670:
---
Here is the plan. There is a StreamAgg. Can that affect things?
{noformat}
| 00-00    Screen
00-01      Project(EXPR$0=[$0])
00-02        StreamAgg(group=[{}], EXPR$0=[$SUM0($0)])
00-03          UnionExchange
01-01            StreamAgg(group=[{}], EXPR$0=[COUNT()])
01-02              Project($f0=[0])
01-03                SelectionVectorRemover
01-04                  Filter(condition=[=(ITEM($0, 'col433'), 'sjka skjf')])
01-05                    Project(T1¦¦*=[$0])
01-06                      SingleMergeExchange(sort0=[1 ASC], sort1=[2 ASC], sort2=[3 ASC], sort3=[4 ASC], sort4=[5 ASC], sort5=[6 ASC], sort6=[7 ASC], sort7=[8 ASC], sort8=[9 ASC], sort9=[10 ASC], sort10=[11 ASC], sort11=[12 ASC], sort12=[9 ASC], sort13=[13 ASC], sort14=[14 ASC], sort15=[15 ASC], sort16=[16 ASC], sort17=[17 ASC], sort18=[18 ASC], sort19=[19 ASC], sort20=[20 ASC], sort21=[21 ASC], sort22=[12 ASC], sort23=[22 ASC], sort24=[23 ASC], sort25=[24 ASC], sort26=[25 ASC], sort27=[26 ASC], sort28=[27 ASC], sort29=[28 ASC], sort30=[29 ASC], sort31=[30 ASC], sort32=[31 ASC], sort33=[32 ASC], sort34=[33 ASC], sort35=[34 ASC], sort36=[35 ASC], sort37=[36 ASC], sort38=[37 ASC], sort39=[38 ASC], sort40=[39 ASC], sort41=[40 ASC], sort42=[41 ASC], sort43=[42 ASC], sort44=[43 ASC], sort45=[44 ASC], sort46=[45 ASC], sort47=[46 ASC])
02-01                        SelectionVectorRemover
02-02                          Sort(sort0=[$1], sort1=[$2], sort2=[$3], sort3=[$4], sort4=[$5], sort5=[$6], sort6=[$7], sort7=[$8], sort8=[$9], sort9=[$10], sort10=[$11], sort11=[$12], sort12=[$9], sort13=[$13], sort14=[$14], sort15=[$15], sort16=[$16], sort17=[$17], sort18=[$18], sort19=[$19], sort20=[$20], sort21=[$21], sort22=[$12], sort23=[$22], sort24=[$23], sort25=[$24], sort26=[$25], sort27=[$26], sort28=[$27], sort29=[$28], sort30=[$29], sort31=[$30], sort32=[$31], sort33=[$32], sort34=[$33], sort35=[$34], sort36=[$35], sort37=[$36], sort38=[$37], sort39=[$38], sort40=[$39], sort41=[$40], sort42=[$41], sort43=[$42], sort44=[$43], sort45=[$44], sort46=[$45], sort47=[$46], dir0=[ASC], dir1=[ASC], dir2=[ASC], dir3=[ASC], dir4=[ASC], dir5=[ASC], dir6=[ASC], dir7=[ASC], dir8=[ASC], dir9=[ASC], dir10=[ASC], dir11=[ASC], dir12=[ASC], dir13=[ASC], dir14=[ASC], dir15=[ASC], dir16=[ASC], dir17=[ASC], dir18=[ASC], dir19=[ASC], dir20=[ASC], dir21=[ASC], dir22=[ASC], dir23=[ASC], dir24=[ASC], dir25=[ASC], dir26=[ASC], dir27=[ASC], dir28=[ASC], dir29=[ASC], dir30=[ASC], dir31=[ASC], dir32=[ASC], dir33=[ASC], dir34=[ASC], dir35=[ASC], dir36=[ASC], dir37=[ASC], dir38=[ASC], dir39=[ASC], dir40=[ASC], dir41=[ASC], dir42=[ASC], dir43=[ASC], dir44=[ASC], dir45=[ASC], dir46=[ASC], dir47=[ASC])
02-03                            Project(T1¦¦*=[$0], EXPR$1=[$1], EXPR$2=[$2], EXPR$3=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], EXPR$7=[$7], EXPR$8=[$8], EXPR$9=[$9], EXPR$10=[$10], EXPR$11=[$11], EXPR$12=[$12], EXPR$13=[$13], EXPR$14=[$14], EXPR$15=[$15], EXPR$16=[$16], EXPR$17=[$17], EXPR$18=[$18], EXPR$19=[$19], EXPR$20=[$20], EXPR$21=[$21], EXPR$22=[$22], EXPR$23=[$23], EXPR$24=[$24], EXPR$25=[$25], EXPR$26=[$26], EXPR$27=[$27], EXPR$28=[$28], EXPR$29=[$29], EXPR$30=[$30], EXPR$31=[$31], EXPR$32=[$32], EXPR$33=[$33], EXPR$34=[$34], EXPR$35=[$35], EXPR$36=[$36], EXPR$37=[$37], EXPR$38=[$38], EXPR$39=[$39], EXPR$40=[$40], EXPR$41=[$41], EXPR$42=[$42], EXPR$43=[$43], EXPR$44=[$44], EXPR$45=[$45], EXPR$46=[$46])
02-04                              HashToRandomExchange(dist0=[[$1]], dist1=[[$2]], dist2=[[$3]], dist3=[[$4]], dist4=[[$5]], dist5=[[$6]], dist6=[[$7]], dist7=[[$8]], dist8=[[$9]], dist9=[[$10]], dist10=[[$11]], dist11=[[$12]], dist12=[[$9]], dist13=[[$13]], dist14=[[$14]], dist15=[[$15]], dist16=[[$16]], dist17=[[$17]], dist18=[[$18]], dist19=[[$19]], dist20=[[$20]], dist21=[[$21]], dist22=[[$12]], dist23=[[$22]], dist24=[[$23]], dist25=[[$24]], dist26=[[$25]], dist27=[[$26]], dist28=[[$27]], dist29=[[$28]], dist30=[[$29]], dist31=[[$30]], dist32=[[$31]], dist33=[[$32]], dist34=[[$33]], dist35=[[$34]], dist36=[[$35]], dist37=[[$36]], dist38=[[$37]], dist39=[[$38]], dist40=[[$39]], dist41=[[$40]], dist42=[[$41]], dist43=[[$42]], dist44=[[$43]], dist45=[[$44]], dist46=[[$45]], dist47=[[$46]])
03-01                                UnorderedMuxExchange
04-01                                  Project(T1¦¦*=[$0], EXPR$1=[$1], EXPR$2=[$2], EXPR$3=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], EXPR$7=[$7], EXPR$8=[$8], EXPR$9=[$9], EXPR$10=[$10], EXPR$11=[$11], EXPR$12=[$12], EXPR$13=[$13], EXPR$14=[$14], EXPR$15=[$15], EXPR$16=[$16], EXPR$17=[$17], EXPR$18=[$18], EXPR$19=[$19], EXPR$20=[$20], EXPR$21=[$21], EXPR$22=[$22], EXPR$23=[$23], EXPR$24=[$24], EXPR$25=[$25], EXPR$26=[$26], EXPR$27=[$27],
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154837#comment-16154837 ] Robert Hou commented on DRILL-5670:
---
Sorry, found the "other" log. Here are the error messages and stack trace. Looks like it is in the merge phase:
{noformat}
2017-09-02 07:12:22,348 [26555749-4d36-10d2-6faf-e403db40c370:frag:3:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 26555749-4d36-10d2-6faf-e403db40c370:3:0: State to report: FINISHED
2017-09-02 07:12:22,348 [drill-executor-23] DEBUG o.a.d.exec.rpc.control.WorkEventBus - Removing fragment manager: 26555749-4d36-10d2-6faf-e403db40c370:3:0
2017-09-02 07:12:22,370 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.t.g.SingleBatchSorterGen68 - Took 3622 us to sort 1023 records
2017-09-02 07:12:22,370 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.w.batch.BaseRawBatchBuffer - Got last batch from 3:0
2017-09-02 07:12:22,370 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.w.batch.BaseRawBatchBuffer - Streams finished
2017-09-02 07:12:22,384 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.t.g.SingleBatchSorterGen68 - Took 1907 us to sort 529 records
2017-09-02 07:12:22,384 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Completed load phase: read 978 batches, spilled 194 times, total input bytes: 49162397056
2017-09-02 07:12:22,384 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Starting consolidate phase. Batches = 978, Records = 100, Memory = 378422096, In-memory batches 8, spilled runs 194
2017-09-02 07:22:09,943 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG o.a.d.e.p.i.x.managed.SpilledRuns - Starting merge phase. Runs = 62, Alloc. memory = 0
2017-09-02 07:22:17,568 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 16777216 (rounded from 15834000) due to memory limit. Current allocation: 525809920)
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
Unable to allocate buffer of size 16777216 (rounded from 15834000) due to memory limit. Current allocation: 525809920
[Error Id: 34b695f5-b41d-440a-b07e-7e11531f9419 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 16777216 (rounded from 15834000) due to memory limit. Current allocation: 525809920
at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.RepeatedVarCharVector.allocateNew(RepeatedVarCharVector.java:272) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:39) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:115) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:262) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at
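The "rounded from" figures in the error above are consistent with the allocator rounding each request up to the next power of two before checking it against the operator's memory limit: a 15,834,000-byte ask is charged as 16,777,216 bytes (16 MB), which no longer fits alongside the 525,809,920 bytes already allocated. A minimal sketch of that bookkeeping, with illustrative helper names (not Drill's actual BaseAllocator API) and a ~512 MB limit assumed for the example:

```java
// Sketch of power-of-two request rounding, as suggested by the
// "rounded from 15834000" -> 16777216 figures in the log above.
public class RoundingDemo {

    // Round a request up to the next power of two, buddy-allocator style.
    static long roundToPowerOfTwo(long size) {
        long highest = Long.highestOneBit(size);
        return (highest == size) ? size : highest << 1;
    }

    // Would the rounded request fit under the operator's memory limit?
    static boolean fits(long request, long currentAllocation, long limit) {
        return currentAllocation + roundToPowerOfTwo(request) <= limit;
    }

    public static void main(String[] args) {
        System.out.println(roundToPowerOfTwo(15834000L)); // 16777216, as logged
        // With 525809920 bytes already allocated and an assumed 512 MB
        // (536870912-byte) limit, the rounded 16 MB request cannot fit.
        System.out.println(fits(15834000L, 525809920L, 536870912L)); // false
    }
}
```

This is why a request that looks like it should fit by a few megabytes still fails: the limit check sees the rounded size, not the requested one.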
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: drillbit.log > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, > 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.log, drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > >
[jira] [Created] (DRILL-5778) Drill seems to run out of memory but completes execution
Robert Hou created DRILL-5778:
-
 Summary: Drill seems to run out of memory but completes execution
 Key: DRILL-5778
 URL: https://issues.apache.org/jira/browse/DRILL-5778
 Project: Apache Drill
 Issue Type: Bug
 Components: Execution - Relational Operators
 Affects Versions: 1.11.0
 Reporter: Robert Hou
 Assignee: Paul Rogers
 Fix For: 1.12.0

Query is:
{noformat}
ALTER SESSION SET `exec.sort.disable_managed` = false;
alter session set `planner.width.max_per_node` = 1;
alter session set `planner.disable_exchanges` = true;
alter session set `planner.width.max_per_query` = 1;
alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
select count(*) from (select * from (select id, flatten(str_list) str from dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by d.str) d1 where d1.id=0;
{noformat}
Plan is:
{noformat}
| 00-00    Screen
00-01      Project(EXPR$0=[$0])
00-02        StreamAgg(group=[{}], EXPR$0=[$SUM0($0)])
00-03          UnionExchange
01-01            StreamAgg(group=[{}], EXPR$0=[COUNT()])
01-02              Project($f0=[0])
01-03                SelectionVectorRemover
01-04                  Filter(condition=[=($0, 0)])
01-05                    SingleMergeExchange(sort0=[1 ASC])
02-01                      SelectionVectorRemover
02-02                        Sort(sort0=[$1], dir0=[ASC])
02-03                          Project(id=[$0], str=[$1])
02-04                            HashToRandomExchange(dist0=[[$1]])
03-01                              UnorderedMuxExchange
04-01                                Project(id=[$0], str=[$1], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)])
04-02                                  Flatten(flattenField=[$1])
04-03                                    Project(id=[$0], str=[$1])
04-04                                      Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, numFiles=1, columns=[`id`, `str_list`], files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]])
{noformat}
From drillbit.log:
{noformat}
2017-09-08 05:07:21,515 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Actual batch schema & sizes {
  str(type: REQUIRED VARCHAR, count: 4096, std size: 54, actual size: 134, data size: 548360)
  id(type: OPTIONAL BIGINT, count: 4096, std size: 8, actual size: 9, data size: 36864)
  Records: 4096, Total size: 1073819648, Data size: 585224, Gross row width: 262163, Net row width: 143, Density: 1}
2017-09-08 05:07:21,515 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] ERROR o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches. Incoming batch size: 1073819648, available memory: 2147483648
2017-09-08 05:07:21,517 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] INFO o.a.d.e.c.ClassCompilerSelector - Java compiler policy: DEFAULT, Debug option: true
2017-09-08 05:07:21,517 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.compile.JaninoClassCompiler - Compiling (source size=3.3 KiB): ...
2017-09-08 05:07:21,536 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.exec.compile.ClassTransformer - Compiled and merged SingleBatchSorterGen2677: bytecode size = 3.6 KiB, time = 19 ms.
2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.t.g.SingleBatchSorterGen2677 - Took 5608 us to sort 4096 records
2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Input Batch Estimates: record size = 143 bytes; net = 1073819648 bytes, gross = 1610729472, records = 4096
2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Spill batch size: net = 1048476 bytes, gross = 1572714 bytes, records = 7332; spill file = 268435456 bytes
2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Output batch size: net = 9371505 bytes, gross = 14057257 bytes, records = 65535
2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Available memory: 2147483648, buffer memory = 2143289744, merge memory = 2128740638
2017-09-08 05:07:21,571 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.t.g.SingleBatchSorterGen2677 - Took 4303 us to sort 4096 records
2017-09-08 05:07:21,571 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Input Batch Estimates: record size = 266 bytes; net = 1073819648 bytes, gross = 1610729472, records = 4096
2017-09-08 05:07:21,571 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Spill batch size: net = 1048572 bytes, gross = 1572858 bytes,
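The "Spill batch size" DEBUG lines above follow a simple pattern: the record count is the spill-batch byte target divided by the estimated record size (capped at the 65,535-record vector limit), and the "gross" figure is the net size plus a 50% overhead allowance. The 1 MB target and 1.5x factor are inferred from the logged numbers, not taken from Drill's source; a sketch of the arithmetic:

```java
// Reconstruction of the spill-batch sizing arithmetic visible in the
// ExternalSortBatch DEBUG output. Constants are inferred, not authoritative.
public class SpillBatchMath {

    static final long SPILL_BATCH_TARGET_BYTES = 1L << 20; // ~1 MB net target (assumed)
    static final int MAX_RECORDS_PER_BATCH = 65535;        // value-vector record cap

    // Records that fit into one spill batch at the estimated record size.
    static int spillBatchRecords(int recordSizeBytes) {
        long records = SPILL_BATCH_TARGET_BYTES / recordSizeBytes;
        return (int) Math.min(records, MAX_RECORDS_PER_BATCH);
    }

    // Gross size = net size plus a 50% allowance for internal fragmentation.
    static long gross(long netBytes) {
        return netBytes * 3 / 2;
    }

    public static void main(String[] args) {
        int records = spillBatchRecords(143);   // record size from the log
        long net = (long) records * 143;
        // Matches the log: 7332 records, net 1048476 bytes, gross 1572714 bytes.
        System.out.println(records + " records, net " + net
                + " bytes, gross " + gross(net) + " bytes");
    }
}
```

Running the same arithmetic with record size 143 reproduces the logged 7332 records / 1048476 net / 1572714 gross exactly, which is why the later log line with record size 266 shows different batch figures.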
[jira] [Updated] (DRILL-5778) Drill seems to run out of memory but completes execution
[ https://issues.apache.org/jira/browse/DRILL-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5778: -- Attachment: 264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0.sys.drill drillbit.log > Drill seems to run out of memory but completes execution > > > Key: DRILL-5778 > URL: https://issues.apache.org/jira/browse/DRILL-5778 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0.sys.drill, > drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > select count(*) from (select * from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by > d.str) d1 where d1.id=0; > {noformat} > Plan is: > {noformat} > | 00-00Screen > 00-01 Project(EXPR$0=[$0]) > 00-02StreamAgg(group=[{}], EXPR$0=[$SUM0($0)]) > 00-03 UnionExchange > 01-01StreamAgg(group=[{}], EXPR$0=[COUNT()]) > 01-02 Project($f0=[0]) > 01-03SelectionVectorRemover > 01-04 Filter(condition=[=($0, 0)]) > 01-05SingleMergeExchange(sort0=[1 ASC]) > 02-01 SelectionVectorRemover > 02-02Sort(sort0=[$1], dir0=[ASC]) > 02-03 Project(id=[$0], str=[$1]) > 02-04HashToRandomExchange(dist0=[[$1]]) > 03-01 UnorderedMuxExchange > 04-01Project(id=[$0], str=[$1], > E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)]) > 04-02 Flatten(flattenField=[$1]) > 04-03Project(id=[$0], str=[$1]) > 04-04 Scan(groupscan=[EasyGroupScan > [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, > numFiles=1, columns=[`id`, `str_list`], > 
files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]]) > {noformat} > From drillbit.log: > {noformat} > 2017-09-08 05:07:21,515 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.p.i.x.m.ExternalSortBatch - Actual batch schema & sizes { > str(type: REQUIRED VARCHAR, count: 4096, std size: 54, actual size: 134, > data size: 548360) > id(type: OPTIONAL BIGINT, count: 4096, std size: 8, actual size: 9, data > size: 36864) > Records: 4096, Total size: 1073819648, Data size: 585224, Gross row width: > 262163, Net row width: 143, Density: 1} > 2017-09-08 05:07:21,515 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] ERROR > o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches. > Incoming batch size: 1073819648, available memory: 2147483648 > 2017-09-08 05:07:21,517 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] INFO > o.a.d.e.c.ClassCompilerSelector - Java compiler policy: DEFAULT, Debug > option: true > 2017-09-08 05:07:21,517 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.compile.JaninoClassCompiler - Compiling (source size=3.3 KiB): > ... > 2017-09-08 05:07:21,536 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.exec.compile.ClassTransformer - Compiled and merged > SingleBatchSorterGen2677: bytecode size = 3.6 KiB, time = 19 ms. 
> 2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen2677 - Took 5608 us to sort 4096 records > 2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.p.i.x.m.ExternalSortBatch - Input Batch Estimates: record size = 143 > bytes; net = 1073819648 bytes, gross = 1610729472, records = 4096 > 2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.p.i.x.m.ExternalSortBatch - Spill batch size: net = 1048476 bytes, > gross = 1572714 bytes, records = 7332; spill file = 268435456 bytes > 2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.p.i.x.m.ExternalSortBatch - Output batch size: net = 9371505 bytes, > gross = 14057257 bytes, records = 65535 > 2017-09-08 05:07:21,566 [264d780f-41ac-2c4f-6bc8-bdbb5eeb3df0:frag:0:0] DEBUG > o.a.d.e.p.i.x.m.ExternalSortBatch - Available memory: 2147483648, buffer > memory = 2143289744, merge memory = 2128740638 > 2017-09-08 05:07:21,571
[jira] [Reopened] (DRILL-5443) Managed External Sort fails with OOM while spilling to disk
[ https://issues.apache.org/jira/browse/DRILL-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou reopened DRILL-5443:
---

> Managed External Sort fails with OOM while spilling to disk
> ---
>
> Key: DRILL-5443
> URL: https://issues.apache.org/jira/browse/DRILL-5443
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators
> Affects Versions: 1.10.0, 1.11.0
> Reporter: Rahul Challapalli
> Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 265a014b-8cae-30b5-adab-ff030b6c7086.sys.drill, 27016969-ef53-40dc-b582-eea25371fa1c.sys.drill, drill5443.drillbit.log, drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> The below query fails with an OOM
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 52428800;
> select s1.type type, flatten(s1.rms.rptd) rptds from (select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid) s1 order by s1.rms.mapid;
> {code}
> Exception from the logs
> {code}
> 2017-04-24 17:22:59,439 [27016969-ef53-40dc-b582-eea25371fa1c:frag:0:0] INFO o.a.d.e.p.i.x.m.ExternalSortBatch - User Error Occurred: External Sort encountered an error while spilling to disk (Unable to allocate buffer of size 524288 (rounded from 307197) due to memory limit. Current allocation: 25886728)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: External Sort encountered an error while spilling to disk
> [Error Id: a64e3790-3a34-42c8-b4ea-4cb1df780e63 ]
> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.doMergeAndSpill(ExternalSortBatch.java:1445) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:1376) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeRuns(ExternalSortBatch.java:1372) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.consolidateBatches(ExternalSortBatch.java:1299) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeSpilledRuns(ExternalSortBatch.java:1195) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:689) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16147966#comment-16147966 ] Robert Hou commented on DRILL-5670:
---
Here is the stack trace:
{noformat}
2017-08-23 06:46:00,154 [266290f3-5fdc-5873-7372-e9ee053bf867:frag:2:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 16777216 (rounded from 16512948) due to memory limit. Current allocation: 525809920)
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
Unable to allocate buffer of size 16777216 (rounded from 16512948) due to memory limit. Current allocation: 525809920
[Error Id: 716dde13-2de5-4c55-b37c-0de81c9b2564 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 16777216 (rounded from 16512948) due to memory limit. Current allocation: 525809920
at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.RepeatedVarCharVector.allocateNew(RepeatedVarCharVector.java:272) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:39) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:115) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:262) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:374) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:105) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at
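The trace bottoms out in VarChar vector allocation, which generally needs two buffers: a data buffer sized from a per-value width estimate, and an offset buffer with one 4-byte entry per value plus a terminator. A rough, hypothetical sketch of why a full batch of wide VARCHAR values demands a multi-megabyte contiguous buffer (widths and counts here are illustrative; this is not Drill's vector code):

```java
// Rough sketch of variable-width vector sizing: a VarChar-style vector
// carries a data buffer (total bytes of all values) plus an offset buffer
// with (count + 1) 4-byte entries. Width and count below are illustrative.
public class VarCharAllocSketch {

    // Total data bytes needed for valueCount values at an average width.
    static long dataBufferBytes(int valueCount, int avgWidthBytes) {
        return (long) valueCount * avgWidthBytes;
    }

    // Offset buffer: one 4-byte offset per value, plus one trailing entry.
    static long offsetBufferBytes(int valueCount) {
        return (long) (valueCount + 1) * 4;
    }

    public static void main(String[] args) {
        // 65535 values averaging ~252 bytes already needs a ~16 MB data
        // buffer, which the allocator must find as one contiguous block.
        long data = dataBufferBytes(65535, 252);
        long offsets = offsetBufferBytes(65535);
        System.out.println(data + " data bytes + " + offsets + " offset bytes");
    }
}
```

Under this model, trimming either the batch row count or the per-value width estimate shrinks the single large request, which is the general lever the sort's batch sizing works with.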
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: (was: drillbit.log) > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, > drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf'; > Error: RESOURCE ERROR: External Sort
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: (was: 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill) > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, > drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf'; >
[jira] [Closed] (DRILL-5234) External sort's spilling functionality does not work when the spilled columns contains a map type column
[ https://issues.apache.org/jira/browse/DRILL-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou closed DRILL-5234. - This bug has been fixed and verified. > External sort's spilling functionality does not work when the spilled columns > contains a map type column > > > Key: DRILL-5234 > URL: https://issues.apache.org/jira/browse/DRILL-5234 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Reporter: Rahul Challapalli >Assignee: Robert Hou >Priority: Critical > Labels: ready-to-commit > Fix For: 1.11.0 > > Attachments: 27703303-c436-7d47-9b1c-1381e16c6b02.sys.drill, > drillbit.log > > > Env : > {code} > git.commit.id.abbrev=2af709f > No of nodes : 1 > DRILL_MAX_DIRECT_MEMORY="32G" > DRILL_MAX_HEAP="4G" > Data Size : ~250 MB > {code} > The below query results in an assertion error > {code} > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 52428800; > select * from (select d1.type, d1.evnt, d1.transaction from (select d.type > type, flatten(d.events) evnt, flatten(d.transactions) transaction from > dfs.`/drill/testdata/resource-manager/10rows/data.json` d) d1 order by > d1.evnt.event_time, d1.transaction.trans_time) d2 where d2.type='web' and > d2.evnt.type = 'cmpgn4'; > {code} > Error from the logs : > {code} > 2017-01-30 15:33:51,137 [27703303-c436-7d47-9b1c-1381e16c6b02:frag:0:0] ERROR > o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: AssertionError > Fragment 0:0 > [Error Id: 789d14bb-875b-4106-aad9-e665b8c3b7f1 on qa-node190.qa.lab:31010] > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: AssertionError > Fragment 0:0 > [Error Id: 789d14bb-875b-4106-aad9-e665b8c3b7f1 on qa-node190.qa.lab:31010] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) > ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_111] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_111] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] > Caused by: java.lang.RuntimeException: java.lang.AssertionError > at > org.apache.drill.common.DeferredException.addThrowable(DeferredException.java:101) > ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.fail(FragmentExecutor.java:407) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:248) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > ... 
4 common frames omitted > Caused by: java.lang.AssertionError: null > at > org.apache.drill.exec.vector.complex.MapVector.load(MapVector.java:280) > ~[vector-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.cache.VectorAccessibleSerializable.readFromStream(VectorAccessibleSerializable.java:117) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.BatchGroup.getBatch(BatchGroup.java:111) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.BatchGroup.getNextIndex(BatchGroup.java:137) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.test.generated.PriorityQueueCopierGen11.next(PriorityQueueCopierTemplate.java:76) > ~[na:na] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:290) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at >
[jira] [Assigned] (DRILL-5234) External sort's spilling functionality does not work when the spilled columns contains a map type column
[ https://issues.apache.org/jira/browse/DRILL-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou reassigned DRILL-5234: - Assignee: Robert Hou (was: Paul Rogers) > External sort's spilling functionality does not work when the spilled columns > contains a map type column > > > Key: DRILL-5234 > URL: https://issues.apache.org/jira/browse/DRILL-5234 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Reporter: Rahul Challapalli >Assignee: Robert Hou >Priority: Critical > Labels: ready-to-commit > Fix For: 1.11.0 > > Attachments: 27703303-c436-7d47-9b1c-1381e16c6b02.sys.drill, > drillbit.log > > > Env : > {code} > git.commit.id.abbrev=2af709f > No of nodes : 1 > DRILL_MAX_DIRECT_MEMORY="32G" > DRILL_MAX_HEAP="4G" > Data Size : ~250 MB > {code} > The below query results in an assertion error > {code} > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 52428800; > select * from (select d1.type, d1.evnt, d1.transaction from (select d.type > type, flatten(d.events) evnt, flatten(d.transactions) transaction from > dfs.`/drill/testdata/resource-manager/10rows/data.json` d) d1 order by > d1.evnt.event_time, d1.transaction.trans_time) d2 where d2.type='web' and > d2.evnt.type = 'cmpgn4'; > {code} > Error from the logs : > {code} > 2017-01-30 15:33:51,137 [27703303-c436-7d47-9b1c-1381e16c6b02:frag:0:0] ERROR > o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: AssertionError > Fragment 0:0 > [Error Id: 789d14bb-875b-4106-aad9-e665b8c3b7f1 on qa-node190.qa.lab:31010] > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: AssertionError > Fragment 0:0 > [Error Id: 789d14bb-875b-4106-aad9-e665b8c3b7f1 on qa-node190.qa.lab:31010] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) > ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_111] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_111] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] > Caused by: java.lang.RuntimeException: java.lang.AssertionError > at > org.apache.drill.common.DeferredException.addThrowable(DeferredException.java:101) > ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.fail(FragmentExecutor.java:407) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:248) > [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > ... 
4 common frames omitted > Caused by: java.lang.AssertionError: null > at > org.apache.drill.exec.vector.complex.MapVector.load(MapVector.java:280) > ~[vector-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.cache.VectorAccessibleSerializable.readFromStream(VectorAccessibleSerializable.java:117) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.BatchGroup.getBatch(BatchGroup.java:111) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.BatchGroup.getNextIndex(BatchGroup.java:137) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.test.generated.PriorityQueueCopierGen11.next(PriorityQueueCopierTemplate.java:76) > ~[na:na] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:290) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT] > at >
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill drillbit.log > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > >
[jira] [Commented] (DRILL-5754) Test framework does not enforce column orders
[ https://issues.apache.org/jira/browse/DRILL-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148305#comment-16148305 ] Robert Hou commented on DRILL-5754: --- The Drill QA test framework enforces column order. So an interim solution could be to make sure desired tests are incorporated in the QA test suites. > Test framework does not enforce column orders > - > > Key: DRILL-5754 > URL: https://issues.apache.org/jira/browse/DRILL-5754 > Project: Apache Drill > Issue Type: Bug >Reporter: Jinfeng Ni > > Drill has provided a test framework to submit SQL statements and verify the > query results against expected results. For instance > {code} > final String query = "select n_nationkey, n_regionkey from > cp.`tpch/nation.parquet` where n_nationkey = 5 and n_regionkey = 0"; > testBuilder() > .sqlQuery(query) > .unOrdered() > .baselineColumns("n_nationkey", "n_regionkey") > .baselineValues(5, 0) > .build() > .run(); > {code} > However, it seems that the test framework only do result match based on > column name, without enforcing the column order in the output result set. The > missing of column order verification may be different from what people > typically expect, and hide some code bugs. > The following test specify the expected output columns in a reverse order. > However, the current test framework would still pass the test. > {code} > final String query = "select n_nationkey, n_regionkey from > cp.`tpch/nation.parquet` where n_nationkey = 5 and n_regionkey = 0"; > testBuilder() > .sqlQuery(query) > .unOrdered() > .baselineColumns("n_regionkey", "n_nationkey") > .baselineValues(0, 5) > .build() > .run(); > {code} > For now, to check the column order in query output, people should use > SchemaTestBuilder. The problem is SchemaTestBuilder only allows to verify > schema, without allowing to specify base line values. This means people has > to write two tests if they want to verify schema & values. 
> {code}
> final List<Pair<SchemaPath, TypeProtos.MajorType>> expectedSchema = Lists.newArrayList(
>     Pair.of(SchemaPath.getSimplePath("n_nationkey"), Types.required(TypeProtos.MinorType.INT)),
>     Pair.of(SchemaPath.getSimplePath("n_regionkey"), Types.required(TypeProtos.MinorType.INT)));
> testBuilder()
>     .sqlQuery(query)
>     .schemaBaseLine(expectedSchema)
>     .go();
> {code}
> This JIRA is opened to ask that the test framework be enhanced to enforce
> column order as well.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
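[Editor's note] The order-sensitive check DRILL-5754 asks for can be sketched as a small stand-alone helper. This is a hypothetical illustration (`ColumnOrderCheck.sameOrder` is not part of Drill's TestBuilder API): a name-set match accepts reordered columns, while positional equality rejects them.

```java
import java.util.Arrays;
import java.util.List;

public class ColumnOrderCheck {
    // Hypothetical helper: true only when the actual columns match the
    // expected columns in the exact same order. Comparing the lists
    // positionally (List.equals) is what enforces order; matching by
    // column name alone would accept any permutation.
    public static boolean sameOrder(List<String> expected, List<String> actual) {
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        List<String> expected = Arrays.asList("n_nationkey", "n_regionkey");
        // Same names, correct order: passes.
        System.out.println(sameOrder(expected, Arrays.asList("n_nationkey", "n_regionkey"))); // true
        // Same names, reversed order: a name-based match would still pass,
        // an order-sensitive match correctly fails.
        System.out.println(sameOrder(expected, Arrays.asList("n_regionkey", "n_nationkey"))); // false
    }
}
```

A framework fix along these lines would compare the baseline column list against the result schema positionally before comparing values.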
[jira] [Assigned] (DRILL-5617) Spill file name collisions when spill file is on a shared file system
[ https://issues.apache.org/jira/browse/DRILL-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou reassigned DRILL-5617: - Assignee: Chun Chang (was: Paul Rogers) > Spill file name collisions when spill file is on a shared file system > - > > Key: DRILL-5617 > URL: https://issues.apache.org/jira/browse/DRILL-5617 > Project: Apache Drill > Issue Type: Bug > Components: Functions - Drill >Affects Versions: 1.11.0 >Reporter: Chun Chang >Assignee: Chun Chang > Fix For: 1.12.0 > > > Spill location can be configured to be written on hdfs such as: > hashagg: { > # The partitions divide the work inside the hashagg, to ease > # handling spilling. This initial figure is tuned down when > # memory is limited. > # Setting this option to 1 disables spilling ! > num_partitions: 32, > spill: { > # The 2 options below override the common ones > # they should be deprecated in the future > directories : [ "/tmp/drill/spill" ], > fs : "maprfs:///" > } > } > However, this could cause spill filename conflict since name convention does > not contain node name. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
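[Editor's note] The collision DRILL-5617 describes goes away once the spill path embeds the node's identity. The sketch below is illustrative, not Drill's actual naming code; the scheme (host, port, query id, operator, spill number) is an assumption modeled on spill paths like `qa-node190.qa.lab-31010_..._Sort_0-5-0/spill1` seen elsewhere in this thread.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SpillFileNames {
    // Illustrative only: compose a spill path that embeds the node's host
    // name and port, so two drillbits spilling to a shared file system
    // (e.g. maprfs:///) can never pick the same path even for the same
    // query, operator, and spill number.
    public static String spillFileName(String host, int port, String queryId,
                                       String opName, int spillNum) {
        return String.format("%s-%d_%s_%s/spill%d", host, port, queryId, opName, spillNum);
    }

    public static void main(String[] args) throws UnknownHostException {
        String host = InetAddress.getLocalHost().getHostName();
        // Output varies by machine, since the local host name is embedded.
        System.out.println(spillFileName(host, 31010, "2644807c-be7b", "Sort_0-5-0", 1));
    }
}
```

With host and port in the name, uniqueness holds per drillbit without any cross-node coordination.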
[jira] [Commented] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148064#comment-16148064 ] Robert Hou commented on DRILL-5753:
---
It is starting the consolidation phase.
2017-08-30 03:35:10,223 [26596b4e-9883-7dc2-6275-37134f7d63be:frag:2:17] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Completed load phase: read 392 batches, spilled 2 times, total input bytes: 103303184
2017-08-30 03:35:10,223 [26596b4e-9883-7dc2-6275-37134f7d63be:frag:2:17] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Starting consolidate phase. Batches = 392, Records = 40, Memory = 28272656, In-memory batches 108, spilled runs 2
> Managed External Sort: One or more nodes ran out of memory while executing
> the query.
> -
>
> Key: DRILL-5753
> URL: https://issues.apache.org/jira/browse/DRILL-5753
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators
> Affects Versions: 1.11.0
> Reporter: Robert Hou
> Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> The query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.memory.max_query_memory_per_node` = 1252428800;
> select count(*) from (
>   select * from (
>     select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid
>     from (
>       select d.type type, d.uid uid, flatten(d.map.rm) rms from
>       dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
>     ) s1
>   ) s2
>   order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> The stack trace is:
> {noformat}
> 2017-08-30 03:35:10,479 [BitServer-5] DEBUG
> o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be:
> State change requested RUNNING --> FAILED
> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One
> or more nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 due to memory limit. Current
> allocation: 43960640
> Fragment 2:9
> [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010]
> (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate
> buffer of size 4194304 due to memory limit. Current allocation: 43960640
> org.apache.drill.exec.memory.BaseAllocator.buffer():238
> org.apache.drill.exec.memory.BaseAllocator.buffer():213
> org.apache.drill.exec.vector.BigIntVector.reAlloc():252
> org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452
> org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355
> org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220
> org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77
> org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
>
[jira] [Created] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
Robert Hou created DRILL-5753: - Summary: Managed External Sort: One or more nodes ran out of memory while executing the query. Key: DRILL-5753 URL: https://issues.apache.org/jira/browse/DRILL-5753 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.11.0 Reporter: Robert Hou Assignee: Paul Rogers Fix For: 1.12.0 The query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.memory.max_query_memory_per_node` = 1252428800; select count(*) from ( select * from ( select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid from ( select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid ) s1 ) s2 order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist ); ALTER SESSION SET `exec.sort.disable_managed` = true; alter session set `planner.memory.max_query_memory_per_node` = 2147483648; {noformat} The stack trace is: {noformat} 2017-08-30 03:35:10,479 [BitServer-5] DEBUG o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: State change requested RUNNING --> FAILED org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query. Unable to allocate buffer of size 4194304 due to memory limit. Current allocation: 43960640 Fragment 2:9 [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010] (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate buffer of size 4194304 due to memory limit. 
Current allocation: 43960640 org.apache.drill.exec.memory.BaseAllocator.buffer():238 org.apache.drill.exec.memory.BaseAllocator.buffer():213 org.apache.drill.exec.vector.BigIntVector.reAlloc():252 org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452 org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355 org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220 org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202 org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82 org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47 org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77 org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.physical.impl.BaseRootExec.next():105 org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92 org.apache.drill.exec.physical.impl.BaseRootExec.next():95 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():415 
org.apache.hadoop.security.UserGroupInformation.doAs():1595 org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 org.apache.drill.common.SelfCleaningRunnable.run():38 java.util.concurrent.ThreadPoolExecutor.runWorker():1145 java.util.concurrent.ThreadPoolExecutor$Worker.run():615 java.lang.Thread.run():744 at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:94) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:55) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:157) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
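The failure above follows the allocator's standard pattern: each fragment's allocator enforces a fixed byte limit, and a request fails as soon as the current allocation plus the requested buffer would exceed it. The sketch below is a toy model of that bookkeeping, not Drill's actual BaseAllocator; the limit value used is a hypothetical illustration, since the report does not state the fragment's configured limit.

```python
# Toy model of a limit-enforcing allocator, illustrating why the error
# "Unable to allocate buffer of size 4194304 due to memory limit.
#  Current allocation: 43960640" is raised.
class OutOfMemory(Exception):
    pass

class LimitAllocator:
    def __init__(self, limit):
        self.limit = limit      # hard cap for this fragment (hypothetical here)
        self.allocated = 0      # bytes currently allocated

    def buffer(self, size):
        # Refuse the request if it would push the total past the limit.
        if self.allocated + size > self.limit:
            raise OutOfMemory(
                f"Unable to allocate buffer of size {size} due to memory limit. "
                f"Current allocation: {self.allocated}")
        self.allocated += size
        return size

alloc = LimitAllocator(limit=44_000_000)   # assumed limit, for illustration only
alloc.allocated = 43_960_640               # allocation state from the stack trace
try:
    alloc.buffer(4_194_304)                # the 4 MB request that failed
except OutOfMemory as e:
    print(e)
```

Under any limit below current allocation plus 4 MB, the 4194304-byte request must fail, which is the situation the stack trace reports.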
[jira] [Commented] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16148063#comment-16148063 ] Robert Hou commented on DRILL-5753: --- Note that multiple fragments are executing. > Managed External Sort: One or more nodes ran out of memory while executing > the query. > - > > Key: DRILL-5753 > URL: https://issues.apache.org/jira/browse/DRILL-5753 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > The query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.memory.max_query_memory_per_node` = 1252428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > The stack trace is: > {noformat} > 2017-08-30 03:35:10,479 [BitServer-5] DEBUG > o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: > State change requested RUNNING --> FAILED > org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One > or more nodes ran out of memory while executing the query. > Unable to allocate buffer of size 4194304 due to memory limit. Current > allocation: 43960640 > Fragment 2:9 > [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010] > (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate > buffer of size 4194304 due to memory limit. 
Current allocation: 43960640 > org.apache.drill.exec.memory.BaseAllocator.buffer():238 > org.apache.drill.exec.memory.BaseAllocator.buffer():213 > org.apache.drill.exec.vector.BigIntVector.reAlloc():252 > org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452 > org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355 > org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220 > > org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82 > > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47 > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77 > > org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.physical.impl.BaseRootExec.next():105 > > org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92 > org.apache.drill.exec.physical.impl.BaseRootExec.next():95 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 > java.security.AccessController.doPrivileged():-2 > 
javax.security.auth.Subject.doAs():415 > org.apache.hadoop.security.UserGroupInformation.doAs():1595 > org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 > org.apache.drill.common.SelfCleaningRunnable.run():38 > java.util.concurrent.ThreadPoolExecutor.runWorker():1145 > java.util.concurrent.ThreadPoolExecutor$Worker.run():615 > java.lang.Thread.run():744 > at > org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154347#comment-16154347 ] Robert Hou commented on DRILL-5670: --- Attached log and profile. > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, > 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > >
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: drillbit.log 26555749-4d36-10d2-6faf-e403db40c370.sys.drill > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, > 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > >
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154344#comment-16154344 ] Robert Hou commented on DRILL-5670: --- I still see a memory allocation problem, and I do not see warnings that suggest a misconfiguration issue. Stack trace is: {noformat} 2017-09-02 07:22:17,842 [BitServer-7] DEBUG o.a.drill.exec.work.foreman.Foreman - 26555749-4d36-10d2-6faf-e403db40c370: State change requested RUNNING --> FAILED org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query. Unable to allocate buffer of size 16777216 (rounded from 15834000) due to memory limit. Current allocation: 525809920 Fragment 2:0 [Error Id: 34b695f5-b41d-440a-b07e-7e11531f9419 on atsqa6c86.qa.lab:31010] (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate buffer of size 16777216 (rounded from 15834000) due to memory limit. Current allocation: 525809920 org.apache.drill.exec.memory.BaseAllocator.buffer():238 org.apache.drill.exec.memory.BaseAllocator.buffer():213 org.apache.drill.exec.vector.VarCharVector.allocateNew():402 org.apache.drill.exec.vector.RepeatedVarCharVector.allocateNew():272 org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount():39 org.apache.drill.exec.vector.AllocationHelper.allocate():46 org.apache.drill.exec.record.VectorInitializer.allocateVector():115 org.apache.drill.exec.record.VectorInitializer.allocateVector():95 org.apache.drill.exec.record.VectorInitializer.allocateBatch():85 org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():262 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374 org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.record.AbstractRecordBatch.next():119 
org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 org.apache.drill.exec.record.AbstractRecordBatch.next():164 org.apache.drill.exec.physical.impl.BaseRootExec.next():105 org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92 org.apache.drill.exec.physical.impl.BaseRootExec.next():95 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():415 org.apache.hadoop.security.UserGroupInformation.doAs():1595 org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 org.apache.drill.common.SelfCleaningRunnable.run():38 java.util.concurrent.ThreadPoolExecutor.runWorker():1145 java.util.concurrent.ThreadPoolExecutor$Worker.run():615 java.lang.Thread.run():744 at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:94) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:55) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:157) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:53) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:244) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) [netty-codec-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.handler.timeout.ReadTimeoutHandler.channelRead(ReadTimeoutHandler.java:150) [netty-handler-4.0.27.Final.jar:4.0.27.Final] at
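The error above says the 15834000-byte request was "rounded from" to 16777216 bytes, i.e. rounded up to the next power of two, so an allocation can consume noticeably more of the memory budget than the data it holds. A quick sketch of that rounding (assuming plain next-power-of-two behavior, which is what the two numbers in the message suggest; this is not Drill's allocator code):

```python
def round_to_power_of_two(size: int) -> int:
    """Round a requested buffer size up to the next power of two."""
    if size <= 0:
        raise ValueError("size must be positive")
    return 1 << (size - 1).bit_length()

requested = 15_834_000                      # size requested in the error message
granted = round_to_power_of_two(requested)
print(granted)                              # 16777216, matching "rounded from"
overhead = granted - requested              # budget consumed beyond the data
print(overhead)                             # 943216 bytes of slack
```

The practical effect is that a batch sized just over a power-of-two boundary nearly doubles its memory charge, which matters when the sort is running close to its limit.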
[jira] [Commented] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector
[ https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157560#comment-16157560 ] Robert Hou commented on DRILL-5447:
---
I re-ran this case with the fix for DRILL-5443. I am getting the same error as before. I have some questions after looking at the numbers more closely.

This query does a flatten on a large table. The result is 160M records. Half the records have a one-byte string, and half have a 253-byte string. And then there are 40K records with 223-byte strings.
{noformat}
select length(str), count(*) from (select id, flatten(str_list) str from dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) group by length(str);
+---------+-----------+
| EXPR$0  | EXPR$1    |
+---------+-----------+
| 223     | 4         |
| 1       | 80042001  |
| 253     | 8000      |
+---------+-----------+
{noformat}
From the logs, we see this:
{noformat}
2017-09-02 11:43:44,596 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] DEBUG o.a.d.exec.compile.ClassTransformer - Compiled and merged CopierGen1889: bytecode size = 3.3 KiB, time = 10 ms.
2017-09-02 11:43:44,596 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] DEBUG o.a.d.e.p.i.s.RemovingRecordBatch - doWork(): 0 records copied out of 0, remaining: 0, incoming schema BatchSchema [fields=[str(VARCHAR:REQUIRED), id(BIGINT:OPTIONAL)], selectionVector=TWO_BYTE]
2017-09-02 11:43:44,596 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] DEBUG o.a.d.e.p.i.p.ProjectRecordBatch - Added eval for project expression.
2017-09-02 11:43:44,597 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] DEBUG o.a.d.e.p.i.a.StreamingAggBatch - Creating new aggregator.
2017-09-02 11:43:44,598 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] DEBUG o.a.d.e.p.i.x.m.ExternalSortBatch - Actual batch schema & sizes {
  str(type: REQUIRED VARCHAR, count: 4096, std size: 54, actual size: 134, data size: 548360)
  id(type: OPTIONAL BIGINT, count: 4096, std size: 8, actual size: 9, data size: 36864)
  Records: 4096, Total size: 1073819648, Data size: 585224, Gross row width: 262163, Net row width: 143, Density: 1}
2017-09-02 11:43:44,598 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] ERROR o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches. Incoming batch size: 1073819648, available memory: 268435456
2017-09-02 11:43:44,600 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] INFO o.a.d.e.p.i.x.m.BufferedBatches - User Error Occurred: Unable to allocate sv2 buffer (Unable to allocate sv2 buffer)
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Unable to allocate sv2 buffer
{noformat}
If the largest string is 253 bytes, then adding an id gives a row of 257 or 261 bytes (4-byte or 8-byte integer), and 64K records should give a batch size of about 17104896 bytes. The batch size reported above, 1073819648 bytes, is more than 60x larger.
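The arithmetic in the comment above can be checked directly against the numbers in the log. This is a quick verification sketch, not Drill code:

```python
# Expected batch size from the data itself: 64K rows of at most a
# 253-byte string plus an 8-byte id (the worst case cited above).
rows = 64 * 1024
row_width = 253 + 8                  # largest string + 8-byte id
expected = rows * row_width
print(expected)                      # 17104896 bytes, as stated in the comment

# Sizes actually reported by ExternalSortBatch for a 4096-row batch.
reported_total = 1_073_819_648
reported_data = 585_224

print(reported_total / expected)             # ~62.8: over 60x the estimate
print(100 * reported_data / reported_total)  # ~0.05% of the batch holds data
```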
Here is the stack trace: {noformat} [Error Id: 33687160-5aa7-4b13-a6b7-93554a55af5f ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.newSV2(BufferedBatches.java:157) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.makeSelectionVector(BufferedBatches.java:142) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add(BufferedBatches.java:97) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch(SortImpl.java:265) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:422) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:358) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
[jira] [Commented] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector
[ https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157563#comment-16157563 ] Robert Hou commented on DRILL-5447:
---
Here is the plan:
{noformat}
00-00  Screen
00-01    Project(EXPR$0=[$0])
00-02      StreamAgg(group=[{}], EXPR$0=[$SUM0($0)])
00-03        UnionExchange
01-01          StreamAgg(group=[{}], EXPR$0=[COUNT()])
01-02            Project($f0=[0])
01-03              SelectionVectorRemover
01-04                Filter(condition=[=($0, 0)])
01-05                  SingleMergeExchange(sort0=[1 ASC])
02-01                    SelectionVectorRemover
02-02                      Sort(sort0=[$1], dir0=[ASC])
02-03                        Project(id=[$0], str=[$1])
02-04                          HashToRandomExchange(dist0=[[$1]])
03-01                            UnorderedMuxExchange
04-01                              Project(id=[$0], str=[$1], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)])
04-02                                Flatten(flattenField=[$1])
04-03                                  Project(id=[$0], str=[$1])
04-04                                    Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, numFiles=1, columns=[`id`, `str_list`], files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]])
{noformat}

> Managed External Sort : Unable to allocate sv2 vector
> -----
>
> Key: DRILL-5447
> URL: https://issues.apache.org/jira/browse/DRILL-5447
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators
> Affects Versions: 1.10.0
> Reporter: Rahul Challapalli
> Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26617a7e-b953-7ac3-556d-43fd88e51b19.sys.drill,
> 26fee988-ed18-a86a-7164-3e75118c0ffc.sys.drill, drillbit.log, drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> Dataset :
> {code}
> Every record contains a repeated type with 2000 elements.
> The repeated type contains varchars of length 250 for the first 2000 records
> and single character strings for the next 2000 records
> The above pattern is repeated a few times
> {code}
> The below query fails
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> select count(*) from (select * from (select id, flatten(str_list) str from
> dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by
> d.str) d1 where d1.id=0;
> Error: RESOURCE ERROR: Unable to allocate sv2 buffer
> Fragment 0:0
> [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 on qa-node190.qa.lab:31010]
> (state=,code=0)
> {code}
> Exception from the logs
> {code}
> [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 ]
> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.newSV2(ExternalSortBatch.java:1463) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.makeSelectionVector(ExternalSortBatch.java:799) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.processBatch(ExternalSortBatch.java:856) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:618) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:660) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559)
> [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) >
[jira] [Updated] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector
[ https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5447:
---
Attachment: 26550427-6adf-a52e-2ea8-dc52d8d8433f.sys.drill
            drillbit.log

> Managed External Sort : Unable to allocate sv2 vector
> -----
>
> Key: DRILL-5447
> URL: https://issues.apache.org/jira/browse/DRILL-5447
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators
> Affects Versions: 1.10.0
> Reporter: Rahul Challapalli
> Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26550427-6adf-a52e-2ea8-dc52d8d8433f.sys.drill,
> 26617a7e-b953-7ac3-556d-43fd88e51b19.sys.drill,
> 26fee988-ed18-a86a-7164-3e75118c0ffc.sys.drill, drillbit.log, drillbit.log,
> drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> Dataset :
> {code}
> Every record contains a repeated type with 2000 elements.
> The repeated type contains varchars of length 250 for the first 2000 records
> and single character strings for the next 2000 records
> The above pattern is repeated a few times
> {code}
> The below query fails
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> select count(*) from (select * from (select id, flatten(str_list) str from
> dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by
> d.str) d1 where d1.id=0;
> Error: RESOURCE ERROR: Unable to allocate sv2 buffer
> Fragment 0:0
> [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 on qa-node190.qa.lab:31010]
> (state=,code=0)
> {code}
> Exception from the logs
> {code}
> [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 ]
> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at
org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.newSV2(ExternalSortBatch.java:1463) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.makeSelectionVector(ExternalSortBatch.java:799) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.processBatch(ExternalSortBatch.java:856) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:618) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:660) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at >
[jira] [Commented] (DRILL-5774) Excessive memory allocation
[ https://issues.apache.org/jira/browse/DRILL-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157746#comment-16157746 ] Robert Hou commented on DRILL-5774:
---
Here is the plan:
{noformat}
00-00    Screen
00-01      Project(EXPR$0=[$0])
00-02        StreamAgg(group=[{}], EXPR$0=[$SUM0($0)])
00-03          UnionExchange
01-01            StreamAgg(group=[{}], EXPR$0=[COUNT()])
01-02              Project($f0=[0])
01-03                SelectionVectorRemover
01-04                  Filter(condition=[=($0, 0)])
01-05                    SingleMergeExchange(sort0=[1 ASC])
02-01                      SelectionVectorRemover
02-02                        Sort(sort0=[$1], dir0=[ASC])
02-03                          Project(id=[$0], str=[$1])
02-04                            HashToRandomExchange(dist0=[[$1]])
03-01                              UnorderedMuxExchange
04-01                                Project(id=[$0], str=[$1], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, 1301011)])
04-02                                  Flatten(flattenField=[$1])
04-03                                    Project(id=[$0], str=[$1])
04-04                                      Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/resource-manager/flatten-large-small.json, numFiles=1, columns=[`id`, `str_list`], files=[maprfs:///drill/testdata/resource-manager/flatten-large-small.json]]])
{noformat}
One of the operators between the Scan and the Sort allocated the extra memory for the batch. Flatten is likely a good candidate to look at.
> Excessive memory allocation > --- > > Key: DRILL-5774 > URL: https://issues.apache.org/jira/browse/DRILL-5774 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > This query exhibits excessive memory allocation: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > select count(*) from (select * from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by > d.str) d1 where d1.id=0; > {noformat} > This query does a flatten on a large table. The result is 160M records. > Half the records have a one-byte string, and half have a 253-byte string. > And then there are 40K records with 223 byte strings. > {noformat} > select length(str), count(*) from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) group by > length(str); > +-+---+ > | EXPR$0 | EXPR$1 | > +-+---+ > | 223 | 4 | > | 1 | 80042001 | > | 253 | 8000 | > {noformat} > From the drillbit.log: > {noformat} > 2017-09-02 11:43:44,598 [26550427-6adf-a52e-2ea8-dc52d8d8433f:frag:0:0] DEBUG > o.a.d.e.p.i.x.m.ExternalSortBatch - Actual batch schema & sizes { > str(type: REQUIRED VARCHAR, count: 4096, std size: 54, actual size: 134, > data size: 548360) > id(type: OPTIONAL BIGINT, count: 4096, std size: 8, actual size: 9, data > size: 36864) > Records: 4096, Total size: 1073819648, Data size: 585224, Gross row width: > 262163, Net row width: 143, Density: 1} > {noformat} > The data size is 585K, but the batch size is 1 GB. The density is 1%. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
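The sizes in that ExternalSortBatch log line can be cross-checked directly (a standalone back-of-the-envelope calculation, not Drill code; all figures are copied from the log excerpt above):

```python
# Figures from the "Actual batch schema & sizes" log line quoted above.
data_size = 585_224          # bytes of actual row data in the batch
total_size = 1_073_819_648   # bytes allocated for the batch (~1 GB)
records = 4_096

density = 100.0 * data_size / total_size    # percent of the allocation holding data
gross_row_width = total_size // records     # allocated bytes per record
net_row_width = round(data_size / records)  # useful bytes per record

print(f"density {density:.3f}%, gross {gross_row_width}, net {net_row_width}")
```

The per-record widths reproduce the logged "Gross row width: 262163" and "Net row width: 143", and the raw ratio comes out around 0.05%: the allocation is roughly 1,800 times larger than the data it holds, which is the excess the issue title refers to.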
[jira] [Updated] (DRILL-5786) Query encounters Exception in RPC communication during Sort
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5786: -- Summary: Query encounters Exception in RPC communication during Sort (was: Query encounters Exception in RPC communication) > Query encounters Exception in RPC communication during Sort > --- > > Key: DRILL-5786 > URL: https://issues.apache.org/jira/browse/DRILL-5786 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2647d2b0-69bf-5a2b-0e23-81e8d49e464e.sys.drill, > drillbit.log > > > Query is: > {noformat} > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf' > {noformat} > This is the same query as DRILL-5670 but no session variables are set. > Here is the stack trace: > {noformat} > 2017-09-12 13:14:57,584 [BitServer-5] ERROR > o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. > Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). > Closing connection. > io.netty.handler.codec.DecoderException: > org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating > buffer. 
> at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) > ~[netty-codec-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > [netty-transport-4.0.27.Final.jar:4.0.27.Final] > at > io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > [netty-common-4.0.27.Final.jar:4.0.27.Final] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] > Caused by: 
org.apache.drill.exec.exception.OutOfMemoryException: Failure > allocating buffer. > at > io.netty.buffer.PooledByteBufAllocatorL.allocate(PooledByteBufAllocatorL.java:64) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final] > at > org.apache.drill.exec.memory.AllocationManager.(AllocationManager.java:81) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:260) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:243) >
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164244#comment-16164244 ] Robert Hou commented on DRILL-5670: --- attached drillbit.log.exchange and profile > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, > 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, > 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.log, drillbit.log, drillbit.log, drillbit.log.sort, drillbit.out, > drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], >
[jira] [Updated] (DRILL-5786) Query encounters Exception in RPC communication
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5786: -- Attachment: 2647d2b0-69bf-5a2b-0e23-81e8d49e464e.sys.drill drillbit.log > Query encounters Exception in RPC communication > --- > > Key: DRILL-5786 > URL: https://issues.apache.org/jira/browse/DRILL-5786 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 2647d2b0-69bf-5a2b-0e23-81e8d49e464e.sys.drill, > drillbit.log > > > Query is: > {noformat} > select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf' > {noformat} > This is the same query as DRILL-5670 but no session variables are set. > Here is the stack trace: > {noformat} > 2017-09-12 13:14:57,584 [BitServer-5] ERROR > o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. > Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). > Closing connection. > io.netty.handler.codec.DecoderException: > org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating > buffer. 
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164240#comment-16164240 ] Robert Hou commented on DRILL-5670: --- I tried with disabling exchanges, and got a different error. It looks like sort did not complete in this case. {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false alter session set `planner.memory.max_query_memory_per_node` = 482344960 alter session set `planner.width.max_per_node` = 1 alter session set `planner.width.max_per_query` = 1 alter session set `planner.disable_exchanges` = true select count(*) from (select * from dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], columns[1410], columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], columns[3210] ) d where d.col433 = 'sjka skjf' {noformat} Here is the stack trace: {noformat} | 00-00Screen 00-01 Project(EXPR$0=[$0]) 00-02StreamAgg(group=[{}], EXPR$0=[COUNT()]) 00-03 Project($f0=[0]) 00-04SelectionVectorRemover 00-05 Filter(condition=[=(ITEM($0, 'col433'), 'sjka skjf')]) 00-06Project(T8¦¦*=[$0]) 00-07 SelectionVectorRemover 00-08Sort(sort0=[$1], sort1=[$2], sort2=[$3], sort3=[$4], sort4=[$5], sort5=[$6], sort6=[$7], sort7=[$8], sort8=[$9], sort9=[$10], sort10=[$11], sort11=[$12], sort12=[$9], sort13=[$13], sort14=[$14], sort15=[$15], sort16=[$16], sort17=[$17], sort18=[$18], sort19=[$19], sort20=[$20], sort21=[$21], sort22=[$12], sort23=[$22], sort24=[$23], sort25=[$24], 
sort26=[$25], sort27=[$26], sort28=[$27], sort29=[$28], sort30=[$29], sort31=[$30], sort32=[$31], sort33=[$32], sort34=[$33], sort35=[$34], sort36=[$35], sort37=[$36], sort38=[$37], sort39=[$38], sort40=[$39], sort41=[$40], sort42=[$41], sort43=[$42], sort44=[$43], sort45=[$44], sort46=[$45], sort47=[$46], dir0=[ASC], dir1=[ASC], dir2=[ASC], dir3=[ASC], dir4=[ASC], dir5=[ASC], dir6=[ASC], dir7=[ASC], dir8=[ASC], dir9=[ASC], dir10=[ASC], dir11=[ASC], dir12=[ASC], dir13=[ASC], dir14=[ASC], dir15=[ASC], dir16=[ASC], dir17=[ASC], dir18=[ASC], dir19=[ASC], dir20=[ASC], dir21=[ASC], dir22=[ASC], dir23=[ASC], dir24=[ASC], dir25=[ASC], dir26=[ASC], dir27=[ASC], dir28=[ASC], dir29=[ASC], dir30=[ASC], dir31=[ASC], dir32=[ASC], dir33=[ASC], dir34=[ASC], dir35=[ASC], dir36=[ASC], dir37=[ASC], dir38=[ASC], dir39=[ASC], dir40=[ASC], dir41=[ASC], dir42=[ASC], dir43=[ASC], dir44=[ASC], dir45=[ASC], dir46=[ASC], dir47=[ASC]) 00-09 Project(T8¦¦*=[$0], EXPR$1=[ITEM($1, 450)], EXPR$2=[ITEM($1, 330)], EXPR$3=[ITEM($1, 230)], EXPR$4=[ITEM($1, 220)], EXPR$5=[ITEM($1, 110)], EXPR$6=[ITEM($1, 90)], EXPR$7=[ITEM($1, 80)], EXPR$8=[ITEM($1, 70)], EXPR$9=[ITEM($1, 40)], EXPR$10=[ITEM($1, 10)], EXPR$11=[ITEM($1, 20)], EXPR$12=[ITEM($1, 30)], EXPR$13=[ITEM($1, 50)], EXPR$14=[ITEM($1, 454)], EXPR$15=[ITEM($1, 413)], EXPR$16=[ITEM($1, 940)], EXPR$17=[ITEM($1, 834)], EXPR$18=[ITEM($1, 73)], EXPR$19=[ITEM($1, 140)], EXPR$20=[ITEM($1, 104)], EXPR$21=[ITEM($1, )], EXPR$22=[ITEM($1, 2420)], EXPR$23=[ITEM($1, 1520)], EXPR$24=[ITEM($1, 1410)], EXPR$25=[ITEM($1, 1110)], EXPR$26=[ITEM($1, 1290)], EXPR$27=[ITEM($1, 2380)], EXPR$28=[ITEM($1, 705)], EXPR$29=[ITEM($1, 45)], EXPR$30=[ITEM($1, 1054)], EXPR$31=[ITEM($1, 2430)], EXPR$32=[ITEM($1, 420)], EXPR$33=[ITEM($1, 404)], EXPR$34=[ITEM($1, 3350)], EXPR$35=[ITEM($1, )], EXPR$36=[ITEM($1, 153)], EXPR$37=[ITEM($1, 356)], EXPR$38=[ITEM($1, 84)], EXPR$39=[ITEM($1, 745)], EXPR$40=[ITEM($1, 1450)], EXPR$41=[ITEM($1, 103)], EXPR$42=[ITEM($1, 2065)], 
EXPR$43=[ITEM($1, 343)], EXPR$44=[ITEM($1, 3420)], EXPR$45=[ITEM($1, 530)], EXPR$46=[ITEM($1, 3210)]) 00-10Project(T8¦¦*=[$0], columns=[$1]) 00-11 Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/resource-manager/3500cols.tbl, numFiles=1, columns=[`*`], files=[maprfs:///drill/testdata/resource-manager/3500cols.tbl]]]) {noformat} > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: drillbit.log.exchange 26478262-f0a7-8fc1-1887-4f27071b9c0f.sys.drill > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26478262-f0a7-8fc1-1887-4f27071b9c0f.sys.drill, > 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, > 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, > 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.log, drillbit.log, drillbit.log, drillbit.log.exchange, > drillbit.log.sort, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], >
[jira] [Commented] (DRILL-5785) Query Error on a large PCAP file
[ https://issues.apache.org/jira/browse/DRILL-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164029#comment-16164029 ] Robert Hou commented on DRILL-5785:
---
Can you attach a sample of your PCAP file? Can you attach the query profile?

Go to the URL http://:8047. Click on Profiles in the upper left hand corner. You will see a list of queries that have run on your system. Click on the link for the query. You will see something like this in the URL: https://:8047/profiles/264e5071-f248-1001-d72a-5a4e850d6ea6

The long string is your queryID. There should be a file 264e5071-f248-1001-d72a-5a4e850d6ea6.sys.drill on your system in $DRILL_HOME/logs. This is your query profile stored on disk. It would be helpful if you can attach it to this Jira.

> Query Error on a large PCAP file
> --------------------------------
>
>                 Key: DRILL-5785
>                 URL: https://issues.apache.org/jira/browse/DRILL-5785
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Other
>    Affects Versions: 1.11.0
>            Reporter: Takeo Ogawara
>            Priority: Minor
>         Attachments: Apache Drill_files.zip, Apache Drill.html
>
> Query on a very large PCAP file (larger than 100GB) failed with the following error message.
> > Error: SYSTEM ERROR: IllegalStateException: Bad magic number = 0a0d0d0a
> >
> > Fragment 1:169
> >
> > [Error Id: 8882c359-c253-40c0-866c-417ef1ce5aa3 on node22:31010]
> > (state=,code=0)
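One possible explanation for the reported value: 0a0d0d0a is exactly the byte pattern that opens a pcapng Section Header Block, so the capture may be pcapng rather than the classic pcap format (magic 0xa1b2c3d4) that Drill's PCAP reader handles. A small sniffer to check which format a file starts with (an illustrative sketch; the magic constants come from the pcap/pcapng file-format specifications, not from Drill):

```python
import struct

# Magic values from the pcap and pcapng file-format specifications.
PCAP_MAGICS = {
    0xA1B2C3D4,  # classic pcap, microsecond timestamps
    0xD4C3B2A1,  # classic pcap, byte-swapped
    0xA1B23C4D,  # classic pcap, nanosecond timestamps
    0x4D3CB2A1,  # nanosecond variant, byte-swapped
}
PCAPNG_SHB = 0x0A0D0D0A  # pcapng Section Header Block type (a palindrome)

def sniff_capture_format(first4: bytes) -> str:
    """Classify a capture file by its first four bytes."""
    (word,) = struct.unpack("<I", first4)
    if word in PCAP_MAGICS:
        return "pcap"
    if word == PCAPNG_SHB:  # same value in either byte order
        return "pcapng"
    return "unknown"
```

A pcapng file would yield exactly the 0a0d0d0a value seen in the error. That said, the error could also arise mid-file, so attaching the sample and profile is still the right next step.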
[jira] [Updated] (DRILL-5786) Query encounters Exception in RPC communication
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5786: -- Description: Query is: {noformat} select count(*) from (select * from dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], columns[1410], columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], columns[3210] ) d where d.col433 = 'sjka skjf' {noformat} This is the same query as DRILL-5670 but no session variables are set. Here is the stack trace: {noformat} 2017-09-12 13:14:57,584 [BitServer-5] ERROR o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). Closing connection. io.netty.handler.codec.DecoderException: org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating buffer. 
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) ~[netty-codec-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-transport-4.0.27.Final.jar:4.0.27.Final] at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating buffer. 
at io.netty.buffer.PooledByteBufAllocatorL.allocate(PooledByteBufAllocatorL.java:64) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final] at org.apache.drill.exec.memory.AllocationManager.(AllocationManager.java:81) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:260) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:243) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at io.netty.buffer.ExpandableByteBuf.capacity(ExpandableByteBuf.java:43) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final] at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final] at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final] at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:841) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final] at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:831) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final] at
[jira] [Comment Edited] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164240#comment-16164240 ] Robert Hou edited comment on DRILL-5670 at 9/13/17 7:21 AM: I tried with disabling exchanges, and got a different error. It looks like sort did not complete in this case. {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false alter session set `planner.memory.max_query_memory_per_node` = 482344960 alter session set `planner.width.max_per_node` = 1 alter session set `planner.width.max_per_query` = 1 alter session set `planner.disable_exchanges` = true select count(*) from (select * from dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], columns[1410], columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], columns[3210] ) d where d.col433 = 'sjka skjf' {noformat} This is the error from drillbit.log: 2017-09-12 17:36:53,155 [26478262-f0a7-8fc1-1887-4f27071b9c0f:frag:0:0] ERROR o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches. 
Incoming batch size: 409305088, available memory: 482344960

Here is the plan:
{noformat}
00-00  Screen
00-01    Project(EXPR$0=[$0])
00-02      StreamAgg(group=[{}], EXPR$0=[COUNT()])
00-03        Project($f0=[0])
00-04          SelectionVectorRemover
00-05            Filter(condition=[=(ITEM($0, 'col433'), 'sjka skjf')])
00-06              Project(T8¦¦*=[$0])
00-07                SelectionVectorRemover
00-08                  Sort(sort0=[$1], sort1=[$2], sort2=[$3], sort3=[$4], sort4=[$5], sort5=[$6], sort6=[$7], sort7=[$8], sort8=[$9], sort9=[$10], sort10=[$11], sort11=[$12], sort12=[$9], sort13=[$13], sort14=[$14], sort15=[$15], sort16=[$16], sort17=[$17], sort18=[$18], sort19=[$19], sort20=[$20], sort21=[$21], sort22=[$12], sort23=[$22], sort24=[$23], sort25=[$24], sort26=[$25], sort27=[$26], sort28=[$27], sort29=[$28], sort30=[$29], sort31=[$30], sort32=[$31], sort33=[$32], sort34=[$33], sort35=[$34], sort36=[$35], sort37=[$36], sort38=[$37], sort39=[$38], sort40=[$39], sort41=[$40], sort42=[$41], sort43=[$42], sort44=[$43], sort45=[$44], sort46=[$45], sort47=[$46], dir0=[ASC], dir1=[ASC], dir2=[ASC], dir3=[ASC], dir4=[ASC], dir5=[ASC], dir6=[ASC], dir7=[ASC], dir8=[ASC], dir9=[ASC], dir10=[ASC], dir11=[ASC], dir12=[ASC], dir13=[ASC], dir14=[ASC], dir15=[ASC], dir16=[ASC], dir17=[ASC], dir18=[ASC], dir19=[ASC], dir20=[ASC], dir21=[ASC], dir22=[ASC], dir23=[ASC], dir24=[ASC], dir25=[ASC], dir26=[ASC], dir27=[ASC], dir28=[ASC], dir29=[ASC], dir30=[ASC], dir31=[ASC], dir32=[ASC], dir33=[ASC], dir34=[ASC], dir35=[ASC], dir36=[ASC], dir37=[ASC], dir38=[ASC], dir39=[ASC], dir40=[ASC], dir41=[ASC], dir42=[ASC], dir43=[ASC], dir44=[ASC], dir45=[ASC], dir46=[ASC], dir47=[ASC])
00-09                    Project(T8¦¦*=[$0], EXPR$1=[ITEM($1, 450)], EXPR$2=[ITEM($1, 330)], EXPR$3=[ITEM($1, 230)], EXPR$4=[ITEM($1, 220)], EXPR$5=[ITEM($1, 110)], EXPR$6=[ITEM($1, 90)], EXPR$7=[ITEM($1, 80)], EXPR$8=[ITEM($1, 70)], EXPR$9=[ITEM($1, 40)], EXPR$10=[ITEM($1, 10)], EXPR$11=[ITEM($1, 20)], EXPR$12=[ITEM($1, 30)], EXPR$13=[ITEM($1, 50)], EXPR$14=[ITEM($1, 454)], EXPR$15=[ITEM($1, 413)], EXPR$16=[ITEM($1, 940)], EXPR$17=[ITEM($1, 834)], EXPR$18=[ITEM($1, 73)], EXPR$19=[ITEM($1, 140)], EXPR$20=[ITEM($1, 104)], EXPR$21=[ITEM($1, )], EXPR$22=[ITEM($1, 2420)], EXPR$23=[ITEM($1, 1520)], EXPR$24=[ITEM($1, 1410)], EXPR$25=[ITEM($1, 1110)], EXPR$26=[ITEM($1, 1290)], EXPR$27=[ITEM($1, 2380)], EXPR$28=[ITEM($1, 705)], EXPR$29=[ITEM($1, 45)], EXPR$30=[ITEM($1, 1054)], EXPR$31=[ITEM($1, 2430)], EXPR$32=[ITEM($1, 420)], EXPR$33=[ITEM($1, 404)], EXPR$34=[ITEM($1, 3350)], EXPR$35=[ITEM($1, )], EXPR$36=[ITEM($1, 153)], EXPR$37=[ITEM($1, 356)], EXPR$38=[ITEM($1, 84)], EXPR$39=[ITEM($1, 745)], EXPR$40=[ITEM($1, 1450)], EXPR$41=[ITEM($1, 103)], EXPR$42=[ITEM($1, 2065)], EXPR$43=[ITEM($1, 343)], EXPR$44=[ITEM($1, 3420)], EXPR$45=[ITEM($1, 530)], EXPR$46=[ITEM($1, 3210)])
00-10                      Project(T8¦¦*=[$0], columns=[$1])
00-11                        Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/resource-manager/3500cols.tbl, numFiles=1, columns=[`*`], files=[maprfs:///drill/testdata/resource-manager/3500cols.tbl]]])
{noformat}
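The ExternalSortBatch message above can be sanity-checked with simple arithmetic. A "merge two batches" step needs room for two incoming-sized batches at once, and a 409 MB batch does not fit twice in the 482 MB budget. This is a back-of-the-envelope sketch only; `can_merge` and the two-batch assumption are illustrative, not Drill's actual memory accounting.

```python
# Numbers taken from the drillbit.log line above.
incoming_batch_size = 409_305_088   # "Incoming batch size"
available_memory    = 482_344_960   # planner.memory.max_query_memory_per_node

def can_merge(batch_size: int, budget: int, batches_needed: int = 2) -> bool:
    """True if `batches_needed` batches of `batch_size` fit in `budget`."""
    return batches_needed * batch_size <= budget

# Two batches need 818,610,176 bytes, which exceeds the 482,344,960 budget.
print(can_merge(incoming_batch_size, available_memory))  # False
```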
[jira] [Created] (DRILL-5786) Query enters Exception in RPC communication
Robert Hou created DRILL-5786:
-
             Summary: Query enters Exception in RPC communication
                 Key: DRILL-5786
                 URL: https://issues.apache.org/jira/browse/DRILL-5786
             Project: Apache Drill
          Issue Type: Bug
          Components: Execution - Relational Operators
    Affects Versions: 1.11.0
            Reporter: Robert Hou
            Assignee: Paul Rogers
             Fix For: 1.12.0

Query is:
{noformat}
select count(*) from (select * from dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], columns[1410], columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], columns[3210] ) d where d.col433 = 'sjka skjf'
{noformat}
This is the same query as DRILL-5670 but no session variables are set.

Here is the stack trace:
{noformat}
2017-09-12 13:14:57,584 [BitServer-5] ERROR o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication. Connection: /10.10.100.190:31012 <--> /10.10.100.190:46230 (data server). Closing connection.
io.netty.handler.codec.DecoderException: org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating buffer.
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) ~[netty-codec-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Failure allocating buffer.
at io.netty.buffer.PooledByteBufAllocatorL.allocate(PooledByteBufAllocatorL.java:64) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final]
at org.apache.drill.exec.memory.AllocationManager.<init>(AllocationManager.java:81) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:260) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:243) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at io.netty.buffer.ExpandableByteBuf.capacity(ExpandableByteBuf.java:43) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:4.0.27.Final]
at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
at
[jira] [Updated] (DRILL-5786) Query encounters Exception in RPC communication
[ https://issues.apache.org/jira/browse/DRILL-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Hou updated DRILL-5786:
--
    Summary: Query encounters Exception in RPC communication  (was: Query enters Exception in RPC communication)
[jira] [Resolved] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Hou resolved DRILL-5744.
---
    Resolution: Fixed

This has been verified.

> External sort fails with OOM error
> --
>
> Key: DRILL-5744
> URL: https://issues.apache.org/jira/browse/DRILL-5744
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators
> Affects Versions: 1.10.0
> Reporter: Robert Hou
> Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 265b163b-cf44-d2ff-2e70-4cd746b56611.sys.drill, q34.drillbit.log
>
> Query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 152428800;
> select count(*) from (
>   select * from (
>     select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid
>     from (
>       select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
>     ) s1
>   ) s2
>   order by s2.rms.mapid
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.width.max_per_node` = 17;
> alter session set `planner.disable_exchanges` = false;
> alter session set `planner.width.max_per_query` = 1000;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> Stack trace is:
> {noformat}
> 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944
> [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ]
> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944
> at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85)
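The "4194304 (rounded from 3276750)" in the error above is consistent with the allocator rounding each request up to the next power of two (4194304 is 2**22). The following is a sketch of that rounding rule for illustration; the function name is invented here, and Drill's actual implementation may differ in detail.

```python
def round_to_power_of_two(n: int) -> int:
    """Smallest power of two >= n, for n > 0."""
    return 1 << (n - 1).bit_length()

# The requested 3,276,750 bytes land between 2**21 and 2**22,
# so the allocation becomes 4,194,304 bytes.
print(round_to_power_of_two(3276750))  # 4194304
```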
[jira] [Closed] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Hou closed DRILL-5744.
-
[jira] [Resolved] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Hou resolved DRILL-5753.
---
    Resolution: Fixed

> Managed External Sort: One or more nodes ran out of memory while executing the query.
> -
>
> Key: DRILL-5753
> URL: https://issues.apache.org/jira/browse/DRILL-5753
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Relational Operators
> Affects Versions: 1.11.0
> Reporter: Robert Hou
> Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill, drillbit.log
>
> The query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.memory.max_query_memory_per_node` = 1252428800;
> select count(*) from (
>   select * from (
>     select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid
>     from (
>       select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
>     ) s1
>   ) s2
>   order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> The stack trace is:
> {noformat}
> 2017-08-30 03:35:10,479 [BitServer-5] DEBUG o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: State change requested RUNNING --> FAILED
> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 due to memory limit. Current allocation: 43960640
> Fragment 2:9
> [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010]
> (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate buffer of size 4194304 due to memory limit. Current allocation: 43960640
> org.apache.drill.exec.memory.BaseAllocator.buffer():238
> org.apache.drill.exec.memory.BaseAllocator.buffer():213
> org.apache.drill.exec.vector.BigIntVector.reAlloc():252
> org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452
> org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355
> org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220
> org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77
> org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():744
> at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
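The failure mode above is a limit check, not physical exhaustion: the fragment's allocator tracks a running total against its limit and rejects any request that would push the total past it, which is why a 4 MB request fails while "only" 43,960,640 bytes are allocated. A simplified model follows; the `Allocator` class, its method names, and the 46 MB limit are assumptions for illustration, not Drill's `BaseAllocator` API.

```python
class OutOfMemoryException(Exception):
    pass

class Allocator:
    """Toy model of a limit-enforcing memory allocator."""

    def __init__(self, limit: int):
        self.limit = limit
        self.allocated = 0

    def buffer(self, size: int) -> int:
        # Reject requests that would exceed the limit; report the
        # running total, as the Drill error message above does.
        if self.allocated + size > self.limit:
            raise OutOfMemoryException(
                f"Unable to allocate buffer of size {size} due to memory "
                f"limit. Current allocation: {self.allocated}")
        self.allocated += size
        return size

# Hypothetical fragment limit of 46 MB: the first allocation fits,
# the next 4 MB request would cross the limit and is rejected.
alloc = Allocator(limit=46_000_000)
alloc.buffer(43_960_640)
```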
[jira] [Closed] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Hou closed DRILL-5753.
-

I have verified that this has been fixed.
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162228#comment-16162228 ] Robert Hou commented on DRILL-5670: --- I am getting a different error now:
{noformat}
2017-09-11 06:23:17,297 [BitServer-3] DEBUG o.a.drill.exec.work.foreman.Foreman - 26498995-bbad-83bc-618f-914c37a84e1f: State change requested RUNNING --> FAILED
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: OversizedAllocationException: Unable to expand the buffer. Max allowed buffer size is reached.

Fragment 1:0

[Error Id: 2f6ad792-9160-487e-9dbe-0d54ec53d0ae on atsqa6c86.qa.lab:31010]

  (org.apache.drill.exec.exception.OversizedAllocationException) Unable to expand the buffer. Max allowed buffer size is reached.
    org.apache.drill.exec.vector.VarCharVector.reAlloc():425
    org.apache.drill.exec.vector.VarCharVector$Mutator.setSafe():623
    org.apache.drill.exec.vector.RepeatedVarCharVector$Mutator.addSafe():374
    org.apache.drill.exec.vector.RepeatedVarCharVector$Mutator.addSafe():365
    org.apache.drill.exec.vector.RepeatedVarCharVector.copyFromSafe():220
    org.apache.drill.exec.test.generated.MergingReceiverGeneratorBaseGen584.doCopy():343
    org.apache.drill.exec.physical.impl.mergereceiver.MergingRecordBatch.copyRecordToOutgoingBatch():721
    org.apache.drill.exec.physical.impl.mergereceiver.MergingRecordBatch.innerNext():360
    org.apache.drill.exec.record.AbstractRecordBatch.next():164
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
    org.apache.drill.exec.record.AbstractRecordBatch.next():164
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.record.AbstractRecordBatch.next():164
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
    org.apache.drill.exec.record.AbstractRecordBatch.next():164
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
    org.apache.drill.exec.record.AbstractRecordBatch.next():164
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():151
    org.apache.drill.exec.record.AbstractRecordBatch.next():164
    org.apache.drill.exec.physical.impl.BaseRootExec.next():105
    org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
    org.apache.drill.exec.physical.impl.BaseRootExec.next():95
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
    java.security.AccessController.doPrivileged():-2
    javax.security.auth.Subject.doAs():415
    org.apache.hadoop.security.UserGroupInformation.doAs():1595
    org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
    org.apache.drill.common.SelfCleaningRunnable.run():38
    java.util.concurrent.ThreadPoolExecutor.runWorker():1145
    java.util.concurrent.ThreadPoolExecutor$Worker.run():615
    java.lang.Thread.run():744
at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:94) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:55) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:157) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:53) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) [drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5670: -- Attachment: drillbit.log 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, > 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, > 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.log, drillbit.log, drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], >
[jira] [Commented] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140973#comment-16140973 ] Robert Hou commented on DRILL-5732: --- Yes, there is a Hash Agg step. The plan is: {noformat} | 00-00Screen 00-01 ProjectAllowDup(EXPR$0=[$0], EXPR$1=[$1], EXPR$2=[$2], EXPR$3=[$3], EXPR$4=[$4], EXPR$5=[$5], EXPR$6=[$6], EXPR$7=[$7], EXPR$8=[$8], EXPR$9=[$9], EXPR$10=[$10], EXPR$11=[$11], EXPR$12=[$12], EXPR$13=[$13], EXPR$14=[$14], EXPR$15=[$15], EXPR$16=[$16], EXPR$17=[$17], EXPR$18=[$18], EXPR$19=[$19], EXPR$20=[$20], EXPR$21=[$21], EXPR$22=[$22], EXPR$23=[$23], EXPR$24=[$24], EXPR$25=[$25], EXPR$26=[$26], EXPR$27=[$27], EXPR$28=[$28], EXPR$29=[$29], EXPR$30=[$30], EXPR$31=[$31], EXPR$32=[$32], EXPR$33=[$33], EXPR$34=[$34], EXPR$35=[$35], EXPR$36=[$36], EXPR$37=[$37], EXPR$38=[$38], EXPR$39=[$39], EXPR$40=[$40], EXPR$41=[$41], EXPR$42=[$42], EXPR$43=[$43], EXPR$44=[$44], EXPR$45=[$45], EXPR$46=[$46], EXPR$47=[$47], EXPR$48=[$48], EXPR$49=[$49], c_email_address=[$50]) 00-02Project(EXPR$0=[$1], EXPR$1=[$2], EXPR$2=[$3], EXPR$3=[$4], EXPR$4=[$5], EXPR$5=[$6], EXPR$6=[$7], EXPR$7=[$8], EXPR$8=[$9], EXPR$9=[$10], EXPR$10=[$11], EXPR$11=[$12], EXPR$12=[$13], EXPR$13=[$14], EXPR$14=[$15], EXPR$15=[$16], EXPR$16=[$17], EXPR$17=[$18], EXPR$18=[$19], EXPR$19=[$20], EXPR$20=[$21], EXPR$21=[$22], EXPR$22=[$23], EXPR$23=[$24], EXPR$24=[$25], EXPR$25=[$26], EXPR$26=[$27], EXPR$27=[$28], EXPR$28=[$29], EXPR$29=[$30], EXPR$30=[$31], EXPR$31=[$32], EXPR$32=[$33], EXPR$33=[$34], EXPR$34=[$35], EXPR$35=[$36], EXPR$36=[$37], EXPR$37=[$38], EXPR$38=[$39], EXPR$39=[$40], EXPR$40=[$41], EXPR$41=[$42], EXPR$42=[$43], EXPR$43=[$44], EXPR$44=[$45], EXPR$45=[$46], EXPR$46=[$47], EXPR$47=[$48], EXPR$48=[$49], EXPR$49=[$50], ITEM=[$0]) 00-03 HashAgg(group=[{0}], EXPR$0=[MAX($1)], EXPR$1=[MAX($2)], EXPR$2=[MAX($3)], EXPR$3=[MAX($4)], EXPR$4=[MAX($5)], EXPR$5=[MAX($6)], EXPR$6=[MAX($7)], EXPR$7=[MAX($8)], EXPR$8=[MAX($9)], EXPR$9=[MAX($10)], 
EXPR$10=[MAX($11)], EXPR$11=[MAX($12)], EXPR$12=[MAX($13)], EXPR$13=[MAX($14)], EXPR$14=[MAX($15)], EXPR$15=[MIN($16)], EXPR$16=[MAX($17)], EXPR$17=[MAX($18)], EXPR$18=[MAX($19)], EXPR$19=[MAX($20)], EXPR$20=[MAX($21)], EXPR$21=[MAX($22)], EXPR$22=[MAX($23)], EXPR$23=[MAX($24)], EXPR$24=[MIN($25)], EXPR$25=[MAX($26)], EXPR$26=[MIN($27)], EXPR$27=[MIN($28)], EXPR$28=[MIN($29)], EXPR$29=[MAX($30)], EXPR$30=[MAX($31)], EXPR$31=[MAX($32)], EXPR$32=[MIN($33)], EXPR$33=[MIN($34)], EXPR$34=[MIN($35)], EXPR$35=[MIN($36)], EXPR$36=[MIN($37)], EXPR$37=[MAX($38)], EXPR$38=[MAX($39)], EXPR$39=[MIN($40)], EXPR$40=[MIN($41)], EXPR$41=[MIN($42)], EXPR$42=[MIN($43)], EXPR$43=[MIN($44)], EXPR$44=[MIN($45)], EXPR$45=[MIN($46)], EXPR$46=[MAX($47)], EXPR$47=[MIN($48)], EXPR$48=[MIN($49)], EXPR$49=[MAX($50)]) 00-04Project(ITEM=[$1], col1=[$0], ITEM2=[$2], ITEM3=[$3], ITEM4=[$4], ITEM5=[$5], ITEM6=[$6], ITEM7=[$7], ITEM8=[$8], ITEM9=[$9], ITEM10=[$10], ITEM11=[$11], ITEM12=[$12], ITEM13=[$13], ITEM14=[$14], ITEM15=[$15], ITEM16=[$16], ITEM17=[$17], ITEM18=[$18], ITEM19=[$19], ITEM20=[$20], ITEM21=[$21], ITEM22=[$22], ITEM23=[$23], ITEM24=[$24], ITEM25=[$25], ITEM26=[$26], ITEM27=[$27], ITEM28=[$28], ITEM29=[$29], ITEM30=[$30], ITEM31=[$31], ITEM32=[$32], ITEM33=[$33], ITEM34=[$34], ITEM35=[$35], ITEM36=[$36], $f37=[LENGTH($37)], ITEM38=[$38], ITEM39=[$39], ITEM40=[$40], ITEM41=[$41], ITEM42=[$42], $f43=[LENGTH($43)], $f44=[LENGTH($44)], $f45=[LENGTH($45)], $f46=[LENGTH($46)], ITEM47=[$47], ITEM48=[$48], ITEM49=[$49], ITEM50=[$50]) 00-05 SelectionVectorRemover 00-06Filter(condition=[AND(>($0, 2536816), IS NOT NULL($1))]) 00-07 Project(col1=[$0], ITEM=[ITEM($1, 'c_email_address')], ITEM2=[ITEM($1, 'cs_sold_date_sk')], ITEM3=[ITEM($1, 'cs_sold_time_sk')], ITEM4=[ITEM($1, 'cs_ship_date_sk')], ITEM5=[ITEM($1, 'cs_bill_customer_sk')], ITEM6=[ITEM($1, 'cs_bill_cdemo_sk')], ITEM7=[ITEM($1, 'cs_bill_hdemo_sk')], ITEM8=[ITEM($1, 'cs_bill_addr_sk')], ITEM9=[ITEM($1, 'cs_ship_customer_sk')], 
ITEM10=[ITEM($1, 'cs_ship_cdemo_sk')], ITEM11=[ITEM($1, 'cs_ship_hdemo_sk')], ITEM12=[ITEM($1, 'cs_ship_addr_sk')], ITEM13=[ITEM($1, 'cs_call_center_sk')], ITEM14=[ITEM($1, 'cs_catalog_page_sk')], ITEM15=[ITEM($1, 'cs_ship_mode_sk')], ITEM16=[ITEM($1, 'cs_warehouse_sk')], ITEM17=[ITEM($1, 'cs_item_sk')], ITEM18=[ITEM($1, 'cs_promo_sk')], ITEM19=[ITEM($1, 'cs_order_number')], ITEM20=[ITEM($1, 'cs_quantity')], ITEM21=[ITEM($1, 'cs_wholesale_cost')], ITEM22=[ITEM($1, 'cs_list_price')], ITEM23=[ITEM($1, 'cs_sales_price')], ITEM24=[ITEM($1, 'cs_ext_discount_amt')], ITEM25=[ITEM($1, 'cs_ext_sales_price')], ITEM26=[ITEM($1, 'cs_ext_wholesale_cost')], ITEM27=[ITEM($1, 'cs_ext_list_price')], ITEM28=[ITEM($1, 'cs_ext_tax')], ITEM29=[ITEM($1, 'cs_coupon_amt')], ITEM30=[ITEM($1, 'cs_ext_ship_cost')], ITEM31=[ITEM($1,
[jira] [Commented] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142219#comment-16142219 ] Robert Hou commented on DRILL-5732: --- The new memory setting should be: new setting = (old setting / x) * (x + y) > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), 
min(cs_net_profit), min(c_customer_sk), > min(length(c_customer_id)), max(c_current_cdemo_sk), max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
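The figures in the quoted error are internally consistent: the sort had already allocated more than its configured limit before the SV2 request arrived, and with only one in-memory batch group and nothing yet spilled, it had nothing left to spill to free memory. A minimal sketch of that arithmetic, assuming the usual two-bytes-per-record layout of an SV2 selection vector (the helper below is illustrative only, not Drill's API):

```python
# Figures copied from the error message quoted above.
ALLOCATED_MEMORY = 71964288   # "allocated memory 71964288"
ALLOCATOR_LIMIT = 52428800    # "allocator limit 52428800" (50 MiB)
RECORD_COUNT = 9039           # "Unable to allocate sv2 for 9039 records"

def sv2_size_bytes(record_count):
    # An SV2 selection vector stores one 2-byte index per record.
    return 2 * record_count

# The SV2 itself is tiny (about 18 KB here), but the sort is already
# past its allocator limit, so even this small request is refused.
print(sv2_size_bytes(RECORD_COUNT))        # 18078
print(ALLOCATED_MEMORY > ALLOCATOR_LIMIT)  # True
```

This is why the message pairs the allocation failure with "not enough batchGroups to spill": the refusal is driven by the memory already held, not by the size of the SV2 request itself.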
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144160#comment-16144160 ] Robert Hou commented on DRILL-5670: --- This bug has been fixed. > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, > drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf'; > Error:
[jira] [Closed] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou closed DRILL-5670. - This bug has been fixed and verified. > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, > drillbit.log, drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > > columns[],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530], > columns[3210] ) d where d.col433 = 'sjka skjf'; > Error: RESOURCE ERROR: External Sort
[jira] [Updated] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5744: -- Attachment: 266275e5-ebdb-14ae-d52d-00fa3a154f6d.sys.drill drillbit.log > External sort fails with OOM error > -- > > Key: DRILL-5744 > URL: https://issues.apache.org/jira/browse/DRILL-5744 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 266275e5-ebdb-14ae-d52d-00fa3a154f6d.sys.drill, > drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 152428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.width.max_per_query` = 1000; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > Stack trace is: > {noformat} > 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO > o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes > ran out of memory while executing the query. (Unable to allocate buffer of > size 4194304 (rounded from 3276750) due to memory limit. 
Current allocation: 7 > 9986944) > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate buffer of size 4194304 (rounded from 3276750) due to > memory limit. Current allocation: 79986944 > [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_51] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_51] > at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] > Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to > allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
> Cur > rent allocation: 79986944 > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPS > HOT] > at > org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT > ] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
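The "rounded from" figure in the trace is consistent with power-of-two rounding of allocation requests: 3276750 bytes rounds up to 4194304 = 2^22. A small illustrative helper showing that rounding (an assumption drawn from the logged numbers, not Drill's actual allocator code):

```python
def round_up_to_power_of_two(n):
    # Smallest power of two >= n (for n >= 1), the way buffer
    # allocators commonly normalize request sizes; matches the
    # 3276750 -> 4194304 rounding reported in the log above.
    return 1 << (n - 1).bit_length()

print(round_up_to_power_of_two(3276750))  # 4194304
```

Given the 79986944 bytes already allocated, the rounded 4 MiB request is what pushes the allocator past its limit and triggers the OOM.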
[jira] [Updated] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector
[ https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5447: -- Attachment: 26617a7e-b953-7ac3-556d-43fd88e51b19.sys.drill drillbit.log > Managed External Sort : Unable to allocate sv2 vector > - > > Key: DRILL-5447 > URL: https://issues.apache.org/jira/browse/DRILL-5447 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26617a7e-b953-7ac3-556d-43fd88e51b19.sys.drill, > 26fee988-ed18-a86a-7164-3e75118c0ffc.sys.drill, drillbit.log, drillbit.log > > > git.commit.id.abbrev=3e8b01d > Dataset : > {code} > Every records contains a repeated type with 2000 elements. > The repeated type contains varchars of length 250 for the first 2000 records > and single character strings for the next 2000 records > The above pattern is repeated a few types > {code} > The below query fails > {code} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > select count(*) from (select * from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by > d.str) d1 where d1.id=0; > Error: RESOURCE ERROR: Unable to allocate sv2 buffer > Fragment 0:0 > [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 on qa-node190.qa.lab:31010] > (state=,code=0) > {code} > Exception from the logs > {code} > [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) > ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.newSV2(ExternalSortBatch.java:1463) > 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.makeSelectionVector(ExternalSortBatch.java:799) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.processBatch(ExternalSortBatch.java:856) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:618) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:660) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) >
[jira] [Commented] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector
[ https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144229#comment-16144229 ] Robert Hou commented on DRILL-5447: --- The memory is not being constrained for this test. E.g. `planner.memory.max_query_memory_per_node` is not being set. > Managed External Sort : Unable to allocate sv2 vector > - > > Key: DRILL-5447 > URL: https://issues.apache.org/jira/browse/DRILL-5447 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26fee988-ed18-a86a-7164-3e75118c0ffc.sys.drill, > drillbit.log > > > git.commit.id.abbrev=3e8b01d > Dataset : > {code} > Every record contains a repeated type with 2000 elements. > The repeated type contains varchars of length 250 for the first 2000 records > and single character strings for the next 2000 records > The above pattern is repeated a few times > {code} > The below query fails > {code} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > select count(*) from (select * from (select id, flatten(str_list) str from > dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by > d.str) d1 where d1.id=0; > Error: RESOURCE ERROR: Unable to allocate sv2 buffer > Fragment 0:0 > [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 on qa-node190.qa.lab:31010] > (state=,code=0) > {code} > Exception from the logs > {code} > [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) > ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.newSV2(ExternalSortBatch.java:1463) > 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.makeSelectionVector(ExternalSortBatch.java:799) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.processBatch(ExternalSortBatch.java:856) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:618) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:660) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) >
[jira] [Created] (DRILL-5744) External sort fails with OOM error
Robert Hou created DRILL-5744: - Summary: External sort fails with OOM error Key: DRILL-5744 URL: https://issues.apache.org/jira/browse/DRILL-5744 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.10.0 Reporter: Robert Hou Assignee: Paul Rogers Fix For: 1.12.0 Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.width.max_per_node` = 1; alter session set `planner.disable_exchanges` = true; alter session set `planner.width.max_per_query` = 1; alter session set `planner.memory.max_query_memory_per_node` = 152428800; select count(*) from ( select * from ( select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid from ( select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid ) s1 ) s2 order by s2.rms.mapid ); ALTER SESSION SET `exec.sort.disable_managed` = true; alter session set `planner.width.max_per_node` = 17; alter session set `planner.disable_exchanges` = false; alter session set `planner.width.max_per_query` = 1000; alter session set `planner.memory.max_query_memory_per_node` = 2147483648; {noformat} Stack trace is: {noformat} 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944) org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query. Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
Current allocation: 79986944 [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51] at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944 at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:262) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:374) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
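The OOM message above is internally consistent: the allocator rounds the 3276750-byte request up to 4194304 bytes, the next power of two. A quick illustrative check of that rounding in plain Python (a sketch of the arithmetic in the log, not Drill's allocator code):

```python
def round_up_pow2(n):
    """Round a positive byte count up to the next power of two."""
    return 1 << (n - 1).bit_length()

# The request from the error message: 3276750 bytes becomes 4 MiB (2**22).
assert round_up_pow2(3276750) == 4194304
```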
[jira] [Reopened] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector
[ https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou reopened DRILL-5447: --- This is not passing on Jenkins. Here is the query: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.width.max_per_node` = 1; alter session set `planner.disable_exchanges` = true; alter session set `planner.width.max_per_query` = 1; select count(*) from (select * from (select id, flatten(str_list) str from dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by d.str) d1 where d1.id=0; ALTER SESSION SET `exec.sort.disable_managed` = true; alter session set `planner.width.max_per_node` = 17; alter session set `planner.disable_exchanges` = false; alter session set `planner.width.max_per_query` = 1000; alter session set `planner.memory.max_query_memory_per_node` = 268435456; {noformat} Stack trace is: {noformat} 2017-08-24 00:51:38,821 [26617a7e-b953-7ac3-556d-43fd88e51b19:frag:0:0] ERROR o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches . 
Incoming batch size: 1073819648, available memory: 268435456 2017-08-24 00:51:38,832 [26617a7e-b953-7ac3-556d-43fd88e51b19:frag:0:0] INFO o.a.d.e.p.i.x.m.BufferedBatches - User Error Occurred: Unable to allocate sv2 buffer (Unable to allocate sv2 buffer) org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Unable to allocate sv2 buffer [Error Id: 17cdce48-2ff3-44cc-916d-136ab409e896 ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.newSV2(BufferedBatches.java:157) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.makeSelectionVector(BufferedBatches.java:142) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add(BufferedBatches.java:97) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch(SortImpl.java:265) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:422) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:358) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
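The numbers in this log explain the reopen: the incoming batch (1073819648 bytes, about 1 GiB) is roughly four times the 268435456 bytes of `planner.memory.max_query_memory_per_node` set in the repro script, so the sort cannot hold even a single batch, let alone merge two. A sanity check of that arithmetic (illustrative Python; it assumes the sort's budget tracks the session setting, which the log's "available memory" value suggests):

```python
incoming_batch = 1_073_819_648  # "Incoming batch size" from the log
sort_budget = 268_435_456       # max_query_memory_per_node in the repro script

# One batch alone exceeds the whole budget, by roughly a factor of four.
assert incoming_batch > sort_budget
print(f"batch / budget = {incoming_batch / sort_budget:.2f}")
```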
[jira] [Updated] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5744: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.width.max_per_node` = 1; alter session set `planner.disable_exchanges` = true; alter session set `planner.width.max_per_query` = 1; alter session set `planner.memory.max_query_memory_per_node` = 152428800; select count(*) from ( select * from ( select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid from ( select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid ) s1 ) s2 order by s2.rms.mapid ); ALTER SESSION SET `exec.sort.disable_managed` = true; alter session set `planner.width.max_per_node` = 17; alter session set `planner.disable_exchanges` = false; alter session set `planner.width.max_per_query` = 1000; alter session set `planner.memory.max_query_memory_per_node` = 2147483648; {noformat} Stack trace is: {noformat} 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944) org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query. Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
Current allocation: 79986944 [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51] at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944 at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:262) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:374) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164)
[jira] [Commented] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139618#comment-16139618 ] Robert Hou commented on DRILL-5732: --- Sorry, I'm not sure why it is using the original version. Let's try this one: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.disable_exchanges` = true; alter session set `planner.memory.max_query_memory_per_node` = 104857600; alter session set `planner.width.max_per_node` = 1; alter session set `planner.width.max_per_query` = 1; select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), min(length(c_customer_id)), max(c_current_cdemo_sk), max(c_current_hdemo_sk), min(c_current_addr_sk), min(c_first_shipto_date_sk), min(c_first_sales_date_sk), min(length(c_salutation)), min(length(c_first_name)), min(length(c_last_name)), min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), min(c_birth_year), max(c_last_review_date), c_email_address from (select cs_sold_date_sk+cs_sold_time_sk col1, * from dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls first) d where d.col1 > 2536816 and c_email_address is not null group by c_email_address; ALTER SESSION SET 
`exec.sort.disable_managed` = true; alter session set `planner.disable_exchanges` = false; alter session set `planner.memory.max_query_memory_per_node` = 2147483648; alter session set `planner.width.max_per_node` = 17; alter session set `planner.width.max_per_query` = 1000; {noformat} stack trace is: {noformat} 2017-08-23 13:10:57,702 [26621eb2-daec-cef9-efed-5986e72a750a:frag:0:0] ERROR o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches. Incoming batch size: 30998528, available memory: 52428800 2017-08-23 13:10:57,707 [26621eb2-daec-cef9-efed-5986e72a750a:frag:0:0] WARN o.a.d.e.p.i.x.m.ExternalSortBatch - Potential memory overflow during load phase! Minimum needed = 92996444 bytes, actual available = 52428800 bytes 2017-08-23 13:10:57,707 [26621eb2-daec-cef9-efed-5986e72a750a:frag:0:0] WARN o.a.d.e.p.i.x.m.ExternalSortBatch - Potential performance degredation due to low memory or large input row. Preferred spill batch row count: 100, actual: 1 2017-08-23 13:10:57,707 [26621eb2-daec-cef9-efed-5986e72a750a:frag:0:0] WARN o.a.d.e.p.i.x.m.ExternalSortBatch - Potential performance degredation due to low memory or large input row. Preferred merge batch row count: 100, actual: 1 2017-08-23 13:10:57,762 [26621eb2-daec-cef9-efed-5986e72a750a:frag:0:0] ERROR o.a.d.e.p.i.x.m.ExternalSortBatch - Insufficient memory to merge two batches. 
Incoming batch size: 30998528, available memory: 21430272 2017-08-23 13:10:57,773 [26621eb2-daec-cef9-efed-5986e72a750a:frag:0:0] INFO o.a.d.e.p.i.x.m.BufferedBatches - User Error Occurred: Unable to allocate sv2 buffer (Unable to allocate sv2 buffer) org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Unable to allocate sv2 buffer [Error Id: e37c705a-70dd-4b57-b6fc-7054fdee45c6 ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.newSV2(BufferedBatches.java:157) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.makeSelectionVector(BufferedBatches.java:142) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add(BufferedBatches.java:97) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch(SortImpl.java:265) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:422) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:358)
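The allocator numbers in this log also line up: the 52428800-byte "available memory" is exactly half of the 104857600 bytes configured via `planner.memory.max_query_memory_per_node` (plausibly because the budget is split across two buffered operators in this plan, though that split is an assumption here), and two copies of the 30998528-byte incoming batch do not fit within it, matching the "Insufficient memory to merge two batches" error. Illustrative Python:

```python
query_memory = 104_857_600  # planner.memory.max_query_memory_per_node
sort_limit = 52_428_800     # "available memory" reported by the sort
batch_size = 30_998_528     # "Incoming batch size" from the log

# The sort's limit is exactly half the per-query budget...
assert sort_limit == query_memory // 2
# ...and merging needs room for two batches, which it does not have.
assert 2 * batch_size > sort_limit
```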
[jira] [Updated] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5732: -- Attachment: (was: 2668b522-5833-8fd2-0b6d-e685197f0ae3.sys.drill) > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > 
min(length(c_customer_id)), max(c_current_cdemo_sk), max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Updated] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5732: -- Attachment: (was: drillbit.log) > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > min(length(c_customer_id)), max(c_current_cdemo_sk), 
max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Updated] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5732: -- Attachment: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill drillbit.log.83 > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > 
min(length(c_customer_id)), max(c_current_cdemo_sk), max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Commented] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139636#comment-16139636 ] Robert Hou commented on DRILL-5732: --- I am using the same commit ID as above: | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), 
min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > min(length(c_customer_id)), max(c_current_cdemo_sk), max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. 
> Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
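A quick side-check on the numbers in the DRILL-5732 error above (the figures come from the report itself; the 2:1 ratio is an observation about these specific values, not an explanation given anywhere in the thread): the allocator limit of 52428800 bytes is exactly half the configured `planner.memory.max_query_memory_per_node` (104857600), and the sort's 71964288 bytes of allocated memory already exceeds that limit, which is why the sv2 allocation is refused.

```python
# Side-note arithmetic on the figures quoted in the DRILL-5732 error message.
max_query_memory_per_node = 104857600  # from the ALTER SESSION in the repro (100 MiB)
allocator_limit = 52428800             # "allocator limit" in the error
allocated = 71964288                   # "allocated memory" in the error

# The reported limit is exactly half the configured per-node query memory.
assert allocator_limit == max_query_memory_per_node // 2

# The sort is already over its limit, so any further allocation must fail.
assert allocated > allocator_limit
print(allocated - allocator_limit)  # 19535488 bytes over the limit
```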
[jira] [Updated] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5744: -- Attachment: (was: drillbit.log) > External sort fails with OOM error > -- > > Key: DRILL-5744 > URL: https://issues.apache.org/jira/browse/DRILL-5744 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 266275e5-ebdb-14ae-d52d-00fa3a154f6d.sys.drill > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 152428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.width.max_per_query` = 1000; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > Stack trace is: > {noformat} > 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO > o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes > ran out of memory while executing the query. (Unable to allocate buffer of > size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944) > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. 
> Unable to allocate buffer of size 4194304 (rounded from 3276750) due to > memory limit. Current allocation: 79986944 > [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_51] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_51] > at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] > Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to > allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
> Current allocation: 79986944 > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) >
[jira] [Updated] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5744: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; alter session set `planner.width.max_per_node` = 1; alter session set `planner.disable_exchanges` = true; alter session set `planner.width.max_per_query` = 1; alter session set `planner.memory.max_query_memory_per_node` = 152428800; select count(*) from ( select * from ( select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid from ( select d.type type, d.uid uid, flatten(d.map.rm) rms from dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid ) s1 ) s2 order by s2.rms.mapid ); ALTER SESSION SET `exec.sort.disable_managed` = true; alter session set `planner.width.max_per_node` = 17; alter session set `planner.disable_exchanges` = false; alter session set `planner.width.max_per_query` = 1000; alter session set `planner.memory.max_query_memory_per_node` = 2147483648; {noformat} Stack trace is: {noformat} 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944) org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query. Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
Current allocation: 79986944 [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51] at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944 at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:262) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:374) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164)
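A side note on the buffer sizes quoted in the DRILL-5744 trace above: the error reports a "buffer of size 4194304 (rounded from 3276750)", and 4194304 is exactly the next power of two at or above 3276750. The same pattern appears later in this archive in DRILL-5443 (2097152 rounded from 2097120). This power-of-two rounding is an inference from the logged numbers, not a statement of the allocator's full policy.

```python
# Check that the "rounded from" pairs in these Drill OOM messages are
# consistent with rounding a request up to the next power of two.
def next_pow2(n: int) -> int:
    """Smallest power of two >= n (for positive n)."""
    return 1 << (n - 1).bit_length()

# DRILL-5744: "buffer of size 4194304 (rounded from 3276750)"
assert next_pow2(3276750) == 4194304
# DRILL-5443: "buffer of size 2097152 (rounded from 2097120)"
assert next_pow2(2097120) == 2097152
```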
[jira] [Updated] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5744: -- Attachment: (was: 266275e5-ebdb-14ae-d52d-00fa3a154f6d.sys.drill) > External sort fails with OOM error > -- > > Key: DRILL-5744 > URL: https://issues.apache.org/jira/browse/DRILL-5744 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 152428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.width.max_per_query` = 1000; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > Stack trace is: > {noformat} > 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO > o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes > ran out of memory while executing the query. (Unable to allocate buffer of > size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944) > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. 
> Unable to allocate buffer of size 4194304 (rounded from 3276750) due to > memory limit. Current allocation: 79986944 > [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_51] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_51] > at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] > Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to > allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
> Current allocation: 79986944 > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_51] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_51] > at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] > Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to > allocate buffer of size 4194304 (rounded from 3276750) due to memory limit.
[jira] [Updated] (DRILL-5744) External sort fails with OOM error
[ https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5744: -- Attachment: 265b163b-cf44-d2ff-2e70-4cd746b56611.sys.drill q34.drillbit.log > External sort fails with OOM error > -- > > Key: DRILL-5744 > URL: https://issues.apache.org/jira/browse/DRILL-5744 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 265b163b-cf44-d2ff-2e70-4cd746b56611.sys.drill, > q34.drillbit.log > > > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 152428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.width.max_per_query` = 1000; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > Stack trace is: > {noformat} > 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO > o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes > ran out of memory while executing the query. (Unable to allocate buffer of > size 4194304 (rounded from 3276750) due to memory limit. 
Current allocation: 79986944) > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate buffer of size 4194304 (rounded from 3276750) due to > memory limit. Current allocation: 79986944 > [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) > [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_51] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_51] > at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51] > Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to > allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
> Current allocation: 79986944 > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) > ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33) > ~[vector-1.12.0-SNAPSHOT] > at > org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46) > ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85) > ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Commented] (DRILL-5723) Support System/Session Internal Options
[ https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151254#comment-16151254 ] Robert Hou commented on DRILL-5723: --- We should be careful about making some options system-only. That may be the right scope from a user's point of view, but QA needs to set some of these options at the session level so that they apply to one test but not another. A case in point is memory per query. Normally this would be a system option, but some QA tests need to run with one view of the system while other QA tests need to run with a different view of the system. I assume some unit tests have a similar requirement. > Support System/Session Internal Options > --- > > Key: DRILL-5723 > URL: https://issues.apache.org/jira/browse/DRILL-5723 > Project: Apache Drill > Issue Type: New Feature >Reporter: Timothy Farkas >Assignee: Timothy Farkas > > This is a feature proposed by [~ben-zvi]. > Currently all the options are accessible by the user in sys.options. We would > like to add internal options which can be altered, but are not visible in the > sys.options table. These internal options could be seen by another alias > select * from internal.options. The intention would be to put new options we > weren't comfortable with exposing to the end user in this table. > After the options and their corresponding features are considered stable they > could be changed to appear in the sys.option table. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
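The scoping concern in the comment above can be made concrete with Drill's two option scopes (a sketch; the option name is taken from the repro scripts elsewhere in this archive, and the system-only behavior described in the last comment is hypothetical, since DRILL-5723 only proposes it):

```sql
-- System scope: applies to every query on the cluster until changed again.
ALTER SYSTEM SET `planner.memory.max_query_memory_per_node` = 104857600;

-- Session scope: applies only to the current connection, so one QA run can
-- use a small memory budget while a concurrent run keeps the default.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 104857600;

-- If such an option were made system-only, the ALTER SESSION form would no
-- longer be available and tests could not isolate their settings per connection.
```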
[jira] [Commented] (DRILL-5443) Managed External Sort fails with OOM while spilling to disk
[ https://issues.apache.org/jira/browse/DRILL-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146377#comment-16146377 ] Robert Hou commented on DRILL-5443: --- The stack trace is: {noformat} 2017-08-29 16:55:26,610 [265a014b-8cae-30b5-adab-ff030b6c7086:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (Unable to allocate buffer of size 2097152 (rounded from 2097120) due to memory limit. Current allocation: 45088896) org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query. Unable to allocate buffer of size 2097152 (rounded from 2097120) due to memory limit. Current allocation: 45088896 [Error Id: 74361b0c-733f-453d-bc82-eda6bad4e64a ] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_111] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_111] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111] Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 2097152 (rounded from 2097120) due to memory limit. 
Current allocation: 45088896 at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.BigIntVector.reAlloc(BigIntVector.java:252) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe(BigIntVector.java:452) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe(RepeatedBigIntVector.java:355) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe(RepeatedBigIntVector.java:220) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe(RepeatedBigIntVector.java:202) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe(MapVector.java:225) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe(MapVector.java:225) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.vector.complex.MapVector.copyFromSafe(MapVector.java:82) ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.test.generated.PriorityQueueCopierGen408.doCopy(PriorityQueueCopierTemplate.java:27) ~[na:na] at org.apache.drill.exec.test.generated.PriorityQueueCopierGen408.next(PriorityQueueCopierTemplate.java:77) ~[na:na] at org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:267) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:374) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] 
at org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:225) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
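The DRILL-5443 error above reads "size 2097152 (rounded from 2097120)": the allocator rounded the 2097120-byte request up to 2097152, which is exactly 2**21. A quick check of that rounding (the power-of-two policy is inferred from the numbers in the message, not from the Drill source):

```python
def round_up_to_power_of_two(n):
    # Round a request up to the next power of two, matching the error
    # message's "rounded from 2097120" to 2097152 (2**21).
    p = 1
    while p < n:
        p <<= 1
    return p

assert round_up_to_power_of_two(2097120) == 2097152 == 2 ** 21
```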
[jira] [Commented] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146479#comment-16146479 ] Robert Hou commented on DRILL-5732: --- This query now completes. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > 
min(length(c_customer_id)), max(c_current_cdemo_sk), max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
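The numbers in the DRILL-5732 error above line up with the session settings in the query: the reported allocator limit, 52428800 (50 MiB), is exactly half of the configured planner.memory.max_query_memory_per_node of 104857600 (100 MiB). Reading that as the sort receiving half the query budget is an inference from these values, not a documented formula:

```python
# Values copied from the DRILL-5732 report above.
max_query_memory_per_node = 104857600  # session setting (100 MiB)
allocator_limit = 52428800             # from the error message (50 MiB)
allocated = 71964288                   # memory in use at failure

# The limit is exactly half the query budget (an inference, not a spec).
assert max_query_memory_per_node // 2 == allocator_limit
# Allocation already exceeds the limit, hence "not enough batchGroups to spill".
assert allocated > allocator_limit
```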
[jira] [Updated] (DRILL-5443) Managed External Sort fails with OOM while spilling to disk
[ https://issues.apache.org/jira/browse/DRILL-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5443: -- Attachment: drill5443.drillbit.log 265a014b-8cae-30b5-adab-ff030b6c7086.sys.drill > Managed External Sort fails with OOM while spilling to disk > --- > > Key: DRILL-5443 > URL: https://issues.apache.org/jira/browse/DRILL-5443 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.10.0, 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 265a014b-8cae-30b5-adab-ff030b6c7086.sys.drill, > 27016969-ef53-40dc-b582-eea25371fa1c.sys.drill, drill5443.drillbit.log, > drillbit.log > > > git.commit.id.abbrev=3e8b01d > The below query fails with an OOM > {code} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.width.max_per_query` = 1; > alter session set `planner.memory.max_query_memory_per_node` = 52428800; > select s1.type type, flatten(s1.rms.rptd) rptds from (select d.type type, > d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid) s1 > order by s1.rms.mapid; > {code} > Exception from the logs > {code} > 2017-04-24 17:22:59,439 [27016969-ef53-40dc-b582-eea25371fa1c:frag:0:0] INFO > o.a.d.e.p.i.x.m.ExternalSortBatch - User Error Occurred: External Sort > encountered an error while spilling to disk (Unable to allocate buffer of > size 524288 (rounded from 307197) due to memory limit. 
Current allocation: > 25886728) > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: External > Sort encountered an error while spilling to disk > [Error Id: a64e3790-3a34-42c8-b4ea-4cb1df780e63 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) > ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.doMergeAndSpill(ExternalSortBatch.java:1445) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:1376) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeRuns(ExternalSortBatch.java:1372) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.consolidateBatches(ExternalSortBatch.java:1299) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeSpilledRuns(ExternalSortBatch.java:1195) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:689) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) > 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) > [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT] > at >
[jira] [Resolved] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou resolved DRILL-5732. --- Resolution: Not A Problem > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > min(length(c_customer_id)), max(c_current_cdemo_sk), 
max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Closed] (DRILL-5732) Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou closed DRILL-5732. - I have verified that this query completes. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. > - > > Key: DRILL-5732 > URL: https://issues.apache.org/jira/browse/DRILL-5732 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Attachments: 26621eb2-daec-cef9-efed-5986e72a750a.sys.drill, > drillbit.log.83 > > > git commit id: > {noformat} > | 1.12.0-SNAPSHOT | e9065b55ea560e7f737d6fcb4948f9e945b9b14f | DRILL-5660: > Parquet metadata caching improvements | 15.08.2017 @ 09:31:00 PDT | > r...@qa-node190.qa.lab | 15.08.2017 @ 13:29:26 PDT | > {noformat} > Query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.disable_exchanges` = true; > alter session set `planner.memory.max_query_memory_per_node` = 104857600; > alter session set `planner.width.max_per_node` = 1; > alter session set `planner.width.max_per_query` = 1; > select max(col1), max(cs_sold_date_sk), max(cs_sold_time_sk), > max(cs_ship_date_sk), max(cs_bill_customer_sk), max(cs_bill_cdemo_sk), > max(cs_bill_hdemo_sk), max(cs_bill_addr_sk), max(cs_ship_customer_sk), > max(cs_ship_cdemo_sk), max(cs_ship_hdemo_sk), max(cs_ship_addr_sk), > max(cs_call_center_sk), max(cs_catalog_page_sk), max(cs_ship_mode_sk), > min(cs_warehouse_sk), max(cs_item_sk), max(cs_promo_sk), > max(cs_order_number), max(cs_quantity), max(cs_wholesale_cost), > max(cs_list_price), max(cs_sales_price), max(cs_ext_discount_amt), > min(cs_ext_sales_price), max(cs_ext_wholesale_cost), min(cs_ext_list_price), > min(cs_ext_tax), min(cs_coupon_amt), max(cs_ext_ship_cost), max(cs_net_paid), > max(cs_net_paid_inc_tax), min(cs_net_paid_inc_ship), > min(cs_net_paid_inc_ship_tax), min(cs_net_profit), min(c_customer_sk), > min(length(c_customer_id)), 
max(c_current_cdemo_sk), max(c_current_hdemo_sk), > min(c_current_addr_sk), min(c_first_shipto_date_sk), > min(c_first_sales_date_sk), min(length(c_salutation)), > min(length(c_first_name)), min(length(c_last_name)), > min(length(c_preferred_cust_flag)), max(c_birth_day), min(c_birth_month), > min(c_birth_year), max(c_last_review_date), c_email_address from (select > cs_sold_date_sk+cs_sold_time_sk col1, * from > dfs.`/drill/testdata/resource-manager/md1362` order by c_email_address nulls > first) d where d.col1 > 2536816 and c_email_address is not null group by > c_email_address; > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.disable_exchanges` = false; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > alter session set `planner.width.max_per_node` = 17; > alter session set `planner.width.max_per_query` = 1000; > {noformat} > Here is the stack trace: > {noformat} > 2017-08-18 13:15:27,052 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.t.g.SingleBatchSorterGen27 - Took 6445 us to sort 9039 records > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - Copier allocator current allocation 0 > 2017-08-18 13:15:27,420 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] DEBUG > o.a.d.e.p.i.xsort.ExternalSortBatch - mergeAndSpill: starting total size in > memory = 71964288 > 2017-08-18 13:15:27,421 [2668b522-5833-8fd2-0b6d-e685197f0ae3:frag:0:0] INFO > o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred: One or more nodes > ran out of memory while executing the query. > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more > nodes ran out of memory while executing the query. > Unable to allocate sv2 for 9039 records, and not enough batchGroups to spill. 
> batchGroups.size 1 > spilledBatchGroups.size 0 > allocated memory 71964288 > allocator limit 52428800 > [Error Id: 7b248f12-2b31-4013-86b6-92e6c842db48 ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) > ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.newSV2(ExternalSortBatch.java:637) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:379) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Updated] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5753: -- Attachment: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill drillbit.log > Managed External Sort: One or more nodes ran out of memory while executing > the query. > - > > Key: DRILL-5753 > URL: https://issues.apache.org/jira/browse/DRILL-5753 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill, > drillbit.log > > > The query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.memory.max_query_memory_per_node` = 1252428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > The stack trace is: > {noformat} > 2017-08-30 03:35:10,479 [BitServer-5] DEBUG > o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: > State change requested RUNNING --> FAILED > org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One > or more nodes ran out of memory while executing the query. > Unable to allocate buffer of size 4194304 due to memory limit. Current > allocation: 43960640 > Fragment 2:9 > [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010] > (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate > buffer of size 4194304 due to memory limit. 
Current allocation: 43960640 > org.apache.drill.exec.memory.BaseAllocator.buffer():238 > org.apache.drill.exec.memory.BaseAllocator.buffer():213 > org.apache.drill.exec.vector.BigIntVector.reAlloc():252 > org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452 > org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355 > org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220 > > org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82 > > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47 > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77 > > org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.physical.impl.BaseRootExec.next():105 > > org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92 > org.apache.drill.exec.physical.impl.BaseRootExec.next():95 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 > java.security.AccessController.doPrivileged():-2 > 
javax.security.auth.Subject.doAs():415 > org.apache.hadoop.security.UserGroupInformation.doAs():1595 > org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 > org.apache.drill.common.SelfCleaningRunnable.run():38 > java.util.concurrent.ThreadPoolExecutor.runWorker():1145 > java.util.concurrent.ThreadPoolExecutor$Worker.run():615 > java.lang.Thread.run():744 > at > org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at > org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) >
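The DRILL-5753 trace above fails in BigIntVector.reAlloc requesting 4194304 bytes (4 MiB, 2**22), on top of a current allocation of 43960640. If reAlloc doubles the buffer on each growth step (an assumption suggested by the power-of-two request sizes here and in DRILL-5443, not confirmed from the Drill source), the failing request is twice the previous 2 MiB buffer:

```python
# Values copied from the DRILL-5753 error message above.
request = 4194304      # the buffer size that could not be allocated
allocation = 43960640  # current allocation when the request failed

assert request == 2 ** 22
# Consistent with a doubling growth policy: 2 MiB -> 4 MiB.
assert request == 2 * 2097152
# The request alone is small; it is the cumulative allocation plus the
# request that exceeds the fragment's memory limit.
assert allocation + request == 48154944
```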
[jira] [Commented] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149641#comment-16149641 ] Robert Hou commented on DRILL-5753: --- Sorry, got interrupted mid-stream yesterday, and forgot to finish the Jira. > Managed External Sort: One or more nodes ran out of memory while executing > the query. > - > > Key: DRILL-5753 > URL: https://issues.apache.org/jira/browse/DRILL-5753 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill, > drillbit.log > > > The query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.memory.max_query_memory_per_node` = 1252428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > The stack trace is: > {noformat} > 2017-08-30 03:35:10,479 [BitServer-5] DEBUG > o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: > State change requested RUNNING --> FAILED > org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One > or more nodes ran out of memory while executing the query. > Unable to allocate buffer of size 4194304 due to memory limit. Current > allocation: 43960640 > Fragment 2:9 > [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010] > (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate > buffer of size 4194304 due to memory limit. 
Current allocation: 43960640 > org.apache.drill.exec.memory.BaseAllocator.buffer():238 > org.apache.drill.exec.memory.BaseAllocator.buffer():213 > org.apache.drill.exec.vector.BigIntVector.reAlloc():252 > org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452 > org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355 > org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220 > > org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82 > > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47 > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77 > > org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.physical.impl.BaseRootExec.next():105 > > org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92 > org.apache.drill.exec.physical.impl.BaseRootExec.next():95 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 > java.security.AccessController.doPrivileged():-2 > 
javax.security.auth.Subject.doAs():415 > org.apache.hadoop.security.UserGroupInformation.doAs():1595 > org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 > org.apache.drill.common.SelfCleaningRunnable.run():38 > java.util.concurrent.ThreadPoolExecutor.runWorker():1145 > java.util.concurrent.ThreadPoolExecutor$Worker.run():615 > java.lang.Thread.run():744 > at > org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521) > [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] > at >
[jira] [Comment Edited] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.
[ https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149641#comment-16149641 ] Robert Hou edited comment on DRILL-5753 at 8/31/17 9:34 PM: Sorry, got interrupted mid-stream yesterday, and forgot to finish the Jira. Just attached them. was (Author: rhou): Sorry, got interrupted mid-stream yesterday, and forgot to finish the Jira. > Managed External Sort: One or more nodes ran out of memory while executing > the query. > - > > Key: DRILL-5753 > URL: https://issues.apache.org/jira/browse/DRILL-5753 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill, > drillbit.log > > > The query is: > {noformat} > ALTER SESSION SET `exec.sort.disable_managed` = false; > alter session set `planner.memory.max_query_memory_per_node` = 1252428800; > select count(*) from ( > select * from ( > select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid > from ( > select d.type type, d.uid uid, flatten(d.map.rm) rms from > dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid > ) s1 > ) s2 > order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist > ); > ALTER SESSION SET `exec.sort.disable_managed` = true; > alter session set `planner.memory.max_query_memory_per_node` = 2147483648; > {noformat} > The stack trace is: > {noformat} > 2017-08-30 03:35:10,479 [BitServer-5] DEBUG > o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: > State change requested RUNNING --> FAILED > org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One > or more nodes ran out of memory while executing the query. > Unable to allocate buffer of size 4194304 due to memory limit. 
Current > allocation: 43960640 > Fragment 2:9 > [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010] > (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate > buffer of size 4194304 due to memory limit. Current allocation: 43960640 > org.apache.drill.exec.memory.BaseAllocator.buffer():238 > org.apache.drill.exec.memory.BaseAllocator.buffer():213 > org.apache.drill.exec.vector.BigIntVector.reAlloc():252 > org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452 > org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355 > org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220 > > org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > > org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225 > org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82 > > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47 > org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77 > > org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.physical.impl.BaseRootExec.next():105 > > org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92 > 
org.apache.drill.exec.physical.impl.BaseRootExec.next():95 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 > java.security.AccessController.doPrivileged():-2 > javax.security.auth.Subject.doAs():415 > org.apache.hadoop.security.UserGroupInformation.doAs():1595 > org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 > org.apache.drill.common.SelfCleaningRunnable.run():38 > java.util.concurrent.ThreadPoolExecutor.runWorker():1145 > java.util.concurrent.ThreadPoolExecutor$Worker.run():615 > java.lang.Thread.run():744 > at >
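The OutOfMemoryException in the trace above comes from `BigIntVector.reAlloc` requesting a 4,194,304-byte (4 MiB) buffer while the fragment had already allocated 43,960,640 bytes. A minimal Python sketch of that failure mode, with a hypothetical hard limit for illustration (the real cap is derived from `planner.memory.max_query_memory_per_node` split across fragments, and this is not Drill's allocator code):

```python
class OutOfMemoryException(Exception):
    pass

class Allocator:
    """Toy allocator with a hard byte limit, mimicking the failure mode in the log."""
    def __init__(self, limit):
        self.limit = limit
        self.allocated = 0

    def buffer(self, size):
        # Reject the request if it would push total allocation past the limit.
        if self.allocated + size > self.limit:
            raise OutOfMemoryException(
                f"Unable to allocate buffer of size {size} due to memory limit. "
                f"Current allocation: {self.allocated}")
        self.allocated += size
        return bytearray(size)

# Numbers from the log: ~41.9 MiB already allocated, then a 4 MiB reAlloc request.
# The 46 MB limit is a hypothetical value chosen to reproduce the rejection.
alloc = Allocator(limit=46_000_000)
alloc.allocated = 43_960_640
try:
    alloc.buffer(4_194_304)
    oom = False
except OutOfMemoryException:
    oom = True
```

Because vectors grow by doubling, the new (larger) buffer must fit under the limit before the old one is released, which is why a sort that is close to its budget fails on what looks like a modest request.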
[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector
[ https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16149648#comment-16149648 ] Robert Hou commented on DRILL-5670: --- Attached log file and profile. > Varchar vector throws an assertion error when allocating a new vector > - > > Key: DRILL-5670 > URL: https://issues.apache.org/jira/browse/DRILL-5670 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Rahul Challapalli >Assignee: Paul Rogers > Fix For: 1.12.0 > > Attachments: 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, > 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, > drillbit.out, drill-override.conf > > > I am running this test on a private branch of [paul's > repository|https://github.com/paul-rogers/drill]. Below is the commit info > {code} > git.commit.id.abbrev=d86e16c > git.commit.user.email=prog...@maprtech.com > git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an > improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the > merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- > DRILL-5522\: OOM during the merge and spill process of the managed external > sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of > external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable > vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to > initialize the offset vector\n\nAll of the bugs have to do with handling > low-memory conditions, and with\ncorrectly estimating the sizes of vectors, > even when those vectors come\nfrom the spill file or from an exchange. 
Hence, > the changes for all of\nthe above issues are interrelated.\n > git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an > improvements > git.commit.user.name=Paul Rogers > git.build.user.name=Rahul Challapalli > git.commit.id.describe=0.9.0-1078-gd86e16c > git.build.user.email=challapallira...@gmail.com > git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659 > git.commit.time=05.07.2017 @ 20\:34\:39 PDT > git.build.time=12.07.2017 @ 14\:27\:03 PDT > git.remote.origin.url=g...@github.com\:paul-rogers/drill.git > {code} > Below query fails with an Assertion Error > {code} > 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET > `exec.sort.disable_managed` = false; > +---+-+ > | ok | summary | > +---+-+ > | true | exec.sort.disable_managed updated. | > +---+-+ > 1 row selected (1.044 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.memory.max_query_memory_per_node` = 482344960; > +---++ > | ok | summary | > +---++ > | true | planner.memory.max_query_memory_per_node updated. | > +---++ > 1 row selected (0.372 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_node` = 1; > +---+--+ > | ok | summary| > +---+--+ > | true | planner.width.max_per_node updated. | > +---+--+ > 1 row selected (0.292 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> alter session set > `planner.width.max_per_query` = 1; > +---+---+ > | ok |summary| > +---+---+ > | true | planner.width.max_per_query updated. 
| > +---+---+ > 1 row selected (0.25 seconds) > 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from > dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by > columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50], > > columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520], > columns[1410], > columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350], > >
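The session settings above cap per-node query memory at 482,344,960 bytes (exactly 460 MiB) and force both per-node and per-query width to 1, so the single sort fragment receives the whole budget. A sketch of that arithmetic; the divide-by-(sorts × width) formula is an assumed simplification of the planner's memory budgeting, not code taken from Drill:

```python
def sort_memory_budget(max_query_memory_per_node, num_sorts, width_per_node):
    # Assumed budgeting rule: the per-node query memory is split evenly across
    # buffering operators (here, sorts) times the fragment width on the node.
    return max_query_memory_per_node // (num_sorts * width_per_node)

# Values from the ALTER SESSION statements in this comment.
budget = sort_memory_budget(482_344_960, num_sorts=1, width_per_node=1)
```

With width forced to 1, any remaining OOM points at the operator's own estimate of row width (3500 columns per row here) rather than at contention between fragments.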
[jira] [Created] (DRILL-5840) A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper
Robert Hou created DRILL-5840: - Summary: A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper Key: DRILL-5840 URL: https://issues.apache.org/jira/browse/DRILL-5840 Project: Apache Drill Issue Type: Bug Components: Execution - Relational Operators Affects Versions: 1.11.0 Reporter: Robert Hou Assignee: Paul Rogers Fix For: 1.12.0 Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; select count(*) from (select * from dfs.`/drill/testdata/resource-manager/250wide.tbl` order by columns[0])d where d.columns[0] = 'ljdfhwuehnoiueyf'; {noformat} Query tries to complete, but cannot. From the drillbit.log: {noformat} 2017-10-03 16:28:14,892 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] DEBUG o.a.drill.exec.work.foreman.Foreman - 262bec7f-3539-0dd7-6fea-f2959f9df3b6: State change requested RUNNING --> COMPLETED 2017-10-04 01:47:27,698 [UserServer-1] DEBUG o.a.d.e.r.u.UserServerRequestHandler - Received query to run. Returning query handle. 2017-10-04 03:30:02,916 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] WARN o.a.d.exec.work.foreman.QueryManager - Failure while trying to delete the estore profile for this query.
org.apache.drill.common.exceptions.DrillRuntimeException: unable to delete node at /running/262bec7f-3539-0dd7-6fea-f2959f9df3b6 at org.apache.drill.exec.coord.zk.ZookeeperClient.delete(ZookeeperClient.java:343) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.coord.zk.ZkEphemeralStore.remove(ZkEphemeralStore.java:108) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.updateEphemeralState(QueryManager.java:293) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.recordNewState(Foreman.java:1043) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:964) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:113) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1025) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1018) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1020) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1038) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66) 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.access$400(QueryManager.java:66) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:525) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentStatusReporter.sendStatus(FragmentStatusReporter.java:124) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentStatusReporter.stateChanged(FragmentStatusReporter.java:94) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:304) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
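The DrillRuntimeException above surfaces a single failed ZooKeeper delete of the ephemeral `/running/<query-id>` profile node. A hedged sketch of one obvious mitigation, retrying the delete with exponential backoff before surfacing the failure; `delete_fn`, `TransientZkError`, and the retry policy are illustrative stand-ins, not Drill's or ZooKeeper's actual API:

```python
import time

class TransientZkError(Exception):
    """Stand-in for a recoverable ZooKeeper client error (e.g. connection loss)."""
    pass

def delete_with_retry(delete_fn, path, attempts=3, backoff_s=0.0):
    # Try the delete a few times, doubling the wait between attempts, and only
    # raise once every attempt has failed.
    last = None
    for i in range(attempts):
        try:
            delete_fn(path)
            return True
        except TransientZkError as e:
            last = e
            time.sleep(backoff_s * (2 ** i))
    raise RuntimeError(f"unable to delete node at {path}") from last

# Demo with a fake client that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_delete(path):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientZkError("connection loss")

ok = delete_with_retry(flaky_delete, "/running/262bec7f-3539-0dd7-6fea-f2959f9df3b6")
```

Note that retries only help if the session is still alive; once the ZK session expires, the ephemeral node is removed by the server anyway, which matches the "cannot communicate with Zookeeper" symptom in the summary.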
[jira] [Updated] (DRILL-5840) A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper
[ https://issues.apache.org/jira/browse/DRILL-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5840: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; select count(*) from (select * from dfs.`/drill/testdata/resource-manager/250wide.tbl` order by columns[0])d where d.columns[0] = 'ljdfhwuehnoiueyf'; {noformat} Query tries to complete, but cannot. From the drillbit.log: {noformat} 2017-10-03 16:28:14,892 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] DEBUG o.a.drill.exec.work.foreman.Foreman - 262bec7f-3539-0dd7-6fea-f2959f9df3b6: State change requested RUNNING --> COMPLETED 2017-10-04 01:47:27,698 [UserServer-1] DEBUG o.a.d.e.r.u.UserServerRequestHandler - Received query to run. Returning query handle. 2017-10-04 03:30:02,916 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] WARN o.a.d.exec.work.foreman.QueryManager - Failure while trying to delete the estore profile for this query. org.apache.drill.common.exceptions.DrillRuntimeException: unable to delete node at /running/262bec7f-3539-0dd7-6fea-f2959f9df3b6 at org.apache.drill.exec.coord.zk.ZookeeperClient.delete(ZookeeperClient.java:343) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.coord.zk.ZkEphemeralStore.remove(ZkEphemeralStore.java:108) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.updateEphemeralState(QueryManager.java:293) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.recordNewState(Foreman.java:1043) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:964) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:113) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1025) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1018) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1020) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1038) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.access$400(QueryManager.java:66) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:525) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentStatusReporter.sendStatus(FragmentStatusReporter.java:124) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at 
org.apache.drill.exec.work.fragment.FragmentStatusReporter.stateChanged(FragmentStatusReporter.java:94) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:304) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
[jira] [Updated] (DRILL-5840) A query that includes sort completes, and then loses Drill connection. Drill becomes unresponsive, and cannot restart because it cannot communicate with Zookeeper
[ https://issues.apache.org/jira/browse/DRILL-5840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Hou updated DRILL-5840: -- Description: Query is: {noformat} ALTER SESSION SET `exec.sort.disable_managed` = false; select count(*) from (select * from dfs.`/drill/testdata/resource-manager/250wide.tbl` order by columns[0])d where d.columns[0] = 'ljdfhwuehnoiueyf'; {noformat} Query tries to complete, but cannot. It takes 20 hours from the time the query tries to complete, to the time Drill finally loses its connection. From the drillbit.log: {noformat} 2017-10-03 16:28:14,892 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] DEBUG o.a.drill.exec.work.foreman.Foreman - 262bec7f-3539-0dd7-6fea-f2959f9df3b6: State change requested RUNNING --> COMPLETED 2017-10-04 01:47:27,698 [UserServer-1] DEBUG o.a.d.e.r.u.UserServerRequestHandler - Received query to run. Returning query handle. 2017-10-04 03:30:02,916 [262bec7f-3539-0dd7-6fea-f2959f9df3b6:frag:0:0] WARN o.a.d.exec.work.foreman.QueryManager - Failure while trying to delete the estore profile for this query.
org.apache.drill.common.exceptions.DrillRuntimeException: unable to delete node at /running/262bec7f-3539-0dd7-6fea-f2959f9df3b6 at org.apache.drill.exec.coord.zk.ZookeeperClient.delete(ZookeeperClient.java:343) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.coord.zk.ZkEphemeralStore.remove(ZkEphemeralStore.java:108) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.updateEphemeralState(QueryManager.java:293) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.recordNewState(Foreman.java:1043) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:964) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:113) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1025) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1018) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1020) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1038) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66) 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager.access$400(QueryManager.java:66) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:525) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentStatusReporter.sendStatus(FragmentStatusReporter.java:124) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentStatusReporter.stateChanged(FragmentStatusReporter.java:94) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:304) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at
[jira] [Commented] (DRILL-5889) sqlline loses RPC connection
[ https://issues.apache.org/jira/browse/DRILL-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210318#comment-16210318 ] Robert Hou commented on DRILL-5889: --- I allocated 10GB to Drill. > sqlline loses RPC connection > > > Key: DRILL-5889 > URL: https://issues.apache.org/jira/browse/DRILL-5889 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Pritesh Maker > Attachments: 26183ef9-44b2-ef32-adf8-cc2b5ba9f9c0.sys.drill, > drillbit.log > > > Query is: > {noformat} > alter session set `planner.memory.max_query_memory_per_node` = 10737418240; > select count(*), max(`filename`) from dfs.`/drill/testdata/hash-agg/data1` > group by no_nulls_col, nulls_col; > {noformat} > Error is: > {noformat} > 0: jdbc:drill:drillbit=10.10.100.190> select count(*), max(`filename`) from > dfs.`/drill/testdata/hash-agg/data1` group by no_nulls_col, nulls_col; > Error: CONNECTION ERROR: Connection /10.10.100.190:45776 <--> > /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? > [Error Id: db4aea70-11e6-4e63-b0cc-13cdba0ee87a ] (state=,code=0) > {noformat} > From drillbit.log: > 2017-10-18 14:04:23,044 [UserServer-1] INFO > o.a.drill.exec.rpc.user.UserServer - RPC connection /10.10.100.190:31010 <--> > /10.10.100.190:45776 (user server) timed out. Timeout was set to 30 seconds. > Closing connection. 
> Plan is: > {noformat} > | 00-00 Screen > 00-01 Project(EXPR$0=[$0], EXPR$1=[$1]) > 00-02 UnionExchange > 01-01 Project(EXPR$0=[$2], EXPR$1=[$3]) > 01-02 HashAgg(group=[{0, 1}], EXPR$0=[$SUM0($2)], EXPR$1=[MAX($3)]) > 01-03 Project(no_nulls_col=[$0], nulls_col=[$1], EXPR$0=[$2], > EXPR$1=[$3]) > 01-04 HashToRandomExchange(dist0=[[$0]], dist1=[[$1]]) > 02-01 UnorderedMuxExchange > 03-01 Project(no_nulls_col=[$0], nulls_col=[$1], > EXPR$0=[$2], EXPR$1=[$3], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, > hash32AsDouble($0, 1301011))]) > 03-02 HashAgg(group=[{0, 1}], EXPR$0=[COUNT()], > EXPR$1=[MAX($2)]) > 03-03 Scan(groupscan=[ParquetGroupScan > [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/hash-agg/data1]], > selectionRoot=maprfs:/drill/testdata/hash-agg/data1, numFiles=1, > usedMetadataFile=false, columns=[`no_nulls_col`, `nulls_col`, `filename`]]]) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
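A quick sanity check on the numbers in the comment above: the session value 10737418240 matches the stated "10GB" only when GB is read as GiB (2^30 bytes):

```python
# Verify that the planner.memory.max_query_memory_per_node value from the
# session really is 10 GiB.
GIB = 2**30
setting = 10_737_418_240
assert setting == 10 * GIB
```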
[jira] [Commented] (DRILL-5889) sqlline loses RPC connection
[ https://issues.apache.org/jira/browse/DRILL-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16210315#comment-16210315 ] Robert Hou commented on DRILL-5889: --- It may take time to display all the contents in your web browser. I would download the file first. So display the contents, and then save the page to disk. The error is on line 1253091. > sqlline loses RPC connection > > > Key: DRILL-5889 > URL: https://issues.apache.org/jira/browse/DRILL-5889 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators >Affects Versions: 1.11.0 >Reporter: Robert Hou >Assignee: Pritesh Maker > Attachments: 26183ef9-44b2-ef32-adf8-cc2b5ba9f9c0.sys.drill, > drillbit.log > > > Query is: > {noformat} > alter session set `planner.memory.max_query_memory_per_node` = 10737418240; > select count(*), max(`filename`) from dfs.`/drill/testdata/hash-agg/data1` > group by no_nulls_col, nulls_col; > {noformat} > Error is: > {noformat} > 0: jdbc:drill:drillbit=10.10.100.190> select count(*), max(`filename`) from > dfs.`/drill/testdata/hash-agg/data1` group by no_nulls_col, nulls_col; > Error: CONNECTION ERROR: Connection /10.10.100.190:45776 <--> > /10.10.100.190:31010 (user client) closed unexpectedly. Drillbit down? > [Error Id: db4aea70-11e6-4e63-b0cc-13cdba0ee87a ] (state=,code=0) > {noformat} > From drillbit.log: > 2017-10-18 14:04:23,044 [UserServer-1] INFO > o.a.drill.exec.rpc.user.UserServer - RPC connection /10.10.100.190:31010 <--> > /10.10.100.190:45776 (user server) timed out. Timeout was set to 30 seconds. > Closing connection.
> Plan is: > {noformat} > | 00-00 Screen > 00-01 Project(EXPR$0=[$0], EXPR$1=[$1]) > 00-02 UnionExchange > 01-01 Project(EXPR$0=[$2], EXPR$1=[$3]) > 01-02 HashAgg(group=[{0, 1}], EXPR$0=[$SUM0($2)], EXPR$1=[MAX($3)]) > 01-03 Project(no_nulls_col=[$0], nulls_col=[$1], EXPR$0=[$2], > EXPR$1=[$3]) > 01-04 HashToRandomExchange(dist0=[[$0]], dist1=[[$1]]) > 02-01 UnorderedMuxExchange > 03-01 Project(no_nulls_col=[$0], nulls_col=[$1], > EXPR$0=[$2], EXPR$1=[$3], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1, > hash32AsDouble($0, 1301011))]) > 03-02 HashAgg(group=[{0, 1}], EXPR$0=[COUNT()], > EXPR$1=[MAX($2)]) > 03-03 Scan(groupscan=[ParquetGroupScan > [entries=[ReadEntryWithPath [path=maprfs:///drill/testdata/hash-agg/data1]], > selectionRoot=maprfs:/drill/testdata/hash-agg/data1, numFiles=1, > usedMetadataFile=false, columns=[`no_nulls_col`, `nulls_col`, `filename`]]]) > {noformat}