I tried the same query on v1.11 with *planner.width.max_per_node = 48* (all
the cores on the machine) and the same configuration mentioned in the
previous email, but the query was cancelled too, after around 2 minutes.
When I checked the logs, I found the following error, but no stack trace
for it:

[UserServer-1] INFO org.apache.drill.exec.rpc.user.UserServer - RPC
> connection /127.0.0.1:31010 <--> /127.0.0.1:40244 (user server) timed
> out.  Timeout was set to 30 seconds. Closing connection.
> [UserServer-1] WARN io.netty.channel.DefaultChannelPipeline - An exception
> was thrown by a user handler's exceptionCaught() method while handling the
> following exception:
> io.netty.handler.timeout.ReadTimeoutException
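The 30 seconds in that message matches Drill's user-server RPC timeout, which can be raised in drill-override.conf. The sketch below is an assumption on my part: the property path (`drill.exec.rpc.user.timeout`) is inferred from the log message, so it should be verified against the drill-module.conf shipped with your Drill version before relying on it.

```
drill.exec: {
  rpc: {
    user: {
      # Seconds of inactivity before the user server closes the connection.
      # The log above reports the default of 30; raising it may keep slow
      # clients connected (assumed property name, please verify).
      timeout: 300
    }
  }
}
```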


But after many messages like this one

[26710bae-e082-04a1-3899-1ed76bfa0dc6:frag:3:13] INFO
> org.apache.drill.exec.work.fragment.FragmentExecutor -
> 26710bae-e082-04a1-3899-1ed76bfa0dc6:3:13: State change requested
> CANCELLATION_REQUESTED --> FINISHED
> [26710bae-e082-04a1-3899-1ed76bfa0dc6:frag:3:13] INFO
> org.apache.drill.exec.work.fragment.FragmentStatusReporter -
> 26710bae-e082-04a1-3899-1ed76bfa0dc6:3:13: State to report: CANCELLED
> [drill-executor-123] WARN org.apache.drill.exec.rpc.control.WorkEventBus -
> Fragment 26710bae-e082-04a1-3899-1ed76bfa0dc6:3:13 not found in the work
> bus.


I found this stack trace:

[BitServer-4] INFO org.apache.drill.exec.work.foreman.Foreman - Failure
> while trying communicate query result to initiating client. This would
> happen if a client is disconnected before response notice can be sent.
> org.apache.drill.exec.rpc.RpcException: Failure sending message.
>         at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:124)
>         at
> org.apache.drill.exec.rpc.user.UserServer$BitToUserConnection.sendResult(UserServer.java:167)
>         at
> org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:868)
>         at
> org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:1001)
>         at
> org.apache.drill.exec.work.foreman.Foreman.access$2600(Foreman.java:115)
>         at
> org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1027)
>         at
> org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:1020)
>         at
> org.apache.drill.common.EventProcessor.processEvents(EventProcessor.java:107)
>         at
> org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:65)
>         at
> org.apache.drill.exec.work.foreman.Foreman$StateSwitch.addEvent(Foreman.java:1022)
>         at
> org.apache.drill.exec.work.foreman.Foreman.addToEventQueue(Foreman.java:1040)
>         at
> org.apache.drill.exec.work.foreman.QueryManager.nodeComplete(QueryManager.java:498)
>         at
> org.apache.drill.exec.work.foreman.QueryManager.access$100(QueryManager.java:66)
>         at
> org.apache.drill.exec.work.foreman.QueryManager$NodeTracker.fragmentComplete(QueryManager.java:462)
>         at
> org.apache.drill.exec.work.foreman.QueryManager.fragmentDone(QueryManager.java:147)
>         at
> org.apache.drill.exec.work.foreman.QueryManager.access$400(QueryManager.java:66)
>         at
> org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:525)
>         at
> org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71)
>         at
> org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:94)
>         at
> org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:55)
>         at
> org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:157)
>         at
> org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:53)
>         at
> org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274)
>         at
> org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:244)
>         at
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
>         at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>         at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>         at
> io.netty.handler.timeout.ReadTimeoutHandler.channelRead(ReadTimeoutHandler.java:150)
>         at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>         at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>         at
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>         at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>         at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>         at
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
>         at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>         at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>         at
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>         at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>         at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>         at
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
>         at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>         at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>         at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>         at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>         at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Attempted to send a message
> when connection is no longer valid.
>         at
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
>         at
> org.apache.drill.exec.rpc.RequestIdMap.createNewRpcListener(RequestIdMap.java:88)
>         at
> org.apache.drill.exec.rpc.AbstractRemoteConnection.createNewRpcListener(AbstractRemoteConnection.java:162)
>         at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:117)
>         ... 46 more


I found these messages in my application's logs:

> INFO: [08:28:59] Channel closed /127.0.0.1:40242 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40240 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40238 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40236 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40234 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40232 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40230 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:28:59] Channel closed /127.0.0.1:40228 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:30:58] Channel closed /127.0.0.1:40226 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]
>  INFO: [08:30:58] Channel closed /127.0.0.1:40244 <--> localhost/
> 127.0.0.1:31010.
> [org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete]


I had to put *slf4j-simple-1.7.25.jar* in *jars/3rdparty/* to resolve the
*SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"* warning.

Thanks,
Gelbana

On Sat, Aug 12, 2017 at 2:55 AM, Muhammad Gelbana <m.gelb...@gmail.com>
wrote:

> I'm trying to run the following query
>
> *SELECT op.platform, op.name, op.paymentType, ck.posDiscountName,
> sum(op.amount) amt FROM `dfs`.`/path_to_parquet` op,
> `dfs`.`path_to_parquet2` ck WHERE ck.id = op.check_id GROUP BY
> op.platform, op.name, op.paymentType, ck.posDiscountName LIMIT 2147483647*
>
> I also tried the same query without the LIMIT clause
> <https://issues.apache.org/jira/browse/DRILL-5435> but it still fails for
> the same reason.
> I'm facing the following exception in the logs and I'm not sure how to
> resolve it.
>
> Suppressed: java.lang.IllegalStateException: Memory was leaked by query.
>> Memory leaked: (4194304)
>> Allocator(op:0:0:0:Screen) 1000000/4194304/12582912/10000000000
>> (res/actual/peak/limit)
>>                 at org.apache.drill.exec.memory.BaseAllocator.close(
>> BaseAllocator.java:492)
>>                 at org.apache.drill.exec.ops.OperatorContextImpl.close(
>> OperatorContextImpl.java:141)
>>                 at org.apache.drill.exec.ops.FragmentContext.
>> suppressingClose(FragmentContext.java:422)
>>                 at org.apache.drill.exec.ops.FragmentContext.close(
>> FragmentContext.java:411)
>>                 at org.apache.drill.exec.work.fragment.FragmentExecutor.
>> closeOutResources(FragmentExecutor.java:318)
>>                 at org.apache.drill.exec.work.fragment.FragmentExecutor.
>> cleanup(FragmentExecutor.java:155)
>>                 at org.apache.drill.exec.work.
>> fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>>                 at org.apache.drill.common.SelfCleaningRunnable.run(
>> SelfCleaningRunnable.java:38)
>>                 at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1142)
>>                 at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:617)
>>                 ... 1 more
>>         Suppressed: java.lang.IllegalStateException: Memory was leaked
>> by query. Memory leaked: (4194304)
>> Allocator(frag:0:0) 3000000/4194304/1511949440/30000000000
>> (res/actual/peak/limit)
>>                 at org.apache.drill.exec.memory.BaseAllocator.close(
>> BaseAllocator.java:492)
>>                 at org.apache.drill.exec.ops.FragmentContext.
>> suppressingClose(FragmentContext.java:422)
>>                 at org.apache.drill.exec.ops.FragmentContext.close(
>> FragmentContext.java:416)
>>                 at org.apache.drill.exec.work.fragment.FragmentExecutor.
>> closeOutResources(FragmentExecutor.java:318)
>>                 at org.apache.drill.exec.work.fragment.FragmentExecutor.
>> cleanup(FragmentExecutor.java:155)
>>                 at org.apache.drill.exec.work.
>> fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>>                 at org.apache.drill.common.SelfCleaningRunnable.run(
>> SelfCleaningRunnable.java:38)
>>                 at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1142)
>>                 at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:617)
>>                 ... 1 more
>
>
> The UI is showing the following error
> *org.apache.drill.common.exceptions.UserException: CONNECTION ERROR:
> Connection /1.1.1.1:40834 <--> Gelbana/1.1.1.1:31010 (user client) closed
> unexpectedly. Drillbit down?
> [Error Id: 268bc3a7-114f-4681-984c-05d143f7ebd9 ]*
>
> I understand that this bug has been fixed in 1.9
> <https://issues.apache.org/jira/browse/DRILL-4616>, which is the version
> I'm using. I did what the comments suggested, which is to point Drill at a
> tmp directory that has enough space: I set the JVM option *java.io.tmpdir*
> to */home/mgelbana/server/temp/*, which has over 100 GB of free space, and
> modified the drill-override.conf file to include the following
>
> tmp: {
>>     directories: ["/home/mgelbana/server/temp/"],
>>     filesystem: "file:///"
>>   },
>>   sort: {
>>     external: {
>>       spill: {
>>         batch.size : 4000,
>>         group.size : 100,
>>         threshold : 200,
>>         directories : [ "/home/mgelbana/server/temp/spill" ],
>>         fs : "file:///"
>>       }
>>     }
>>   }
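One thing that may be worth confirming with a config like the one above is that the spill directory actually exists and is writable by the Drillbit process. A minimal sketch (the path is the one from the config, parameterized here so it can be adapted; whether Drill creates it automatically may depend on the version):

```shell
# Pre-create the spill directory referenced in drill-override.conf.
# BASE defaults to a throwaway directory for illustration; in the email
# it would be /home/mgelbana/server/temp
BASE="${BASE:-$(mktemp -d)}"
mkdir -p "$BASE/spill"
# Show ownership and permissions so a permission problem is visible
ls -ld "$BASE/spill"
```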
>
>
> I'm running a single Drillbit on a single machine with 25 GB of heap
> memory and 100 GB of direct memory. The machine has 48 cores (i.e. the
> output of *nproc* on Linux). The relevant planner options are:
>
> *planner.width.max_per_node = 40*
> *planner.memory.max_query_memory_per_node = 8589934592 (8 GB)*
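Since these are system/session options, they can be set from any Drill client with the standard `ALTER SYSTEM` syntax; a sketch of how the values above would be applied (requires a running Drillbit, so this is illustrative only):

```sql
ALTER SYSTEM SET `planner.width.max_per_node` = 40;
ALTER SYSTEM SET `planner.memory.max_query_memory_per_node` = 8589934592;
```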
>
> This is the plan of the query:
>
> 00-00    Screen : rowType = RecordType(ANY platform, ANY name, ANY
>> paymentType, ANY posDiscountName, ANY amt): rowcount = 2.147483647E9,
>> cumulative cost = {7.001208181169999E9 rows, 3.700395926736E10 cpu, 0.0 io,
>> 8.6479703758848E12 network, 1.4449960681199999E10 memory}, id = 24229
>> 00-01      Project(platform=[$0], name=[$1], paymentType=[$2],
>> posDiscountName=[$3], amt=[$4]) : rowType = RecordType(ANY platform, ANY
>> name, ANY paymentType, ANY posDiscountName, ANY amt): rowcount =
>> 2.147483647E9, cumulative cost = {6.786459816469999E9 rows,
>> 3.678921090266E10 cpu, 0.0 io, 8.6479703758848E12 network,
>> 1.4449960681199999E10 memory}, id = 24228
>> 00-02        SelectionVectorRemover : rowType = RecordType(ANY platform,
>> ANY name, ANY paymentType, ANY posDiscountName, ANY amt): rowcount =
>> 2.147483647E9, cumulative cost = {6.786459816469999E9 rows,
>> 3.678921090266E10 cpu, 0.0 io, 8.6479703758848E12 network,
>> 1.4449960681199999E10 memory}, id = 24227
>> 00-03          Limit(fetch=[2147483647]) : rowType = RecordType(ANY
>> platform, ANY name, ANY paymentType, ANY posDiscountName, ANY amt):
>> rowcount = 2.147483647E9, cumulative cost = {4.638976169469999E9 rows,
>> 3.464172725566E10 cpu, 0.0 io, 8.6479703758848E12 network,
>> 1.4449960681199999E10 memory}, id = 24226
>> 00-04            UnionExchange : rowType = RecordType(ANY platform, ANY
>> name, ANY paymentType, ANY posDiscountName, ANY amt): rowcount =
>> 2198378.67, cumulative cost = {2.4914925224699993E9 rows, 2.605179266766E10
>> cpu, 0.0 io, 8.6479703758848E12 network, 1.4449960681199999E10 memory}, id
>> = 24225
>> 01-01              HashAgg(group=[{0, 1, 2, 3}], amt=[SUM($4)]) : rowType
>> = RecordType(ANY platform, ANY name, ANY paymentType, ANY posDiscountName,
>> ANY amt): rowcount = 2198378.67, cumulative cost = {2.489294143799999E9
>> rows, 2.60342056383E10 cpu, 0.0 io, 8.6029475807232E12 network,
>> 1.4449960681199999E10 memory}, id = 24224
>> 01-02                Project(platform=[$0], name=[$1], paymentType=[$2],
>> posDiscountName=[$3], amt=[$4]) : rowType = RecordType(ANY platform, ANY
>> name, ANY paymentType, ANY posDiscountName, ANY amt): rowcount =
>> 2.19837867E7, cumulative cost = {2.4673103570999994E9 rows,
>> 2.50669190235E10 cpu, 0.0 io, 8.6029475807232E12 network, 1.34826740664E10
>> memory}, id = 24223
>> 01-03                  HashToRandomExchange(dist0=[[$0]], dist1=[[$1]],
>> dist2=[[$2]], dist3=[[$3]]) : rowType = RecordType(ANY platform, ANY name,
>> ANY paymentType, ANY posDiscountName, ANY amt, ANY
>> E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 2.19837867E7, cumulative cost =
>> {2.4673103570999994E9 rows, 2.50669190235E10 cpu, 0.0 io,
>> 8.6029475807232E12 network, 1.34826740664E10 memory}, id = 24222
>> 02-01                    UnorderedMuxExchange : rowType = RecordType(ANY
>> platform, ANY name, ANY paymentType, ANY posDiscountName, ANY amt, ANY
>> E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 2.19837867E7, cumulative cost =
>> {2.4453265703999996E9 rows, 2.48470811565E10 cpu, 0.0 io, 8.062674038784E12
>> network, 1.34826740664E10 memory}, id = 24221
>> 03-01                      Project(platform=[$0], name=[$1],
>> paymentType=[$2], posDiscountName=[$3], amt=[$4],
>> E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($3, hash32AsDouble($2,
>> hash32AsDouble($1, hash32AsDouble($0))))]) : rowType = RecordType(ANY
>> platform, ANY name, ANY paymentType, ANY posDiscountName, ANY amt, ANY
>> E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 2.19837867E7, cumulative cost =
>> {2.4233427837E9 rows, 2.48250973698E10 cpu, 0.0 io, 8.062674038784E12
>> network, 1.34826740664E10 memory}, id = 24220
>> 03-02                        HashAgg(group=[{0, 1, 2, 3}], amt=[SUM($4)])
>> : rowType = RecordType(ANY platform, ANY name, ANY paymentType, ANY
>> posDiscountName, ANY amt): rowcount = 2.19837867E7, cumulative cost =
>> {2.401358997E9 rows, 2.4737162223E10 cpu, 0.0 io, 8.062674038784E12
>> network, 1.34826740664E10 memory}, id = 24219
>> 03-03                          Project(platform=[$0], name=[$1],
>> paymentType=[$2], posDiscountName=[$5], amount=[$4]) : rowType =
>> RecordType(ANY platform, ANY name, ANY paymentType, ANY posDiscountName,
>> ANY amount): rowcount = 2.19837867E8, cumulative cost = {2.18152113E9 rows,
>> 1.5064296075E10 cpu, 0.0 io, 8.062674038784E12 network, 3.8098079184E9
>> memory}, id = 24218
>> 03-04                            HashJoin(condition=[=($3, $6)],
>> joinType=[inner]) : rowType = RecordType(ANY platform, ANY name, ANY
>> paymentType, ANY check_id, ANY amount, ANY posDiscountName, ANY id):
>> rowcount = 2.19837867E8, cumulative cost = {2.18152113E9 rows,
>> 1.5064296075E10 cpu, 0.0 io, 8.062674038784E12 network, 3.8098079184E9
>> memory}, id = 24217
>> 03-05                              Project(posDiscountName=[$0], id=[$1])
>> : rowType = RecordType(ANY posDiscountName, ANY id): rowcount =
>> 2.16466359E8, cumulative cost = {8.65865436E8 rows, 4.978726257E9 cpu, 0.0
>> io, 2.659938619392E12 network, 0.0 memory}, id = 24216
>> 03-07                                HashToRandomExchange(dist0=[[$1]])
>> : rowType = RecordType(ANY posDiscountName, ANY id, ANY
>> E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 2.16466359E8, cumulative cost =
>> {8.65865436E8 rows, 4.978726257E9 cpu, 0.0 io, 2.659938619392E12 network,
>> 0.0 memory}, id = 24215
>> 05-01                                  UnorderedMuxExchange : rowType =
>> RecordType(ANY posDiscountName, ANY id, ANY E_X_P_R_H_A_S_H_F_I_E_L_D):
>> rowcount = 2.16466359E8, cumulative cost = {6.49399077E8 rows,
>> 1.515264513E9 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 24214
>> 07-01                                    Project(posDiscountName=[$0],
>> id=[$1], E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($1)]) : rowType =
>> RecordType(ANY posDiscountName, ANY id, ANY E_X_P_R_H_A_S_H_F_I_E_L_D):
>> rowcount = 2.16466359E8, cumulative cost = {4.32932718E8 rows,
>> 1.298798154E9 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 24213
>> 07-02                                      Scan(groupscan=[ParquetGroupScan
>> [entries=[ReadEntryWithPath [path=file:/path_to_parquet]],
>> selectionRoot=file:/path_to_parquet, numFiles=1, usedMetadataFile=false,
>> columns=[`posDiscountName`, `id`]]]) : rowType = RecordType(ANY
>> posDiscountName, ANY id): rowcount = 2.16466359E8, cumulative cost =
>> {2.16466359E8 rows, 4.32932718E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id
>> = 24212
>> 03-06                              Project(platform=[$0], name=[$1],
>> paymentType=[$2], check_id=[$3], amount=[$4]) : rowType = RecordType(ANY
>> platform, ANY name, ANY paymentType, ANY check_id, ANY amount): rowcount =
>> 2.19837867E8, cumulative cost = {8.79351468E8 rows, 5.715784542E9 cpu, 0.0
>> io, 5.402735419392E12 network, 0.0 memory}, id = 24211
>> 03-08                                HashToRandomExchange(dist0=[[$3]])
>> : rowType = RecordType(ANY platform, ANY name, ANY paymentType, ANY
>> check_id, ANY amount, ANY E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount =
>> 2.19837867E8, cumulative cost = {8.79351468E8 rows, 5.715784542E9 cpu, 0.0
>> io, 5.402735419392E12 network, 0.0 memory}, id = 24210
>> 04-01                                  UnorderedMuxExchange : rowType =
>> RecordType(ANY platform, ANY name, ANY paymentType, ANY check_id, ANY
>> amount, ANY E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 2.19837867E8, cumulative
>> cost = {6.59513601E8 rows, 2.19837867E9 cpu, 0.0 io, 0.0 network, 0.0
>> memory}, id = 24209
>> 06-01                                    Project(platform=[$0],
>> name=[$1], paymentType=[$2], check_id=[$3], amount=[$4],
>> E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($3)]) : rowType =
>> RecordType(ANY platform, ANY name, ANY paymentType, ANY check_id, ANY
>> amount, ANY E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 2.19837867E8, cumulative
>> cost = {4.39675734E8 rows, 1.978540803E9 cpu, 0.0 io, 0.0 network, 0.0
>> memory}, id = 24208
>> 06-02                                      Scan(groupscan=[ParquetGroupScan
>> [entries=[ReadEntryWithPath [path=file:/path_to_parquet2]],
>> selectionRoot=file:/path_to_parquet2, numFiles=1,
>> usedMetadataFile=false, columns=[`platform`, `name`, `paymentType`,
>> `check_id`, `amount`]]]) : rowType = RecordType(ANY platform, ANY name, ANY
>> paymentType, ANY check_id, ANY amount): rowcount = 2.19837867E8, cumulative
>> cost = {2.19837867E8 rows, 1.099189335E9 cpu, 0.0 io, 0.0 network, 0.0
>> memory}, id = 24207
>
>
> The machine I'm using has plenty of resources, so I find it hard to
> believe that Drill can't run this query. I'd really appreciate any insight.
>
> Thanks,
> Gelbana
>
