[jira] [Commented] (IMPALA-10703) PrintPath() crashes with ARRAY in ORC format

2021-07-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17380247#comment-17380247
 ] 

ASF subversion and git services commented on IMPALA-10703:
--

Commit 77283d87d8ed4747aa8d2f5caa9f2d4cf751e3ce in impala's branch 
refs/heads/master from Amogh Margoor
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=77283d8 ]

IMPALA-10703: Fix crash on reading ACID table while printing SchemaPath of 
tuple/slots.

While reading an ACID ORC file, the SchemaPath from the TupleDescriptor
or SlotDescriptor is converted to a fully qualified path via
PrintPath on a few code paths. PrintPath needs the non-canonical table
path, though. For a non-ACID table this is the same as the SchemaPath
of the tuple/slot. However, for ACID tables it will be different, as the
file schema and table schema are not the same.
E.g., the ACID table foo(id int) looks like the following in the file:

{
  operation: int,
  originalTransaction: bigInt,
  bucket: int,
  rowId: bigInt,
  currentTransaction: bigInt,
  row: struct<id: int>
}
So the SchemaPath for id will be [5, 0], but PrintPath would not
understand that. It needs to be converted into the table path [1],
as the table schema looks like this:

{
  row_id: struct < ...ACID Columns>
  id: int
}
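
As a rough illustration of that conversion, here is a minimal sketch
(hypothetical Python helper, not the actual C++ fix in
be/src/util/debug-util.cc; the constants simply follow the layouts shown
above):

{code:python}
# Illustrative sketch only: map a slot's ACID file-schema path onto the
# equivalent table-schema path before pretty-printing it. In the file schema
# field 5 ("row") wraps the user columns; in the table schema the hidden ACID
# struct ("row_id") sits at index 0, followed by the user columns.

ACID_ROW_FIELD = 5       # index of the "row" struct in the file schema
ACID_STRUCT_INDEX = 0    # index of the hidden ACID struct in the table schema

def file_path_to_table_path(file_path):
    """Map e.g. [5, 0] (file schema: row.id) to [1] (table schema: id)."""
    if not file_path:
        return []
    if file_path[0] == ACID_ROW_FIELD:
        # Paths under "row" address user columns; in the table schema they
        # come right after the hidden ACID struct, hence the +1 offset.
        rest = list(file_path[1:])
        if rest:
            rest[0] += 1
        return rest
    # Paths into the ACID metadata columns live under "row_id" in the table.
    return [ACID_STRUCT_INDEX, file_path[0]]

assert file_path_to_table_path([5, 0]) == [1]
{code}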

Testing:
1. Manually ran queries against functional_orc_def.complextypestbl
   with log level 3. These queries were crashing earlier.
2. Ran existing regression tests on a DEBUG build for the few changes that
   are not behind VLOG(3).

Change-Id: Ib7f15c31e0f8fc5d90555d1f2d51313eaffeb074
Reviewed-on: http://gerrit.cloudera.org:8080/17658
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> PrintPath() crashes with ARRAY in ORC format
> 
>
> Key: IMPALA-10703
> URL: https://issues.apache.org/jira/browse/IMPALA-10703
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Gabor Kaszab
>Assignee: Amogh Margoor
>Priority: Major
>  Labels: complextype, orc
>
> Repro steps:
>  - The issue only happens in a debug build, as apparently a DCHECK fails.
>  - You have to launch Impala with the --log_level=3 option to increase the
> log level.
>  - Then running this query crashes Impala:
> {code:java}
> select inner_arr.ITEM.e from functional_orc_def.complextypestbl tbl, 
> functional_orc_def.complextypestbl.nested_struct.c.d.ITEM inner_arr;
> {code}
>  
> Backtrace (relevant part):
> {code:java}
> #7  0x0280c2b4 in 
> impala::PrintPath[abi:cxx11](impala::TableDescriptor const&, std::vector std::allocator > const&) (tbl_desc=..., path=...) at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/util/debug-util.cc:237
> #8  0x02a69eeb in impala::HdfsOrcScanner::ResolveColumns 
> (this=0x10e79000, tuple_desc=..., 
> selected_nodes=0x7fe54980a7d0, pos_slots=0x7fe54980a780)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-orc-scanner.cc:452
> #9  0x02a69cf7 in impala::HdfsOrcScanner::ResolveColumns 
> (this=0x10e79000, tuple_desc=..., 
> selected_nodes=0x7fe54980a7d0, pos_slots=0x7fe54980a780)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-orc-scanner.cc:449
> #10 0x02a6a547 in impala::HdfsOrcScanner::SelectColumns 
> (this=0x10e79000, tuple_desc=...)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-orc-scanner.cc:497
> #11 0x02a67720 in impala::HdfsOrcScanner::Open (this=0x10e79000, 
> context=0x7fe54980b260)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-orc-scanner.cc:237
> #12 0x029f19c9 in 
> impala::HdfsScanNodeBase::CreateAndOpenScannerHelper (this=0xd280800, 
> partition=0xaac3d80, 
> context=0x7fe54980b260, scanner=0x7fe54980b258)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-scan-node-base.cc:874
> #13 0x02baab86 in impala::HdfsScanNode::ProcessSplit (this=0xd280800, 
> filter_ctxs=..., 
> expr_results_pool=0x7fe54980b500, scan_range=0xac59c00, 
> scanner_thread_reservation=0x7fe54980b428)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-scan-node.cc:480
> #14 0x02baa28a in impala::HdfsScanNode::ScannerThread 
> (this=0xd280800, first_thread=true, 
> scanner_thread_reservation=8192) at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-scan-node.cc:418
> #15 0x02ba95f2 in impala::HdfsScanNodeoperator()(void) 
> const (__closure=0x7fe54980bc28)
> at 
> /home/gaborkaszab/shadow/Impala-upstream/be/src/exec/hdfs-scan-node.cc:339
> {code}
> This DCHECK fails:
>  
> [https://github.com/apache/impala/blob/a47700ed790c2415e52a85e40063bed53a7cb9e8/be/src/util/debug-util.cc#L237]
> {code:java}
> Check failed: path[i] == 1 (5 vs. 1)
> {code}
> There was a similar issue recently, but here a different DCHECK fails.

[jira] [Commented] (IMPALA-10766) Better selectivity for = (equals)

2021-07-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17380246#comment-17380246
 ] 

ASF subversion and git services commented on IMPALA-10766:
--

Commit 4c5fa0591706ec1399a6b92ab10e7028ad159aef in impala's branch 
refs/heads/master from liuyao
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=4c5fa05 ]

IMPALA-10766: Better selectivity for =,not distinct

For = :
If the right side is NULL, then the selectivity is 0.
If the left side can be NULL, the NULL values should be excluded when
calculating the selectivity.

For is not distinct from :
If the right side is NULL, non-NULL values should be excluded when
calculating the selectivity, and only the NULL values should be included.
If the left side can be NULL and the right side is not NULL, the NULL values
should be excluded when calculating the selectivity, keeping only part of
the non-NULL values.
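
A minimal sketch of what these rules amount to, assuming the usual 1/NDV
equality heuristic and per-column stats (num_rows, num_nulls, ndv); the
function names and formulas are illustrative, not the actual frontend Java
code:

{code:python}
# Illustrative sketch of the selectivity rules above.

def selectivity_eq(num_rows, num_nulls, ndv, rhs_is_null):
    """col = <literal>: NULL rows can never match."""
    if rhs_is_null or num_rows == 0 or ndv == 0:
        return 0.0                      # "col = NULL" is never true
    non_null_fraction = (num_rows - num_nulls) / num_rows
    return non_null_fraction / ndv      # exclude NULLs, then 1/NDV heuristic

def selectivity_not_distinct(num_rows, num_nulls, ndv, rhs_is_null):
    """col IS NOT DISTINCT FROM <literal>: NULL matches only NULL."""
    if num_rows == 0:
        return 0.0
    if rhs_is_null:
        return num_nulls / num_rows     # only the NULL rows match
    if ndv == 0:
        return 0.0
    non_null_fraction = (num_rows - num_nulls) / num_rows
    return non_null_fraction / ndv      # NULLs excluded, part of non-NULLs match
{code}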

Testing:
Updated the unit tests, corrected the erroneous selectivity calculations,
and added two new cases: column != null and column = null.

Change-Id: Ib8ec62f2355a7036125cc0d261b790644b9f4b60
Reviewed-on: http://gerrit.cloudera.org:8080/17637
Tested-by: Impala Public Jenkins 
Reviewed-by: Qifan Chen 


> Better selectivity for = (equals)
> -
>
> Key: IMPALA-10766
> URL: https://issues.apache.org/jira/browse/IMPALA-10766
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 3.4.0
>Reporter: liuyao
>Assignee: liuyao
>Priority: Major
>
> When calculating the selectivity of =, null values are not considered, even
> though = is never true for null values.






[jira] [Commented] (IMPALA-10502) delayed 'Invalidated objects in cache' cause 'Table already exists'

2021-07-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17380245#comment-17380245
 ] 

ASF subversion and git services commented on IMPALA-10502:
--

Commit 7f7a631e92c69a6dafc1f25ceb407f7b79db10e9 in impala's branch 
refs/heads/master from Vihang Karajgaonkar
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=7f7a631 ]

IMPALA-10502: Handle CREATE/DROP events correctly

The current way of detecting self-events for CREATE/DROP events on
databases, tables and partitions is problematic when the same object is
created and dropped repeatedly in quick succession. This happens mainly
due to a couple of reasons. For example, if we have the below
sequence of DDLs in Impala:
1. create table foo; --> catalogd creates table foo
2. drop table foo; --> catalogd drops table foo
...
The events processor receives the CREATE_TABLE event pertaining to (1)
above. At that point it cannot determine whether the table needs to be
created or not. Similarly, if we interchange the order of the DROP and
CREATE statements above, the DROP_TABLE event received by the events
processor will unnecessarily remove the table when it should not.

This can cause problems for queries which expect the table to exist or
not exist. E.g., a create table query fails with a "table already exists"
error, or a drop table query fails with a "table does not exist" error.

In order to fix this issue, catalogd now keeps track of dropped objects
in a deleteLog whose entries are garbage collected as the events come in.
Every time a database, table or partition is dropped, the deleteLog is
populated with the event id generated by the drop operation. This
deleteLog is looked up when an event is received to determine whether
the event can be ignored. Additionally, catalogd keeps track of the
create event id at the Database, Table or Partition level during the
create DDL execution so that the event can be ignored later by the
events processor.
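
Conceptually, the mechanism looks roughly like the sketch below (illustrative
Python with hypothetical names; the actual implementation is catalogd's Java
code):

{code:python}
# Illustrative sketch only: a deleteLog keyed by object name that remembers
# the event id generated by each drop, plus the check the events processor
# could perform before applying a CREATE event.

class DeleteLog:
    def __init__(self):
        self._dropped = {}  # object key (db/table/partition) -> drop event id

    def add_removed_object(self, key, drop_event_id):
        # Remember the newest drop event id seen for this object.
        self._dropped[key] = max(drop_event_id, self._dropped.get(key, -1))

    def dropped_after(self, key, event_id):
        # True if the object was dropped again after this event was generated.
        return self._dropped.get(key, -1) > event_id

    def garbage_collect(self, last_synced_event_id):
        # Entries older than the last processed event are no longer needed.
        self._dropped = {k: v for k, v in self._dropped.items()
                         if v > last_synced_event_id}


def should_skip_create_event(event, delete_log, create_event_id_in_catalog):
    # Skip if catalogd itself already created this object during DDL execution
    # (self-event), or if the object was dropped again after this event.
    if create_event_id_in_catalog >= event.id:
        return True
    return delete_log.dropped_after(event.key, event.id)
{code}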

Testing:
1. Added the test_create_drop_events and test_local_catalog_create_drop_events
tests, which loop to generate create/drop events for databases, tables and
partitions.
2. Added new metrics which the test verifies to ensure that events
don't create or drop the object.

Change-Id: Ia2c5e96b48abac015240f20295b3ec3b1d71f24a
Reviewed-on: http://gerrit.cloudera.org:8080/17308
Tested-by: Impala Public Jenkins 
Reviewed-by: Vihang Karajgaonkar 


> delayed 'Invalidated objects in cache' cause 'Table already exists'
> ---
>
> Key: IMPALA-10502
> URL: https://issues.apache.org/jira/browse/IMPALA-10502
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog, Clients, Frontend
>Affects Versions: Impala 3.4.0
>Reporter: Adriano
>Assignee: Vihang Karajgaonkar
>Priority: Critical
> Fix For: Impala 4.1
>
>
> In a fast-paced environment where the interval between steps 1 and 2 is
> < 100ms (a simplified pipeline looks like):
> 0- catalog 'on demand' in use and disableHmsSync (enabled or disabled: no
> difference)
> 1- open session to coord A -> DROP TABLE X -> close session
> 2- open session to coord A -> CREATE TABLE X -> close session
> Results: step 2 can fail with a "table already exists" error.
> During the internal investigation it was discovered that IMPALA-9913 should
> address the issue in almost all scenarios.
> However, considering that the investigation is still ongoing internally, it
> is nice to have the event tracked here as well.
> Once we are sure that IMPALA-9913 fixes these events, we can close this as a
> duplicate; otherwise, we carry on the investigation.






[jira] [Resolved] (IMPALA-10502) delayed 'Invalidated objects in cache' cause 'Table already exists'

2021-07-13 Thread Vihang Karajgaonkar (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar resolved IMPALA-10502.
--
Fix Version/s: Impala 4.1
   Resolution: Fixed

> delayed 'Invalidated objects in cache' cause 'Table already exists'
> ---
>
> Key: IMPALA-10502
> URL: https://issues.apache.org/jira/browse/IMPALA-10502
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog, Clients, Frontend
>Affects Versions: Impala 3.4.0
>Reporter: Adriano
>Assignee: Vihang Karajgaonkar
>Priority: Critical
> Fix For: Impala 4.1
>
>
> In a fast-paced environment where the interval between steps 1 and 2 is
> < 100ms (a simplified pipeline looks like):
> 0- catalog 'on demand' in use and disableHmsSync (enabled or disabled: no
> difference)
> 1- open session to coord A -> DROP TABLE X -> close session
> 2- open session to coord A -> CREATE TABLE X -> close session
> Results: step 2 can fail with a "table already exists" error.
> During the internal investigation it was discovered that IMPALA-9913 should
> address the issue in almost all scenarios.
> However, considering that the investigation is still ongoing internally, it
> is nice to have the event tracked here as well.
> Once we are sure that IMPALA-9913 fixes these events, we can close this as a
> duplicate; otherwise, we carry on the investigation.






[jira] [Assigned] (IMPALA-8762) Track number of running queries on all backends in admission controller

2021-07-13 Thread Bikramjeet Vig (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikramjeet Vig reassigned IMPALA-8762:
--

Assignee: Bikramjeet Vig

> Track number of running queries on all backends in admission controller
> ---
>
> Key: IMPALA-8762
> URL: https://issues.apache.org/jira/browse/IMPALA-8762
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Bikramjeet Vig
>Priority: Major
>  Labels: admission-control, scalability
>
> To support running multiple coordinators with executor groups and slot-based
> admission checks, all executors need to include the number of currently
> running queries in their statestore updates, similar to mem reserved.






[jira] [Created] (IMPALA-10793) ColumnStatsReader should convert timestamps during batch decoding

2021-07-13 Thread Jira
Zoltán Borók-Nagy created IMPALA-10793:
--

 Summary: ColumnStatsReader should convert timestamps during batch 
decoding
 Key: IMPALA-10793
 URL: https://issues.apache.org/jira/browse/IMPALA-10793
 Project: IMPALA
  Issue Type: Bug
Reporter: Zoltán Borók-Nagy


ColumnStatsReader currently doesn't convert timestamp values during batched
decoding in DecodeBatchOneBoundsCheck.

This might cause wrong results when min/max filtering is used on a timestamp
column.
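
Conceptually, the missing step looks like this (illustrative Python sketch;
the real code is the C++ batched stats-decoding path, and the helper names
here are hypothetical):

{code:python}
# Illustrative sketch only: when decoding a batch of min/max stat values, each
# decoded timestamp should get the same UTC->local conversion that the
# single-value decode path applies.

def decode_timestamp_batch(encoded_values, decode_one, needs_conversion,
                           utc_to_local):
    out = []
    for raw in encoded_values:
        ts = decode_one(raw)        # plain decode of one stats value
        if needs_conversion:
            ts = utc_to_local(ts)   # conversion that is missing in batch mode
        out.append(ts)
    return out
{code}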






[jira] [Commented] (IMPALA-10784) Add support for retaining cookies among http requests in impala-shell

2021-07-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379917#comment-17379917
 ] 

ASF subversion and git services commented on IMPALA-10784:
--

Commit 2b815cbd51f55c6000dcde81cb1ee399bb1a545c in impala's branch 
refs/heads/master from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=2b815cb ]

IMPALA-10784: Add support for retaining cookies in impala-shell

IMPALA-10234 added support for cookie authentication for LDAP to
impala-shell. But it does not accept user-specified cookie names via
startup flags, and it retains only one cookie.

In some scenarios, a proxy may be used to manage the sessions, with
additional HTTP cookies added by the proxy.
This patch makes cookie support in impala-shell more generic.
It lets the user specify cookie names via the startup flag
"--http_cookie_names" and can retain more than one cookie.

Testing:
 - Manually tested multiple cookies in HTTP headers with a
   customized Impala server which could send and receive multiple
   cookies.
 - Passed core tests, including new test cases.

Change-Id: I193422d5ec891886a522d82ecb0e9d974132ff2a
Reviewed-on: http://gerrit.cloudera.org:8080/17667
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> Add support for retaining cookies among http requests in impala-shell
> -
>
> Key: IMPALA-10784
> URL: https://issues.apache.org/jira/browse/IMPALA-10784
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Clients
>Affects Versions: Impala 4.1
>Reporter: Wenzhe Zhou
>Assignee: Wenzhe Zhou
>Priority: Major
> Fix For: Impala 4.1
>
>
> IMPALA-10234 added support for cookie authentication to impala-shell. But it
> does not accept user-specified cookie names, and it retains only one cookie.
> We need to make cookie support in impala-shell more generic. We should
> allow users to specify cookie names via startup flags, and make impala-shell
> retain more than one cookie.
>  






[jira] [Commented] (IMPALA-10234) impala-shell: add support for cookie-based authentication

2021-07-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379918#comment-17379918
 ] 

ASF subversion and git services commented on IMPALA-10234:
--

Commit 2b815cbd51f55c6000dcde81cb1ee399bb1a545c in impala's branch 
refs/heads/master from wzhou-code
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=2b815cb ]

IMPALA-10784: Add support for retaining cookies in impala-shell

IMPALA-10234 added support for cookie authentication for LDAP to
impala-shell. But it does not accept user-specified cookie names via
startup flags, and it retains only one cookie.

In some scenarios, a proxy may be used to manage the sessions, with
additional HTTP cookies added by the proxy.
This patch makes cookie support in impala-shell more generic.
It lets the user specify cookie names via the startup flag
"--http_cookie_names" and can retain more than one cookie.

Testing:
 - Manually tested multiple cookies in HTTP headers with a
   customized Impala server which could send and receive multiple
   cookies.
 - Passed core tests, including new test cases.

Change-Id: I193422d5ec891886a522d82ecb0e9d974132ff2a
Reviewed-on: http://gerrit.cloudera.org:8080/17667
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 


> impala-shell: add support for cookie-based authentication
> -
>
> Key: IMPALA-10234
> URL: https://issues.apache.org/jira/browse/IMPALA-10234
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Clients
>Affects Versions: Impala 3.4.0
>Reporter: Attila Jeges
>Assignee: Attila Jeges
>Priority: Major
> Fix For: Impala 4.0
>
>
> IMPALA-8584 added support for cookie authentication to Impala. Need to add 
> cookie authentication support to impala-shell as well.






[jira] [Commented] (IMPALA-10724) Add mutable validWriteIdList

2021-07-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379919#comment-17379919
 ] 

ASF subversion and git services commented on IMPALA-10724:
--

Commit 00c8e157ddcaec3f12a09ef410d8716b3dac03a4 in impala's branch 
refs/heads/master from Yu-Wen Lai
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=00c8e15 ]

IMPALA-10724: Add mutable validWriteIdList

In this patch, we add a new class for manually updating a writeIdList.
For updating the writeIdList, we introduce three methods:
addOpenWriteId, addAbortedWriteIds, and addCommittedWriteIds.

We will use this class in MetastoreEventProcessor for fine-grained
table refreshing. With control over the writeIdList, we will be able to
update a transactional table partially and keep it consistent.

There are some restrictions for MutableValidWriteIdList:
1. We need to mark a writeId open before marking it committed/aborted.
2. We only allow two writeId state transitions: open -> committed or
open -> aborted. Any other transition is NOT allowed.
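
A small sketch of these restrictions, mirroring the
addOpenWriteId/addCommittedWriteIds/addAbortedWriteIds methods mentioned
above (illustrative Python with pythonized names; the real class is Java in
the catalog):

{code:python}
# Illustrative sketch only: enforce "open first", then only
# open -> committed or open -> aborted transitions.

OPEN, COMMITTED, ABORTED = "open", "committed", "aborted"

class MutableValidWriteIdList:
    def __init__(self):
        self._state = {}                       # writeId -> state

    def add_open_write_id(self, write_id):
        self._state.setdefault(write_id, OPEN)

    def add_committed_write_ids(self, write_ids):
        self._transition(write_ids, COMMITTED)

    def add_aborted_write_ids(self, write_ids):
        self._transition(write_ids, ABORTED)

    def _transition(self, write_ids, target):
        for write_id in write_ids:
            current = self._state.get(write_id)
            # Restriction 1: the writeId must have been marked open first.
            # Restriction 2: only open -> committed / open -> aborted is legal.
            if current != OPEN:
                raise ValueError(
                    f"illegal transition for writeId {write_id}: "
                    f"{current} -> {target}")
            self._state[write_id] = target
{code}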

Change-Id: I28e60db0afd5d4398af24449b72abc928421f7c6
Reviewed-on: http://gerrit.cloudera.org:8080/17538
Tested-by: Impala Public Jenkins 
Reviewed-by: Quanlong Huang 


> Add mutable validWriteIdList
> 
>
> Key: IMPALA-10724
> URL: https://issues.apache.org/jira/browse/IMPALA-10724
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Yu-Wen Lai
>Assignee: Yu-Wen Lai
>Priority: Major
>
> Although the current implementation of validWriteIdList is not strictly
> immutable, it is in some sense meant to provide a read-only snapshot view.
> This change adds another class that provides functionality for manipulating
> the writeIdList. We could use it to keep the writeIdList up-to-date for
> event-based metadata refreshing.






[jira] [Closed] (IMPALA-10414) Retrying failed query may cause memory leak

2021-07-13 Thread Xianqing He (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianqing He closed IMPALA-10414.

Fix Version/s: Impala 4.0
   Resolution: Fixed

> Retrying failed query may cause memory leak
> ---
>
> Key: IMPALA-10414
> URL: https://issues.apache.org/jira/browse/IMPALA-10414
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.0
>Reporter: Xianqing He
>Assignee: Xianqing He
>Priority: Minor
> Fix For: Impala 4.0
>
>
> When cancelling a failed query that is being retried, the query may not
> close, causing a memory leak.
> {code:java}
> Process: Limit=7.23 GB Total=137.45 MB Peak=254.58 MB
>   Buffer Pool: Free Buffers: Total=0
>   Buffer Pool: Clean Pages: Total=0
>   Buffer Pool: Unused Reservation: Total=0
>   Control Service Queue: Limit=74.07 MB Total=0 Peak=52.05 KB
>   Data Stream Service Queue: Limit=370.35 MB Total=0 Peak=69.51 KB
>   Data Stream Manager Early RPCs: Total=0 Peak=0
>   TCMalloc Overhead: Total=31.12 MB
>   RequestPool=default-pool: Total=0 Peak=110.10 MB
> Query(bf406dca85fc951d:7cd2b15d): Total=0 Peak=0
> Query(c146e822b2b670ad:fe8fc42e): Reservation=0 
> ReservationLimit=5.79 GB OtherMemory=0 Total=0 Peak=110.10 MB
>   Fragment c146e822b2b670ad:fe8fc42e0006: Reservation=0 OtherMemory=0 
> Total=0 Peak=3.21 MB
> HDFS_SCAN_NODE (id=2): Reservation=0 OtherMemory=0 Total=0 Peak=3.05 
> MB
> KrpcDataStreamSender (dst_id=9): Total=0 Peak=148.44 KB
>   Fragment c146e822b2b670ad:fe8fc42e0003: Reservation=0 OtherMemory=0 
> Total=0 Peak=3.21 MB
> HDFS_SCAN_NODE (id=1): Reservation=0 OtherMemory=0 Total=0 Peak=3.05 
> MB
> KrpcDataStreamSender (dst_id=7): Total=0 Peak=148.44 KB
>   CodeGen: Total=0 Peak=239.50 KB
>   Fragment c146e822b2b670ad:fe8fc42e0001: Reservation=0 OtherMemory=0 
> Total=0 Peak=88.13 MB
> HDFS_SCAN_NODE (id=0): Reservation=0 OtherMemory=0 Total=0 Peak=87.97 
> MB
> KrpcDataStreamSender (dst_id=6): Total=0 Peak=128.81 KB
>   CodeGen: Total=0 Peak=239.50 KB
>   CodeGen: Total=0 Peak=239.50 KB
>   Fragment c146e822b2b670ad:fe8fc42e0004: Reservation=0 OtherMemory=0 
> Total=0 Peak=14.72 MB
> HASH_JOIN_NODE (id=3): Reservation=0 OtherMemory=0 Total=0 Peak=3.05 
> MB
>   Hash Join Builder (join_node_id=3): Total=0 Peak=21.12 KB
> EXCHANGE_NODE (id=6): Reservation=0 OtherMemory=0 Total=0 Peak=11.61 
> MB
>   KrpcDeferredRpcs: Total=0 Peak=35.93 KB
> EXCHANGE_NODE (id=7): Reservation=0 OtherMemory=0 Total=0 Peak=112.00 
> KB
>   KrpcDeferredRpcs: Total=0 Peak=0
> KrpcDataStreamSender (dst_id=8): Total=0 Peak=133.47 KB
>   Fragment c146e822b2b670ad:fe8fc42e: Reservation=0 OtherMemory=0 
> Total=0 Peak=56.00 KB
> AGGREGATION_NODE (id=11): Total=0 Peak=16.00 KB
>   NonGroupingAggregator 0: Total=0 Peak=8.00 KB
> EXCHANGE_NODE (id=10): Reservation=0 OtherMemory=0 Total=0 Peak=32.00 
> KB
>   KrpcDeferredRpcs: Total=0 Peak=0
> PLAN_ROOT_SINK: Total=0 Peak=0
>   CodeGen: Total=0 Peak=6.50 KB
>   CodeGen: Total=0 Peak=1.54 MB
>   Fragment c146e822b2b670ad:fe8fc42e0007: Reservation=0 OtherMemory=0 
> Total=0 Peak=7.33 MB
> AGGREGATION_NODE (id=5): Total=0 Peak=32.00 KB
>   NonGroupingAggregator 0: Total=0 Peak=8.00 KB
> HASH_JOIN_NODE (id=4): Reservation=0 OtherMemory=0 Total=0 Peak=3.06 
> MB
>   Hash Join Builder (join_node_id=4): Total=0 Peak=21.12 KB
> EXCHANGE_NODE (id=8): Reservation=0 OtherMemory=0 Total=0 Peak=4.14 MB
>   KrpcDeferredRpcs: Total=0 Peak=0
> EXCHANGE_NODE (id=9): Reservation=0 OtherMemory=0 Total=0 Peak=112.00 
> KB
>   KrpcDeferredRpcs: Total=0 Peak=0
> KrpcDataStreamSender (dst_id=10): Total=0 Peak=16.00 KB
>   CodeGen: Total=0 Peak=1.43 MB
>   Untracked Memory: Total=106.33 MB
> {code}






[jira] [Commented] (IMPALA-10414) Retrying failed query may cause memory leak

2021-07-13 Thread Xianqing He (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17379727#comment-17379727
 ] 

Xianqing He commented on IMPALA-10414:
--

May be fixed by IMPALA-10704.

> Retrying failed query may cause memory leak
> ---
>
> Key: IMPALA-10414
> URL: https://issues.apache.org/jira/browse/IMPALA-10414
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.0
>Reporter: Xianqing He
>Assignee: Xianqing He
>Priority: Minor
>
> When cancelling a failed query that is being retried, the query may not
> close, causing a memory leak.
> {code:java}
> Process: Limit=7.23 GB Total=137.45 MB Peak=254.58 MB
>   Buffer Pool: Free Buffers: Total=0
>   Buffer Pool: Clean Pages: Total=0
>   Buffer Pool: Unused Reservation: Total=0
>   Control Service Queue: Limit=74.07 MB Total=0 Peak=52.05 KB
>   Data Stream Service Queue: Limit=370.35 MB Total=0 Peak=69.51 KB
>   Data Stream Manager Early RPCs: Total=0 Peak=0
>   TCMalloc Overhead: Total=31.12 MB
>   RequestPool=default-pool: Total=0 Peak=110.10 MB
> Query(bf406dca85fc951d:7cd2b15d): Total=0 Peak=0
> Query(c146e822b2b670ad:fe8fc42e): Reservation=0 
> ReservationLimit=5.79 GB OtherMemory=0 Total=0 Peak=110.10 MB
>   Fragment c146e822b2b670ad:fe8fc42e0006: Reservation=0 OtherMemory=0 
> Total=0 Peak=3.21 MB
> HDFS_SCAN_NODE (id=2): Reservation=0 OtherMemory=0 Total=0 Peak=3.05 
> MB
> KrpcDataStreamSender (dst_id=9): Total=0 Peak=148.44 KB
>   Fragment c146e822b2b670ad:fe8fc42e0003: Reservation=0 OtherMemory=0 
> Total=0 Peak=3.21 MB
> HDFS_SCAN_NODE (id=1): Reservation=0 OtherMemory=0 Total=0 Peak=3.05 
> MB
> KrpcDataStreamSender (dst_id=7): Total=0 Peak=148.44 KB
>   CodeGen: Total=0 Peak=239.50 KB
>   Fragment c146e822b2b670ad:fe8fc42e0001: Reservation=0 OtherMemory=0 
> Total=0 Peak=88.13 MB
> HDFS_SCAN_NODE (id=0): Reservation=0 OtherMemory=0 Total=0 Peak=87.97 
> MB
> KrpcDataStreamSender (dst_id=6): Total=0 Peak=128.81 KB
>   CodeGen: Total=0 Peak=239.50 KB
>   CodeGen: Total=0 Peak=239.50 KB
>   Fragment c146e822b2b670ad:fe8fc42e0004: Reservation=0 OtherMemory=0 
> Total=0 Peak=14.72 MB
> HASH_JOIN_NODE (id=3): Reservation=0 OtherMemory=0 Total=0 Peak=3.05 
> MB
>   Hash Join Builder (join_node_id=3): Total=0 Peak=21.12 KB
> EXCHANGE_NODE (id=6): Reservation=0 OtherMemory=0 Total=0 Peak=11.61 
> MB
>   KrpcDeferredRpcs: Total=0 Peak=35.93 KB
> EXCHANGE_NODE (id=7): Reservation=0 OtherMemory=0 Total=0 Peak=112.00 
> KB
>   KrpcDeferredRpcs: Total=0 Peak=0
> KrpcDataStreamSender (dst_id=8): Total=0 Peak=133.47 KB
>   Fragment c146e822b2b670ad:fe8fc42e: Reservation=0 OtherMemory=0 
> Total=0 Peak=56.00 KB
> AGGREGATION_NODE (id=11): Total=0 Peak=16.00 KB
>   NonGroupingAggregator 0: Total=0 Peak=8.00 KB
> EXCHANGE_NODE (id=10): Reservation=0 OtherMemory=0 Total=0 Peak=32.00 
> KB
>   KrpcDeferredRpcs: Total=0 Peak=0
> PLAN_ROOT_SINK: Total=0 Peak=0
>   CodeGen: Total=0 Peak=6.50 KB
>   CodeGen: Total=0 Peak=1.54 MB
>   Fragment c146e822b2b670ad:fe8fc42e0007: Reservation=0 OtherMemory=0 
> Total=0 Peak=7.33 MB
> AGGREGATION_NODE (id=5): Total=0 Peak=32.00 KB
>   NonGroupingAggregator 0: Total=0 Peak=8.00 KB
> HASH_JOIN_NODE (id=4): Reservation=0 OtherMemory=0 Total=0 Peak=3.06 
> MB
>   Hash Join Builder (join_node_id=4): Total=0 Peak=21.12 KB
> EXCHANGE_NODE (id=8): Reservation=0 OtherMemory=0 Total=0 Peak=4.14 MB
>   KrpcDeferredRpcs: Total=0 Peak=0
> EXCHANGE_NODE (id=9): Reservation=0 OtherMemory=0 Total=0 Peak=112.00 
> KB
>   KrpcDeferredRpcs: Total=0 Peak=0
> KrpcDataStreamSender (dst_id=10): Total=0 Peak=16.00 KB
>   CodeGen: Total=0 Peak=1.43 MB
>   Untracked Memory: Total=106.33 MB
> {code}


