[jira] [Commented] (IMPALA-7010) Multiple flaky tests failing with MemLimitExceeded on S3

2018-05-17 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/IMPALA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480157#comment-16480157 ]

ASF subversion and git services commented on IMPALA-7010:
---------------------------------------------------------

Commit 0297d76a54c79edcdcabcf9cfb570f21850e99d3 in impala's branch refs/heads/2.x from [~joemcdonnell]
[ https://git-wip-us.apache.org/repos/asf?p=impala.git;h=0297d76 ]

IMPALA-7023: Wait for fragments to finish for test_insert.py

The arrangement of tests in test_insert.py changed with
IMPALA-7010, splitting out the memory limit tests into
test_insert_mem_limit(). On exhaustive, the combination
of test dimensions means test_insert_mem_limit() executes
11 different combinations. Each of these statements can
use a large amount of memory and this is not cleaned
up immediately. This has been causing
test_insert_overwrite(), which immediately follows
test_insert_mem_limit(), to hit the process memory limit.

This changes test_insert_mem_limit() to make it wait
for its fragments to finish.

Change-Id: I5642e9cb32dd02afd74dde7e0d3b31bddbee3ccd
Reviewed-on: http://gerrit.cloudera.org:8080/10426
Reviewed-by: Philip Zeyliger 
Tested-by: Impala Public Jenkins 
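
The fix above boils down to not letting test_insert_mem_limit() return while query fragments from its memory-hungry INSERT statements are still running and holding memory. A minimal sketch of that idea follows; the generic wait_until() helper is illustrative, and the impalad_test_service accessor and the impala-server.num-fragments-in-flight metric named in the comment are assumptions about the test framework rather than details taken from this commit.

{code:python}
import time


def wait_until(predicate, timeout_s=60, interval_s=1):
  """Polls predicate() until it returns True or timeout_s elapses."""
  deadline = time.time() + timeout_s
  while time.time() < deadline:
    if predicate():
      return
    time.sleep(interval_s)
  raise AssertionError("timed out waiting for condition")


# Sketch of the end of test_insert_mem_limit() (names assumed, not from the commit):
#
#   def test_insert_mem_limit(self, vector):
#     ...  # run the tuned-mem_limit INSERT statements as before
#     # Don't return until the daemon reports no fragments still executing,
#     # so memory held by the aborted INSERTs can't push the next test
#     # (test_insert_overwrite) over the process memory limit.
#     wait_until(lambda: self.impalad_test_service.get_metric_value(
#         'impala-server.num-fragments-in-flight') == 0)
{code}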


> Multiple flaky tests failing with MemLimitExceeded on S3
> ---------------------------------------------------------
>
> Key: IMPALA-7010
> URL: https://issues.apache.org/jira/browse/IMPALA-7010
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.0, Impala 2.13.0
>Reporter: Sailesh Mukil
>Assignee: Tim Armstrong
>Priority: Blocker
>  Labels: flaky
> Fix For: Impala 2.13.0, Impala 3.1.0
>
>
> *test_low_mem_limit_orderby_all*
> {code:java}
> Error Message
> query_test/test_mem_usage_scaling.py:272: in test_low_mem_limit_orderby_all
>     self.run_primitive_query(vector, 'primitive_orderby_all')
> query_test/test_mem_usage_scaling.py:260: in run_primitive_query
>     self.low_memory_limit_test(vector, query_name, self.MIN_MEM[query_name])
> query_test/test_mem_usage_scaling.py:114: in low_memory_limit_test
>     self.run_test_case(tpch_query, new_vector)
> common/impala_test_suite.py:405: in run_test_case
>     result = self.__execute_query(target_impalad_client, query, user=user)
> common/impala_test_suite.py:620: in __execute_query
>     return impalad_client.execute(query, user=user)
> common/impala_connection.py:160: in execute
>     return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:173: in execute
>     handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:341: in __execute_query
>     self.wait_for_completion(handle)
> beeswax/impala_beeswax.py:361: in wait_for_completion
>     raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> E    Query aborted:Memory limit exceeded: Failed to allocate tuple buffer
> E   HDFS_SCAN_NODE (id=0) could not allocate 190.00 KB without exceeding limit.
> E   Error occurred on backend ec2-m2-4xlarge-centos-6-4-0e8b.vpc.cloudera.com:22001 by fragment db44c56dcd2fce95:7d746e080003
> E   Memory left in process limit: 11.40 GB
> E   Memory left in query limit: 51.61 KB
> E   Query(db44c56dcd2fce95:7d746e08): Limit=200.00 MB Reservation=158.50 MB ReservationLimit=160.00 MB OtherMemory=41.45 MB Total=199.95 MB Peak=199.95 MB
> E     Fragment db44c56dcd2fce95:7d746e080003: Reservation=158.50 MB OtherMemory=41.45 MB Total=199.95 MB Peak=199.95 MB
> E       SORT_NODE (id=1): Reservation=9.00 MB OtherMemory=8.00 KB Total=9.01 MB Peak=22.31 MB
> E       HDFS_SCAN_NODE (id=0): Reservation=149.50 MB OtherMemory=41.43 MB Total=190.93 MB Peak=192.13 MB
> E         Exprs: Total=4.00 KB Peak=4.00 KB
> E       KrpcDataStreamSender (dst_id=4): Total=688.00 B Peak=688.00 B
> E       CodeGen: Total=7.72 KB Peak=973.50 KB
> E
> E   Memory limit exceeded: Failed to allocate tuple buffer
> E   HDFS_SCAN_NODE (id=0) could not allocate 190.00 KB without exceeding limit.
> E   Error occurred on backend ec2-m2-4xlarge-centos-6-4-0e8b.vpc.cloudera.com:22001 by fragment db44c56dcd2fce95:7d746e080003
> E   Memory left in process limit: 11.40 GB
> E   Memory left in query limit: 51.61 KB
> E   Query(db44c56dcd2fce95:7d746e08): Limit=200.00 MB Reservation=158.50 MB ReservationLimit=160.00 MB OtherMemory=41.45 MB Total=199.95 MB Peak=199.95 MB
> E     Fragment db44c56dcd2fce95:7d746e080003: Reservation=158.50 MB OtherMemory=41.45 MB Total=199.95 MB Peak=199.95 MB
> E       SORT_NODE (id=1): Reservation=9.00 MB OtherMemory=8.00 KB Total=9.01 MB Peak=22.31 MB
> E       HDFS_SCAN_NODE (id=0): Reservation=149.50 MB OtherMemory=41.43 MB Total=190.93 MB Peak=192.13 MB
> E         Exprs:
>

[jira] [Commented] (IMPALA-7010) Multiple flaky tests failing with MemLimitExceeded on S3

2018-05-17 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/IMPALA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479333#comment-16479333 ]

ASF subversion and git services commented on IMPALA-7010:
---------------------------------------------------------

Commit 1e6544f7da1d756b437d8b0f12a6446f10f1f836 in impala's branch refs/heads/master from [~joemcdonnell]
[ https://git-wip-us.apache.org/repos/asf?p=impala.git;h=1e6544f ]

IMPALA-7023: Wait for fragments to finish for test_insert.py

The arrangement of tests in test_insert.py changed with
IMPALA-7010, splitting out the memory limit tests into
test_insert_mem_limit(). On exhaustive, the combination
of test dimensions means test_insert_mem_limit() executes
11 different combinations. Each of these statements can
use a large amount of memory and this is not cleaned
up immediately. This has been causing
test_insert_overwrite(), which immediately follows
test_insert_mem_limit(), to hit the process memory limit.

This changes test_insert_mem_limit() to make it wait
for its fragments to finish.

Change-Id: I5642e9cb32dd02afd74dde7e0d3b31bddbee3ccd
Reviewed-on: http://gerrit.cloudera.org:8080/10426
Reviewed-by: Philip Zeyliger 
Tested-by: Impala Public Jenkins 



[jira] [Commented] (IMPALA-7010) Multiple flaky tests failing with MemLimitExceeded on S3

2018-05-12 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/IMPALA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473161#comment-16473161 ]

ASF subversion and git services commented on IMPALA-7010:
---------------------------------------------------------

Commit 22244bb0715417a2589cb53522debb14262bec06 in impala's branch refs/heads/2.x from [~tarmstr...@cloudera.com]
[ https://git-wip-us.apache.org/repos/asf?p=impala.git;h=22244bb ]

IMPALA-7010: don't run memory usage tests on non-HDFS

Moved a number of tests with tuned mem_limits. In some cases
this required separating the tests from non-tuned functional
tests.

TestQueryMemLimit used very high and very low limits only, so seemed
safe to run in all configurations.

Change-Id: I9686195a29dde2d87b19ef8bb0e93e08f8bee662
Reviewed-on: http://gerrit.cloudera.org:8080/10370
Reviewed-by: Tim Armstrong 
Tested-by: Impala Public Jenkins 
Reviewed-on: http://gerrit.cloudera.org:8080/10387
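
In outline, this change gates the tuned-mem_limit tests on the target filesystem so they only run where the limits were calibrated, the local HDFS minicluster. A minimal sketch of that gating with a plain pytest marker follows; TARGET_FILESYSTEM is the environment variable Impala's test environment uses to pick the filesystem, while the marker name, the default value, and the test shown are illustrative rather than what the commit actually adds (Impala's suite keeps such skip conditions in tests/common/skip.py).

{code:python}
import os

import pytest

# Tuned mem_limit values assume HDFS minicluster scan behaviour and timing;
# the same queries can need more memory on S3/ADLS, so skip the tuned tests
# there instead of chasing flaky MemLimitExceeded failures.
tuned_for_hdfs_minicluster = pytest.mark.skipif(
    os.environ.get('TARGET_FILESYSTEM', 'hdfs') != 'hdfs',
    reason='mem_limit values are tuned for the HDFS minicluster only')


@tuned_for_hdfs_minicluster
def test_low_mem_limit_orderby_all():
  # Run the order-by queries with their tuned per-query mem_limit, as in
  # query_test/test_mem_usage_scaling.py.
  pass
{code}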



[jira] [Commented] (IMPALA-7010) Multiple flaky tests failing with MemLimitExceeded on S3

2018-05-11 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/IMPALA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16472787#comment-16472787 ]

ASF subversion and git services commented on IMPALA-7010:
---------------------------------------------------------

Commit 25c13bfdd6f7b6e12095a22dd4a44832129b5fe4 in impala's branch refs/heads/master from [~tarmstr...@cloudera.com]
[ https://git-wip-us.apache.org/repos/asf?p=impala.git;h=25c13bf ]

IMPALA-7010: don't run memory usage tests on non-HDFS

Moved a number of tests with tuned mem_limits. In some cases
this required separating the tests from non-tuned functional
tests.

TestQueryMemLimit used very high and very low limits only, so seemed
safe to run in all configurations.

Change-Id: I9686195a29dde2d87b19ef8bb0e93e08f8bee662
Reviewed-on: http://gerrit.cloudera.org:8080/10370
Reviewed-by: Tim Armstrong 
Tested-by: Impala Public Jenkins 



[jira] [Commented] (IMPALA-7010) Multiple flaky tests failing with MemLimitExceeded on S3

2018-05-10 Thread Tim Armstrong (JIRA)

[ https://issues.apache.org/jira/browse/IMPALA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471252#comment-16471252 ]

Tim Armstrong commented on IMPALA-7010:
----------------------------------------

I'm looking at this. I'll try to move a bunch of these mem_limit tests into 
separate tests that we can skip on S3 and other filesystems with different 
timing.
