[jira] [Resolved] (IMPALA-7748) Remove the appx_count_distinct query option

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7748.

Resolution: Not A Problem

After further discussion, we've decided to leave appx_count_distinct as is.

> Remove the appx_count_distinct query option
> ---
>
> Key: IMPALA-7748
> URL: https://issues.apache.org/jira/browse/IMPALA-7748
> Project: IMPALA
>  Issue Type: Improvement
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Priority: Minor
>
> With IMPALA-110, we can now support multiple count distinct expressions directly, and the 
> appx_count_distinct query option is no longer needed. Users who want the performance 
> improvement from it can always just use the ndv() function directly in their 
> SQL.
> We'll mark this option as deprecated in the docs starting from 3.1. Removing 
> it can be targeted for 4.0.
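> As a minimal sketch of the alternative (using a hypothetical table t with a column c), a 
> query that would otherwise rely on appx_count_distinct can ask for the estimate explicitly 
> with ndv():
> {code}
> -- appx_count_distinct rewrites count(distinct c) to ndv(c); the same
> -- estimate can be requested directly:
> select ndv(c) from t;
> -- and with IMPALA-110 in, exact and approximate distinct counts can be mixed:
> select count(distinct c), ndv(c) from t;
> {code}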






[jira] [Assigned] (IMPALA-5505) Partition and sort Kudu UPDATE and DELETE

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-5505:
--

Assignee: (was: Thomas Tauber-Marshall)

> Partition and sort Kudu UPDATE and DELETE
> -
>
> Key: IMPALA-5505
> URL: https://issues.apache.org/jira/browse/IMPALA-5505
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.10.0
>Reporter: Thomas Tauber-Marshall
>Priority: Minor
>  Labels: kudu, performance
>
> A recent change (IMPALA-3742) added partitioning and sorting for Kudu 
> INSERTs. We should extend this to also cover UPDATE and DELETE.
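> For reference, a minimal sketch of the Kudu DML statement shapes this would cover (table 
> and column names are hypothetical):
> {code}
> -- rows would be partitioned and sorted per the Kudu partitioning scheme
> -- before being applied, as IMPALA-3742 already does for INSERT:
> update kudu_tbl set val = val + 1 where id < 100;
> delete from kudu_tbl where id < 100;
> {code}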






[jira] [Updated] (IMPALA-5505) Partition and sort Kudu UPDATE and DELETE

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-5505:
---
Priority: Minor  (was: Major)

> Partition and sort Kudu UPDATE and DELETE
> -
>
> Key: IMPALA-5505
> URL: https://issues.apache.org/jira/browse/IMPALA-5505
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.10.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Minor
>  Labels: kudu, performance
>
> A recent change (IMPALA-3742) added partitioning and sorting for Kudu 
> INSERTs. We should extend this to also cover UPDATE and DELETE.






[jira] [Resolved] (IMPALA-4060) Timeout in stress test TPC-H queries when run against Kudu

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-4060.

Resolution: Cannot Reproduce

> Timeout in stress test TPC-H queries when run against Kudu
> --
>
> Key: IMPALA-4060
> URL: https://issues.apache.org/jira/browse/IMPALA-4060
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 2.7.0
>Reporter: Dimitris Tsirogiannis
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: kudu, stress
>
> Running the stress test against Kudu results in a large number of queries 
> timing out with the following error:
> {noformat}
> I0831 22:57:36.206172 24679 jni-util.cc:169] java.lang.RuntimeException: 
> Loading Kudu Table failed
>   at 
> com.cloudera.impala.planner.KuduScanNode.computeScanRangeLocations(KuduScanNode.java:155)
>   at 
> com.cloudera.impala.planner.KuduScanNode.init(KuduScanNode.java:104)
>   at 
> com.cloudera.impala.planner.SingleNodePlanner.createScanNode(SingleNodePlanner.java:1257)
>   at 
> com.cloudera.impala.planner.SingleNodePlanner.createTableRefNode(SingleNodePlanner.java:1470)
>   at 
> com.cloudera.impala.planner.SingleNodePlanner.createTableRefsPlan(SingleNodePlanner.java:745)
>   at 
> com.cloudera.impala.planner.SingleNodePlanner.createSelectPlan(SingleNodePlanner.java:585)
>   at 
> com.cloudera.impala.planner.SingleNodePlanner.createQueryPlan(SingleNodePlanner.java:236)
>   at 
> com.cloudera.impala.planner.SingleNodePlanner.createSingleNodePlan(SingleNodePlanner.java:144)
>   at 
> com.cloudera.impala.planner.Planner.createPlan(Planner.java:62)
>   at 
> com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:975)
>   at 
> com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:150)
> Caused by: com.stumbleupon.async.TimeoutException: Timed out after 1ms 
> when joining Deferred@182109(state=PENDING, result=null, callback=get 
> tablet locations from the master for table Kudu Master -> release master 
> lookup permit -> retry RPC -> org.kududb.client.AsyncKuduClient$4@8e968ff -> 
> wakeup thread Thread-55, errback=passthrough -> release master lookup permit 
> -> retry RPC after error -> passthrough -> wakeup thread Thread-55)
>   at com.stumbleupon.async.Deferred.doJoin(Deferred.java:1161)
>   at com.stumbleupon.async.Deferred.join(Deferred.java:1029)
>   at org.kududb.client.KuduClient.openTable(KuduClient.java:181)
>   at 
> com.cloudera.impala.planner.KuduScanNode.computeScanRangeLocations(KuduScanNode.java:119)
>   ... 10 more
> {noformat}
> Another entry in the log that seems quite relevant is related to a long 
> thread creation time:
> {noformat}
> W0831 22:52:59.616951  8714 thread.cc:502] negotiator [worker] (thread pool) 
> Time spent creating pthread: real 37.363s user 0.000s sys 0.000s
> {noformat}
> The stress test was run on an 8-node EC2 cluster with CDH 5.9.1 installed. 
> The latest Kudu was installed using packages. OS version is Ubuntu 14.04. The 
> stress test ran TPC-H queries at scale factor 10. 
> Filing this JIRA for Impala now until the necessary changes in the stress 
> test generator are checked in.






[jira] [Resolved] (IMPALA-4462) Kudu stress test causes Timed out errors

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-4462.

Resolution: Cannot Reproduce

> Kudu stress test causes Timed out errors
> 
>
> Key: IMPALA-4462
> URL: https://issues.apache.org/jira/browse/IMPALA-4462
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.8.0
>Reporter: Taras Bobrovytsky
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: kudu, stress
>
> When the stress test runs with high concurrency on the EC2 cluster, sometimes 
> queries fail with the following error message. This was observed in cases 
> where we run only select queries and in cases where we also run CRUD queries.
> {code}
> status.cc:114] Unable to open scanner: Timed out: Client connection 
> negotiation failed: client connection to 172.28.195.167:7050: Timeout 
> exceeded waiting to connect: Timed out: Client connection negotiation failed: 
> client connection to 172.28.194.188:7050: Timeout exceeded waiting to connect
> @  0x11b3ba1  (unknown)
> @  0x17d5de7  (unknown)
> @  0x1771308  (unknown)
> @  0x1771726  (unknown)
> @  0x1773646  (unknown)
> @  0x177350c  (unknown)
> @  0x177308d  (unknown)
> @  0x1772ec3  (unknown)
> @  0x132d4f4  (unknown)
> @  0x15d6273  (unknown)
> @  0x15dd24c  (unknown)
> @  0x15dd18f  (unknown)
> @  0x15dd0ea  (unknown)
> @  0x1a2518a  (unknown)
> @ 0x7faf323299d1  (unknown)
> @ 0x7faf320768fd  (unknown)
> runtime-state.cc:208] Error from query 554b52e24918e84c:9be6b9c9: 
> Unable to open scanner: Timed out: Client connection negotiation failed: 
> client connection to 172.28.195.167:7050: Timeout exceeded waiting to 
> connect: Timed out: Client connection negotiation failed: client connection 
> to 172.28.194.188:7050: Timeout exceeded waiting to connect
> {code}






[jira] [Assigned] (IMPALA-6445) Whitespace should be stripped or detected in kudu master addresses metadata

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-6445:
--

Assignee: (was: Thomas Tauber-Marshall)

> Whitespace should be stripped or detected in kudu master addresses metadata
> ---
>
> Key: IMPALA-6445
> URL: https://issues.apache.org/jira/browse/IMPALA-6445
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Reporter: Todd Lipcon
>Priority: Major
>
> Currently the kudu master list metadata is split on ',' and directly fed 
> through to Kudu. This means that if a user specifies a list such as "m1, m2, 
> m3" with spaces after the commas, it will pass those hosts on to Kudu as 
> "m1", " m2", and " m3". Two of those three hostnames are of course invalid 
> and Kudu will only be able to connect when m1 is the active master.
> We should either strip those spaces or detect this case and throw an error on 
> the bad metadata. (I prefer stripping)
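> As a concrete sketch of the problem (hostnames and table name are hypothetical; here the 
> master list is supplied via the kudu.master_addresses table property):
> {code}
> -- the spaces after the commas are passed through verbatim, so Kudu is handed
> -- " m2" and " m3" as hostnames:
> create table t (id int primary key) stored as kudu
> tblproperties ('kudu.master_addresses' = 'm1, m2, m3');
> -- stripping would make this equivalent to the intended 'm1,m2,m3'
> {code}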






[jira] [Assigned] (IMPALA-6653) Unicode support for Kudu table names

2018-10-25 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-6653:
--

Assignee: (was: Thomas Tauber-Marshall)

> Unicode support for Kudu table names
> 
>
> Key: IMPALA-6653
> URL: https://issues.apache.org/jira/browse/IMPALA-6653
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Jim Halfpenny
>Priority: Major
>
> It is possible to create a Kudu table containing unicode characters in its name in 
> Impala by specifying the kudu.table_name attribute. When trying to select 
> from this table, you receive an error that the underlying table does not exist.
> The example below shows a table being created successfully, but failing on a 
> select * statement.
> {{[jh-kafka-2:21000] > create table test2( a int primary key) stored as kudu 
> TBLPROPERTIES('kudu.table_name' = 'impala::kudutest.');}}
> {{Query: create table test2( a int primary key) stored as kudu 
> TBLPROPERTIES('kudu.table_name' = 'impala::kudutest.')}}
> {{WARNINGS: Unpartitioned Kudu tables are inefficient for large data 
> sizes.}}{{Fetched 0 row(s) in 0.64s}}
> {{[jh-kafka-2:21000] > select * from test2;}}
> {{Query: select * from test2}}
> {{Query submitted at: 2018-03-13 08:23:29 (Coordinator: 
> https://jh-kafka-2:25000)}}
> {{ERROR: AnalysisException: Failed to load metadata for table: 'test2'}}
> {{CAUSED BY: TableLoadingException: Error loading metadata for Kudu table 
> impala::kudutest.}}
> {{CAUSED BY: ImpalaRuntimeException: Error opening Kudu table 
> 'impala::kudutest.', Kudu error: The table does not exist: table_name: 
> "impala::kudutest."}}






[jira] [Work started] (IMPALA-3652) Fix resource transfer in subplans with limits

2018-10-26 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-3652 started by Thomas Tauber-Marshall.
--
> Fix resource transfer in subplans with limits
> -
>
> Key: IMPALA-3652
> URL: https://issues.apache.org/jira/browse/IMPALA-3652
> Project: IMPALA
>  Issue Type: Task
>  Components: Backend
>Affects Versions: Impala 2.6.0
>Reporter: Tim Armstrong
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: resource-management
>
> There is a tricky corner case in our resource transfer model with subplans 
> and limits. The problem is that the limit in the subplan may mean that the 
> exec node is reset before it has returned its full output. The resource 
> transfer logic generally attaches resources to batches at specific points in 
> the output, e.g. end of partition, end of block, so it's possible that 
> batches returned before the Reset() may reference resources that have not yet 
> been transferred. It's unclear if we test this scenario consistently or if 
> it's always handled correctly.
> One example is this query, reported in IMPALA-5456:
> {code}
> select c_custkey, c_mktsegment, o_orderkey, o_orderdate
> from customer c,
>   (select o1.o_orderkey, o2.o_orderdate
>from c.c_orders o1, c.c_orders o2
>where o1.o_orderkey = o2.o_orderkey limit 10) v limit 500;
> {code}






[jira] [Work started] (IMPALA-7761) Add multiple count distinct to targeted stress and targeted perf

2018-10-26 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-7761 started by Thomas Tauber-Marshall.
--
> Add multiple count distinct to targeted stress and targeted perf
> 
>
> Key: IMPALA-7761
> URL: https://issues.apache.org/jira/browse/IMPALA-7761
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> With IMPALA-110 in, we should add queries with multiple count distinct to 
> targeted stress and targeted perf






[jira] [Resolved] (IMPALA-3652) Fix resource transfer in subplans with limits

2018-11-07 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-3652.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Fix resource transfer in subplans with limits
> -
>
> Key: IMPALA-3652
> URL: https://issues.apache.org/jira/browse/IMPALA-3652
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.6.0
>Reporter: Tim Armstrong
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: resource-management
> Fix For: Impala 3.1.0
>
>
> There is a tricky corner case in our resource transfer model with subplans 
> and limits. The problem is that the limit in the subplan may mean that the 
> exec node is reset before it has returned its full output. The resource 
> transfer logic generally attaches resources to batches at specific points in 
> the output, e.g. end of partition, end of block, so it's possible that 
> batches returned before the Reset() may reference resources that have not yet 
> been transferred. It's unclear if we test this scenario consistently or if 
> it's always handled correctly.
> One example is this query, reported in IMPALA-5456:
> {code}
> select c_custkey, c_mktsegment, o_orderkey, o_orderdate
> from customer c,
>   (select o1.o_orderkey, o2.o_orderdate
>from c.c_orders o1, c.c_orders o2
>where o1.o_orderkey = o2.o_orderkey limit 10) v limit 500;
> {code}






[jira] [Created] (IMPALA-7832) Support IF NOT EXISTS in alter table add columns

2018-11-07 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7832:
--

 Summary: Support IF NOT EXISTS in alter table add columns
 Key: IMPALA-7832
 URL: https://issues.apache.org/jira/browse/IMPALA-7832
 Project: IMPALA
  Issue Type: New Feature
  Components: Frontend
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall


alter table <table_name> add [if not exists] columns (<col_name> <col_type> [, <col_name> <col_type> ...])

would add the column only if a column of the same name does not already exist.

Probably worth checking out what other databases do in different situations, 
e.g. if the column already exists but with a different type, if "replace" is 
used instead of "add", etc.
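A minimal sketch of the proposed behavior (table and column names are hypothetical):

{code}
alter table t add if not exists columns (extra_col string);
-- running the same statement again would be a no-op instead of failing
-- because the column already exists:
alter table t add if not exists columns (extra_col string);
{code}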






[jira] [Created] (IMPALA-7840) test_concurrent_schema_change is missing a possible error message

2018-11-08 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7840:
--

 Summary: test_concurrent_schema_change is missing a possible error 
message
 Key: IMPALA-7840
 URL: https://issues.apache.org/jira/browse/IMPALA-7840
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


test_concurrent_schema_change runs a series of alters and inserts on the same 
Kudu table concurrently to ensure that Impala can handle this without crashing.

There is a list of expected error messages in the test. One possible legitimate 
error is missing, causing the test to sometimes be flaky.






[jira] [Resolved] (IMPALA-7840) test_concurrent_schema_change is missing a possible error message

2018-11-13 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7840.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> test_concurrent_schema_change is missing a possible error message
> -
>
> Key: IMPALA-7840
> URL: https://issues.apache.org/jira/browse/IMPALA-7840
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Critical
> Fix For: Impala 3.1.0
>
>
> test_concurrent_schema_change runs a series of alters and inserts on the same 
> Kudu table concurrently to ensure that Impala can handle this without 
> crashing.
> There is a list of expected error messages in the test. One possible 
> legitimate error is missing, causing the test to sometimes be flaky.






[jira] [Resolved] (IMPALA-7761) Add multiple count distinct to targeted stress and targeted perf

2018-11-13 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7761.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Add multiple count distinct to targeted stress and targeted perf
> 
>
> Key: IMPALA-7761
> URL: https://issues.apache.org/jira/browse/IMPALA-7761
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
> Fix For: Impala 3.2.0
>
>
> With IMPALA-110 in, we should add queries with multiple count distinct to 
> targeted stress and targeted perf






[jira] [Assigned] (IMPALA-6924) Compute stats profiles should include reference to child queries

2018-11-15 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-6924:
--

Assignee: Thomas Tauber-Marshall

> Compute stats profiles should include reference to child queries
> 
>
> Key: IMPALA-6924
> URL: https://issues.apache.org/jira/browse/IMPALA-6924
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.0, Impala 2.12.0
>Reporter: Tim Armstrong
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: observability, supportability
>
> "Compute stats" queries spawn off child queries that do most of the work. 
> It's non-trivial to track down the child queries and get their profiles if 
> something goes wrong. We really should have, at a minimum, the query IDs of 
> the child queries in the parent's profile and vice-versa.
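> As a rough sketch of the scenario (hypothetical table t), the parent statement below spawns 
> child queries along the lines of a count(*) query for table stats and an ndv()-based query 
> for column stats, and those child profiles are currently hard to correlate with the parent:
> {code}
> compute stats t;
> {code}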






[jira] [Assigned] (IMPALA-5847) Some query options do not work as expected in .test files

2018-11-15 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-5847:
--

Assignee: Thomas Tauber-Marshall

> Some query options do not work as expected in .test files
> -
>
> Key: IMPALA-5847
> URL: https://issues.apache.org/jira/browse/IMPALA-5847
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Alexander Behm
>Assignee: Thomas Tauber-Marshall
>Priority: Minor
>
> We often use "set" in .test files to alter query options. Theoretically, a 
> "set" command should change the session-level query options and in most cases 
> a single .test file is executed from the same Impala session. However, for 
> some options using "set" within a query section does not seem to work. For 
> example, "num_nodes" does not work as expected as shown below.
> PyTest:
> {code}
> import pytest
> from tests.common.impala_test_suite import ImpalaTestSuite
> class TestStringQueries(ImpalaTestSuite):
>   @classmethod
>   def get_workload(cls):
>     return 'functional-query'
>   def test_set_bug(self, vector):
>     self.run_test_case('QueryTest/set_bug', vector)
> {code}
> Corresponding .test file:
> {code}
> ====
> ---- QUERY
> set num_nodes=1;
> select count(*) from functional.alltypes;
> select count(*) from functional.alltypes;
> select count(*) from functional.alltypes;
> ---- RESULTS
> 7300
> ---- TYPES
> BIGINT
> ====
> {code}
> After running the test above, I validated that the 3 queries were run from 
> the same session, and that the queries ran a distributed plan. The 
> "num_nodes" option was definitely not picked up. I am not sure which query 
> options are affected. In several .test files setting other query options does 
> seem to work as expected.
> I suspect that the test framework might keep its own list of default query 
> options which get submitted together with the query, so the session-level 
> options are overridden on a per-request basis. For example, if I change the 
> pytest to remove the "num_nodes" dictionary entry, then the test works as 
> expected.
> PyTest workaround:
> {code}
> import pytest
> from tests.common.impala_test_suite import ImpalaTestSuite
> class TestStringQueries(ImpalaTestSuite):
>   @classmethod
>   def get_workload(cls):
>     return 'functional-query'
>   def test_set_bug(self, vector):
>     # Workaround SET bug
>     vector.get_value('exec_option').pop('num_nodes', None)
>     self.run_test_case('QueryTest/set_bug', vector)
> {code}






[jira] [Resolved] (IMPALA-341) Remote profiles may be ignored by coordinator if query has a limit

2018-11-14 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-341.
---
   Resolution: Fixed
Fix Version/s: Impala 3.2.0

This was fixed as a side effect of IMPALA-4063

> Remote profiles may be ignored by coordinator if query has a limit
> --
>
> Key: IMPALA-341
> URL: https://issues.apache.org/jira/browse/IMPALA-341
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 1.0, Impala 2.3.0
>Reporter: Henry Robinson
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: query-lifecycle, supportability
> Fix For: Impala 3.2.0
>
>
> For a query with a {{LIMIT}}, the coordinator cancels remote fragments once 
> the limit has been reached. This can happen before the backends have 
> successfully sent a profile update, and any subsequent updates are ignored 
> because the query is cancelled, so the complete profile is never fleshed out.
> When the query is an {{INSERT}}, it is possible that a race in the 
> coordinator code handling the remote profiles leads to a crash. This only 
> seems to happen occasionally with large clusters (as reported), and we 
> haven't been able to reproduce this internally yet.
> We should probably wait until all backends have completed one way or the 
> other before tearing down a query completely.
> A workaround is to remove the {{LIMIT}}.






[jira] [Created] (IMPALA-7660) Support ECDH ciphers for debug webserver

2018-10-04 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7660:
--

 Summary: Support ECDH ciphers for debug webserver
 Key: IMPALA-7660
 URL: https://issues.apache.org/jira/browse/IMPALA-7660
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


A recent change (IMPALA-7519) added support for ECDH ciphers for our 
beeswax/hs2 server. It would be useful to support this for our debug web 
server, which is based on squeasel.

A recent commit on squeasel 
(https://github.com/cloudera/squeasel/commit/8aa6177ba08e69cd4498c4c7a453340d86c3ad0f)
added support for this, so this is basically just pulling that commit in and 
adding tests.






[jira] [Assigned] (IMPALA-5821) Distinguish numeric types and show implicit cast in EXTENDED explain plans

2018-10-02 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-5821:
--

Assignee: Andrew Sherman

> Distinguish numeric types and show implicit cast in EXTENDED explain plans
> --
>
> Key: IMPALA-5821
> URL: https://issues.apache.org/jira/browse/IMPALA-5821
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.8.0
>Reporter: Matthew Jacobs
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: supportability, usability
>
> In this plan, it wasn't clear that the constant in the predicate was being 
> evaluated to a double, so the left-hand side required an implicit cast and the 
> predicate couldn't be pushed to Kudu:
> {code}
> [localhost:21000] > explain select * from functional_kudu.alltypestiny where 
> bigint_col < 1000 / 100;
> Query: explain select * from functional_kudu.alltypestiny where bigint_col < 
> 1000 / 100
> +-+
> | Explain String  |
> +-+
> | Per-Host Resource Reservation: Memory=0B|
> | Per-Host Resource Estimates: Memory=10.00MB |
> | Codegen disabled by planner |
> | |
> | PLAN-ROOT SINK  |
> | |   |
> | 00:SCAN KUDU [functional_kudu.alltypestiny] |
> |predicates: bigint_col < 10  |
> +-+
> {code}
> We should make this clearer by printing the predicate in a way that shows 
> it is being interpreted as a DOUBLE, e.g. by wrapping it in a cast.
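> A rough sketch of what the EXTENDED explain output could show instead (exact formatting to 
> be decided):
> {code}
> predicates: bigint_col < CAST(10 AS DOUBLE)
> {code}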






[jira] [Reopened] (IMPALA-6261) Crash impalad for use RuntimeMinMaxFilter

2018-10-10 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reopened IMPALA-6261:


>   Crash impalad  for use RuntimeMinMaxFilter
> 
>
> Key: IMPALA-6261
> URL: https://issues.apache.org/jira/browse/IMPALA-6261
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.11.0
>Reporter: yyzzjj
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> {code:java}
>   select 
>   case when m.parent_id is null then m.so_no else 
> concat('ESL',cast(m.parent_id as string)) end as parent_no, 
>   it.so_id   , 
>   m.so_no   , 
>   sum(it.apply_outstore_qty) as goods_num, 
>   group_concat(concat(it.goods_name, ':', cast( 
>   it.apply_outstore_qty as string)), ';') as sku 
>   from 
>   ( 
>   select 
>   so_id, 
>   goods_name, 
>   apply_outstore_qty 
>   from 
>   eclp_so1_so_item 
>   ) 
>   it 
>   inner join [shuffle] 
>   ( 
>   select 
>   * 
>   from 
>   eclp_so1_so_main 
>   where 
>   so_type = 1 
>   and create_time < '2017-11-30 16:00:00' 
>   and so_status  != 10034 
>   ) 
>   m 
>   on   cast(substring(case when m.parent_id is null then m.so_no else 
> concat('ESL', cast(m.parent_id as string)) end, 4) as string) = cast(it.so_id 
> as string) 
>   group by it.so_id, case when m.parent_id is null then m.so_no else 
> concat('ESL', cast(m.parent_id as string)) end, m.so_no 
>  table [eclp_so1_so_item]   so_id not null
>  
>  table [eclp_so1_so_main]parent_id   nullable ,  so_no not null
> {code}
> #0  impala::MemPool::FindChunk (this=this@entry=0xb0388538, 
> min_size=min_size@entry=2, check_limits=check_limits@entry=true) at 
> /export/ldb/online/impala/be/src/runtime/mem-pool.cc:119
> #1  0x7f5fb81b4d22 in impala::MemPool::Allocate (alignment=8, 
> size=, this=0xb0388538) at 
> /export/ldb/online/impala/be/src/runtime/mem-pool.h:270
> #2  impala::MemPool::TryAllocate (size=, this=0xb0388538) at 
> /export/ldb/online/impala/be/src/runtime/mem-pool.h:109
> #3  impala::StringBuffer::GrowBuffer (new_size=2, this=0x7f5ab9180478) at 
> /export/ldb/online/impala/be/src/runtime/string-buffer.h:85
> #4  impala::StringBuffer::Append (str_len=2, str=0x8bc0a2cb <error: Cannot access memory at address 0x8bc0a2cb>, this=0x7f5ab9180478) at 
> /export/ldb/online/impala/be/src/runtime/string-buffer.h:53
> #5  impala::StringMinMaxFilter::CopyToBuffer (this=0x7f5ab9180450, 
> buffer=0x7f5ab9180478, value=0x7f5ab9180458, len=2) at 
> /export/ldb/online/impala/be/src/util/min-max-filter.cc:304
> #6  0x7f5fb81b50c5 in impala::StringMinMaxFilter::MaterializeValues 
> (this=0x7f5ab9180450) at 
> /export/ldb/online/impala/be/src/util/min-max-filter.cc:229
> #7  0x7f5f2c21f586 in ?? ()
> #8  0x7f595dc85ad0 in ?? ()
> #9  0x in ?? ()






[jira] [Created] (IMPALA-7691) test_web_pages not being run

2018-10-10 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7691:
--

 Summary: test_web_pages not being run
 Key: IMPALA-7691
 URL: https://issues.apache.org/jira/browse/IMPALA-7691
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


test_web_pages.py is not being run by test/run-tests.py because the 'webserver' 
directory is missing from VALID_TEST_DIRS






[jira] [Resolved] (IMPALA-6261) Crash impalad for use RuntimeMinMaxFilter

2018-10-10 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-6261.

Resolution: Duplicate

IMPALA-7272

>   Crash impalad  for use RuntimeMinMaxFilter
> 
>
> Key: IMPALA-6261
> URL: https://issues.apache.org/jira/browse/IMPALA-6261
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.11.0
>Reporter: yyzzjj
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> {code:java}
>   select 
>   case when m.parent_id is null then m.so_no else 
> concat('ESL',cast(m.parent_id as string)) end as parent_no, 
>   it.so_id   , 
>   m.so_no   , 
>   sum(it.apply_outstore_qty) as goods_num, 
>   group_concat(concat(it.goods_name, ':', cast( 
>   it.apply_outstore_qty as string)), ';') as sku 
>   from 
>   ( 
>   select 
>   so_id, 
>   goods_name, 
>   apply_outstore_qty 
>   from 
>   eclp_so1_so_item 
>   ) 
>   it 
>   inner join [shuffle] 
>   ( 
>   select 
>   * 
>   from 
>   eclp_so1_so_main 
>   where 
>   so_type = 1 
>   and create_time < '2017-11-30 16:00:00' 
>   and so_status  != 10034 
>   ) 
>   m 
>   on   cast(substring(case when m.parent_id is null then m.so_no else 
> concat('ESL', cast(m.parent_id as string)) end, 4) as string) = cast(it.so_id 
> as string) 
>   group by it.so_id, case when m.parent_id is null then m.so_no else 
> concat('ESL', cast(m.parent_id as string)) end, m.so_no 
>  table [eclp_so1_so_item]   so_id not null
>  
>  table [eclp_so1_so_main]parent_id   nullable ,  so_no not null
> {code}
> #0  impala::MemPool::FindChunk (this=this@entry=0xb0388538, 
> min_size=min_size@entry=2, check_limits=check_limits@entry=true) at 
> /export/ldb/online/impala/be/src/runtime/mem-pool.cc:119
> #1  0x7f5fb81b4d22 in impala::MemPool::Allocate (alignment=8, 
> size=, this=0xb0388538) at 
> /export/ldb/online/impala/be/src/runtime/mem-pool.h:270
> #2  impala::MemPool::TryAllocate (size=, this=0xb0388538) at 
> /export/ldb/online/impala/be/src/runtime/mem-pool.h:109
> #3  impala::StringBuffer::GrowBuffer (new_size=2, this=0x7f5ab9180478) at 
> /export/ldb/online/impala/be/src/runtime/string-buffer.h:85
> #4  impala::StringBuffer::Append (str_len=2, str=0x8bc0a2cb <error: Cannot access memory at address 0x8bc0a2cb>, this=0x7f5ab9180478) at 
> /export/ldb/online/impala/be/src/runtime/string-buffer.h:53
> #5  impala::StringMinMaxFilter::CopyToBuffer (this=0x7f5ab9180450, 
> buffer=0x7f5ab9180478, value=0x7f5ab9180458, len=2) at 
> /export/ldb/online/impala/be/src/util/min-max-filter.cc:304
> #6  0x7f5fb81b50c5 in impala::StringMinMaxFilter::MaterializeValues 
> (this=0x7f5ab9180450) at 
> /export/ldb/online/impala/be/src/util/min-max-filter.cc:229
> #7  0x7f5f2c21f586 in ?? ()
> #8  0x7f595dc85ad0 in ?? ()
> #9  0x in ?? ()






[jira] [Resolved] (IMPALA-110) Add support for multiple distinct operators in the same query block

2018-10-01 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-110.
---
Resolution: Fixed

> Add support for multiple distinct operators in the same query block
> ---
>
> Key: IMPALA-110
> URL: https://issues.apache.org/jira/browse/IMPALA-110
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend, Frontend
>Affects Versions: Impala 0.5, Impala 1.4, Impala 2.0, Impala 2.2, Impala 
> 2.3.0
>Reporter: Greg Rahn
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: sql-language
> Fix For: Impala 3.1.0
>
>
> Impala only allows a single (DISTINCT columns) expression in each query.
> {color:red}Note:
> If you do not need precise accuracy, you can produce an estimate of the 
> distinct values for a column by specifying NDV(column); a query can contain 
> multiple instances of NDV(column). To make Impala automatically rewrite 
> COUNT(DISTINCT) expressions to NDV(), enable the APPX_COUNT_DISTINCT query 
> option.
> {color}
> {code}
> [impala:21000] > select count(distinct i_class_id) from item;
> Query: select count(distinct i_class_id) from item
> Query finished, fetching results ...
> 16
> Returned 1 row(s) in 1.51s
> {code}
> {code}
> [impala:21000] > select count(distinct i_class_id), count(distinct 
> i_brand_id) from item;
> Query: select count(distinct i_class_id), count(distinct i_brand_id) from item
> ERROR: com.cloudera.impala.common.AnalysisException: Analysis exception (in 
> select count(distinct i_class_id), count(distinct i_brand_id) from item)
>   at 
> com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:133)
>   at 
> com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:221)
>   at 
> com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:89)
> Caused by: com.cloudera.impala.common.AnalysisException: all DISTINCT 
> aggregate functions need to have the same set of parameters as COUNT(DISTINCT 
> i_class_id); deviating function: COUNT(DISTINCT i_brand_id)
>   at 
> com.cloudera.impala.analysis.AggregateInfo.createDistinctAggInfo(AggregateInfo.java:196)
>   at 
> com.cloudera.impala.analysis.AggregateInfo.create(AggregateInfo.java:143)
>   at 
> com.cloudera.impala.analysis.SelectStmt.createAggInfo(SelectStmt.java:466)
>   at 
> com.cloudera.impala.analysis.SelectStmt.analyzeAggregation(SelectStmt.java:347)
>   at com.cloudera.impala.analysis.SelectStmt.analyze(SelectStmt.java:155)
>   at 
> com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:130)
>   ... 2 more
> {code}
> Hive supports this:
> {code}
> $ hive -e "select count(distinct i_class_id), count(distinct i_brand_id) from 
> item;"
> Logging initialized using configuration in 
> file:/etc/hive/conf.dist/hive-log4j.properties
> Hive history file=/tmp/grahn/hive_job_log_grahn_201303052234_1625576708.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> Starting Job = job_201302081514_0073, Tracking URL = 
> http://impala:50030/jobdetails.jsp?jobid=job_201302081514_0073
> Kill Command = /usr/lib/hadoop/bin/hadoop job  
> -Dmapred.job.tracker=m0525.mtv.cloudera.com:8021 -kill job_201302081514_0073
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-03-05 22:34:43,255 Stage-1 map = 0%,  reduce = 0%
> 2013-03-05 22:34:49,323 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:50,337 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:51,351 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:52,360 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:53,370 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:54,379 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:55,389 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:56,402 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:57,413 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:58,424 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> MapReduce Total cumulative CPU time: 8 seconds 580 msec
> Ended Job = job_201302081514_0073
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   

[jira] [Created] (IMPALA-7652) Fix BytesRead for Kudu scans

2018-10-03 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7652:
--

 Summary: Fix BytesRead for Kudu scans
 Key: IMPALA-7652
 URL: https://issues.apache.org/jira/browse/IMPALA-7652
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall


In the runtime profile, all scan nodes have a BytesRead counter. Kudu scan 
nodes do not actually set this counter, which creates confusion when Kudu scans 
report 0 bytes read even though they did in fact read data.

We should either fill in this counter for Kudu scans, or if it's difficult to 
get the count of bytes read from Kudu then we should eliminate the counter for 
Kudu scan node profiles.






[jira] [Created] (IMPALA-7683) Write Impala git hash in stress test runtime_info.json

2018-10-09 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7683:
--

 Summary: Write Impala git hash in stress test runtime_info.json
 Key: IMPALA-7683
 URL: https://issues.apache.org/jira/browse/IMPALA-7683
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Reporter: Thomas Tauber-Marshall


The stress test generates a file, runtime_info.json, containing info about the 
mem limits, running time, and result hash of the queries that got run.

Since all of these things are dependent on the version of Impala being run, it 
would be useful to also write the commit hash of Impala that was being used, so 
that if there's any discrepancy in the future the results can be regenerated 
from the exact same version of Impala.






[jira] [Resolved] (IMPALA-4308) Make the minidumps archived in our Jenkins jobs usable

2018-10-01 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-4308.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Make the minidumps archived in our Jenkins jobs usable
> --
>
> Key: IMPALA-4308
> URL: https://issues.apache.org/jira/browse/IMPALA-4308
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 2.8.0
>Reporter: Taras Bobrovytsky
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: breakpad, test-infra
> Fix For: Impala 3.1.0
>
>
> The minidumps that are archived in our Jenkins jobs are unusable because we 
> do not save the symbols that are required to extract stack traces. As part of 
> the log archiving process, we should:
> # Extract the necessary symbols and save them into the $IMPALA_HOME/logs 
> directory.
> # Automatically collect the backtraces from the minidumps and save them into 
> $IMPALA_HOME/logs directory in a text file






[jira] [Commented] (IMPALA-110) Add support for multiple distinct operators in the same query block

2018-09-26 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629207#comment-16629207
 ] 

Thomas Tauber-Marshall commented on IMPALA-110:
---

Barring any unforeseen problems, this will be part of the 3.1 Apache release. I 
am not currently aware of any plans for a 2.13 release.

Cloudera does not in general make commitments about when features will land in 
CDH. For more info about that, you can contact Cloudera directly.

> Add support for multiple distinct operators in the same query block
> ---
>
> Key: IMPALA-110
> URL: https://issues.apache.org/jira/browse/IMPALA-110
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend, Frontend
>Affects Versions: Impala 0.5, Impala 1.4, Impala 2.0, Impala 2.2, Impala 
> 2.3.0
>Reporter: Greg Rahn
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: sql-language
> Fix For: Impala 3.1.0
>
>
> Impala only allows a single (DISTINCT columns) expression in each query.
> {color:red}Note:
> If you do not need precise accuracy, you can produce an estimate of the 
> distinct values for a column by specifying NDV(column); a query can contain 
> multiple instances of NDV(column). To make Impala automatically rewrite 
> COUNT(DISTINCT) expressions to NDV(), enable the APPX_COUNT_DISTINCT query 
> option.
> {color}
> {code}
> [impala:21000] > select count(distinct i_class_id) from item;
> Query: select count(distinct i_class_id) from item
> Query finished, fetching results ...
> 16
> Returned 1 row(s) in 1.51s
> {code}
> {code}
> [impala:21000] > select count(distinct i_class_id), count(distinct 
> i_brand_id) from item;
> Query: select count(distinct i_class_id), count(distinct i_brand_id) from item
> ERROR: com.cloudera.impala.common.AnalysisException: Analysis exception (in 
> select count(distinct i_class_id), count(distinct i_brand_id) from item)
>   at 
> com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:133)
>   at 
> com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:221)
>   at 
> com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:89)
> Caused by: com.cloudera.impala.common.AnalysisException: all DISTINCT 
> aggregate functions need to have the same set of parameters as COUNT(DISTINCT 
> i_class_id); deviating function: COUNT(DISTINCT i_brand_id)
>   at 
> com.cloudera.impala.analysis.AggregateInfo.createDistinctAggInfo(AggregateInfo.java:196)
>   at 
> com.cloudera.impala.analysis.AggregateInfo.create(AggregateInfo.java:143)
>   at 
> com.cloudera.impala.analysis.SelectStmt.createAggInfo(SelectStmt.java:466)
>   at 
> com.cloudera.impala.analysis.SelectStmt.analyzeAggregation(SelectStmt.java:347)
>   at com.cloudera.impala.analysis.SelectStmt.analyze(SelectStmt.java:155)
>   at 
> com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:130)
>   ... 2 more
> {code}
> Hive supports this:
> {code}
> $ hive -e "select count(distinct i_class_id), count(distinct i_brand_id) from 
> item;"
> Logging initialized using configuration in 
> file:/etc/hive/conf.dist/hive-log4j.properties
> Hive history file=/tmp/grahn/hive_job_log_grahn_201303052234_1625576708.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> Starting Job = job_201302081514_0073, Tracking URL = 
> http://impala:50030/jobdetails.jsp?jobid=job_201302081514_0073
> Kill Command = /usr/lib/hadoop/bin/hadoop job  
> -Dmapred.job.tracker=m0525.mtv.cloudera.com:8021 -kill job_201302081514_0073
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-03-05 22:34:43,255 Stage-1 map = 0%,  reduce = 0%
> 2013-03-05 22:34:49,323 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:50,337 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:51,351 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:52,360 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:53,370 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:54,379 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:55,389 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:56,402 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:57,413 

[jira] [Updated] (IMPALA-110) Add support for multiple distinct operators in the same query block

2018-09-26 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-110:
--
Fix Version/s: Impala 3.1.0

> Add support for multiple distinct operators in the same query block
> ---
>
> Key: IMPALA-110
> URL: https://issues.apache.org/jira/browse/IMPALA-110
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend, Frontend
>Affects Versions: Impala 0.5, Impala 1.4, Impala 2.0, Impala 2.2, Impala 
> 2.3.0
>Reporter: Greg Rahn
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: sql-language
> Fix For: Impala 3.1.0
>
>
> Impala only allows a single (DISTINCT columns) expression in each query.
> {color:red}Note:
> If you do not need precise accuracy, you can produce an estimate of the 
> distinct values for a column by specifying NDV(column); a query can contain 
> multiple instances of NDV(column). To make Impala automatically rewrite 
> COUNT(DISTINCT) expressions to NDV(), enable the APPX_COUNT_DISTINCT query 
> option.
> {color}
> {code}
> [impala:21000] > select count(distinct i_class_id) from item;
> Query: select count(distinct i_class_id) from item
> Query finished, fetching results ...
> 16
> Returned 1 row(s) in 1.51s
> {code}
> {code}
> [impala:21000] > select count(distinct i_class_id), count(distinct 
> i_brand_id) from item;
> Query: select count(distinct i_class_id), count(distinct i_brand_id) from item
> ERROR: com.cloudera.impala.common.AnalysisException: Analysis exception (in 
> select count(distinct i_class_id), count(distinct i_brand_id) from item)
>   at 
> com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:133)
>   at 
> com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:221)
>   at 
> com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:89)
> Caused by: com.cloudera.impala.common.AnalysisException: all DISTINCT 
> aggregate functions need to have the same set of parameters as COUNT(DISTINCT 
> i_class_id); deviating function: COUNT(DISTINCT i_brand_id)
>   at 
> com.cloudera.impala.analysis.AggregateInfo.createDistinctAggInfo(AggregateInfo.java:196)
>   at 
> com.cloudera.impala.analysis.AggregateInfo.create(AggregateInfo.java:143)
>   at 
> com.cloudera.impala.analysis.SelectStmt.createAggInfo(SelectStmt.java:466)
>   at 
> com.cloudera.impala.analysis.SelectStmt.analyzeAggregation(SelectStmt.java:347)
>   at com.cloudera.impala.analysis.SelectStmt.analyze(SelectStmt.java:155)
>   at 
> com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:130)
>   ... 2 more
> {code}
> Hive supports this:
> {code}
> $ hive -e "select count(distinct i_class_id), count(distinct i_brand_id) from 
> item;"
> Logging initialized using configuration in 
> file:/etc/hive/conf.dist/hive-log4j.properties
> Hive history file=/tmp/grahn/hive_job_log_grahn_201303052234_1625576708.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> Starting Job = job_201302081514_0073, Tracking URL = 
> http://impala:50030/jobdetails.jsp?jobid=job_201302081514_0073
> Kill Command = /usr/lib/hadoop/bin/hadoop job  
> -Dmapred.job.tracker=m0525.mtv.cloudera.com:8021 -kill job_201302081514_0073
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2013-03-05 22:34:43,255 Stage-1 map = 0%,  reduce = 0%
> 2013-03-05 22:34:49,323 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:50,337 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:51,351 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:52,360 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:53,370 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:54,379 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.81 
> sec
> 2013-03-05 22:34:55,389 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:56,402 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:57,413 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> 2013-03-05 22:34:58,424 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 8.58 sec
> MapReduce Total cumulative CPU time: 8 seconds 580 msec
> Ended Job = job_201302081514_0073
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 

[jira] [Created] (IMPALA-7593) test_automatic_invalidation failing in S3

2018-09-19 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7593:
--

 Summary: test_automatic_invalidation failing in S3
 Key: IMPALA-7593
 URL: https://issues.apache.org/jira/browse/IMPALA-7593
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Tianyi Wang


Note that the build has the fix for IMPALA-7580

{noformat}
4:59:01 ___ TestAutomaticCatalogInvalidation.test_v1_catalog 
___
04:59:01 custom_cluster/test_automatic_invalidation.py:63: in test_v1_catalog
04:59:01 self._run_test(cursor)
04:59:01 custom_cluster/test_automatic_invalidation.py:58: in _run_test
04:59:01 assert time.time() < timeout
04:59:01 E   assert 1537355634.805718 < 1537355634.394429
04:59:01 E+  where 1537355634.805718 = <built-in function time>()
04:59:01 E+where <built-in function time> = time.time
04:59:01  Captured stderr setup 
-
04:59:01 -- 2018-09-19 04:13:22,796 INFO MainThread: Starting cluster with 
command: 
/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/bin/start-impala-cluster.py
 --cluster_size=3 --num_coordinators=3 
--log_dir=/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/logs/custom_cluster_tests
 --log_level=1 '--impalad_args="--invalidate_tables_timeout_s=20" ' 
'--state_store_args="--statestore_update_frequency_ms=50 
--statestore_priority_update_frequency_ms=50 
--statestore_heartbeat_frequency_ms=50" ' 
'--catalogd_args="--invalidate_tables_timeout_s=20" '
04:59:01 04:13:23 MainThread: Starting State Store logging to 
/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/logs/custom_cluster_tests/statestored.INFO
04:59:01 04:13:23 MainThread: Starting Catalog Service logging to 
/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/logs/custom_cluster_tests/catalogd.INFO
04:59:01 04:13:24 MainThread: Starting Impala Daemon logging to 
/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/logs/custom_cluster_tests/impalad.INFO
04:59:01 04:13:25 MainThread: Starting Impala Daemon logging to 
/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/logs/custom_cluster_tests/impalad_node1.INFO
04:59:01 04:13:26 MainThread: Starting Impala Daemon logging to 
/data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/logs/custom_cluster_tests/impalad_node2.INFO
04:59:01 04:13:29 MainThread: Found 3 impalad/1 statestored/1 catalogd 
process(es)
04:59:01 04:13:29 MainThread: Getting num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25000
04:59:01 04:13:29 MainThread: Waiting for num_known_live_backends=3. Current 
value: 0
04:59:01 04:13:30 MainThread: Getting num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25000
04:59:01 04:13:30 MainThread: Waiting for num_known_live_backends=3. Current 
value: 1
04:59:01 04:13:31 MainThread: Getting num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25000
04:59:01 04:13:31 MainThread: Waiting for num_known_live_backends=3. Current 
value: 2
04:59:01 04:13:32 MainThread: Getting num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25000
04:59:01 04:13:32 MainThread: num_known_live_backends has reached value: 3
04:59:01 04:13:32 MainThread: Getting num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25001
04:59:01 04:13:32 MainThread: num_known_live_backends has reached value: 3
04:59:01 04:13:32 MainThread: Getting num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25002
04:59:01 04:13:32 MainThread: num_known_live_backends has reached value: 3
04:59:01 04:13:32 MainThread: Impala Cluster Running with 3 nodes (3 
coordinators, 3 executors).
04:59:01 -- 2018-09-19 04:13:33,034 INFO MainThread: Found 3 impalad/1 
statestored/1 catalogd process(es)
04:59:01 -- 2018-09-19 04:13:33,034 INFO MainThread: Getting metric: 
statestore.live-backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25010
04:59:01 -- 2018-09-19 04:13:33,036 INFO MainThread: Metric 
'statestore.live-backends' has reached desired value: 4
04:59:01 -- 2018-09-19 04:13:33,036 INFO MainThread: Getting 
num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25000
04:59:01 -- 2018-09-19 04:13:33,037 INFO MainThread: 
num_known_live_backends has reached value: 3
04:59:01 -- 2018-09-19 04:13:33,037 INFO MainThread: Getting 
num_known_live_backends from 
impala-ec2-centos74-r4-4xlarge-ondemand-1860.vpc.cloudera.com:25001
04:59:01 -- 2018-09-19 04:13:33,038 INFO MainThread: 
num_known_live_backends has reached value: 3
04:59:01 -- 2018-09-19 04:13:33,038 INFO MainThread: Getting 

[jira] [Assigned] (IMPALA-4308) Make the minidumps archived in our Jenkins jobs usable

2018-09-20 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-4308:
--

Assignee: Thomas Tauber-Marshall

> Make the minidumps archived in our Jenkins jobs usable
> --
>
> Key: IMPALA-4308
> URL: https://issues.apache.org/jira/browse/IMPALA-4308
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 2.8.0
>Reporter: Taras Bobrovytsky
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: breakpad, test-infra
>
> The minidumps that are archived in our Jenkins jobs are unusable because we 
> do not save the symbols that are required to extract stack traces. As part of 
> the log archiving process, we should:
> # Extract the necessary symbols and save them into the $IMPALA_HOME/logs 
> directory.
> # Automatically collect the backtraces from the minidumps and save them into 
> $IMPALA_HOME/logs directory in a text file
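
(Illustrative sketch, not the actual Impala archiving tooling: assuming the 
Breakpad utilities dump_syms and minidump_stackwalk are available on the 
Jenkins worker, the two steps above could look roughly like this. The script 
name, helper names, and the symbol-store location under $IMPALA_HOME/logs are 
assumptions.)

{code}
#!/usr/bin/env python
# Hypothetical sketch of the two archiving steps described above. Assumes the
# Breakpad tools 'dump_syms' and 'minidump_stackwalk' are on the PATH.
import os
import subprocess
import sys

def extract_symbols(binary, symbol_root):
  # dump_syms emits a text symbol file whose first line looks like
  # "MODULE Linux x86_64 <id> <module>"; that tells us where to put it in the
  # Breakpad symbol-store layout: <root>/<module>/<id>/<module>.sym
  sym_text = subprocess.check_output(['dump_syms', binary]).decode()
  _, _, _, module_id, module_name = sym_text.splitlines()[0].split()
  sym_dir = os.path.join(symbol_root, module_name, module_id)
  if not os.path.isdir(sym_dir):
    os.makedirs(sym_dir)
  with open(os.path.join(sym_dir, module_name + '.sym'), 'w') as f:
    f.write(sym_text)

def resolve_minidumps(minidump_dir, symbol_root, out_dir):
  # Write a plain-text backtrace for every archived .dmp file.
  for name in os.listdir(minidump_dir):
    if not name.endswith('.dmp'):
      continue
    with open(os.path.join(out_dir, name + '.txt'), 'w') as out:
      subprocess.call(
          ['minidump_stackwalk', os.path.join(minidump_dir, name), symbol_root],
          stdout=out)

if __name__ == '__main__':
  # Hypothetical usage: symbolize_minidumps.py <impalad binary> <minidump dir>
  logs_dir = os.path.join(os.environ['IMPALA_HOME'], 'logs')
  symbols = os.path.join(logs_dir, 'symbols')
  extract_symbols(sys.argv[1], symbols)
  resolve_minidumps(sys.argv[2], symbols, logs_dir)
{code}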



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7519) Support elliptic curve ssl ciphers

2018-09-24 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7519.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Support elliptic curve ssl ciphers
> --
>
> Key: IMPALA-7519
> URL: https://issues.apache.org/jira/browse/IMPALA-7519
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend, Clients
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: security
> Fix For: Impala 3.1.0
>
>
> Thrift's SSLSocketFactory class does not support setting ciphers that use 
> ecdh. We already override this class for other reasons, so it would be 
> straightforward to add the necessary openssl calls to enable this.
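
(For illustration only: the real change is in the C++ Thrift/OpenSSL layer, but 
the effect of "enabling an ECDHE cipher" can be sketched with Python's ssl 
module. The cipher string, curve name, and function name below are examples, 
not what Impala actually ships.)

{code}
# Conceptual sketch: configure a TLS server context so that an elliptic-curve
# (ECDHE) cipher suite can be negotiated. The backend fix makes roughly
# analogous OpenSSL calls from the overridden socket factory.
import ssl

def make_server_context(certfile, keyfile):
  ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
  ctx.load_cert_chain(certfile, keyfile)
  # Restrict the context to an ECDHE suite; without ECDH support in the
  # socket factory, a cipher list like this could not be honored.
  ctx.set_ciphers('ECDHE-RSA-AES128-GCM-SHA256')
  # Choose the curve used for the ephemeral ECDH key exchange.
  ctx.set_ecdh_curve('prime256v1')
  return ctx
{code}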



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-5669) Partial sort node should have a maximum reservation set by default.

2019-01-02 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-5669:
--

Assignee: (was: Thomas Tauber-Marshall)

> Partial sort node should have a maximum reservation set by default.
> ---
>
> Key: IMPALA-5669
> URL: https://issues.apache.org/jira/browse/IMPALA-5669
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 2.10.0
>Reporter: Thomas Tauber-Marshall
>Priority: Major
>  Labels: performance, resource-management
>
> A change currently in review, IMPALA-5498, adds a new exec node 
> PartialSortNode. Initially, it will just allocate memory up to the query 
> memory limit, but once the new buffer management work in IMPALA-3200 goes in, 
> it should be modified to operate within a memory constraint.
> PartialSortNode can operate with essentially any amount of memory, with the 
> tradeoff that a smaller limit leads to a "lower quality", more random sort. 
> We should investigate the performance implications of different limits, and 
> consider making the limit configurable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-7963) test_empty_build_joins failed with hdfs timeout

2018-12-12 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7963:
--

 Summary: test_empty_build_joins failed with hdfs timeout
 Key: IMPALA-7963
 URL: https://issues.apache.org/jira/browse/IMPALA-7963
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.2.0
Reporter: Thomas Tauber-Marshall
Assignee: Joe McDonnell


Seen in an exhaustive build on centos6:
{noformat}
05:39:09  TestJoinQueries.test_empty_build_joins[batch_size: 1 | protocol: 
beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
0} | table_format: parquet/none] 
05:39:09 [gw3] linux2 -- Python 2.6.6 
/data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/bin/../infra/python/env/bin/python
05:39:09 query_test/test_join_queries.py:97: in test_empty_build_joins
05:39:09 self.run_test_case('QueryTest/empty-build-joins', new_vector)
05:39:09 common/impala_test_suite.py:467: in run_test_case
05:39:09 result = self.__execute_query(target_impalad_client, query, 
user=user)
05:39:09 common/impala_test_suite.py:688: in __execute_query
05:39:09 return impalad_client.execute(query, user=user)
05:39:09 common/impala_connection.py:170: in execute
05:39:09 return self.__beeswax_client.execute(sql_stmt, user=user)
05:39:09 beeswax/impala_beeswax.py:182: in execute
05:39:09 handle = self.__execute_query(query_string.strip(), user=user)
05:39:09 beeswax/impala_beeswax.py:359: in __execute_query
05:39:09 self.wait_for_finished(handle)
05:39:09 beeswax/impala_beeswax.py:380: in wait_for_finished
05:39:09 raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
05:39:09 E   ImpalaBeeswaxException: ImpalaBeeswaxException:
05:39:09 EQuery aborted:hdfsOpenFile() for 
hdfs://localhost:20500/test-warehouse/alltypestiny/year=2009/month=2/090201.txt 
failed to finish before the 300 second timeout
05:39:09 - Captured stderr call 
-
05:39:09 -- executing against localhost:21000
05:39:09 use functional_parquet;
05:39:09 
05:39:09 -- 2018-12-11 03:11:34,797 INFO MainThread: Started query 
d747763f9d663cd7:9abd4b99
05:39:09 SET batch_size=1;
05:39:09 SET num_nodes=0;
05:39:09 SET disable_codegen_rows_threshold=0;
05:39:09 SET disable_codegen=False;
05:39:09 SET abort_on_error=1;
05:39:09 SET exec_single_node_rows_threshold=0;
05:39:09 -- executing against localhost:21000
05:39:09 select straight_join atp.id
05:39:09 from alltypes atp
05:39:09   inner join functional.alltypestiny att on atp.id = att.id
05:39:09 where att.int_col = 999;
05:39:09 
05:39:09 -- 2018-12-11 03:11:34,816 INFO MainThread: Started query 
5045de8553c5843c:bdc6aa1c
05:39:09 -- executing against localhost:21000
05:39:09 select straight_join atp.id
05:39:09 from alltypes atp
05:39:09   right join functional.alltypestiny att on atp.id = att.id
05:39:09 where att.int_col = 999;
05:39:09 
05:39:09 -- 2018-12-11 03:11:35,519 INFO MainThread: Started query 
124ef451a3f65d09:f2ae4a5d
{noformat}
Presumably caused by IMPALA-7738



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7990) Failing assert in TestFailpoints .test_lifecycle_failures

2018-12-17 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723247#comment-16723247
 ] 

Thomas Tauber-Marshall commented on IMPALA-7990:


Taking a look

> Failing assert in TestFailpoints .test_lifecycle_failures
> -
>
> Key: IMPALA-7990
> URL: https://issues.apache.org/jira/browse/IMPALA-7990
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0
>Reporter: bharath v
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: broken-build
>
> This test is hitting the assert intermittently and I'm not able to repro it 
> locally even after looping the test for a while.
> Error Message
> {noformat}
> failure/test_failpoints.py:176: in test_lifecycle_failures assert 'Query 
> aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' \ E   
> AssertionError: ImpalaBeeswaxException: E  INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'> E  MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5 E  E   assert 'Query aborted:Debug 
> Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' in "ImpalaBeeswaxException:\n 
> INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug 
> Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" E+  where 
> "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" = str(ImpalaBeeswaxException())
> {noformat}
> Stacktrace
> {noformat}
> failure/test_failpoints.py:176: in test_lifecycle_failures
> assert 'Query aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' \
> E   AssertionError: ImpalaBeeswaxException:
> E  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>
> E  MESSAGE: Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5
> E 
> E   assert 'Query aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' in 
> "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n"
> E+  where "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" = str(ImpalaBeeswaxException())
> {noformat}
> Standard Error
> {noformat}
> -- connecting to: localhost:21000
> -- connecting to localhost:21050 with impyla
> Conn 
> -- 2018-12-16 17:59:19,399 INFO MainThread: Closing active operation
> SET debug_action=FIS_IN_PREPARE:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,779 INFO MainThread: Started query 
> 5e4209c709db77df:4c9baa78
> SET debug_action=FIS_IN_OPEN:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,891 INFO MainThread: Started query 
> b341dcc7d97f3d56:1855e491
> SET debug_action=FIS_IN_EXEC_INTERNAL:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,999 INFO MainThread: Started query 
> f74b482fd8f7bb5f:35158806
> SET debug_action=FIS_FAIL_THREAD_CREATION:FAIL@0.5;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:23,110 INFO MainThread: Started query 
> fa4ae2eff60d340b:a074c6f6
> {noformat}
> Commit:
> {noformat}
> Checking out Revision b9377d3fdebb8f13fe4cc56ce16b666897b35822 (origin/master)
> 16:21:43  > git checkout -f b9377d3fdebb8f13fe4cc56ce16b666897b35822
> 16:21:46 Commit message: "IMPALA-7989: Revert "Remove Python 2.4 workarounds 
> in start-impala-cluster.py""
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-7990) Failing assert in TestFailpoints .test_lifecycle_failures

2018-12-17 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-7990 started by Thomas Tauber-Marshall.
--
> Failing assert in TestFailpoints .test_lifecycle_failures
> -
>
> Key: IMPALA-7990
> URL: https://issues.apache.org/jira/browse/IMPALA-7990
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0
>Reporter: bharath v
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: broken-build
>
> This test is hitting the assert intermittently and I'm not able to repro it 
> locally even after looping the test for a while.
> Error Message
> {noformat}
> failure/test_failpoints.py:176: in test_lifecycle_failures assert 'Query 
> aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' \ E   
> AssertionError: ImpalaBeeswaxException: E  INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'> E  MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5 E  E   assert 'Query aborted:Debug 
> Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' in "ImpalaBeeswaxException:\n 
> INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug 
> Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" E+  where 
> "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" = str(ImpalaBeeswaxException())
> {noformat}
> Stacktrace
> {noformat}
> failure/test_failpoints.py:176: in test_lifecycle_failures
> assert 'Query aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' \
> E   AssertionError: ImpalaBeeswaxException:
> E  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>
> E  MESSAGE: Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5
> E 
> E   assert 'Query aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' in 
> "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n"
> E+  where "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" = str(ImpalaBeeswaxException())
> {noformat}
> Standard Error
> {noformat}
> -- connecting to: localhost:21000
> -- connecting to localhost:21050 with impyla
> Conn 
> -- 2018-12-16 17:59:19,399 INFO MainThread: Closing active operation
> SET debug_action=FIS_IN_PREPARE:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,779 INFO MainThread: Started query 
> 5e4209c709db77df:4c9baa78
> SET debug_action=FIS_IN_OPEN:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,891 INFO MainThread: Started query 
> b341dcc7d97f3d56:1855e491
> SET debug_action=FIS_IN_EXEC_INTERNAL:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,999 INFO MainThread: Started query 
> f74b482fd8f7bb5f:35158806
> SET debug_action=FIS_FAIL_THREAD_CREATION:FAIL@0.5;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:23,110 INFO MainThread: Started query 
> fa4ae2eff60d340b:a074c6f6
> {noformat}
> Commit:
> {noformat}
> Checking out Revision b9377d3fdebb8f13fe4cc56ce16b666897b35822 (origin/master)
> 16:21:43  > git checkout -f b9377d3fdebb8f13fe4cc56ce16b666897b35822
> 16:21:46 Commit message: "IMPALA-7989: Revert "Remove Python 2.4 workarounds 
> in start-impala-cluster.py""
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7990) Failing assert in TestFailpoints .test_lifecycle_failures

2018-12-18 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7990.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Failing assert in TestFailpoints .test_lifecycle_failures
> -
>
> Key: IMPALA-7990
> URL: https://issues.apache.org/jira/browse/IMPALA-7990
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0
>Reporter: bharath v
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: broken-build
> Fix For: Impala 3.2.0
>
>
> This test is hitting the assert intermittently and I'm not able to repro it 
> locally even after looping the test for a while.
> Error Message
> {noformat}
> failure/test_failpoints.py:176: in test_lifecycle_failures assert 'Query 
> aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' \ E   
> AssertionError: ImpalaBeeswaxException: E  INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'> E  MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5 E  E   assert 'Query aborted:Debug 
> Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' in "ImpalaBeeswaxException:\n 
> INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug 
> Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" E+  where 
> "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" = str(ImpalaBeeswaxException())
> {noformat}
> Stacktrace
> {noformat}
> failure/test_failpoints.py:176: in test_lifecycle_failures
> assert 'Query aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' \
> E   AssertionError: ImpalaBeeswaxException:
> E  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>
> E  MESSAGE: Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5
> E 
> E   assert 'Query aborted:Debug Action: FIS_FAIL_THREAD_CREATION:FAIL@0.5' in 
> "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n"
> E+  where "ImpalaBeeswaxException:\n INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'>\n MESSAGE: Debug Action: 
> FIS_FAIL_THREAD_CREATION:FAIL@0.5\n" = str(ImpalaBeeswaxException())
> {noformat}
> Standard Error
> {noformat}
> -- connecting to: localhost:21000
> -- connecting to localhost:21050 with impyla
> Conn 
> -- 2018-12-16 17:59:19,399 INFO MainThread: Closing active operation
> SET debug_action=FIS_IN_PREPARE:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,779 INFO MainThread: Started query 
> 5e4209c709db77df:4c9baa78
> SET debug_action=FIS_IN_OPEN:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,891 INFO MainThread: Started query 
> b341dcc7d97f3d56:1855e491
> SET debug_action=FIS_IN_EXEC_INTERNAL:FAIL@1.0;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:22,999 INFO MainThread: Started query 
> f74b482fd8f7bb5f:35158806
> SET debug_action=FIS_FAIL_THREAD_CREATION:FAIL@0.5;
> -- executing against localhost:21000
> select * from tpch.lineitem limit 1;
> -- 2018-12-16 17:59:23,110 INFO MainThread: Started query 
> fa4ae2eff60d340b:a074c6f6
> {noformat}
> Commit:
> {noformat}
> Checking out Revision b9377d3fdebb8f13fe4cc56ce16b666897b35822 (origin/master)
> 16:21:43  > git checkout -f b9377d3fdebb8f13fe4cc56ce16b666897b35822
> 16:21:46 Commit message: "IMPALA-7989: Revert "Remove Python 2.4 workarounds 
> in start-impala-cluster.py""
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-7930) Crash in thrift-server-test

2018-12-10 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-7930 started by Thomas Tauber-Marshall.
--
> Crash in thrift-server-test
> ---
>
> Key: IMPALA-7930
> URL: https://issues.apache.org/jira/browse/IMPALA-7930
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Lars Volker
>Assignee: Thomas Tauber-Marshall
>Priority: Critical
>  Labels: broken-build, flaky
>
> I've seen a crash in thrift-server-test during an exhaustive test run. 
> Unfortunately the core file indicated that it was written by a directory, 
> which caused the automatic core dump resolution to fail. Here's the resolved 
> minidump:
> {noformat}
> Crash reason:  SIGABRT
> Crash address: 0x7d11d19
> Process uptime: not available
> Thread 0 (crashed)
>  0  libc-2.17.so + 0x351f7
> rax = 0x   rdx = 0x0006
> rcx = 0x   rbx = 0x7f1e65876000
> rsi = 0x1d19   rdi = 0x1d19
> rbp = 0x7f1e61dbde68   rsp = 0x7fffc22796d8
>  r8 = 0x000a1a10r9 = 0xfefefefefeff092d
> r10 = 0x0008   r11 = 0x0202
> r12 = 0x029dca31   r13 = 0x033a5e00
> r14 = 0x   r15 = 0x
> rip = 0x7f1e61c721f7
> Found by: given as instruction pointer in context
>  1  libc-2.17.so + 0x368e8
> rsp = 0x7fffc22796e0   rip = 0x7f1e61c738e8
> Found by: stack scanning
>  2  libc-2.17.so + 0x17df70
> rsp = 0x7fffc2279770   rip = 0x7f1e61dbaf70
> Found by: stack scanning
>  3  thrift-server-test!_fini + 0xdf918
> rsp = 0x7fffc2279778   rip = 0x02ab0288
> Found by: stack scanning
>  4  libc-2.17.so + 0x2fbc3
> rsp = 0x7fffc2279790   rip = 0x7f1e61c6cbc3
> Found by: stack scanning
>  5  
> thrift-server-test!testing::internal::TestEventRepeater::OnTestProgramEnd(testing::UnitTest
>  const&) + 0x55
> rsp = 0x7fffc22797b0   rip = 0x028711f5
> Found by: stack scanning
>  6  libc-2.17.so + 0x17df70
> rbx = 0x   rbp = 0x
> rsp = 0x7fffc22797e0   r12 = 0x
> r13 = 0x0005   rip = 0x7f1e61dbaf70
> Found by: call frame info
>  7  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc22797f0   rip = 0x029dca31
> Found by: stack scanning
>  8  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc22797f8   rip = 0x033a5e00
> Found by: stack scanning
>  9  libc-2.17.so + 0x180e68
> rsp = 0x7fffc2279808   rip = 0x7f1e61dbde68
> Found by: stack scanning
> 10  libc-2.17.so + 0x2e266
> rsp = 0x7fffc2279810   rip = 0x7f1e61c6b266
> Found by: stack scanning
> 11  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc2279818   rip = 0x033a5e00
> Found by: stack scanning
> 12  libc-2.17.so + 0x17df70
> rsp = 0x7fffc2279820   rip = 0x7f1e61dbaf70
> Found by: stack scanning
> 13  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc2279828   rip = 0x029dca31
> Found by: stack scanning
> 14  thrift-server-test!_fini + 0xdf918
> rsp = 0x7fffc2279840   rip = 0x02ab0288
> Found by: stack scanning
> 15  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc2279850   rip = 0x033a5e00
> Found by: stack scanning
> 16  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc2279860   rip = 0x029dca31
> Found by: stack scanning
> 17  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc2279870   rip = 0x033a5e00
> Found by: stack scanning
> 18  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc2279878   rip = 0x029dca31
> Found by: stack scanning
> 19  thrift-server-test!_fini + 0xdf918
> rsp = 0x7fffc2279880   rip = 0x02ab0288
> Found by: stack scanning
> 20  libc-2.17.so + 0x2e312
> rsp = 0x7fffc2279890   rip = 0x7f1e61c6b312
> Found by: stack scanning
> 21  
> thrift-server-test!boost::shared_array::~shared_array()
>  + 0x70
> rsp = 0x7fffc22798b0   rip = 0x02719b40
> Found by: stack scanning
> 22  
> thrift-server-test!boost::detail::sp_counted_impl_p::dispose()
>  + 0x4f
> rsp = 0x7fffc22798c0   rip = 0x0271e1af
> Found by: stack scanning
> 23  
> thrift-server-test!boost::detail::sp_counted_impl_pd  boost::checked_array_deleter 
> >::dispose() + 0xaa
> rbx = 0x042f7128   rsp = 0x7fffc22798d0
> rip = 0x02719cfa
> Found by: call frame info
> 24  
> thrift-server-test!boost::shared_array::~shared_array()
>  + 0x39
> rbx = 

[jira] [Commented] (IMPALA-7931) test_shutdown_executor fails with timeout waiting for query target state

2018-12-10 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715661#comment-16715661
 ] 

Thomas Tauber-Marshall commented on IMPALA-7931:


Yeah, that sounds reasonable. It would definitely be nice to turn 'cause' into 
a struct rather than plumbing through another arg.

It's a little unfortunate to put the query on the queue to be cancelled only to 
not actually cancel it, but we probably don't want to add more work to 
MembershipCallback(). This can already happen anyway (e.g. if the query to be 
cancelled isn't running yet or has already finished), so it should be fine as 
long as things are documented well.

> test_shutdown_executor fails with timeout waiting for query target state
> 
>
> Key: IMPALA-7931
> URL: https://issues.apache.org/jira/browse/IMPALA-7931
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.2.0
>Reporter: Lars Volker
>Assignee: Tim Armstrong
>Priority: Critical
>  Labels: broken-build
> Attachments: impala-7931-impalad-logs.tar.gz
>
>
> On a recent S3 test run test_shutdown_executor hit a timeout waiting for a 
> query to reach state FINISHED. Instead the query stays at state 5 (EXCEPTION).
> {noformat}
> 12:51:11 __ TestShutdownCommand.test_shutdown_executor 
> __
> 12:51:11 custom_cluster/test_restart_services.py:209: in 
> test_shutdown_executor
> 12:51:11 assert self.__fetch_and_get_num_backends(QUERY, 
> before_shutdown_handle) == 3
> 12:51:11 custom_cluster/test_restart_services.py:356: in 
> __fetch_and_get_num_backends
> 12:51:11 self.client.QUERY_STATES['FINISHED'], timeout=20)
> 12:51:11 common/impala_service.py:267: in wait_for_query_state
> 12:51:11 target_state, query_state)
> 12:51:11 E   AssertionError: Did not reach query state in time target=4 
> actual=5
> {noformat}
> From the logs I can see that the query fails because one of the executors 
> becomes unreachable:
> {noformat}
> I1204 12:31:39.954125  5609 impala-server.cc:1792] Query 
> a34c3a84775e5599:b2b25eb9: Failed due to unreachable impalad(s): 
> jenkins-worker:22001
> {noformat}
> The query was {{select count\(*) from functional_parquet.alltypes where 
> sleep(1) = bool_col}}. 
> It seems that the query took longer than expected and was still running when 
> the executor shut down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-7272) Fix potential crash when a min-max runtime filter is generated for a string value

2018-12-10 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-7272:
---
Summary: Fix potential crash when a min-max runtime filter is generated for 
a string value  (was: impalad   crash when Fatigue test)

> Fix potential crash when a min-max runtime filter is generated for a string 
> value
> -
>
> Key: IMPALA-7272
> URL: https://issues.apache.org/jira/browse/IMPALA-7272
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 2.11.0, Impala 2.12.0
> Environment: apache  branch  
> [329979d6fb0caa0dc449d7e0aa75460c30e868f0]
> centos 6.5
>  ./buildall.sh -skiptests -noclean -asan
>Reporter: yyzzjj
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: crash
> Fix For: Impala 3.1.0
>
> Attachments: e4386102-833c-40bb-4eec10b2-827c76be.dmp, 
> impalad_node0.ERROR, impalad_node0.WARNING, testing_impala.sh
>
>
> (gdb) bt
> #0 0x003269832635 in raise () from /lib64/libc.so.6
> #1 0x003269833e15 in abort () from /lib64/libc.so.6
> #2 0x04010f64 in google::DumpStackTraceAndExit() ()
> #3 0x040079dd in google::LogMessage::Fail() ()
> #4 0x04009282 in google::LogMessage::SendToLog() ()
> #5 0x040073b7 in google::LogMessage::Flush() ()
> #6 0x0400a97e in google::LogMessageFatal::~LogMessageFatal() ()
> #7 0x01a2dfab in impala::MemPool::CheckIntegrity (this=0x5916e1f8, 
> check_current_chunk_empty=true)
>  at /export/ldb/online/kudu_rpc_branch/be/src/runtime/mem-pool.cc:258
> #8 0x01a2cf56 in impala::MemPool::FindChunk (this=0x5916e1f8, 
> min_size=10, check_limits=true) at 
> /export/ldb/online/kudu_rpc_branch/be/src/runtime/mem-pool.cc:158
> #9 0x01a3dd1b in impala::MemPool::Allocate (alignment=8, 
> size=10, this=0x5916e1f8) at 
> /export/ldb/online/kudu_rpc_branch/be/src/runtime/mem-pool.h:273
> #10 impala::MemPool::TryAllocate (this=0x5916e1f8, size=10) at 
> /export/ldb/online/kudu_rpc_branch/be/src/runtime/mem-pool.h:109
> #11 0x01caefb8 in impala::StringBuffer::GrowBuffer 
> (this=0x7f90d9489c28, new_size=10) at 
> /export/ldb/online/kudu_rpc_branch/be/src/runtime/string-buffer.h:85
> #12 0x01caee18 in impala::StringBuffer::Append (this=0x7f90d9489c28, 
> str=0x7f92cda6e039 "1104700843don...@jd.com业务运营部\230\340\246͒\177", 
> str_len=10)
>  at /export/ldb/online/kudu_rpc_branch/be/src/runtime/string-buffer.h:53
> #13 0x01cac864 in impala::StringMinMaxFilter::CopyToBuffer 
> (this=0x7f90d9489c00, buffer=0x7f90d9489c28, value=0x7f90d9489c08, len=10)
>  at /export/ldb/online/kudu_rpc_branch/be/src/util/min-max-filter.cc:304
> #14 0x01cac2a9 in impala::StringMinMaxFilter::MaterializeValues 
> (this=0x7f90d9489c00) at 
> /export/ldb/online/kudu_rpc_branch/be/src/util/min-max-filter.cc:229
> #15 0x02b9641a in impala::FilterContext::MaterializeValues 
> (this=0x61cc0b70) at 
> /export/ldb/online/kudu_rpc_branch/be/src/exec/filter-context.cc:97
> #16 0x7f93fdb9440e in ?? ()
> #17 0x7f90a97f5400 in ?? ()
> #18 0x2acd2bba01a2e0f7 in ?? ()
> #19 0x5916e140 in ?? ()
> #20 0x7f930c34d740 in ?? ()
> #21 0x7f90a97f5220 in ?? ()
> #22 0x66aa77bb66aa77bb in ?? ()
> #23 0x61cc0b70 in ?? ()
> #24 0x61cc0b70 in ?? ()
> #25 0x61cc0b98 in ?? ()
> #26 0x61cc0b70 in ?? ()
> #27 0x7f90a97f5300 in ?? ()
> #28 0x01ab84ed in 
> impala::RuntimeFilterBank::AllocateScratchMinMaxFilter (this= variable: Cannot access memory at address 0xff4f>, 
>  filter_id= 0xff4b>, 
>  type= 0xff3f>) at 
> /export/ldb/online/kudu_rpc_branch/be/src/runtime/runtime-filter-bank.cc:250
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7926) test_reconnect failing

2018-12-13 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7926.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> test_reconnect failing
> --
>
> Key: IMPALA-7926
> URL: https://issues.apache.org/jira/browse/IMPALA-7926
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: broken-build
> Fix For: Impala 3.2.0
>
>
> {noformat}
> 00:52:58 __ TestImpalaShellInteractive.test_reconnect 
> ___
> 00:52:58 shell/test_shell_interactive.py:191: in test_reconnect
> 00:52:58 assert get_num_open_sessions(initial_impala_service) == 
> num_sessions_initial, \
> 00:52:58 E   AssertionError: Connection to localhost.localdomain:21000 should 
> have been closed
> 00:52:58 E   assert 0 == 1
> 00:52:58 E+  where 0 =  0xccdf848>()
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-7468) Port CancelQueryFInstances() to KRPC

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-7468:
--

Assignee: Andrew Sherman

> Port CancelQueryFInstances() to KRPC
> 
>
> Key: IMPALA-7468
> URL: https://issues.apache.org/jira/browse/IMPALA-7468
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 3.1.0
>Reporter: Michael Ho
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: ramp-up
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7468) Port CancelQueryFInstances() to KRPC

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16709252#comment-16709252
 ] 

Thomas Tauber-Marshall commented on IMPALA-7468:


Example of a similar patch: https://gerrit.cloudera.org/#/c/10855/

> Port CancelQueryFInstances() to KRPC
> 
>
> Key: IMPALA-7468
> URL: https://issues.apache.org/jira/browse/IMPALA-7468
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 3.1.0
>Reporter: Michael Ho
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: ramp-up
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-7183) We should print the sender name when logging a report for an unknown status report on the coordinator

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-7183:
--

Assignee: Andrew Sherman

> We should print the sender name when logging a report for an unknown status 
> report on the coordinator
> --
>
> Key: IMPALA-7183
> URL: https://issues.apache.org/jira/browse/IMPALA-7183
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend, Distributed Exec
>Affects Versions: Impala 2.13.0, Impala 3.1.0
>Reporter: Lars Volker
>Assignee: Andrew Sherman
>Priority: Critical
>  Labels: ramp-up
>
> We should print the sender name when logging a report for an unknown status 
> report on the coordinator in 
> [impala-server.cc:1229|https://github.com/apache/impala/blob/e7d5a25a4516337ef651983b1d945abf06c3a831/be/src/service/impala-server.cc#L1229].
> That will help identify backends with stuck fragment instances that fail to 
> get cancelled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-7926) test_reconnect failing

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7926:
--

 Summary: test_reconnect failing
 Key: IMPALA-7926
 URL: https://issues.apache.org/jira/browse/IMPALA-7926
 Project: IMPALA
  Issue Type: Bug
Reporter: Thomas Tauber-Marshall


{noformat}
00:52:58 __ TestImpalaShellInteractive.test_reconnect 
___
00:52:58 shell/test_shell_interactive.py:191: in test_reconnect
00:52:58 assert get_num_open_sessions(initial_impala_service) == 
num_sessions_initial, \
00:52:58 E   AssertionError: Connection to localhost.localdomain:21000 should 
have been closed
00:52:58 E   assert 0 == 1
00:52:58 E+  where 0 = ()
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-7926) test_reconnect failing

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-7926 started by Thomas Tauber-Marshall.
--
> test_reconnect failing
> --
>
> Key: IMPALA-7926
> URL: https://issues.apache.org/jira/browse/IMPALA-7926
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: broken-build
>
> {noformat}
> 00:52:58 __ TestImpalaShellInteractive.test_reconnect 
> ___
> 00:52:58 shell/test_shell_interactive.py:191: in test_reconnect
> 00:52:58 assert get_num_open_sessions(initial_impala_service) == 
> num_sessions_initial, \
> 00:52:58 E   AssertionError: Connection to localhost.localdomain:21000 should 
> have been closed
> 00:52:58 E   assert 0 == 1
> 00:52:58 E+  where 0 =  0xccdf848>()
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-5847) Some query options do not work as expected in .test files

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-5847 started by Thomas Tauber-Marshall.
--
> Some query options do not work as expected in .test files
> -
>
> Key: IMPALA-5847
> URL: https://issues.apache.org/jira/browse/IMPALA-5847
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Alexander Behm
>Assignee: Thomas Tauber-Marshall
>Priority: Minor
>
> We often use "set" in .test files to alter query options. Theoretically, a 
> "set" command should change the session-level query options and in most cases 
> a single .test file is executed from the same Impala session. However, for 
> some options using "set" within a query section does not seem to work. For 
> example, "num_nodes" does not work as expected as shown below.
> PyTest:
> {code}
> import pytest
> from tests.common.impala_test_suite import ImpalaTestSuite
> class TestStringQueries(ImpalaTestSuite):
>   @classmethod
>   def get_workload(cls):
>     return 'functional-query'
>   def test_set_bug(self, vector):
>     self.run_test_case('QueryTest/set_bug', vector)
> {code}
> Corresponding .test file:
> {code}
> ====
> ---- QUERY
> set num_nodes=1;
> select count(*) from functional.alltypes;
> select count(*) from functional.alltypes;
> select count(*) from functional.alltypes;
> ---- RESULTS
> 7300
> ---- TYPES
> BIGINT
> ====
> {code}
> After running the test above, I validated that the 3 queries were run from 
> the same session, and that the queries run a distributed plan. The 
> "num_nodes" option was definitely not picked up. I am not sure which query 
> options are affected. In several .test files setting other query options does 
> seem to work as expected.
> I suspect that the test framework might keep its own list of default query 
> options which get submitted together with the query, so the session-level 
> options are overridden on a per-request basis. For example, if I change the 
> pytest to remove the "num_nodes" dictionary entry, then the test works as 
> expected.
> PyTest workaround:
> {code}
> import pytest
> from tests.common.impala_test_suite import ImpalaTestSuite
> class TestStringQueries(ImpalaTestSuite):
>   @classmethod
>   def get_workload(cls):
>     return 'functional-query'
>   def test_set_bug(self, vector):
>     # Workaround SET bug
>     vector.get_value('exec_option').pop('num_nodes', None)
>     self.run_test_case('QueryTest/set_bug', vector)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7790) Kudu tests fail when run with use_hybrid_clock=false

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7790.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Kudu tests fail when run with use_hybrid_clock=false
> 
>
> Key: IMPALA-7790
> URL: https://issues.apache.org/jira/browse/IMPALA-7790
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.2.0
>
>
> Since IMPALA-6812, we've run many of our tests against Kudu at the 
> READ_AT_SNAPSHOT scan level, which ensures consistent results. This scan 
> level is only supported if Kudu is run with the flag --use_hybrid_clock=true 
> (which is the default).
> This hasn't generally been a problem in the past, as we've primarily run 
> Impala's functional tests against the minicluster, where Kudu is configured 
> correctly, but there's been some effort around running these tests against 
> real clusters, in which case --use_hybrid_clock=false may be set.
> We should at a minimum recognize this situation and skip the tests.
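
(A minimal sketch of the "recognize and skip" idea, assuming the Kudu master's 
embedded web server is reachable on its default port and lists gflags on the 
/varz page; the fixture name and detection logic are assumptions, not the 
actual Impala test hook.)

{code}
# Hypothetical conftest.py helper: skip snapshot-scan Kudu tests when the
# cluster is running with --use_hybrid_clock=false.
import pytest
import requests

KUDU_MASTER_WEB_UI = 'http://localhost:8051'  # assumed default master web UI

def kudu_hybrid_clock_enabled():
  try:
    flags_page = requests.get(KUDU_MASTER_WEB_UI + '/varz', timeout=10).text
  except requests.RequestException:
    return False  # can't tell, so err on the side of skipping
  return '--use_hybrid_clock=false' not in flags_page

@pytest.fixture
def require_hybrid_clock():
  # Request this fixture from Kudu snapshot-scan tests so they are skipped
  # when the hybrid clock is disabled.
  if not kudu_hybrid_clock_enabled():
    pytest.skip('READ_AT_SNAPSHOT scans require --use_hybrid_clock=true')
{code}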



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-6924) Compute stats profiles should include reference to child queries

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-6924 started by Thomas Tauber-Marshall.
--
> Compute stats profiles should include reference to child queries
> 
>
> Key: IMPALA-6924
> URL: https://issues.apache.org/jira/browse/IMPALA-6924
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.0, Impala 2.12.0
>Reporter: Tim Armstrong
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: observability, supportability
>
> "Compute stats" queries spawn off child queries that do most of the work. 
> It's non-trivial to track down the child queries and get their profiles if 
> something goes wrong. We really should have, at a minimum, the query IDs of 
> the child queries in the parent's profile and vice-versa.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-7926) test_reconnect failing

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-7926:
--

Assignee: Thomas Tauber-Marshall

> test_reconnect failing
> --
>
> Key: IMPALA-7926
> URL: https://issues.apache.org/jira/browse/IMPALA-7926
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: broken-build
>
> {noformat}
> 00:52:58 __ TestImpalaShellInteractive.test_reconnect 
> ___
> 00:52:58 shell/test_shell_interactive.py:191: in test_reconnect
> 00:52:58 assert get_num_open_sessions(initial_impala_service) == 
> num_sessions_initial, \
> 00:52:58 E   AssertionError: Connection to localhost.localdomain:21000 should 
> have been closed
> 00:52:58 E   assert 0 == 1
> 00:52:58 E+  where 0 =  0xccdf848>()
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-6924) Compute stats profiles should include reference to child queries

2018-12-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-6924.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Compute stats profiles should include reference to child queries
> 
>
> Key: IMPALA-6924
> URL: https://issues.apache.org/jira/browse/IMPALA-6924
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.0, Impala 2.12.0
>Reporter: Tim Armstrong
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: observability, supportability
> Fix For: Impala 3.2.0
>
>
> "Compute stats" queries spawn off child queries that do most of the work. 
> It's non-trivial to track down the child queries and get their profiles if 
> something goes wrong. We really should have, at a minimum, the query IDs of 
> the child queries in the parent's profile and vice-versa.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-4555) Don't cancel query for failed ReportExecStatus (done=false) RPC

2018-11-27 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-4555:
--

Assignee: Thomas Tauber-Marshall  (was: Michael Ho)

> Don't cancel query for failed ReportExecStatus (done=false) RPC
> ---
>
> Key: IMPALA-4555
> URL: https://issues.apache.org/jira/browse/IMPALA-4555
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 2.7.0
>Reporter: Sailesh Mukil
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> We currently try to send the ReportExecStatus RPC up to 3 times if the first 
> 2 times are unsuccessful - due to high network load or a network partition. 
> If all 3 attempts fail, we cancel the fragment instance and hence the query.
> However, we do not need to cancel the fragment instance if sending the report 
> with _done=false_ failed. We can just skip this turn and try again the next 
> time.
> We could probably skip sending the report up to 2 times (if we're unable to 
> send due to high network load and if done=false) before succumbing to the 
> current behavior, which is to cancel the fragment instance. The point is to 
> retry later, when the network load may be lower, rather than to retry 
> quickly: the network load is more likely to have dropped after 5 s than 
> after 100 ms.
> Also, we probably do not need to have the retry logic unless we've already 
> skipped twice or if done=true.
> This could help reduce the network load on the coordinator for highly 
> concurrent workloads.
> The only drawback I see now is that the QueryExecSummary might be stale for a 
> while (which it would have anyway because the RPCs would have failed to send)
> P.S: This above proposed solution may need to change if we go ahead with 
> IMPALA-2990.
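
(A small sketch of the proposed policy, written as runnable Python rather than 
the actual C++ coordinator/backend code; the thresholds are the ones suggested 
in the description above, and the function and variable names are 
illustrative only.)

{code}
# Sketch: tolerate a couple of failed periodic (done=False) reports by simply
# waiting for the next reporting interval, and only fall back to the existing
# retry-then-cancel behavior for the final report or after too many skips.
MAX_SKIPPED_REPORTS = 2  # proposal: skip up to 2 non-final reports
MAX_RPC_ATTEMPTS = 3     # current behavior: 3 attempts, then cancel

def send_report(send_rpc, done, skipped_so_far):
  """Returns (delivered, new_skipped_count, cancel_fragment)."""
  if not done and skipped_so_far < MAX_SKIPPED_REPORTS:
    # Non-final report: one failed attempt is not fatal. Skip this interval
    # and try again when the next periodic report is due, when the network
    # load may be lower.
    if send_rpc():
      return True, 0, False
    return False, skipped_so_far + 1, False
  # Final (done=True) report, or too many skipped reports already: keep the
  # existing retry logic and cancel the fragment instance if it still fails.
  for _ in range(MAX_RPC_ATTEMPTS):
    if send_rpc():
      return True, 0, False
  return False, skipped_so_far, True
{code}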



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-4555) Don't cancel query for failed ReportExecStatus (done=false) RPC

2018-11-27 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-4555 started by Thomas Tauber-Marshall.
--
> Don't cancel query for failed ReportExecStatus (done=false) RPC
> ---
>
> Key: IMPALA-4555
> URL: https://issues.apache.org/jira/browse/IMPALA-4555
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 2.7.0
>Reporter: Sailesh Mukil
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> We currently try to send the ReportExecStatus RPC up to 3 times if the first 
> 2 times are unsuccessful - due to high network load or a network partition. 
> If all 3 attempts fail, we cancel the fragment instance and hence the query.
> However, we do not need to cancel the fragment instance if sending the report 
> with _done=false_ failed. We can just skip this turn and try again the next 
> time.
> We could probably skip sending the report up to 2 times (if we're unable to 
> send due to high network load and if done=false) before succumbing to the 
> current behavior, which is to cancel the fragment instance. The point is to 
> retry later, when the network load may be lower, rather than to retry 
> quickly: the network load is more likely to have dropped after 5 s than 
> after 100 ms.
> Also, we probably do not need to have the retry logic unless we've already 
> skipped twice or if done=true.
> This could help reduce the network load on the coordinator for highly 
> concurrent workloads.
> The only drawback I see now is that the QueryExecSummary might be stale for a 
> while (which it would have anyway because the RPCs would have failed to send)
> P.S: This above proposed solution may need to change if we go ahead with 
> IMPALA-2990.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7499) Build against CDH Kudu

2018-09-11 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7499.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Build against CDH Kudu
> --
>
> Key: IMPALA-7499
> URL: https://issues.apache.org/jira/browse/IMPALA-7499
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: kudu
> Fix For: Impala 3.1.0
>
>
> Historically, we've pulled in Kudu (both the client to build against and 
> server binaries to run in the minicluster) from the toolchain. It would 
> simplify maintenance of this dependency to pull it in from CDH, the way we 
> already do for Hive, HBase, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7519) Support elliptic curve ssl ciphers

2018-09-14 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615093#comment-16615093
 ] 

Thomas Tauber-Marshall commented on IMPALA-7519:


The motivation is primarily security. There's an Impala user who has business 
requirements for using this type of cipher.

Fwiw, we already support this in the krpc stack; this would just extend the 
same support to the thrift stack.

I have a review out here: https://gerrit.cloudera.org/#/c/11376/

> Support elliptic curve ssl ciphers
> --
>
> Key: IMPALA-7519
> URL: https://issues.apache.org/jira/browse/IMPALA-7519
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend, Clients
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: security
>
> Thrift's SSLSocketFactory class does not support setting ciphers that use 
> ecdh. We already override this class for other reasons, so it would be 
> straightforward to add the necessary openssl calls to enable this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-7573) Fix GroupingAggregator::Reset's handling of output_partition_

2018-09-14 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7573:
--

 Summary: Fix GroupingAggregator::Reset's handling of 
output_partition_
 Key: IMPALA-7573
 URL: https://issues.apache.org/jira/browse/IMPALA-7573
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


GroupingAggregator::Reset() doesn't call Close() on output_partition_, which 
can lead to hitting a DCHECK(is_closed) in the Partition destructor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-7576) Add a default timeout for all e2e tests

2018-09-14 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7576:
--

 Summary: Add a default timeout for all e2e tests
 Key: IMPALA-7576
 URL: https://issues.apache.org/jira/browse/IMPALA-7576
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


We've recently seen a number of hangs in tests. This can waste testing 
resources, make it difficult to triage when it's unclear which test hung, and 
can cause loss of coverage when the following tests don't run.

We should set a default timeout in pytest for all tests. The timeout can be 
high to avoid false positives.
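
(One possible implementation, sketched with the pytest-timeout plugin in a 
shared conftest.py; the two-hour value and the marker-based mechanism are 
assumptions, not necessarily what the eventual fix used.)

{code}
# Hypothetical conftest.py snippet: give every collected test a generous
# default timeout unless it already sets its own; the pytest-timeout plugin
# enforces the marker at run time.
import pytest

DEFAULT_TEST_TIMEOUT_S = 2 * 60 * 60  # high, to avoid false positives

def pytest_collection_modifyitems(items):
  for item in items:
    if item.get_marker('timeout') is None:
      item.add_marker(pytest.mark.timeout(DEFAULT_TEST_TIMEOUT_S))
{code}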



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-7583) Failure in test_analytic_fns

2018-09-17 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7583:
--

 Summary: Failure in test_analytic_fns
 Key: IMPALA-7583
 URL: https://issues.apache.org/jira/browse/IMPALA-7583
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall


Seen in a build:
{noformat}
08:09:43  TestQueries.test_analytic_fns[exec_option: 
{'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
'100', 'batch_size': 0, 'num_nodes': 0} | table_format: text/lzo/block] 
08:09:43 [gw1] linux2 -- Python 2.7.5 
/data/jenkins/workspace/impala-cdh6.x-exhaustive-release/repos/Impala/bin/../infra/python/env/bin/python
08:09:43 query_test/test_queries.py:54: in test_analytic_fns
08:09:43 self.run_test_case('QueryTest/analytic-fns', vector)
08:09:43 common/impala_test_suite.py:437: in run_test_case
08:09:43 self.__verify_results_and_errors(vector, test_section, result, 
use_db)
08:09:43 common/impala_test_suite.py:310: in __verify_results_and_errors
08:09:43 replace_filenames_with_placeholder)
08:09:43 common/test_result_verifier.py:433: in verify_raw_results
08:09:43 VERIFIER_MAP[verifier](expected, actual)
08:09:43 common/test_result_verifier.py:260: in verify_query_result_is_equal
08:09:43 assert expected_results == actual_results
08:09:43 E   assert Comparing QueryTestResults (expected vs actual):
08:09:43 E 'a\x00b','' != '9k\x00',''
08:09:43 E 'a\x00b','NULL' == 'a\x00b','NULL'
08:09:43 - Captured stderr call 
-
...
08:09:43 -- 2018-09-12 06:49:31,694 INFO MainThread: Started query 
d14c894f7e2a20fc:c797406b
08:09:43 -- executing against localhost:21000
08:09:43 select count(*) from (
08:09:43 select
08:09:43   from_unixtime(lead(bigint_col, 1) over (order by id), 
'MMddHH:mm:ss') as a,
08:09:43   lead(from_unixtime(bigint_col, 'MMddHH:mm:ss'), 1) over (order 
by id) AS b
08:09:43 from functional.alltypes) x
08:09:43 where x.a = x.b;
08:09:43 
08:09:43 -- 2018-09-12 06:49:31,809 INFO MainThread: Started query 
d948840775a29d70:5fce58f4
08:09:43 -- executing against localhost:21000
08:09:43 select count(*) from (
08:09:43 select
08:09:43   from_unixtime(lag(bigint_col, 1) over (order by id), 
'MMddHH:mm:ss') as a,
08:09:43   lag(from_unixtime(bigint_col, 'MMddHH:mm:ss'), 1) over (order by 
id) AS b
08:09:43 from functional.alltypes) x
08:09:43 where x.a = x.b;
08:09:43 
08:09:43 -- 2018-09-12 06:49:31,930 INFO MainThread: Started query 
7468c485800eb48:8c6f1356
08:09:43 -- executing against localhost:21000
08:09:43 select f,lead(b,1,null) over (order by f)
08:09:43 from (select * from nulltable union all select * from nulltable) x;
08:09:43 
08:09:43 -- 2018-09-12 06:49:32,178 INFO MainThread: Started query 
3f4a69557cbfc76b:bf6a1a58
08:09:43 -- 2018-09-12 06:49:32,260 ERRORMainThread: Comparing 
QueryTestResults (expected vs actual):
08:09:43 'a\x00b','' != '9k\x00',''
08:09:43 'a\x00b','NULL' == 'a\x00b','NULL'
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7576) Add a default timeout for all e2e tests

2018-09-17 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7576.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Add a default timeout for all e2e tests
> ---
>
> Key: IMPALA-7576
> URL: https://issues.apache.org/jira/browse/IMPALA-7576
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.1.0
>
>
> We've recently seen a number of hangs in tests. This can waste testing 
> resources, make it difficult to triage when it's unclear which test hung, and 
> can cause loss of coverage when subsequent tests don't run.
> We should set a default timeout in pytest for all tests. The timeout can be 
> high to avoid false positives.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7573) Fix GroupingAggregator::Reset's handling of output_partition_

2018-09-17 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7573.

   Resolution: Fixed
Fix Version/s: Impala 3.1.0

> Fix GroupingAggregator::Reset's handling of output_partition_
> -
>
> Key: IMPALA-7573
> URL: https://issues.apache.org/jira/browse/IMPALA-7573
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.1.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Critical
> Fix For: Impala 3.1.0
>
>
> GroupingAggregator::Reset() doesn't call Close() on output_partition_, which 
> can lead to hitting a DCHECK(is_closed) in the Partition destructor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-7580) test_local_catalog fails on S3 build

2018-09-17 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7580:
--

 Summary: test_local_catalog fails on S3 build
 Key: IMPALA-7580
 URL: https://issues.apache.org/jira/browse/IMPALA-7580
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Tianyi Wang


{noformat}
07:07:19 _ TestAutomaticCatalogInvalidation.test_local_catalog 
__
07:07:19 custom_cluster/test_automatic_invalidation.py:65: in test_local_catalog
07:07:19 self._run_test(cursor)
07:07:19 custom_cluster/test_automatic_invalidation.py:45: in _run_test
07:07:19 assert self.metadata_cache_string in self._get_catalog_object()
07:07:19 E   assert 'columns (list) = liststruct' in 

[jira] [Created] (IMPALA-7581) Hang in buffer-pool-test

2018-09-17 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-7581:
--

 Summary: Hang in buffer-pool-test
 Key: IMPALA-7581
 URL: https://issues.apache.org/jira/browse/IMPALA-7581
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 3.1.0
Reporter: Thomas Tauber-Marshall
Assignee: Tim Armstrong


We have observed a hang in buffer-pool-test in an ASAN build. Unfortunately, no 
logs were generated with any information about what might have happened.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-6658) Parquet RLE encoding can waste space with small repeated runs

2018-09-11 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-6658:
--

Assignee: Andrew Sherman

> Parquet RLE encoding can waste space with small repeated runs
> -
>
> Key: IMPALA-6658
> URL: https://issues.apache.org/jira/browse/IMPALA-6658
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Csaba Ringhofer
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: parquet, ramp-up
>
> Currently RleEncoder creates repeated runs from 8 repeated values, which can 
> be less space-efficient than bit-packing if the bit width is 1 or 2. In the 
> worst case, the whole data page can be ~2X larger if the bit width is 1, and 
> ~1.25X larger if the bit width is 2, compared to bit-packing.
> A comment in rle_encoding.h gives different numbers, but it probably does not 
> account for the overhead of splitting long runs into smaller ones (every run 
> adds +1 byte for its length): 
> https://github.com/apache/impala/blob/8079cd9d2a87051f81a41910b74fab15e35f36ea/be/src/util/rle-encoding.h#L62
> Note that if the data page is compressed, this size difference probably 
> disappears, but the larger uncompressed buffer size can still affect 
> performance.
> Parquet RLE encoding is described here: https://github.com/apache/parquet-
> format/blob/master/Encodings.md#run-length-encoding--bit-packing-hybrid-rle--3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-5847) Some query options do not work as expected in .test files

2019-01-22 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-5847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-5847.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Some query options do not work as expected in .test files
> -
>
> Key: IMPALA-5847
> URL: https://issues.apache.org/jira/browse/IMPALA-5847
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Alexander Behm
>Assignee: Thomas Tauber-Marshall
>Priority: Minor
> Fix For: Impala 3.2.0
>
>
> We often use "set" in .test files to alter query options. Theoretically, a 
> "set" command should change the session-level query options and in most cases 
> a single .test file is executed from the same Impala session. However, for 
> some options using "set" within a query section does not seem to work. For 
> example, "num_nodes" does not work as expected as shown below.
> PyTest:
> {code}
> import pytest
> from tests.common.impala_test_suite import ImpalaTestSuite
> class TestStringQueries(ImpalaTestSuite):
>   @classmethod
>   def get_workload(cls):
>     return 'functional-query'
>
>   def test_set_bug(self, vector):
>     self.run_test_case('QueryTest/set_bug', vector)
> {code}
> Corresponding .test file:
> {code}
> ====
> ---- QUERY
> set num_nodes=1;
> select count(*) from functional.alltypes;
> select count(*) from functional.alltypes;
> select count(*) from functional.alltypes;
> ---- RESULTS
> 7300
> ---- TYPES
> BIGINT
> ====
> {code}
> After running the test above, I validated that the 3 queries were run from 
> the same session, and that the queries ran with a distributed plan. The 
> "num_nodes" option was definitely not picked up. I am not sure which query 
> options are affected. In several .test files setting other query options does 
> seem to work as expected.
> I suspect that the test framework might keep its own list of default query 
> options which get submitted together with the query, so the session-level 
> options are overridden on a per-request basis. For example, if I change the 
> pytest to remove the "num_nodes" dictionary entry, then the test works as 
> expected.
> PyTest workaround:
> {code}
> import pytest
> from tests.common.impala_test_suite import ImpalaTestSuite
> class TestStringQueries(ImpalaTestSuite):
>   @classmethod
>   def get_workload(cls):
>     return 'functional-query'
>
>   def test_set_bug(self, vector):
>     # Workaround SET bug
>     vector.get_value('exec_option').pop('num_nodes', None)
>     self.run_test_case('QueryTest/set_bug', vector)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8390) test_cancel_insert and test_cancel_sort broken

2019-04-04 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8390:
--

 Summary: test_cancel_insert and test_cancel_sort broken
 Key: IMPALA-8390
 URL: https://issues.apache.org/jira/browse/IMPALA-8390
 Project: IMPALA
  Issue Type: Bug
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


The tests test_cancel_insert and test_cancel_sort in test_cancellation.py are 
both broken because they specify a test dimension 'action' that was renamed as 
part of IMPALA-7205.

More generally, test_cancellation.py has a large number of test dimensions that 
blow up into a huge test matrix, and we should think through which combinations 
actually give us the coverage we want, as sketched below.
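
A schematic sketch of the kind of pruning meant here, written against plain 
pytest rather than the Impala test-dimension framework (the dimension names and 
the filter are purely illustrative):

{code}
import itertools

import pytest

ACTIONS = ['WAIT', 'FAIL']
CANCEL_DELAYS = [0, 1, 5]
TABLE_FORMATS = ['text', 'parquet', 'kudu']


def adds_coverage(action, delay, fmt):
  # Keep only the combinations believed to exercise distinct code paths.
  return not (action == 'FAIL' and delay > 0)


MATRIX = [c for c in itertools.product(ACTIONS, CANCEL_DELAYS, TABLE_FORMATS)
          if adds_coverage(*c)]


@pytest.mark.parametrize('action,delay,fmt', MATRIX)
def test_cancellation(action, delay, fmt):
  pass  # placeholder body; a real test would start and then cancel a query
{code}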



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8377) Recent toolchain bump breaks Ubuntu 14.04 builds

2019-04-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8377.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Recent toolchain bump breaks Ubuntu 14.04 builds
> 
>
> Key: IMPALA-8377
> URL: https://issues.apache.org/jira/browse/IMPALA-8377
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Thomas Tauber-Marshall
>Priority: Critical
>  Labels: broken-build
> Fix For: Impala 3.3.0
>
>
> Commit 25559dd4 in this change broke the build on Ubuntu 14.04: 
> https://gerrit.cloudera.org/#/c/12824/
> All daemons and any backend tests immediately segfault during startup with 
> this stack:
> {noformat}
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x in ?? ()
> (gdb) where
> #0  0x in ?? ()
> #1  0x7ff0abed9a80 in pthread_once () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #2  0x04a93375 in 
> llvm::ManagedStaticBase::RegisterManagedStatic(void* (*)(), void (*)(void*)) 
> const ()
> #3  0x04a7ac76 in llvm::ManagedStatic<(anonymous 
> namespace)::CommandLineParser, llvm::object_creator<(anonymous 
> namespace)::CommandLineParser>, llvm::object_deleter<(anonymous 
> namespace)::CommandLineParser> >::operator*() [clone .constprop.407] ()
> #4  0x04a843a6 in llvm::cl::Option::addArgument() ()
> #5  0x01b26f27 in _GLOBAL__sub_I_SyntaxHighlighting.cpp ()
> #6  0x04dac9bd in __libc_csu_init ()
> #7  0x7ff0abb24ed5 in __libc_start_main () from 
> /lib/x86_64-linux-gnu/libc.so.6
> #8  0x01b59c97 in _start ()
> {noformat}
> Setting {{IMPALA_KUDU_VERSION}} back to {{5211897}} in impala-config.sh makes 
> the daemons start again, as does setting {{KUDU_IS_SUPPORTED=false}}. 
> However, only the former fixes the be-tests.
> One outcome of this might be "Won't Fix", in which case we would deprecate 
> support for Ubuntu 14.04. If that seems favorable, we should briefly discuss 
> it on dev@.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7184) Support Kudu's READ_YOUR_WRITES scan mode

2019-03-28 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804227#comment-16804227
 ] 

Thomas Tauber-Marshall commented on IMPALA-7184:


[~andrew.wong] I played around with this and it doesn't work, at least not as 
Impala expects it to work:
- I create a Kudu table, insert some data into it, and scan it back at 
READ_YOUR_WRITES. Everything works as expected.
- I wait longer than 'tablet_history_max_age_sec' and attempt to scan it again 
at READ_YOUR_WRITES (using either the same KuduClient or a new one). This 
results in an error of the form 'Snapshot timestamp is earlier than the ancient 
history mark...'

I can avoid the error if I interact with the table in some other way (e.g. 
performing a scan at READ_LATEST or doing an ALTER) and then scan it at 
READ_YOUR_WRITES within 'tablet_history_max_age_sec'.

I can also avoid the error if I call SetLatestObservedTimestamp() with a 
timestamp more recent than 'tablet_history_max_age_sec', but currently Impala 
only calls SetLatestObservedTimestamp() if you're in a session in which there 
was a previous DML operation (in which case it's set to the return value of 
GetLatestObservedTimestamp() just after the DML has been flushed for the last 
time). Is the expectation that Impala should always call 
SetLatestObservedTimestamp() before scans?

> Support Kudu's READ_YOUR_WRITES scan mode
> -
>
> Key: IMPALA-7184
> URL: https://issues.apache.org/jira/browse/IMPALA-7184
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>  Labels: kudu
>
> Kudu recently added a new scan mode called READ_YOUR_WRITES which provides 
> better consistency guarantees than READ_LATEST or READ_AT_SNAPSHOT, the 
> options currently supported by Impala.
> Unfortunately, READ_YOUR_WRITES is currently affected by a bug that makes it 
> unusable by Impala (KUDU-2233). Once this is fixed, we should add support for 
> it, and consider either setting it as the default, or at least using it in 
> tests, see the discussion in https://gerrit.cloudera.org/#/c/10503/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8336) TestNestedTypes.test_tpch_mem_limit_single_node failed on centos6

2019-03-25 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800897#comment-16800897
 ] 

Thomas Tauber-Marshall commented on IMPALA-8336:


Another instance of this, on ubuntu16: 
https://jenkins.impala.io/job/gerrit-verify-dryrun/3946/

> TestNestedTypes.test_tpch_mem_limit_single_node failed on centos6
> -
>
> Key: IMPALA-8336
> URL: https://issues.apache.org/jira/browse/IMPALA-8336
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Joe McDonnell
>Priority: Critical
>  Labels: broken-build
>
> The test expects the memory limit to be exceeded, but it doesn't happen:
> {noformat}
> query_test.test_nested_types.TestNestedTypes.test_tpch_mem_limit_single_node[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> orc/def/block]
> query_test/test_nested_types.py:123: in test_tpch_mem_limit_single_node
> new_vector, use_db='tpch_nested' + db_suffix)
> common/impala_test_suite.py:487: in run_test_case
> assert False, "Expected exception: %s" % expected_str
> E   AssertionError: Expected exception: row_regex: .*Memory limit exceeded: 
> Failed to allocate [0-9]+ bytes for collection 
> 'tpch_nested_.*.customer.c_orders.item.o_lineitems'.*{noformat}
> Seen once on Centos 6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8207) Fix query loading in run-workload.py

2019-02-20 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8207.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Fix query loading in run-workload.py
> 
>
> Key: IMPALA-8207
> URL: https://issues.apache.org/jira/browse/IMPALA-8207
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.2.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.2.0
>
>
> The code that run-workload.py uses to retrieve the queries for particular 
> workloads has not been kept up to date with changes to the contents of the 
> testdata/workload/* directories, resulting in it picking up and running 
> various queries that were not really intended to be part of the workloads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8299) GroupingAggregator::Partition::Close() may access an uninitialized hash table

2019-03-19 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-8299:
---
Target Version: Impala 3.2.0  (was: Impala 3.3.0)

> GroupingAggregator::Partition::Close() may access an uninitialized hash table
> -
>
> Key: IMPALA-8299
> URL: https://issues.apache.org/jira/browse/IMPALA-8299
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0, Impala 3.2.0
>Reporter: Michael Ho
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: crash
> Fix For: Impala 3.2.0
>
>
> On the rare occasion that {{Suballocator::Allocate()}} fails in 
> {{HashTable::init()}}, {{GroupingAggregator::Partition::Close()}} may access 
> an uninitialized hash table, leading to the crash below:
> {noformat}
> #4  0x7f5413a1268f in JVM_handle_linux_signal () from 
> ./sysroot/usr/java/jdk1.8.0_144/jre/lib/amd64/server/libjvm.so
> #5  0x7f5413a08be3 in signalHandler(int, siginfo*, void*) () from 
> ./sysroot/usr/java/jdk1.8.0_144/jre/lib/amd64/server/libjvm.so
> #6  
> #7  0x023c5c00 in impala::HashTable::NextFilledBucket 
> (this=0x1a1fa000, bucket_idx=0x7f533cfc73d0, node=0x7f533cfc73c8)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/hash-table.inline.h:185
> #8  0x0244b639 in impala::HashTable::Begin (this=0x1a1fa000, 
> ctx=0x15445e00) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/hash-table.inline.h:163
> #9  0x02457c69 in impala::GroupingAggregator::Partition::Close 
> (this=0x133ef3e0, finalize_rows=true)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator-partition.cc:207
> #10 0x02448f26 in impala::GroupingAggregator::ClosePartitions 
> (this=0x1327aa00) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator.cc:939
> #11 0x02443622 in impala::GroupingAggregator::Close (this=0x1327aa00, 
> state=0x1bf5f180) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator.cc:386
> #12 0x02412ce4 in impala::AggregationNode::Close (this=0x1346f600, 
> state=0x1bf5f180) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/aggregation-node.cc:139
> #13 0x0242a69f in impala::BlockingJoinNode::ProcessBuildInputAsync 
> (this=0xb466480, state=0x1bf5f180, build_sink=0xf35f600, 
> status=0x7f53402ddb20)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/blocking-join-node.cc:173
> #14 0x0242a865 in 
> impala::BlockingJoinNodeoperator()(void) const 
> (__closure=0x21c6fd80) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/blocking-join-node.cc:212
> #15 0x0242c4d5 in 
> boost::detail::function::void_function_obj_invoker0  impala::DataSink*)::, 
> void>::invoke(boost::detail::function::function_buffer &) 
> (function_obj_ptr=...) at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:153
> #16 0x01d7eb4e in boost::function0::operator() 
> (this=0x7f533cfc7ba0) at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:767
> #17 0x0224f3d1 in impala::Thread::SuperviseThread(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*) (name=...,
> category=..., functor=..., parent_thread_info=0x7f53402de850, 
> thread_started=0x7f53402dd7d0) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/thread.cc:359
> #18 0x02257755 in boost::_bi::list5, 
> boost::_bi::value, boost::_bi::value >, 
> boost::_bi::value, 
> boost::_bi::value to continue, or q  to 
> quit---
> romise*> >::operator() const&, std::string const&, boost::function, impala::ThreadDebugInfo 
> const*, impala::Promise*), 
> boost::_bi::list0>(boost::_bi::type, void (*&)(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*), boost::_bi::list0&, int) (
> this=0x14cf5fc0,
> f=@0x14cf5fb8: 0x224f06a  const&, std::string const&, boost::function, impala::ThreadDebugInfo 
> const*, impala::Promise*)>, a=...)
> at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/bind/bind.hpp:525
> (gdb) f 7
> #7  0x023c5c00 in impala::HashTable::NextFilledBucket 
> (this=0x1a1fa000, bucket_idx=0x7f533cfc73d0, 

[jira] [Updated] (IMPALA-8299) GroupingAggregator::Partition::Close() may access an uninitialized hash table

2019-03-19 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-8299:
---
Fix Version/s: (was: Impala 3.3.0)
   Impala 3.2.0

> GroupingAggregator::Partition::Close() may access an uninitialized hash table
> -
>
> Key: IMPALA-8299
> URL: https://issues.apache.org/jira/browse/IMPALA-8299
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0, Impala 3.2.0
>Reporter: Michael Ho
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: crash
> Fix For: Impala 3.2.0
>
>
> On the rare occasion that {{Suballocator::Allocate()}} fails in 
> {{HashTable::init()}}, {{GroupingAggregator::Partition::Close()}} may access 
> an uninitialized hash table, leading to the crash below:
> {noformat}
> #4  0x7f5413a1268f in JVM_handle_linux_signal () from 
> ./sysroot/usr/java/jdk1.8.0_144/jre/lib/amd64/server/libjvm.so
> #5  0x7f5413a08be3 in signalHandler(int, siginfo*, void*) () from 
> ./sysroot/usr/java/jdk1.8.0_144/jre/lib/amd64/server/libjvm.so
> #6  
> #7  0x023c5c00 in impala::HashTable::NextFilledBucket 
> (this=0x1a1fa000, bucket_idx=0x7f533cfc73d0, node=0x7f533cfc73c8)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/hash-table.inline.h:185
> #8  0x0244b639 in impala::HashTable::Begin (this=0x1a1fa000, 
> ctx=0x15445e00) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/hash-table.inline.h:163
> #9  0x02457c69 in impala::GroupingAggregator::Partition::Close 
> (this=0x133ef3e0, finalize_rows=true)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator-partition.cc:207
> #10 0x02448f26 in impala::GroupingAggregator::ClosePartitions 
> (this=0x1327aa00) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator.cc:939
> #11 0x02443622 in impala::GroupingAggregator::Close (this=0x1327aa00, 
> state=0x1bf5f180) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator.cc:386
> #12 0x02412ce4 in impala::AggregationNode::Close (this=0x1346f600, 
> state=0x1bf5f180) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/aggregation-node.cc:139
> #13 0x0242a69f in impala::BlockingJoinNode::ProcessBuildInputAsync 
> (this=0xb466480, state=0x1bf5f180, build_sink=0xf35f600, 
> status=0x7f53402ddb20)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/blocking-join-node.cc:173
> #14 0x0242a865 in 
> impala::BlockingJoinNodeoperator()(void) const 
> (__closure=0x21c6fd80) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/blocking-join-node.cc:212
> #15 0x0242c4d5 in 
> boost::detail::function::void_function_obj_invoker0  impala::DataSink*)::, 
> void>::invoke(boost::detail::function::function_buffer &) 
> (function_obj_ptr=...) at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:153
> #16 0x01d7eb4e in boost::function0::operator() 
> (this=0x7f533cfc7ba0) at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:767
> #17 0x0224f3d1 in impala::Thread::SuperviseThread(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*) (name=...,
> category=..., functor=..., parent_thread_info=0x7f53402de850, 
> thread_started=0x7f53402dd7d0) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/thread.cc:359
> #18 0x02257755 in boost::_bi::list5, 
> boost::_bi::value, boost::_bi::value >, 
> boost::_bi::value, 
> boost::_bi::value to continue, or q  to 
> quit---
> romise*> >::operator() const&, std::string const&, boost::function, impala::ThreadDebugInfo 
> const*, impala::Promise*), 
> boost::_bi::list0>(boost::_bi::type, void (*&)(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*), boost::_bi::list0&, int) (
> this=0x14cf5fc0,
> f=@0x14cf5fb8: 0x224f06a  const&, std::string const&, boost::function, impala::ThreadDebugInfo 
> const*, impala::Promise*)>, a=...)
> at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/bind/bind.hpp:525
> (gdb) f 7
> #7  0x023c5c00 in impala::HashTable::NextFilledBucket 
> (this=0x1a1fa000, 

[jira] [Assigned] (IMPALA-8327) TestRPCTimeout::test_reportexecstatus_retry() times out on exhaustive build

2019-03-20 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-8327:
--

Assignee: Thomas Tauber-Marshall

> TestRPCTimeout::test_reportexecstatus_retry() times out on exhaustive build
> ---
>
> Key: IMPALA-8327
> URL: https://issues.apache.org/jira/browse/IMPALA-8327
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Joe McDonnell
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: broken-build
>
>  
> An exhaustive build timed out while executing 
> TestRPCTimeout::test_reportexecstatus_retry():
> {noformat}
> 20:22:45 
> custom_cluster/test_rpc_timeout.py::TestRPCTimeout::test_reportexecstatus_jitter[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> text/none] PASSED
> 09:50:18 
> custom_cluster/test_rpc_timeout.py::TestRPCTimeout::test_reportexecstatus_retry[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> text/none] 
> 09:50:18 
> 09:50:18  Tests TIMED OUT! {noformat}
> The impalad log shows a few messages like this:
> {noformat}
> I0319 20:22:56.376278 13826 impala-service-pool.cc:130] ReportExecStatus 
> request on impala.ControlService from 127.0.0.1:46358 dropped due to 
> backpressure. The service queue contains 0 items out of a maximum of 
> 2147483647; memory consumption is 16.17 KB. Contents of service 
> queue:{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8327) TestRPCTimeout::test_reportexecstatus_retry() times out on exhaustive build

2019-03-20 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797454#comment-16797454
 ] 

Thomas Tauber-Marshall commented on IMPALA-8327:


I have a patch out for IMPALA-2990: https://gerrit.cloudera.org/#/c/12299/

I'll close this once that goes in.

> TestRPCTimeout::test_reportexecstatus_retry() times out on exhaustive build
> ---
>
> Key: IMPALA-8327
> URL: https://issues.apache.org/jira/browse/IMPALA-8327
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Joe McDonnell
>Priority: Blocker
>  Labels: broken-build
>
>  
> An exhaustive build timed out while executing 
> TestRPCTimeout::test_reportexecstatus_retry():
> {noformat}
> 20:22:45 
> custom_cluster/test_rpc_timeout.py::TestRPCTimeout::test_reportexecstatus_jitter[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> text/none] PASSED
> 09:50:18 
> custom_cluster/test_rpc_timeout.py::TestRPCTimeout::test_reportexecstatus_retry[protocol:
>  beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> text/none] 
> 09:50:18 
> 09:50:18  Tests TIMED OUT! {noformat}
> The impalad log shows a few messages like this:
> {noformat}
> I0319 20:22:56.376278 13826 impala-service-pool.cc:130] ReportExecStatus 
> request on impala.ControlService from 127.0.0.1:46358 dropped due to 
> backpressure. The service queue contains 0 items out of a maximum of 
> 2147483647; memory consumption is 16.17 KB. Contents of service 
> queue:{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8299) GroupingAggregator::Partition::Close() may access an uninitialized hash table

2019-03-15 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8299.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> GroupingAggregator::Partition::Close() may access an uninitialized hash table
> -
>
> Key: IMPALA-8299
> URL: https://issues.apache.org/jira/browse/IMPALA-8299
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0, Impala 3.2.0
>Reporter: Michael Ho
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>  Labels: crash
> Fix For: Impala 3.3.0
>
>
> On the rare occasion that {{Suballocator::Allocate()}} fails in 
> {{HashTable::init()}}, {{GroupingAggregator::Partition::Close()}} may access 
> an uninitialized hash table, leading to the crash below:
> {noformat}
> #4  0x7f5413a1268f in JVM_handle_linux_signal () from 
> ./sysroot/usr/java/jdk1.8.0_144/jre/lib/amd64/server/libjvm.so
> #5  0x7f5413a08be3 in signalHandler(int, siginfo*, void*) () from 
> ./sysroot/usr/java/jdk1.8.0_144/jre/lib/amd64/server/libjvm.so
> #6  
> #7  0x023c5c00 in impala::HashTable::NextFilledBucket 
> (this=0x1a1fa000, bucket_idx=0x7f533cfc73d0, node=0x7f533cfc73c8)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/hash-table.inline.h:185
> #8  0x0244b639 in impala::HashTable::Begin (this=0x1a1fa000, 
> ctx=0x15445e00) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/hash-table.inline.h:163
> #9  0x02457c69 in impala::GroupingAggregator::Partition::Close 
> (this=0x133ef3e0, finalize_rows=true)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator-partition.cc:207
> #10 0x02448f26 in impala::GroupingAggregator::ClosePartitions 
> (this=0x1327aa00) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator.cc:939
> #11 0x02443622 in impala::GroupingAggregator::Close (this=0x1327aa00, 
> state=0x1bf5f180) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/grouping-aggregator.cc:386
> #12 0x02412ce4 in impala::AggregationNode::Close (this=0x1346f600, 
> state=0x1bf5f180) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/aggregation-node.cc:139
> #13 0x0242a69f in impala::BlockingJoinNode::ProcessBuildInputAsync 
> (this=0xb466480, state=0x1bf5f180, build_sink=0xf35f600, 
> status=0x7f53402ddb20)
> at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/blocking-join-node.cc:173
> #14 0x0242a865 in 
> impala::BlockingJoinNodeoperator()(void) const 
> (__closure=0x21c6fd80) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/exec/blocking-join-node.cc:212
> #15 0x0242c4d5 in 
> boost::detail::function::void_function_obj_invoker0  impala::DataSink*)::, 
> void>::invoke(boost::detail::function::function_buffer &) 
> (function_obj_ptr=...) at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:153
> #16 0x01d7eb4e in boost::function0::operator() 
> (this=0x7f533cfc7ba0) at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:767
> #17 0x0224f3d1 in impala::Thread::SuperviseThread(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*) (name=...,
> category=..., functor=..., parent_thread_info=0x7f53402de850, 
> thread_started=0x7f53402dd7d0) at 
> /data/jenkins/workspace/impala-private-parameterized/repos/Impala/be/src/util/thread.cc:359
> #18 0x02257755 in boost::_bi::list5, 
> boost::_bi::value, boost::_bi::value >, 
> boost::_bi::value, 
> boost::_bi::value to continue, or q  to 
> quit---
> romise*> >::operator() const&, std::string const&, boost::function, impala::ThreadDebugInfo 
> const*, impala::Promise*), 
> boost::_bi::list0>(boost::_bi::type, void (*&)(std::string const&, 
> std::string const&, boost::function, impala::ThreadDebugInfo const*, 
> impala::Promise*), boost::_bi::list0&, int) (
> this=0x14cf5fc0,
> f=@0x14cf5fb8: 0x224f06a  const&, std::string const&, boost::function, impala::ThreadDebugInfo 
> const*, impala::Promise*)>, a=...)
> at 
> /data/jenkins/workspace/impala-private-parameterized/Impala-Toolchain/boost-1.57.0-p3/include/boost/bind/bind.hpp:525
> (gdb) f 7
> #7  0x023c5c00 in impala::HashTable::NextFilledBucket 
> (this=0x1a1fa000, 

[jira] [Assigned] (IMPALA-8173) run-workload.py KeyError on 'query_id'

2019-02-07 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-8173:
--

Assignee: Thomas Tauber-Marshall

> run-workload.py KeyError on 'query_id'
> --
>
> Key: IMPALA-8173
> URL: https://issues.apache.org/jira/browse/IMPALA-8173
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.2.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> A recent commit (IMPALA-7694) broke bin/run-workload.py by requiring that an 
> ImpalaBeeswaxResult is constructed with a query_id available, which is 
> violated in query_exec_functions.py
> We should fix this, and probably also add an automated test that runs 
> run-workload.py to prevent regressions like this in the future
> [~lv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8173) run-workload.py KeyError on 'query_id'

2019-02-07 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8173:
--

 Summary: run-workload.py KeyError on 'query_id'
 Key: IMPALA-8173
 URL: https://issues.apache.org/jira/browse/IMPALA-8173
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.2.0
Reporter: Thomas Tauber-Marshall


A recent commit (IMPALA-7694) broke bin/run-workload.py by requiring that an 
ImpalaBeeswaxResult is constructed with a query_id available, which is violated 
in query_exec_functions.py

We should fix this, and probably also add an automated test that runs 
run-workload.py to prevent regressions like this in the future

[~lv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-4555) Don't cancel query for failed ReportExecStatus (done=false) RPC

2019-02-08 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-4555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-4555.

   Resolution: Fixed
Fix Version/s: Impala 3.2.0

> Don't cancel query for failed ReportExecStatus (done=false) RPC
> ---
>
> Key: IMPALA-4555
> URL: https://issues.apache.org/jira/browse/IMPALA-4555
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Distributed Exec
>Affects Versions: Impala 2.7.0
>Reporter: Sailesh Mukil
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.2.0
>
>
> We currently try to send the ReportExecStatus RPC up to 3 times if the first 
> 2 times are unsuccessful - due to high network load or a network partition. 
> If all 3 attempts fail, we cancel the fragment instance and hence the query.
> However, we do not need to cancel the fragment instance if sending the report 
> with _done=false_ failed. We can just skip this turn and try again the next 
> time.
> We could probably skip sending the report up to 2 times (if we're unable to 
> send due to high network load and done=false) before falling back to the 
> current behavior, which is to cancel the fragment instance. The point is to 
> try again later, when the network load may be lower, rather than to retry 
> quickly. The chance that the network load drops within 100 ms is lower than 
> within 5 s.
> Also, we probably do not need the retry logic unless we've already skipped 
> twice or done=true.
> This could help reduce the network load on the coordinator for highly 
> concurrent workloads.
> The only drawback I see now is that the QueryExecSummary might be stale for a 
> while (which it would be anyway, because the RPCs would have failed to send).
> P.S.: The proposed solution above may need to change if we go ahead with 
> IMPALA-2990.
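
A minimal Python sketch of the skip-then-retry policy described above (schematic 
only; the real implementation is backend C++, and build_report/send_report/ 
sleep_one_interval are hypothetical stand-ins for the status-report machinery):

{code}
MAX_SKIPPED_INTERMEDIATE_REPORTS = 2


def report_loop(build_report, send_report, sleep_one_interval):
  """Returns False only when the fragment instance should be cancelled."""
  consecutive_failures = 0
  while True:
    report = build_report()  # snapshot of the current exec status
    if send_report(report):
      consecutive_failures = 0
      if report.done:
        return True  # final report delivered
    else:
      if report.done:
        # The done=true report must get through (the real code would retry
        # it a few times before giving up).
        return False
      consecutive_failures += 1
      # A lost done=false report is tolerable: skip this turn and try again
      # at the next reporting interval instead of cancelling right away.
      if consecutive_failures > MAX_SKIPPED_INTERMEDIATE_REPORTS:
        return False
    sleep_one_interval()
{code}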



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Comment Edited] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766304#comment-16766304
 ] 

Thomas Tauber-Marshall edited comment on IMPALA-8183 at 2/12/19 6:31 PM:
-

-Yeah, this may just be a poorly written test. If this is happening very often, 
we can xfail it, otherwise it'll probably be easier to write a good test for 
this case once my rpc debugging patch goes in-

Realized we're talking about an existing test, not the one I just added, which 
I confusingly named 'test_reportexecstatus_retries'. I seem to have reproduced 
this locally, and the behavior I'm seeing doesn't make any sense, so this looks 
like it may be a real bug in IMPALA-4555. I'll keep investigating.


was (Author: twmarshall):
Yeah, this may just be a poorly written test. If this is happening very often, 
we can xfail it, otherwise it'll probably be easier to write a good test for 
this case once my rpc debugging patch goes in

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766304#comment-16766304
 ] 

Thomas Tauber-Marshall commented on IMPALA-8183:


Yeah, this may just be a poorly written test. If this is happening very often, 
we can xfail it, otherwise it'll probably be easier to write a good test for 
this case once my rpc debugging patch goes in

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-8183:
---
Affects Version/s: Impala 3.2.0

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-8183:
---
Priority: Blocker  (was: Major)

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-8183:
---
Component/s: Distributed Exec

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>  Components: Distributed Exec
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766371#comment-16766371
 ] 

Thomas Tauber-Marshall commented on IMPALA-8183:


Alright - figured out what's going on.

The test is designed to cause ReportExecStatus() rpcs to fail by backing up the 
control service queue. Prior to IMPALA-4555, after a failed ReportExecStatus() 
we would wait 'report_status_retry_interval_ms' between retries, which was 
100ms by default and wasn't touched by the test. That 100ms was right on the 
edge of being enough time for the coordinator to keep up with processing the 
reports, so that some would fail but most would succeed. It was always possible 
that we could hit IMPALA-2990 in this setup, but it was unlikely.

Now, we wait 'status_report_interval_ms'. By default, this is 5000ms, so it 
should give the coordinator even more time and make these issues less likely. 
However, the test sets 'status_report_interval_ms' to 10ms, which isn't nearly 
enough time for the coordinator to do its processing, causing lots of the 
ReportExecStatus() RPCs to fail and making us hit IMPALA-2990 pretty often.

I'm not sure what the solution is yet; the test will need to be reworked, but at 
least this isn't a bug in IMPALA-4555.
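
A quick back-of-the-envelope sketch of the report rates involved (the intervals 
come from the comment above; the arithmetic is simply 1000 / interval_ms and 
ignores retries and queueing):

{code}
INTERVALS_MS = [
    ('old retry wait (report_status_retry_interval_ms)', 100),
    ('new default (status_report_interval_ms)', 5000),
    ('test setting (status_report_interval_ms)', 10),
]

for label, interval_ms in INTERVALS_MS:
  reports_per_sec = 1000.0 / interval_ms
  print('%-45s %5d ms -> ~%6.1f reports/sec per fragment instance'
        % (label, interval_ms, reports_per_sec))
{code}

At 10ms each fragment instance reports roughly 500x more often than at the 
5000ms default, which is why the control service queue backs up and so many 
ReportExecStatus() RPCs fail.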

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>  Components: Distributed Exec
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-12 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8183.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>  Components: Distributed Exec
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
> Fix For: Impala 3.3.0
>
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8183) TestRPCTimeout.test_reportexecstatus_retry times out

2019-02-13 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall updated IMPALA-8183:
---
Fix Version/s: (was: Impala 3.3.0)
   Impala 3.2.0

> TestRPCTimeout.test_reportexecstatus_retry times out
> 
>
> Key: IMPALA-8183
> URL: https://issues.apache.org/jira/browse/IMPALA-8183
> Project: IMPALA
>  Issue Type: Bug
>  Components: Distributed Exec
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
> Fix For: Impala 3.2.0
>
>
> There are 2 forms of failure, where the test itself times out, and where the 
> whole test run times out, suspiciously just after running 
> test_reportexecstatus_retry
> {quote}
> Error Message
> Failed: Timeout >7200s
> Stacktrace
> custom_cluster/test_rpc_timeout.py:143: in test_reportexecstatus_retry
> self.execute_query_verify_metrics(self.TEST_QUERY, None, 10)
> custom_cluster/test_rpc_timeout.py:45: in execute_query_verify_metrics
> self.execute_query(query, query_options)
> common/impala_test_suite.py:601: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:632: in execute_query
> return self.__execute_query(self.client, query, query_options)
> common/impala_test_suite.py:699: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:174: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:183: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:360: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:384: in wait_for_finished
> time.sleep(0.05)
> E   Failed: Timeout >7200s
> {quote}
> {quote}
> Test run timed out. This probably happened due to a hung thread which can be 
> confirmed by looking at the stacktrace of running impalad processes at 
> /data/jenkins/workspace/xxx/repos/Impala/logs/timeout_stacktrace
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8199) stress test fails: "No module named RuntimeProfile.ttypes"

2019-02-13 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8199:
--

 Summary: stress test fails: "No module named RuntimeProfile.ttypes"
 Key: IMPALA-8199
 URL: https://issues.apache.org/jira/browse/IMPALA-8199
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


A recent commit (IMPALA-6964) broke the stress test because it added an import 
of a generated thrift value to a Python file that is included by the stress 
test. The stress test is intended to be runnable without doing a full build of 
Impala, but in this case the generated thrift isn't available, leading 
to an import error.
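
A minimal sketch of one possible guard (this is an illustration of the failure mode, not the fix that was actually committed; TRuntimeProfileFormat is an assumed example of the imported symbol):

{code}
# Sketch only, not the committed fix: guard the generated-thrift import so a
# module shared with the stress test still loads when Impala hasn't been built.
try:
    from RuntimeProfile.ttypes import TRuntimeProfileFormat
except ImportError:
    TRuntimeProfileFormat = None  # stress-test environment without generated code

def thrift_profiles_available():
    """True when the generated thrift types were importable."""
    return TRuntimeProfileFormat is not None
{code}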



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8173) run-workload.py KeyError on 'query_id'

2019-02-11 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8173.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> run-workload.py KeyError on 'query_id'
> --
>
> Key: IMPALA-8173
> URL: https://issues.apache.org/jira/browse/IMPALA-8173
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.2.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> A recent commit (IMPALA-7694) broke bin/run-workload.py by requiring that an 
> ImpalaBeeswaxResult is constructed with a query_id available, which is 
> violated in query_exec_functions.py
> We should fix this, and probably also add an automated test that runs 
> run-workload.py to prevent regressions like this in the future
> [~lv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8199) stress test fails: "No module named RuntimeProfile.ttypes"

2019-02-15 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8199.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> stress test fails: "No module named RuntimeProfile.ttypes"
> --
>
> Key: IMPALA-8199
> URL: https://issues.apache.org/jira/browse/IMPALA-8199
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> A recent commit (IMPALA-6964) broke the stress test because it added an 
> import of a generated thrift value to a Python file that is included by the 
> stress test. The stress test is intended to be runnable without doing a full 
> build of Impala, but in this case the generated thrift isn't available, 
> leading to an import error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8207) Fix query loading in run-workload.py

2019-02-15 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8207:
--

 Summary: Fix query loading in run-workload.py
 Key: IMPALA-8207
 URL: https://issues.apache.org/jira/browse/IMPALA-8207
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.2.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


The code that run-workload.py uses to retrieve the queries for particular 
workloads has not been kept up to date with changes to the contents of the 
testdata/workload/* directories, resulting in it picking up and running various 
queries that were not really intended to be part of the workloads.
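
One way to tighten this, sketched below (illustrative only; the directory layout, file names, and function are assumptions, not the actual run-workload.py change), is to restrict query discovery to an explicit allow-list per workload instead of globbing everything under the workload directory:

{code}
# Sketch only: pick up just the files explicitly listed for a workload.
import os

def load_workload_queries(workload_dir, allowed_files):
    """Returns paths of .test files that are both present and allow-listed."""
    queries_dir = os.path.join(workload_dir, "queries")
    picked = []
    for name in sorted(os.listdir(queries_dir)):
        if name in allowed_files and name.endswith(".test"):
            picked.append(os.path.join(queries_dir, name))
    return picked

# Example (hypothetical paths): only the canonical query file, not ad hoc additions.
# load_workload_queries("testdata/workloads/tpch", {"tpch-queries.test"})
{code}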



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8139) Report DML stats incrementally

2019-01-29 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8139:
--

 Summary: Report DML stats incrementally
 Key: IMPALA-8139
 URL: https://issues.apache.org/jira/browse/IMPALA-8139
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend, Distributed Exec
Affects Versions: Impala 3.2.0
Reporter: Thomas Tauber-Marshall


Impala collects some stats related to dml execution. Currently, these are 
reported back to the coordinator (in a DmlExecStatusPB) only with the final 
status report, as it's tricky to report them in an idempotent way.

With IMPALA-4555, we're introducing functionality for portions of the status 
report to be non-idempotent. We can use this mechanism to report the dml stats 
incrementally during query execution, instead of once at the end, which is 
useful for user visibility into the status of running queries.
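
A small sketch of the idempotency concern (illustrative only; the names and structure are assumptions, not Impala's actual code): the coordinator must not double-apply a delta when a status report is retried, which sequence-numbered reports make easy to guarantee.

{code}
# Illustrative sketch: per-instance report sequence numbers let the coordinator
# apply incremental DML stats deltas at most once, even when a
# ReportExecStatus() rpc is retried.
class DmlStatsAggregator:
    def __init__(self):
        self.rows_modified = 0
        self._last_seq = {}  # fragment instance id -> last applied report seq no

    def apply_report(self, instance_id, seq_no, rows_delta):
        if self._last_seq.get(instance_id, -1) >= seq_no:
            return  # duplicate or retried report: delta already applied
        self.rows_modified += rows_delta
        self._last_seq[instance_id] = seq_no

agg = DmlStatsAggregator()
agg.apply_report("inst-1", seq_no=1, rows_delta=100)
agg.apply_report("inst-1", seq_no=1, rows_delta=100)  # retry of the same report
assert agg.rows_modified == 100
{code}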



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8138) Re-introduce rpc debugging options

2019-01-29 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8138:
--

 Summary: Re-introduce rpc debugging options
 Key: IMPALA-8138
 URL: https://issues.apache.org/jira/browse/IMPALA-8138
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.2.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


In the past, we had fault injection options for backend rpcs implemented in 
ImpalaBackendClient. With the move to krpc, we lost some of those options. We 
should re-introduce an equivalent mechanism for our backend krpc calls to make 
it easy to simulate various rpc failure scenarios.
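
A rough sketch of the general mechanism (written in Python only for brevity; the real hooks would sit at the C++ krpc call sites, and the action format shown is an assumption, not an existing Impala flag):

{code}
# Sketch of debug-action style rpc fault injection, not Impala's implementation.
# A per-rpc action says how often to inject a failure before the real call runs.
import random

class InjectedRpcError(Exception):
    pass

# e.g. {"ReportExecStatus": ("FAIL", 0.5)} would fail roughly half of those rpcs.
DEBUG_ACTIONS = {}

def invoke_rpc(rpc_name, do_call):
    """Consults the configured action for rpc_name before invoking the real call."""
    action = DEBUG_ACTIONS.get(rpc_name)
    if action:
        kind, probability = action
        if kind == "FAIL" and random.random() < probability:
            raise InjectedRpcError("injected failure for %s" % rpc_name)
    return do_call()
{code}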



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8407) Warn when Impala shell fails to connect due to tlsv1.2

2019-04-11 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8407:
--

 Summary: Warn when Impala shell fails to connect due to tlsv1.2
 Key: IMPALA-8407
 URL: https://issues.apache.org/jira/browse/IMPALA-8407
 Project: IMPALA
  Issue Type: Improvement
Affects Versions: Impala 3.3.0
Reporter: Thomas Tauber-Marshall


When impala-shell is used to connect to an impala cluster with 
--ssl_minimum_version=tlsv1.2, if the Python version being used is < 2.7.9, the 
connection will fail due to a limitation of TSSLSocket. See IMPALA-6990 for 
more details.

Currently, when this occurs, the error that gets printed is "EOF occurred in 
violation of protocol", which is not very helpful. We should detect this 
situation and print a more informative warning
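
A sketch of the proposed check (illustrative only, not impala-shell's code; the string matching and warning wording are assumptions):

{code}
# Sketch only: map the unhelpful SSL error to a hint about the TLSv1.2
# limitation of TSSLSocket on Python < 2.7.9.
import sys

def explain_connect_error(error_message):
    if ("EOF occurred in violation of protocol" in error_message
            and sys.version_info < (2, 7, 9)):
        return ("Warning: this Python version cannot negotiate TLSv1.2, which the "
                "server may require (--ssl_minimum_version=tlsv1.2). Upgrade to "
                "Python >= 2.7.9 or relax the server's minimum TLS version.")
    return error_message

print(explain_connect_error("EOF occurred in violation of protocol (_ssl.c:590)"))
{code}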



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-8420) Support HTTP based HS2/beeswax endpoints on coordinators

2019-05-15 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall reassigned IMPALA-8420:
--

Assignee: Thomas Tauber-Marshall  (was: Dinesh Garg)

> Support HTTP based HS2/beeswax endpoints on coordinators
> 
>
> Key: IMPALA-8420
> URL: https://issues.apache.org/jira/browse/IMPALA-8420
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend, Clients
>Affects Versions: Impala 3.3.0
>Reporter: bharath v
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> The ask is to support HTTP based endpoints for client <-> coordinator 
> communication. 
> Currently they are implemented as thrift over binary. With this, the 
> coordinator can support client reconnects (in case of flaky clients) and also 
> opens up the possibility to use smarter L7 load balancers instead of relying 
> on L4 load balancers using IP hashing.
> Some notes (based on my research so far)
> - Thrift supports THttpTransport but cookie support is unclear
> - We need to rethink the server side session management a bit, since the 
> current session handling is tied to the connection. If a connection is 
> closed, server side session is wiped off. We need to decouple both to support 
> reconnects.
> - Current query lifecycle is tied to the client since the coordinator expects 
> the client to drain the current result set row batch before producing the 
> next one. If reconnects are supported, coordinator needs to aggressively 
> materialize result set so that the resources are not held for longer while 
> waiting for the client to reconnect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-8538) Support hiveserver2 over HTTP

2019-05-15 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall closed IMPALA-8538.
--
Resolution: Duplicate

Duplicate of IMPALA-8420.

> Support hiveserver2 over HTTP 
> --
>
> Key: IMPALA-8538
> URL: https://issues.apache.org/jira/browse/IMPALA-8538
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
>
> Impala should provide the option to connect to our hiveserver2 interface over 
> http, to give clients more flexibility in how they would like to connect.
> This should include support for https and some form of authentication, 
> probably BASIC auth against ldap to start; Kerberos support can be added in a 
> later patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7930) Crash in thrift-server-test

2019-06-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7930.

Resolution: Cannot Reproduce

> Crash in thrift-server-test
> ---
>
> Key: IMPALA-7930
> URL: https://issues.apache.org/jira/browse/IMPALA-7930
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Lars Volker
>Assignee: Thomas Tauber-Marshall
>Priority: Critical
>  Labels: broken-build, flaky
>
>  I've seen a crash in thrift-server-test during an exhaustive test run. 
> Unfortunately the core file indicated that it was written by a directory, 
> which caused the automatic core dump resolution to fail. Here's the resolved 
> minidump:
> {noformat}
> Crash reason:  SIGABRT
> Crash address: 0x7d11d19
> Process uptime: not available
> Thread 0 (crashed)
>  0  libc-2.17.so + 0x351f7
> rax = 0x   rdx = 0x0006
> rcx = 0x   rbx = 0x7f1e65876000
> rsi = 0x1d19   rdi = 0x1d19
> rbp = 0x7f1e61dbde68   rsp = 0x7fffc22796d8
>  r8 = 0x000a1a10r9 = 0xfefefefefeff092d
> r10 = 0x0008   r11 = 0x0202
> r12 = 0x029dca31   r13 = 0x033a5e00
> r14 = 0x   r15 = 0x
> rip = 0x7f1e61c721f7
> Found by: given as instruction pointer in context
>  1  libc-2.17.so + 0x368e8
> rsp = 0x7fffc22796e0   rip = 0x7f1e61c738e8
> Found by: stack scanning
>  2  libc-2.17.so + 0x17df70
> rsp = 0x7fffc2279770   rip = 0x7f1e61dbaf70
> Found by: stack scanning
>  3  thrift-server-test!_fini + 0xdf918
> rsp = 0x7fffc2279778   rip = 0x02ab0288
> Found by: stack scanning
>  4  libc-2.17.so + 0x2fbc3
> rsp = 0x7fffc2279790   rip = 0x7f1e61c6cbc3
> Found by: stack scanning
>  5  
> thrift-server-test!testing::internal::TestEventRepeater::OnTestProgramEnd(testing::UnitTest
>  const&) + 0x55
> rsp = 0x7fffc22797b0   rip = 0x028711f5
> Found by: stack scanning
>  6  libc-2.17.so + 0x17df70
> rbx = 0x   rbp = 0x
> rsp = 0x7fffc22797e0   r12 = 0x
> r13 = 0x0005   rip = 0x7f1e61dbaf70
> Found by: call frame info
>  7  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc22797f0   rip = 0x029dca31
> Found by: stack scanning
>  8  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc22797f8   rip = 0x033a5e00
> Found by: stack scanning
>  9  libc-2.17.so + 0x180e68
> rsp = 0x7fffc2279808   rip = 0x7f1e61dbde68
> Found by: stack scanning
> 10  libc-2.17.so + 0x2e266
> rsp = 0x7fffc2279810   rip = 0x7f1e61c6b266
> Found by: stack scanning
> 11  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc2279818   rip = 0x033a5e00
> Found by: stack scanning
> 12  libc-2.17.so + 0x17df70
> rsp = 0x7fffc2279820   rip = 0x7f1e61dbaf70
> Found by: stack scanning
> 13  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc2279828   rip = 0x029dca31
> Found by: stack scanning
> 14  thrift-server-test!_fini + 0xdf918
> rsp = 0x7fffc2279840   rip = 0x02ab0288
> Found by: stack scanning
> 15  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc2279850   rip = 0x033a5e00
> Found by: stack scanning
> 16  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc2279860   rip = 0x029dca31
> Found by: stack scanning
> 17  thrift-server-test!_fini + 0x9d5490
> rsp = 0x7fffc2279870   rip = 0x033a5e00
> Found by: stack scanning
> 18  thrift-server-test!_fini + 0xc0c1
> rsp = 0x7fffc2279878   rip = 0x029dca31
> Found by: stack scanning
> 19  thrift-server-test!_fini + 0xdf918
> rsp = 0x7fffc2279880   rip = 0x02ab0288
> Found by: stack scanning
> 20  libc-2.17.so + 0x2e312
> rsp = 0x7fffc2279890   rip = 0x7f1e61c6b312
> Found by: stack scanning
> 21  
> thrift-server-test!boost::shared_array::~shared_array()
>  + 0x70
> rsp = 0x7fffc22798b0   rip = 0x02719b40
> Found by: stack scanning
> 22  
> thrift-server-test!boost::detail::sp_counted_impl_p::dispose()
>  + 0x4f
> rsp = 0x7fffc22798c0   rip = 0x0271e1af
> Found by: stack scanning
> 23  
> thrift-server-test!boost::detail::sp_counted_impl_pd  boost::checked_array_deleter 
> >::dispose() + 0xaa
> rbx = 0x042f7128   rsp = 0x7fffc22798d0
> rip = 0x02719cfa
> Found by: call frame info
> 24  
> thrift-server-test!boost::shared_array::~shared_array()
>  + 

[jira] [Resolved] (IMPALA-7326) test_kudu_partition_ddl failed with exception message: "Table already exists"

2019-06-04 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-7326.

Resolution: Cannot Reproduce

> test_kudu_partition_ddl failed with exception message: "Table already exists"
> -
>
> Key: IMPALA-7326
> URL: https://issues.apache.org/jira/browse/IMPALA-7326
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 3.1.0, Impala 3.2.0
>Reporter: Michael Ho
>Assignee: Thomas Tauber-Marshall
>Priority: Critical
>  Labels: broken-build, flaky, kudu
>
> cc'ing [~twm378]. Does it look like some known issue? Putting it in the 
> catalog category for now, but please feel free to update the component as you 
> see fit.
> {noformat}
> query_test/test_kudu.py:96: in test_kudu_partition_ddl
> self.run_test_case('QueryTest/kudu_partition_ddl', vector, 
> use_db=unique_database)
> common/impala_test_suite.py:397: in run_test_case
> result = self.__execute_query(target_impalad_client, query, user=user)
> common/impala_test_suite.py:612: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:160: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:173: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:339: in __execute_query
> handle = self.execute_query_async(query_string, user=user)
> beeswax/impala_beeswax.py:335: in execute_query_async
> return self.__do_rpc(lambda: self.imp_service.query(query,))
> beeswax/impala_beeswax.py:460: in __do_rpc
> raise ImpalaBeeswaxException(self.__build_error_message(b), b)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> EINNER EXCEPTION: 
> EMESSAGE: ImpalaRuntimeException: Error creating Kudu table 
> 'impala::test_kudu_partition_ddl_7e04e8f9.simple_hash_range'
> E   CAUSED BY: NonRecoverableException: Table 
> impala::test_kudu_partition_ddl_7e04e8f9.simple_hash_range already exists 
> with id 3e81a4ceff27471cad9fcb3bc0b977c3
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8591) Fuzz test the http endpoint

2019-05-28 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8591:
--

 Summary: Fuzz test the http endpoint
 Key: IMPALA-8591
 URL: https://issues.apache.org/jira/browse/IMPALA-8591
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Affects Versions: Impala 3.3.0
Reporter: Thomas Tauber-Marshall
Assignee: Thomas Tauber-Marshall


IMPALA-8538 is adding an http endpoint for clients to connect to and run 
queries from. The patch for it adds basic functional testing, but we should do 
additional testing, such as fuzz testing.
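
One possible approach, sketched below (not the test added for this JIRA; the port and request sizes are assumptions, with 28000 assumed to be the hs2-http port of a local minicluster impalad): throw random bytes at the HTTP endpoint and verify the daemon neither crashes nor hangs.

{code}
# Sketch only: send random byte blobs where an HTTP request is expected and make
# sure the server keeps accepting connections afterwards.
import random
import socket

def fuzz_once(host="localhost", port=28000, max_len=512):
    blob = bytes(random.getrandbits(8) for _ in range(random.randint(1, max_len)))
    with socket.create_connection((host, port), timeout=5) as s:
        try:
            s.sendall(blob)
            s.recv(1024)       # the server may answer with an error or just close
        except socket.error:
            pass               # resets are acceptable; crashes and hangs are not

for _ in range(100):
    fuzz_once()
{code}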



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8597) Improve session maintenance timing logic

2019-05-29 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8597:
--

 Summary: Improve session maintenance timing logic
 Key: IMPALA-8597
 URL: https://issues.apache.org/jira/browse/IMPALA-8597
 Project: IMPALA
  Issue Type: Improvement
Reporter: Thomas Tauber-Marshall


Currently, the coordinator maintains a list of the timeout lengths for all 
sessions that have an idle_session_timeout set. The original intention of this 
was to have the thread that checks for timeouts wake up at an interval of (the 
smallest registered timeout) / 2, but this resulted in IMPALA-5108.

The fix for that bug changed the session maintenance thread to wake up every 1 
second if any timeout is registered, but we still maintain the list of timeout 
values even though only the length of the list is ever used.

Given that the default config is for there to be no session timeouts and that 
the maintenance thread is somewhat inefficient in holding the 
session_state_map_ lock for almost its entire execution, we may want to keep 
the behavior of only waking up once per second if there are any registered 
timeouts, in which case it would be more efficient to just maintain a count of 
timeouts instead of the list.

Or, we may want to just simplify the logic and have the thread always wake up 
once per second, without tracking the registered timeouts at all (esp. with the 
new work in IMPALA-1653 which adds closing of disconnected sessions to the 
maintenance thread), in which case we might want to consider ways to avoid 
holding the session_state_map_ lock for so long, e.g. by sharding it the way we 
did with the client_request_state_map_.
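
A minimal sketch of the "count instead of list" option (illustrative only; names and structure are assumptions, not the coordinator's actual code):

{code}
# Sketch only: the wake-up decision needs a count of sessions with a timeout,
# not the timeout values themselves.
import threading
import time

class SessionMaintenance:
    def __init__(self):
        self._lock = threading.Lock()
        self._sessions_with_timeout = 0

    def register_timeout(self, idle_session_timeout_s):
        if idle_session_timeout_s > 0:
            with self._lock:
                self._sessions_with_timeout += 1

    def unregister_timeout(self, idle_session_timeout_s):
        if idle_session_timeout_s > 0:
            with self._lock:
                self._sessions_with_timeout -= 1

    def maintenance_loop(self, expire_idle_sessions):
        while True:
            with self._lock:
                have_timeouts = self._sessions_with_timeout > 0
            if have_timeouts:
                expire_idle_sessions()  # only scans sessions when needed
            time.sleep(1)               # wake at most once per second
{code}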



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8545) Test ldap authentication

2019-06-03 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Tauber-Marshall resolved IMPALA-8545.

   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Test ldap authentication
> 
>
> Key: IMPALA-8545
> URL: https://issues.apache.org/jira/browse/IMPALA-8545
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Thomas Tauber-Marshall
>Assignee: Thomas Tauber-Marshall
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> Impala doesn't currently have any automated tests that exercise the ldap auth 
> functionality. Some ideas to fix this:
> - Setting up a local ldap server for the minicluster to use during tests, eg. 
> ApacheDS
> - Mocking out the openldap calls



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8616) Document --disconnected_session_timeout

2019-06-03 Thread Thomas Tauber-Marshall (JIRA)
Thomas Tauber-Marshall created IMPALA-8616:
--

 Summary: Document --disconnected_session_timeout
 Key: IMPALA-8616
 URL: https://issues.apache.org/jira/browse/IMPALA-8616
 Project: IMPALA
  Issue Type: Sub-task
  Components: Docs
Reporter: Thomas Tauber-Marshall
Assignee: Alex Rodoni


IMPALA-1653 added a new startup flag, --disconnected_session_timeout, which we 
should document.

We might also want to document the new session behavior, that hs2 sessions 
aren't automatically closed when the connection is closed, though I don't see 
the old behavior documented anywhere so it may be sufficient just to explain 
the flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-8626) JDBC tests don't seem to be using HTTP

2019-06-05 Thread Thomas Tauber-Marshall (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-8626 started by Thomas Tauber-Marshall.
--
> JDBC tests don't seem to be using HTTP
> --
>
> Key: IMPALA-8626
> URL: https://issues.apache.org/jira/browse/IMPALA-8626
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Tim Armstrong
>Assignee: Thomas Tauber-Marshall
>Priority: Blocker
>
> I noticed that the parameterized JDBC tests are passing on the dockerised 
> cluster, which shouldn't be possible because IMPALA-8623 isn't done.
> https://jenkins.impala.io/job/ubuntu-16.04-dockerised-tests/453/testReport/org.apache.impala.service/JdbcTest/
> The connection strings look identical in both cases:
> {noformat}
> 19/06/05 13:10:04 INFO testutil.ImpalaJdbcClient: Connecting to: 
> jdbc:hive2://localhost:21050/default;auth=noSasl
> {noformat}
> {noformat}
> 19/06/05 13:10:04 INFO testutil.ImpalaJdbcClient: Connecting to: 
> jdbc:hive2://localhost:21050/default;auth=noSasl
> {noformat}
> I was looking at related code and saw some misuse of == vs equals() for 
> string comparison here 
> https://github.com/apache/impala/blob/master/fe/src/test/java/org/apache/impala/testutil/ImpalaJdbcClient.java#L172
> {code}
>   private static String getConnectionStr(String connType, String authStr) {
> String connString = DEFAULT_CONNECTION_TEMPLATE + authStr;
> if (connType == "binary") {
>   return String.format(connString, HS2_BINARY_PORT);
> } else {
>   Preconditions.checkState(connType == "http");
>   return String.format(connString + HTTP_TRANSPORT_SPEC, HS2_HTTP_PORT);
> }
> }
> {code}
> But I don't think that explains what I'm seeing above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org


