[jira] [Commented] (IMPALA-9243) Coordinator Web UI should list which executors have been blacklisted

2019-12-16 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997452#comment-16997452
 ] 

Lars Volker commented on IMPALA-9243:
-

I think it would be good to have a counter for the number of times that a node 
was blacklisted (or the total time it spent blacklisted), in addition to a flag. 
That way it's possible to spot bad nodes and check the values against thresholds.
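
As a purely illustrative mock-up (the column names are hypothetical, not an 
existing debug page layout), the /backends table could then look something like:

{noformat}
Address          Status       Times Blacklisted   Total Time Blacklisted
host-1:22000     Active       0                   0s
host-2:22000     Blacklisted  3                   45s
{noformat}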

> Coordinator Web UI should list which executors have been blacklisted
> 
>
> Key: IMPALA-9243
> URL: https://issues.apache.org/jira/browse/IMPALA-9243
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Sahil Takiar
>Priority: Major
>
> Currently, information about which nodes are blacklisted only shows up in 
> runtime profiles and Coordinator logs. It would be nice to display 
> blacklisting information in the Web UI as well so that a user can view which 
> nodes are blacklisted at any given time.
> One potential place to put the blacklisting information is in the /backends 
> page, which already lists all the backends that are part of the cluster. A new 
> column called "Status", with values of either "Active" or 
> "Blacklisted", would be nice (perhaps we should refactor the "Quiescing" 
> column to use the new "Status" column as well). This is similar to what the 
> Spark Web UI does for blacklisted nodes: 
> [https://ndu0e1pobsf1dobtvj5nls3q-wpengine.netdna-ssl.com/wp-content/uploads/2019/08/BLACKLIST-SCHEDULING.png]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-9151) Number of executors during planning needs to account for suspended executor groups

2019-12-03 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-9151.
-
Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Number of executors during planning needs to account for suspended executor 
> groups
> --
>
> Key: IMPALA-9151
> URL: https://issues.apache.org/jira/browse/IMPALA-9151
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
> Fix For: Impala 3.4.0
>
>
> When configuring Impala with executor groups, the planner might see an 
> {{ExecutorMembershipSnapshot}} that has no executors in it. This can happen 
> if the first executor group has not started up yet, or if all executor groups 
> have been shut down. If this happens, the planner will make sub-optimal 
> decisions, e.g. when deciding between a broadcast join and a partitioned hash 
> join (PHJ). In the former case, we should have a configurable fallback cluster 
> size to use during planning. In the latter case, we should hang on to the last 
> executor group size that we had observed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-9208) impala still failed to read orc format table using "enable_orc_scanner=true"

2019-12-03 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker closed IMPALA-9208.
---
Resolution: Not A Problem

> impala still failed to read orc format table using "enable_orc_scanner=true"
> 
>
> Key: IMPALA-9208
> URL: https://issues.apache.org/jira/browse/IMPALA-9208
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Affects Versions: Impala 3.3.0
>Reporter: lv haiyang
>Priority: Major
>
> [bigdata@sdw1 ~]$ impala-shell --var=enable_orc_scanner=true
> Starting Impala Shell without Kerberos authentication
> Opened TCP connection to sdw1:21000
> Connected to sdw1:21000
> Server version: impalad version 3.4.0-SNAPSHOT RELEASE (build 
> 190ad89ebd46161f63a520c2a6d75ad56701e145)
> ***
> Welcome to the Impala shell.
> (Impala Shell v3.4.0-SNAPSHOT (190ad89) built on Tue Nov 19 08:52:31 UTC 2019)
> The SET command shows the current value of all shell and query options.
> ***
> [sdw1:21000] default> use company;
> Query: use company
> [sdw1:21000] company> select * from orc_table;
> Query: select * from orc_table
> Query submitted at: 2019-12-02 11:53:41 (Coordinator: http://sdw1:25000)
> ERROR: NotImplementedException: ORC scans are disabled by 
> --enable_orc_scanner flag



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IMPALA-9208) impala still failed to read orc format table using "enable_orc_scanner=true"

2019-12-03 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987231#comment-16987231
 ] 

Lars Volker commented on IMPALA-9208:
-

You need to pass the flag as a startup parameter to the impalad daemons when 
starting your cluster. Please use u...@impala.apache.org for any questions about 
using Impala.
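
On a development minicluster that would look roughly like the following 
(illustrative sketch; adapt the invocation to however your cluster is managed):

{noformat}
# Restart the impalads with the ORC scanner enabled (illustrative only):
bin/start-impala-cluster.py --impalad_args="--enable_orc_scanner=true"
{noformat}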

> impala still failed to read orc format table using "enable_orc_scanner=true"
> 
>
> Key: IMPALA-9208
> URL: https://issues.apache.org/jira/browse/IMPALA-9208
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Affects Versions: Impala 3.3.0
>Reporter: lv haiyang
>Priority: Major
>
> [bigdata@sdw1 ~]$ impala-shell --var=enable_orc_scanner=true
> Starting Impala Shell without Kerberos authentication
> Opened TCP connection to sdw1:21000
> Connected to sdw1:21000
> Server version: impalad version 3.4.0-SNAPSHOT RELEASE (build 
> 190ad89ebd46161f63a520c2a6d75ad56701e145)
> ***
> Welcome to the Impala shell.
> (Impala Shell v3.4.0-SNAPSHOT (190ad89) built on Tue Nov 19 08:52:31 UTC 2019)
> The SET command shows the current value of all shell and query options.
> ***
> [sdw1:21000] default> use company;
> Query: use company
> [sdw1:21000] company> select * from orc_table;
> Query: select * from orc_table
> Query submitted at: 2019-12-02 11:53:41 (Coordinator: http://sdw1:25000)
> ERROR: NotImplementedException: ORC scans are disabled by 
> --enable_orc_scanner flag



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-9202) Fix flakiness in TestExecutorGroups

2019-11-27 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-9202:

Labels: broken-build flaky flaky-test  (was: flaky-test)

> Fix flakiness in TestExecutorGroups
> ---
>
> Key: IMPALA-9202
> URL: https://issues.apache.org/jira/browse/IMPALA-9202
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 3.4.0
>Reporter: Bikramjeet Vig
>Assignee: Bikramjeet Vig
>Priority: Minor
>  Labels: broken-build, flaky, flaky-test
>
> test_executor_groups.TestExecutorGroups.test_admission_slots failed on an 
> assertion recently with the following stacktrace
> {noformat}
> custom_cluster/test_executor_groups.py:215: in test_admission_slots
> assert ("Initial admission queue reason: Not enough admission control 
> slots "
> E   assert 'Initial admission queue reason: Not enough admission control 
> slots available on host' in 'Query (id=0445eb75d4842dce:219df01e):\n  
> DEBUG MODE WARNING: Query profile created while running a DEBUG buil...0)\n   
>   - NumRowsFetchedFromCache: 0 (0)\n - RowMaterializationRate: 0\n - 
> RowMaterializationTimer: 0.000ns\n
> {noformat}
> On investigating the logs, it seems that the query did in fact get queued 
> with the expected reason. The only explanation I can think of for it failing to 
> appear in the profile is that the profile was fetched before the admission 
> reason could be added to it. This happened in an ASAN build, so I assume the 
> slower execution widened the window in which this can happen.
> From the logs:
> {noformat}
> I1104 18:18:34.144309 113361 impala-server.cc:1046] 
> 0445eb75d4842dce:219df01e] Registered query 
> query_id=0445eb75d4842dce:219df01e 
> session_id=da467385483f4fb3:16683a81d25fe79e
> I1104 18:18:34.144951 113361 Frontend.java:1256] 
> 0445eb75d4842dce:219df01e] Analyzing query: select * from 
> functional_parquet.alltypestiny  where month < 3 and id + 
> random() < sleep(500); db: default
> I1104 18:18:34.149049 113361 BaseAuthorizationChecker.java:96] 
> 0445eb75d4842dce:219df01e] Authorization check took 4 ms
> I1104 18:18:34.149219 113361 Frontend.java:1297] 
> 0445eb75d4842dce:219df01e] Analysis and authorization finished.
> I1104 18:18:34.163229 113885 scheduler.cc:548] 
> 0445eb75d4842dce:219df01e] Exec at coord is false
> I1104 18:18:34.163945 113885 admission-controller.cc:1295] 
> 0445eb75d4842dce:219df01e] Trying to admit 
> id=0445eb75d4842dce:219df01e in pool_name=default-pool 
> executor_group_name=default-pool-group1 per_host_mem_estimate=176.02 MB 
> dedicated_coord_mem_estimate=100.02 MB max_requests=-1 (configured 
> statically) max_queued=200 (configured statically) max_mem=-1.00 B 
> (configured statically)
> I1104 18:18:34.164203 113885 admission-controller.cc:1307] 
> 0445eb75d4842dce:219df01e] Stats: agg_num_running=1, 
> agg_num_queued=0, agg_mem_reserved=8.00 KB,  
> local_host(local_mem_admitted=452.05 MB, num_admitted_running=1, 
> num_queued=0, backend_mem_reserved=8.00 KB)
> I1104 18:18:34.164383 113885 admission-controller.cc:902] 
> 0445eb75d4842dce:219df01e] Queuing, query 
> id=0445eb75d4842dce:219df01e reason: Not enough admission control 
> slots available on host 
> impala-ec2-centos74-r4-4xlarge-ondemand-1f88.vpc.cloudera.com:22002. Needed 1 
> slots but 1/1 are already in use.
> {noformat}
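
One way to make the assertion robust against this race would be to poll the 
runtime profile until the expected line appears instead of fetching it once. A 
minimal sketch (the {{get_runtime_profile}} and {{handle}} names are 
illustrative, not the test framework's actual API):

{code:python}
import re
import time

def wait_for_profile_line(get_runtime_profile, handle, pattern, timeout_s=60):
  """Polls the runtime profile until 'pattern' matches or 'timeout_s' elapses."""
  deadline = time.time() + timeout_s
  while time.time() < deadline:
    profile = get_runtime_profile(handle)
    if re.search(pattern, profile):
      return profile
    time.sleep(0.5)
  raise AssertionError("'%s' not found in profile within %ds" % (pattern, timeout_s))
{code}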



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8863) Add support to run tests over HS2-HTTP

2019-11-27 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8863.
-
Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Add support to run tests over HS2-HTTP
> --
>
> Key: IMPALA-8863
> URL: https://issues.apache.org/jira/browse/IMPALA-8863
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: impyla
> Fix For: Impala 3.4.0
>
>
> Once https://github.com/cloudera/impyla/issues/357 gets addressed, we should 
> run at least some of our tests over hs2-http using Impyla to better test 
> Impyla's HTTP endpoint support.
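
For reference, a minimal sketch of such a connection with Impyla (assuming the 
HTTP transport options from the linked Impyla issue; host and port are 
illustrative):

{code:python}
from impala.dbapi import connect

# Connect to Impala's hs2-http HiveServer2 endpoint (illustrative host/port).
conn = connect(host='localhost', port=28000,
               use_http_transport=True, http_path='cliservice')
cur = conn.cursor()
cur.execute('select 1')
print(cur.fetchall())
{code}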



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8863) Add support to run tests over HS2-HTTP

2019-11-27 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8863:

Target Version: Impala 3.4.0  (was: Impala 3.3.0)

> Add support to run tests over HS2-HTTP
> --
>
> Key: IMPALA-8863
> URL: https://issues.apache.org/jira/browse/IMPALA-8863
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: impyla
> Fix For: Impala 3.4.0
>
>
> Once https://github.com/cloudera/impyla/issues/357 gets addressed, we should 
> run at least some of our tests over hs2-http using Impyla to better test 
> Impyla's HTTP endpoint support.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-9122) TestEventProcessing.test_insert_events flaky in precommit

2019-11-26 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982732#comment-16982732
 ] 

Lars Volker commented on IMPALA-9122:
-

Hit this again here: https://jenkins.impala.io/job/gerrit-verify-dryrun/5292/

> TestEventProcessing.test_insert_events flaky in precommit
> -
>
> Key: IMPALA-9122
> URL: https://issues.apache.org/jira/browse/IMPALA-9122
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: Tim Armstrong
>Assignee: Anurag Mantripragada
>Priority: Critical
>  Labels: broken-build, flaky
>
> https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/8613/
> {noformat}
> custom_cluster.test_event_processing.TestEventProcessing.test_insert_events 
> (from pytest)
> Failing for the past 1 build (Since Failed#8613 )
> Took 1 min 20 sec.
> add description
> Error Message
> assert <bound method TestEventProcessing.wait_for_insert_event_processing of 
> <... object at 0x7f7fa86ec6d0>>(18421) is True
>  +  where <bound method TestEventProcessing.wait_for_insert_event_processing of 
> <... object at 0x7f7fa86ec6d0>> = 
> <... object at 0x7f7fa86ec6d0>.wait_for_insert_event_processing
> Stacktrace
> custom_cluster/test_event_processing.py:82: in test_insert_events
> self.run_test_insert_events()
> custom_cluster/test_event_processing.py:143: in run_test_insert_events
> assert self.wait_for_insert_event_processing(last_synced_event_id) is True
> E   assert <bound method TestEventProcessing.wait_for_insert_event_processing of 
> <... object at 0x7f7fa86ec6d0>>(18421) is True
> E+  where <bound method TestEventProcessing.wait_for_insert_event_processing of 
> <... object at 0x7f7fa86ec6d0>> = 
> <... object at 0x7f7fa86ec6d0>.wait_for_insert_event_processing
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-9032) Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms timeout

2019-11-20 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-9032.
-
Resolution: Duplicate

> Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms 
> timeout
> -
>
> Key: IMPALA-9032
> URL: https://issues.apache.org/jira/browse/IMPALA-9032
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Lars Volker
>Priority: Major
>
> This looks like a bug to me but I'm not entirely sure. I'm trying to run our 
> tests over hs2-http (IMPALA-8863) and after the change for IMPALA-7312 to 
> introduce a non-blocking mode for FetchResults() it looks like we sometimes 
> return an empty result way before {{fetch_rows_timeout_ms}} has elapsed. This 
> triggers a bug in Impyla 
> ([#369|https://github.com/cloudera/impyla/issues/369]), but it also seems 
> like something we should investigate and fix in Impala.
> {noformat}
> I1007 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697988 56527 scheduler.cc:468] 6d4cba4d2e8ccc42:66ce26a8] 
> Exec at coord is falseI1007 22:10:10.698014 54090 impala-hs2-server.cc:663] 
> GetOperationStatus(): query_id=0d43fd73ce4403fd:da25dde9I1007 
> 22:10:10.698173   127 control-service.cc:142] 
> 0646e91fd6a0a953:02949ff3] ExecQueryFInstances(): 
> query_id=0646e91fd6a0a953:02949ff3 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.698356 56527 admission-controller.cc:1270] 
> 6d4cba4d2e8ccc42:66ce26a8] Trying to admit 
> id=6d4cba4d2e8ccc42:66ce26a8 in pool_name=root.default 
> executor_group_name=default per_host_mem_estimate=52.02 MB 
> dedicated_coord_mem_estimate=110.02 MB max_requests=-1 (configured 
> statically) max_queued=200 (configured statically) max_mem=29.30 GB 
> (configured statically)I1007 22:10:10.698386 56527 
> admission-controller.cc:1282] 6d4cba4d2e8ccc42:66ce26a8] Stats: 
> agg_num_running=9, agg_num_queued=0, agg_mem_reserved=8.34 GB,  
> local_host(local_mem_admitted=9.09 GB, num_admitted_running=9, num_queued=0, 
> backend_mem_reserved=6.70 GB)I1007 22:10:10.698415 56527 
> admission-controller.cc:871] 6d4cba4d2e8ccc42:66ce26a8] Admitting 
> query id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.698479 56527 
> impala-server.cc:1713] 6d4cba4d2e8ccc42:66ce26a8] Registering query 
> locationsI1007 22:10:10.698529 56527 coordinator.cc:97] 
> 6d4cba4d2e8ccc42:66ce26a8] Exec() 
> query_id=6d4cba4d2e8ccc42:66ce26a8 stmt=select count(*) from alltypes 
> where month=1I1007 22:10:10.698992 56527 coordinator.cc:361] 
> 6d4cba4d2e8ccc42:66ce26a8] starting execution on 3 backends for 
> query_id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.699383 56523 
> coordinator.cc:375] 0646e91fd6a0a953:02949ff3] started execution on 1 
> backends for query_id=0646e91fd6a0a953:02949ff3I1007 22:10:10.699409 
> 56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec at coord is 
> falseI1007 22:10:10.700017   127 control-service.cc:142] 
> 6d4cba4d2e8ccc42:66ce26a8] ExecQueryFInstances(): 
> query_id=6d4cba4d2e8ccc42:66ce26a8 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.700147 56534 scheduler.cc:468] 
> e1495f928c2cd4f6:eeda82aa] Exec at coord is falseI1007 
> 22:10:10.700234   325 TAcceptQueueServer.cpp:340] New connection to server 
> hiveserver2-http-frontend from client I1007 
> 22:10:10.700286   329 TAcceptQueueServer.cpp:227] TAcceptQueueServer: 
> hiveserver2-http-frontend started connection setup for client  172.18.0.1 Port: 51580>I1007 22:10:10.700314   329 
> TAcceptQueueServer.cpp:245] TAcceptQueueServer: hiveserver2-http-frontend 
> finished connection setup for client I1007 
> 22:10:10.700371 56550 impala-hs2-server.cc:844] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c #results=1 has_more=trueI1007 
> 22:10:10.700508 56551 impala-server.cc:1969] Connection 
> 8249c7defcb10124:1bc65ed9ea562aab from client 172.18.0.1:51576 to server 
> hiveserver2-http-frontend closed. The connection had 1 associated 
> session(s).I1007 22:10:10.700688 53748 impala-beeswax-server.cc:260] close(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700711 53748 
> impala-server.cc:1129] UnregisterQuery(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700721 53748 
> impala-server.cc:1234] Cancel(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700742 53748 
> coordinator.cc:716] 

[jira] [Commented] (IMPALA-9091) query_test.test_scanners.TestScannerReservation.test_scanners flaky

2019-11-20 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978689#comment-16978689
 ] 

Lars Volker commented on IMPALA-9091:
-

I hit this again here: 
https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/8899/testReport/junit/query_test.test_scanners/TestScannerReservation/test_scanners_protocol__beeswax___exec_optionbatch_size___0___num_nodes___0___disable_codegen_rows_threshold___0___disable_codegen___False___abort_on_error___1___exec_single_node_rows_threshold___0table_format__text_none_/

> query_test.test_scanners.TestScannerReservation.test_scanners flaky
> ---
>
> Key: IMPALA-9091
> URL: https://issues.apache.org/jira/browse/IMPALA-9091
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: Tim Armstrong
>Assignee: Zoltán Borók-Nagy
>Priority: Critical
>  Labels: flaky
>
> https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/8463
> {noformat}
> E   AssertionError: Did not find matches for lines in runtime profile:
> E   EXPECTED LINES:
> E   row_regex:.*ParquetRowGroupIdealReservation.*Avg: 3.50 MB.*
> E   
> E   ACTUAL PROFILE:
> E   Query (id=3b48738ce971e36b:b6f52bf5):
> E DEBUG MODE WARNING: Query profile created while running a DEBUG build 
> of Impala. Use RELEASE builds to measure query performance.
> E Summary:
> E   Session ID: 2e4c96b22f2ac6e3:88afa967b63e7983
> E   Session Type: BEESWAX
> E   Start Time: 2019-10-24 21:22:06.311001000
> E   End Time: 2019-10-24 21:22:06.520778000
> E   Query Type: QUERY
> E   Query State: FINISHED
> E   Query Status: OK
> E   Impala Version: impalad version 3.4.0-SNAPSHOT DEBUG (build 
> 8c60e91f7c3812aca14739535a994d21c51fc0b0)
> E   User: ubuntu
> E   Connected User: ubuntu
> E   Delegated User: 
> E   Network Address: :::127.0.0.1:37312
> E   Default Db: functional
> E   Sql Statement: select * from tpch_parquet.lineitem
> E   where l_orderkey < 10
> E   Coordinator: ip-172-31-20-105:22000
> E   Query Options (set by configuration): 
> ABORT_ON_ERROR=1,EXEC_SINGLE_NODE_ROWS_THRESHOLD=0,DISABLE_CODEGEN_ROWS_THRESHOLD=0,TIMEZONE=Universal,CLIENT_IDENTIFIER=query_test/test_scanners.py::TestScannerReservation::()::test_scanners[protocol:beeswax|exec_option:{'batch_size':0;'num_nodes':0;'disable_codegen_rows_threshold':0;'disable_codegen':False;'abort_on_error':1;'exec_single_node_rows_threshold':0}|table_form
> E   Query Options (set by configuration and planner): 
> ABORT_ON_ERROR=1,EXEC_SINGLE_NODE_ROWS_THRESHOLD=0,MT_DOP=0,DISABLE_CODEGEN_ROWS_THRESHOLD=0,TIMEZONE=Universal,CLIENT_IDENTIFIER=query_test/test_scanners.py::TestScannerReservation::()::test_scanners[protocol:beeswax|exec_option:{'batch_size':0;'num_nodes':0;'disable_codegen_rows_threshold':0;'disable_codegen':False;'abort_on_error':1;'exec_single_node_rows_threshold':0}|table_form
> E   Plan: 
> E   
> E   Max Per-Host Resource Reservation: Memory=40.00MB Threads=3
> E   Per-Host Resource Estimates: Memory=1.26GB
> E   Analyzed query: SELECT * FROM tpch_parquet.lineitem WHERE l_orderkey < 
> CAST(10
> E   AS BIGINT)
> E   
> E   F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> E   |  Per-Host Resources: mem-estimate=10.69MB mem-reservation=0B 
> thread-reservation=1
> E   PLAN-ROOT SINK
> E   |  output exprs: tpch_parquet.lineitem.l_orderkey, 
> tpch_parquet.lineitem.l_partkey, tpch_parquet.lineitem.l_suppkey, 
> tpch_parquet.lineitem.l_linenumber, tpch_parquet.lineitem.l_quantity, 
> tpch_parquet.lineitem.l_extendedprice, tpch_parquet.lineitem.l_discount, 
> tpch_parquet.lineitem.l_tax, tpch_parquet.lineitem.l_returnflag, 
> tpch_parquet.lineitem.l_linestatus, tpch_parquet.lineitem.l_shipdate, 
> tpch_parquet.lineitem.l_commitdate, tpch_parquet.lineitem.l_receiptdate, 
> tpch_parquet.lineitem.l_shipinstruct, tpch_parquet.lineitem.l_shipmode, 
> tpch_parquet.lineitem.l_comment
> E   |  mem-estimate=0B mem-reservation=0B thread-reservation=0
> E   |
> E   01:EXCHANGE [UNPARTITIONED]
> E   |  mem-estimate=10.69MB mem-reservation=0B thread-reservation=0
> E   |  tuple-ids=0 row-size=231B cardinality=600.12K
> E   |  in pipelines: 00(GETNEXT)
> E   |
> E   F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
> E   Per-Host Resources: mem-estimate=1.25GB mem-reservation=40.00MB 
> thread-reservation=2
> E   00:SCAN HDFS [tpch_parquet.lineitem, RANDOM]
> E  HDFS partitions=1/1 files=3 size=193.97MB
> E  predicates: l_orderkey < CAST(10 AS BIGINT)
> E  stored statistics:
> Etable: rows=6.00M size=193.97MB
> Ecolumns: all
> E  extrapolated-rows=disabled max-scan-range-rows=2.14M
> E  parquet statistics predicates: l_orderkey < CAST(10 AS 

[jira] [Commented] (IMPALA-9148) AuthorizationStmtTest.testColumnMaskEnabled seems flaky

2019-11-19 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977794#comment-16977794
 ] 

Lars Volker commented on IMPALA-9148:
-

Hit this again here: 
https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/8877/

> AuthorizationStmtTest.testColumnMaskEnabled seems flaky
> ---
>
> Key: IMPALA-9148
> URL: https://issues.apache.org/jira/browse/IMPALA-9148
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Fang-Yu Rao
>Assignee: Fang-Yu Rao
>Priority: Major
>
> Recently we have occasionally seen {{AuthorizationStmtTest.testColumnMaskEnabled}} 
> fail with the following error messages (e.g., 
> [https://jenkins.impala.io/job/gerrit-verify-dryrun/5194/consoleFull]).
> {code:java}
> Impala does not support row filtering yet. Row filtering is enabled on table: 
> functional.alltypes_view
> expected:
> Impala does not support column masking yet. Column masking is enabled on 
> column: functional.alltypes_view.string_col
> {code}
> Taking a look at {{testColumnMaskEnabled()}}, we can see that the related SQL 
> statement is
> {code:java}
> select string_col from functional.alltypes_view;
> {code}
> I found that for this SQL statement, {{authorizeRowFilterAndColumnMask()}} in 
> {{RangerAuthorizationChecker.java}} will be called first 
> ([https://github.com/apache/impala/blob/master/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java#L183-L200]).
>  There will be two privilege requests: one for the column, and the other 
> for the table. The function {{authorizeRowFilter()}} is the only function that 
> could produce the error message above 
> ([https://github.com/apache/impala/blob/master/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java#L295-L308]).
>  Specifically, this error would be generated if 
> {{plugin_.evalRowFilterPolicies(req, null).isRowFilterEnabled()}} returns 
> true 
> ([https://github.com/apache/impala/blob/master/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java#L303]).
> I have taken a brief look at {{isRowFilterEnabled()}} and found that it returns 
> true only if some policy on the Ranger server specifies row filtering (according 
> to my current understanding). However, in {{testColumnMaskEnabled()}} 
> ([https://github.com/apache/impala/blob/master/fe/src/test/java/org/apache/impala/authorization/AuthorizationStmtTest.java#L2836]),
> we only add a policy for column masking. Therefore, I suspect that some other 
> test added a row filtering policy to the Ranger server but did not properly 
> clean it up afterwards.
> To address this issue, we should add logic to clean up the policies stored on 
> the Ranger server before running this JUnit test. The test assumes that the 
> Ranger server does not store any column masking or row filtering policies 
> before it runs.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-7550) Create documentation for all profile fields

2019-11-18 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker reassigned IMPALA-7550:
---

Assignee: Jiawei Wang  (was: Lars Volker)

> Create documentation for all profile fields
> ---
>
> Key: IMPALA-7550
> URL: https://issues.apache.org/jira/browse/IMPALA-7550
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Docs
>Affects Versions: Impala 3.1.0
>Reporter: Lars Volker
>Assignee: Jiawei Wang
>Priority: Major
>  Labels: supportability
>
> It would be great to have some documentation for all profile fields. 
> Currently users often have to look at the source code to figure out the exact 
> meaning of a field. Instead we should investigate ways to generate 
> documentation for the fields, e.g. by cleaning and automatically extracting 
> relevant comments from the source code.
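
As a rough illustration of the extraction idea (a sketch only; it assumes 
counters are registered via the {{ADD_COUNTER}}/{{ADD_TIMER}} macros and that a 
preceding {{//}} comment documents them; the paths and comment-pairing heuristic 
are assumptions):

{code:python}
import glob
import re

def extract_counter_docs(src_glob='be/src/**/*.cc'):
  """Collects '//' comments directly above ADD_COUNTER/ADD_TIMER registrations."""
  docs = {}
  for path in glob.glob(src_glob, recursive=True):
    with open(path) as f:
      lines = f.readlines()
    for i, line in enumerate(lines):
      m = re.search(r'ADD_(?:COUNTER|TIMER)\([^,]+,\s*"([^"]+)"', line)
      if not m:
        continue
      # Walk upwards and gather the contiguous comment block above the macro.
      j, comment = i - 1, []
      while j >= 0 and lines[j].strip().startswith('//'):
        comment.insert(0, lines[j].strip().lstrip('/ '))
        j -= 1
      docs[m.group(1)] = ' '.join(comment)
  return docs
{code}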



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-9155) Single Admission Controller per Cluster

2019-11-13 Thread Lars Volker (Jira)
Lars Volker created IMPALA-9155:
---

 Summary: Single Admission Controller per Cluster
 Key: IMPALA-9155
 URL: https://issues.apache.org/jira/browse/IMPALA-9155
 Project: IMPALA
  Issue Type: New Feature
  Components: Backend
Affects Versions: Impala 3.4.0
Reporter: Lars Volker
Assignee: Lars Volker


We should consider building a single admission control service that can admit 
queries for multiple coordinators. This would simplify the current approach 
where coordinators make distributed decisions, remove the risk of overadmission 
caused by concurrent decisions, and allow us to implement more sophisticated admission 
control strategies.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)



[jira] [Created] (IMPALA-9151) Number of executors during planning needs to account for suspended executor groups

2019-11-12 Thread Lars Volker (Jira)
Lars Volker created IMPALA-9151:
---

 Summary: Number of executors during planning needs to account for 
suspended executor groups
 Key: IMPALA-9151
 URL: https://issues.apache.org/jira/browse/IMPALA-9151
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Reporter: Lars Volker
Assignee: Lars Volker


When configuring Impala with executor groups, the planner might see an 
{{ExecutorMembershipSnapshot}} that has no executors in it. This can happen if 
the first executor group has not started up yet, or if all executor groups have 
been shut down. If this happens, the planner will make sub-optimal decisions, 
e.g. when deciding between a broadcast join and a partitioned hash join (PHJ). 
In the former case, we should have a configurable fallback cluster size to use 
during planning. In the latter case, we should hang on to the last executor 
group size that we had observed.
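
A minimal sketch of the intended selection logic (illustrative Python pseudocode 
rather than the frontend's actual Java; the fallback flag is hypothetical):

{code:python}
def effective_cluster_size(snapshot_num_executors, last_observed_group_size,
                           fallback_num_executors_flag):
  """Picks the executor count the planner should assume."""
  if snapshot_num_executors > 0:
    return snapshot_num_executors
  if last_observed_group_size > 0:
    # Executor groups existed before but are currently shut down/suspended.
    return last_observed_group_size
  # No executor group has started yet; fall back to a configured default.
  return fallback_num_executors_flag
{code}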



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-9032) Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms timeout

2019-10-09 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16948030#comment-16948030
 ] 

Lars Volker commented on IMPALA-9032:
-

Thanks, I'll take a look at whether it still happens with that change. Can you 
elaborate on what the problem with the conversion was? I had taken a look at the 
original code and it seemed ok to me.

> Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms 
> timeout
> -
>
> Key: IMPALA-9032
> URL: https://issues.apache.org/jira/browse/IMPALA-9032
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Lars Volker
>Priority: Major
>
> This looks like a bug to me but I'm not entirely sure. I'm trying to run our 
> tests over hs2-http (IMPALA-8863) and after the change for IMPALA-7312 to 
> introduce a non-blocking mode for FetchResults() it looks like we sometimes 
> return an empty result way before {{fetch_rows_timeout_ms}} has elapsed. This 
> triggers a bug in Impyla 
> ([#369|https://github.com/cloudera/impyla/issues/369]), but it also seems 
> like something we should investigate and fix in Impala.
> {noformat}
> I1007 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697988 56527 scheduler.cc:468] 6d4cba4d2e8ccc42:66ce26a8] 
> Exec at coord is falseI1007 22:10:10.698014 54090 impala-hs2-server.cc:663] 
> GetOperationStatus(): query_id=0d43fd73ce4403fd:da25dde9I1007 
> 22:10:10.698173   127 control-service.cc:142] 
> 0646e91fd6a0a953:02949ff3] ExecQueryFInstances(): 
> query_id=0646e91fd6a0a953:02949ff3 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.698356 56527 admission-controller.cc:1270] 
> 6d4cba4d2e8ccc42:66ce26a8] Trying to admit 
> id=6d4cba4d2e8ccc42:66ce26a8 in pool_name=root.default 
> executor_group_name=default per_host_mem_estimate=52.02 MB 
> dedicated_coord_mem_estimate=110.02 MB max_requests=-1 (configured 
> statically) max_queued=200 (configured statically) max_mem=29.30 GB 
> (configured statically)I1007 22:10:10.698386 56527 
> admission-controller.cc:1282] 6d4cba4d2e8ccc42:66ce26a8] Stats: 
> agg_num_running=9, agg_num_queued=0, agg_mem_reserved=8.34 GB,  
> local_host(local_mem_admitted=9.09 GB, num_admitted_running=9, num_queued=0, 
> backend_mem_reserved=6.70 GB)I1007 22:10:10.698415 56527 
> admission-controller.cc:871] 6d4cba4d2e8ccc42:66ce26a8] Admitting 
> query id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.698479 56527 
> impala-server.cc:1713] 6d4cba4d2e8ccc42:66ce26a8] Registering query 
> locationsI1007 22:10:10.698529 56527 coordinator.cc:97] 
> 6d4cba4d2e8ccc42:66ce26a8] Exec() 
> query_id=6d4cba4d2e8ccc42:66ce26a8 stmt=select count(*) from alltypes 
> where month=1I1007 22:10:10.698992 56527 coordinator.cc:361] 
> 6d4cba4d2e8ccc42:66ce26a8] starting execution on 3 backends for 
> query_id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.699383 56523 
> coordinator.cc:375] 0646e91fd6a0a953:02949ff3] started execution on 1 
> backends for query_id=0646e91fd6a0a953:02949ff3I1007 22:10:10.699409 
> 56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec at coord is 
> falseI1007 22:10:10.700017   127 control-service.cc:142] 
> 6d4cba4d2e8ccc42:66ce26a8] ExecQueryFInstances(): 
> query_id=6d4cba4d2e8ccc42:66ce26a8 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.700147 56534 scheduler.cc:468] 
> e1495f928c2cd4f6:eeda82aa] Exec at coord is falseI1007 
> 22:10:10.700234   325 TAcceptQueueServer.cpp:340] New connection to server 
> hiveserver2-http-frontend from client I1007 
> 22:10:10.700286   329 TAcceptQueueServer.cpp:227] TAcceptQueueServer: 
> hiveserver2-http-frontend started connection setup for client  172.18.0.1 Port: 51580>I1007 22:10:10.700314   329 
> TAcceptQueueServer.cpp:245] TAcceptQueueServer: hiveserver2-http-frontend 
> finished connection setup for client I1007 
> 22:10:10.700371 56550 impala-hs2-server.cc:844] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c #results=1 has_more=trueI1007 
> 22:10:10.700508 56551 impala-server.cc:1969] Connection 
> 8249c7defcb10124:1bc65ed9ea562aab from client 172.18.0.1:51576 to server 
> hiveserver2-http-frontend closed. The connection had 1 associated 
> session(s).I1007 22:10:10.700688 53748 impala-beeswax-server.cc:260] close(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700711 53748 
> impala-server.cc:1129] UnregisterQuery(): 
> 

[jira] [Commented] (IMPALA-9032) Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms timeout

2019-10-09 Thread Lars Volker (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947845#comment-16947845
 ] 

Lars Volker commented on IMPALA-9032:
-

CC [~stakiar], [~tarmstrong]

> Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms 
> timeout
> -
>
> Key: IMPALA-9032
> URL: https://issues.apache.org/jira/browse/IMPALA-9032
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Lars Volker
>Priority: Major
>
> This looks like a bug to me but I'm not entirely sure. I'm trying to run our 
> tests over hs2-http (IMPALA-8863) and after the change for IMPALA-7312 to 
> introduce a non-blocking mode for FetchResults() it looks like we sometimes 
> return an empty result way before {{fetch_rows_timeout_ms}} has elapsed. This 
> triggers a bug in Impyla 
> ([#369|https://github.com/cloudera/impyla/issues/369]), but it also seems 
> like something we should investigate and fix in Impala.
> {noformat}
> I1007 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697988 56527 scheduler.cc:468] 6d4cba4d2e8ccc42:66ce26a8] 
> Exec at coord is falseI1007 22:10:10.698014 54090 impala-hs2-server.cc:663] 
> GetOperationStatus(): query_id=0d43fd73ce4403fd:da25dde9I1007 
> 22:10:10.698173   127 control-service.cc:142] 
> 0646e91fd6a0a953:02949ff3] ExecQueryFInstances(): 
> query_id=0646e91fd6a0a953:02949ff3 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.698356 56527 admission-controller.cc:1270] 
> 6d4cba4d2e8ccc42:66ce26a8] Trying to admit 
> id=6d4cba4d2e8ccc42:66ce26a8 in pool_name=root.default 
> executor_group_name=default per_host_mem_estimate=52.02 MB 
> dedicated_coord_mem_estimate=110.02 MB max_requests=-1 (configured 
> statically) max_queued=200 (configured statically) max_mem=29.30 GB 
> (configured statically)I1007 22:10:10.698386 56527 
> admission-controller.cc:1282] 6d4cba4d2e8ccc42:66ce26a8] Stats: 
> agg_num_running=9, agg_num_queued=0, agg_mem_reserved=8.34 GB,  
> local_host(local_mem_admitted=9.09 GB, num_admitted_running=9, num_queued=0, 
> backend_mem_reserved=6.70 GB)I1007 22:10:10.698415 56527 
> admission-controller.cc:871] 6d4cba4d2e8ccc42:66ce26a8] Admitting 
> query id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.698479 56527 
> impala-server.cc:1713] 6d4cba4d2e8ccc42:66ce26a8] Registering query 
> locationsI1007 22:10:10.698529 56527 coordinator.cc:97] 
> 6d4cba4d2e8ccc42:66ce26a8] Exec() 
> query_id=6d4cba4d2e8ccc42:66ce26a8 stmt=select count(*) from alltypes 
> where month=1I1007 22:10:10.698992 56527 coordinator.cc:361] 
> 6d4cba4d2e8ccc42:66ce26a8] starting execution on 3 backends for 
> query_id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.699383 56523 
> coordinator.cc:375] 0646e91fd6a0a953:02949ff3] started execution on 1 
> backends for query_id=0646e91fd6a0a953:02949ff3I1007 22:10:10.699409 
> 56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec at coord is 
> falseI1007 22:10:10.700017   127 control-service.cc:142] 
> 6d4cba4d2e8ccc42:66ce26a8] ExecQueryFInstances(): 
> query_id=6d4cba4d2e8ccc42:66ce26a8 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.700147 56534 scheduler.cc:468] 
> e1495f928c2cd4f6:eeda82aa] Exec at coord is falseI1007 
> 22:10:10.700234   325 TAcceptQueueServer.cpp:340] New connection to server 
> hiveserver2-http-frontend from client I1007 
> 22:10:10.700286   329 TAcceptQueueServer.cpp:227] TAcceptQueueServer: 
> hiveserver2-http-frontend started connection setup for client  172.18.0.1 Port: 51580>I1007 22:10:10.700314   329 
> TAcceptQueueServer.cpp:245] TAcceptQueueServer: hiveserver2-http-frontend 
> finished connection setup for client I1007 
> 22:10:10.700371 56550 impala-hs2-server.cc:844] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c #results=1 has_more=trueI1007 
> 22:10:10.700508 56551 impala-server.cc:1969] Connection 
> 8249c7defcb10124:1bc65ed9ea562aab from client 172.18.0.1:51576 to server 
> hiveserver2-http-frontend closed. The connection had 1 associated 
> session(s).I1007 22:10:10.700688 53748 impala-beeswax-server.cc:260] close(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700711 53748 
> impala-server.cc:1129] UnregisterQuery(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700721 53748 
> impala-server.cc:1234] Cancel(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 

[jira] [Updated] (IMPALA-9032) Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms timeout

2019-10-09 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-9032:

Summary: Impala returns 0 rows over hs2-http without waiting for 
fetch_rows_timeout_ms timeout  (was: Impala returns 0 rows over hs2-http 
without waiting for TODO timeout)

> Impala returns 0 rows over hs2-http without waiting for fetch_rows_timeout_ms 
> timeout
> -
>
> Key: IMPALA-9032
> URL: https://issues.apache.org/jira/browse/IMPALA-9032
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Lars Volker
>Priority: Major
>
> This looks like a bug to me but I'm not entirely sure. I'm trying to run our 
> tests over hs2-http (IMPALA-8863) and after the change for IMPALA-7312 to 
> introduce a non-blocking mode for FetchResults() it looks like we sometimes 
> return an empty result way before {{fetch_rows_timeout_ms}} has elapsed. This 
> triggers a bug in Impyla 
> ([#369|https://github.com/cloudera/impyla/issues/369]), but it also seems 
> like something we should investigate and fix in Impala.
> {noformat}
> I1007 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 
> 22:10:10.697988 56527 scheduler.cc:468] 6d4cba4d2e8ccc42:66ce26a8] 
> Exec at coord is falseI1007 22:10:10.698014 54090 impala-hs2-server.cc:663] 
> GetOperationStatus(): query_id=0d43fd73ce4403fd:da25dde9I1007 
> 22:10:10.698173   127 control-service.cc:142] 
> 0646e91fd6a0a953:02949ff3] ExecQueryFInstances(): 
> query_id=0646e91fd6a0a953:02949ff3 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.698356 56527 admission-controller.cc:1270] 
> 6d4cba4d2e8ccc42:66ce26a8] Trying to admit 
> id=6d4cba4d2e8ccc42:66ce26a8 in pool_name=root.default 
> executor_group_name=default per_host_mem_estimate=52.02 MB 
> dedicated_coord_mem_estimate=110.02 MB max_requests=-1 (configured 
> statically) max_queued=200 (configured statically) max_mem=29.30 GB 
> (configured statically)I1007 22:10:10.698386 56527 
> admission-controller.cc:1282] 6d4cba4d2e8ccc42:66ce26a8] Stats: 
> agg_num_running=9, agg_num_queued=0, agg_mem_reserved=8.34 GB,  
> local_host(local_mem_admitted=9.09 GB, num_admitted_running=9, num_queued=0, 
> backend_mem_reserved=6.70 GB)I1007 22:10:10.698415 56527 
> admission-controller.cc:871] 6d4cba4d2e8ccc42:66ce26a8] Admitting 
> query id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.698479 56527 
> impala-server.cc:1713] 6d4cba4d2e8ccc42:66ce26a8] Registering query 
> locationsI1007 22:10:10.698529 56527 coordinator.cc:97] 
> 6d4cba4d2e8ccc42:66ce26a8] Exec() 
> query_id=6d4cba4d2e8ccc42:66ce26a8 stmt=select count(*) from alltypes 
> where month=1I1007 22:10:10.698992 56527 coordinator.cc:361] 
> 6d4cba4d2e8ccc42:66ce26a8] starting execution on 3 backends for 
> query_id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.699383 56523 
> coordinator.cc:375] 0646e91fd6a0a953:02949ff3] started execution on 1 
> backends for query_id=0646e91fd6a0a953:02949ff3I1007 22:10:10.699409 
> 56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec at coord is 
> falseI1007 22:10:10.700017   127 control-service.cc:142] 
> 6d4cba4d2e8ccc42:66ce26a8] ExecQueryFInstances(): 
> query_id=6d4cba4d2e8ccc42:66ce26a8 coord=b04a12d76e27:22000 
> #instances=1I1007 22:10:10.700147 56534 scheduler.cc:468] 
> e1495f928c2cd4f6:eeda82aa] Exec at coord is falseI1007 
> 22:10:10.700234   325 TAcceptQueueServer.cpp:340] New connection to server 
> hiveserver2-http-frontend from client I1007 
> 22:10:10.700286   329 TAcceptQueueServer.cpp:227] TAcceptQueueServer: 
> hiveserver2-http-frontend started connection setup for client  172.18.0.1 Port: 51580>I1007 22:10:10.700314   329 
> TAcceptQueueServer.cpp:245] TAcceptQueueServer: hiveserver2-http-frontend 
> finished connection setup for client I1007 
> 22:10:10.700371 56550 impala-hs2-server.cc:844] FetchResults(): 
> query_id=764d4313dbc64e20:2831560c #results=1 has_more=trueI1007 
> 22:10:10.700508 56551 impala-server.cc:1969] Connection 
> 8249c7defcb10124:1bc65ed9ea562aab from client 172.18.0.1:51576 to server 
> hiveserver2-http-frontend closed. The connection had 1 associated 
> session(s).I1007 22:10:10.700688 53748 impala-beeswax-server.cc:260] close(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700711 53748 
> impala-server.cc:1129] UnregisterQuery(): 
> query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700721 

[jira] [Created] (IMPALA-9032) Impala returns 0 rows over hs2-http without waiting for TODO timeout

2019-10-09 Thread Lars Volker (Jira)
Lars Volker created IMPALA-9032:
---

 Summary: Impala returns 0 rows over hs2-http without waiting for 
TODO timeout
 Key: IMPALA-9032
 URL: https://issues.apache.org/jira/browse/IMPALA-9032
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.4.0
Reporter: Lars Volker


This looks like a bug to me but I'm not entirely sure. I'm trying to run our 
tests over hs2-http (IMPALA-8863) and after the change for IMPALA-7312 to 
introduce a non-blocking mode for FetchResults() it looks like we sometimes 
return an empty result way before {{fetch_rows_timeout_ms}} has elapsed. This 
triggers a bug in Impyla 
([#369|https://github.com/cloudera/impyla/issues/369]), but it also seems like 
something we should investigate and fix in Impala.
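
For context, a client that handles the non-blocking mode correctly needs to keep 
fetching when it receives an empty batch with has_more=true, roughly like this 
illustrative sketch (a pseudo-client, not Impyla's actual implementation):

{code:python}
def fetch_all(fetch_batch, fetch_size=1024):
  """Drains a result set where fetch_batch(n) returns (rows, has_more)."""
  rows = []
  while True:
    batch, has_more = fetch_batch(fetch_size)
    rows.extend(batch)
    if not has_more:
      return rows
    # An empty batch with has_more=True is legal with non-blocking
    # FetchResults (IMPALA-7312); treating it as end-of-data drops rows.
{code}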

{noformat}
I1007 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 22:10:10.697760 
56550 impala-hs2-server.cc:821] FetchResults(): 
query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 22:10:10.697988 
56527 scheduler.cc:468] 6d4cba4d2e8ccc42:66ce26a8] Exec at coord is 
falseI1007 22:10:10.698014 54090 impala-hs2-server.cc:663] 
GetOperationStatus(): query_id=0d43fd73ce4403fd:da25dde9I1007 
22:10:10.698173   127 control-service.cc:142] 
0646e91fd6a0a953:02949ff3] ExecQueryFInstances(): 
query_id=0646e91fd6a0a953:02949ff3 coord=b04a12d76e27:22000 
#instances=1I1007 22:10:10.698356 56527 admission-controller.cc:1270] 
6d4cba4d2e8ccc42:66ce26a8] Trying to admit 
id=6d4cba4d2e8ccc42:66ce26a8 in pool_name=root.default 
executor_group_name=default per_host_mem_estimate=52.02 MB 
dedicated_coord_mem_estimate=110.02 MB max_requests=-1 (configured statically) 
max_queued=200 (configured statically) max_mem=29.30 GB (configured 
statically)I1007 22:10:10.698386 56527 admission-controller.cc:1282] 
6d4cba4d2e8ccc42:66ce26a8] Stats: agg_num_running=9, agg_num_queued=0, 
agg_mem_reserved=8.34 GB,  local_host(local_mem_admitted=9.09 GB, 
num_admitted_running=9, num_queued=0, backend_mem_reserved=6.70 GB)I1007 
22:10:10.698415 56527 admission-controller.cc:871] 
6d4cba4d2e8ccc42:66ce26a8] Admitting query 
id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.698479 56527 
impala-server.cc:1713] 6d4cba4d2e8ccc42:66ce26a8] Registering query 
locationsI1007 22:10:10.698529 56527 coordinator.cc:97] 
6d4cba4d2e8ccc42:66ce26a8] Exec() 
query_id=6d4cba4d2e8ccc42:66ce26a8 stmt=select count(*) from alltypes 
where month=1I1007 22:10:10.698992 56527 coordinator.cc:361] 
6d4cba4d2e8ccc42:66ce26a8] starting execution on 3 backends for 
query_id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.699383 56523 
coordinator.cc:375] 0646e91fd6a0a953:02949ff3] started execution on 1 
backends for query_id=0646e91fd6a0a953:02949ff3I1007 22:10:10.699409 
56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec at coord is 
falseI1007 22:10:10.700017   127 control-service.cc:142] 
6d4cba4d2e8ccc42:66ce26a8] ExecQueryFInstances(): 
query_id=6d4cba4d2e8ccc42:66ce26a8 coord=b04a12d76e27:22000 
#instances=1I1007 22:10:10.700147 56534 scheduler.cc:468] 
e1495f928c2cd4f6:eeda82aa] Exec at coord is falseI1007 22:10:10.700234  
 325 TAcceptQueueServer.cpp:340] New connection to server 
hiveserver2-http-frontend from client I1007 
22:10:10.700286   329 TAcceptQueueServer.cpp:227] TAcceptQueueServer: 
hiveserver2-http-frontend started connection setup for client I1007 22:10:10.700314   329 TAcceptQueueServer.cpp:245] 
TAcceptQueueServer: hiveserver2-http-frontend finished connection setup for 
client I1007 22:10:10.700371 56550 
impala-hs2-server.cc:844] FetchResults(): 
query_id=764d4313dbc64e20:2831560c #results=1 has_more=trueI1007 
22:10:10.700508 56551 impala-server.cc:1969] Connection 
8249c7defcb10124:1bc65ed9ea562aab from client 172.18.0.1:51576 to server 
hiveserver2-http-frontend closed. The connection had 1 associated 
session(s).I1007 22:10:10.700688 53748 impala-beeswax-server.cc:260] close(): 
query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700711 53748 
impala-server.cc:1129] UnregisterQuery(): 
query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700721 53748 
impala-server.cc:1234] Cancel(): 
query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700742 53748 
coordinator.cc:716] CancelBackends() 
query_id=e9473ff80c5d4afe:733cefe0, tried to cancel 0 backendsI1007 
22:10:10.700690 56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec 
at coord is falseI1007 22:10:10.701387 56534 admission-controller.cc:1270] 
e1495f928c2cd4f6:eeda82aa] Trying to admit 
id=e1495f928c2cd4f6:eeda82aa in pool_name=root.default 
executor_group_name=default per_host_mem_estimate=231.95 MB 

[jira] [Created] (IMPALA-9032) Impala returns 0 rows over hs2-http without waiting for TODO timeout

2019-10-09 Thread Lars Volker (Jira)
Lars Volker created IMPALA-9032:
---

 Summary: Impala returns 0 rows over hs2-http without waiting for 
TODO timeout
 Key: IMPALA-9032
 URL: https://issues.apache.org/jira/browse/IMPALA-9032
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.4.0
Reporter: Lars Volker


This looks like a bug to me but I'm not entirely sure. I'm trying to run our 
tests over hs2-http (IMPALA-8863), and after the change for IMPALA-7312 that 
introduced a non-blocking mode for FetchResults(), it looks like we sometimes 
return an empty result well before {{fetch_rows_timeout_ms}} has elapsed. This 
triggers a bug in Impyla 
([#369|https://github.com/cloudera/impyla/issues/369]), but it also seems like 
something we should investigate and fix in Impala.
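
For illustration, a client-side fetch loop that tolerates such empty batches could look roughly like the sketch below ({{fetch_batch}} is a hypothetical stand-in for a FetchResults() call returning {{(rows, has_more)}}; this is illustrative only, not Impyla's actual API):

{code:python}
import time

def fetch_all(fetch_batch, sleep_s=0.1):
    # With a non-blocking server-side fetch, early calls may return ([], True),
    # so an empty batch must not be treated as end-of-stream.
    results = []
    while True:
        rows, has_more = fetch_batch()
        results.extend(rows)
        if not has_more:
            return results
        if not rows:
            time.sleep(sleep_s)  # back off instead of assuming the result set is empty
{code}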

{noformat}
I1007 22:10:10.697760 56550 impala-hs2-server.cc:821] FetchResults(): 
query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 22:10:10.697760 
56550 impala-hs2-server.cc:821] FetchResults(): 
query_id=764d4313dbc64e20:2831560c fetch_size=1024I1007 22:10:10.697988 
56527 scheduler.cc:468] 6d4cba4d2e8ccc42:66ce26a8] Exec at coord is 
falseI1007 22:10:10.698014 54090 impala-hs2-server.cc:663] 
GetOperationStatus(): query_id=0d43fd73ce4403fd:da25dde9I1007 
22:10:10.698173   127 control-service.cc:142] 
0646e91fd6a0a953:02949ff3] ExecQueryFInstances(): 
query_id=0646e91fd6a0a953:02949ff3 coord=b04a12d76e27:22000 
#instances=1I1007 22:10:10.698356 56527 admission-controller.cc:1270] 
6d4cba4d2e8ccc42:66ce26a8] Trying to admit 
id=6d4cba4d2e8ccc42:66ce26a8 in pool_name=root.default 
executor_group_name=default per_host_mem_estimate=52.02 MB 
dedicated_coord_mem_estimate=110.02 MB max_requests=-1 (configured statically) 
max_queued=200 (configured statically) max_mem=29.30 GB (configured 
statically)I1007 22:10:10.698386 56527 admission-controller.cc:1282] 
6d4cba4d2e8ccc42:66ce26a8] Stats: agg_num_running=9, agg_num_queued=0, 
agg_mem_reserved=8.34 GB,  local_host(local_mem_admitted=9.09 GB, 
num_admitted_running=9, num_queued=0, backend_mem_reserved=6.70 GB)I1007 
22:10:10.698415 56527 admission-controller.cc:871] 
6d4cba4d2e8ccc42:66ce26a8] Admitting query 
id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.698479 56527 
impala-server.cc:1713] 6d4cba4d2e8ccc42:66ce26a8] Registering query 
locationsI1007 22:10:10.698529 56527 coordinator.cc:97] 
6d4cba4d2e8ccc42:66ce26a8] Exec() 
query_id=6d4cba4d2e8ccc42:66ce26a8 stmt=select count(*) from alltypes 
where month=1I1007 22:10:10.698992 56527 coordinator.cc:361] 
6d4cba4d2e8ccc42:66ce26a8] starting execution on 3 backends for 
query_id=6d4cba4d2e8ccc42:66ce26a8I1007 22:10:10.699383 56523 
coordinator.cc:375] 0646e91fd6a0a953:02949ff3] started execution on 1 
backends for query_id=0646e91fd6a0a953:02949ff3I1007 22:10:10.699409 
56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec at coord is 
falseI1007 22:10:10.700017   127 control-service.cc:142] 
6d4cba4d2e8ccc42:66ce26a8] ExecQueryFInstances(): 
query_id=6d4cba4d2e8ccc42:66ce26a8 coord=b04a12d76e27:22000 
#instances=1I1007 22:10:10.700147 56534 scheduler.cc:468] 
e1495f928c2cd4f6:eeda82aa] Exec at coord is falseI1007 22:10:10.700234  
 325 TAcceptQueueServer.cpp:340] New connection to server 
hiveserver2-http-frontend from client I1007 
22:10:10.700286   329 TAcceptQueueServer.cpp:227] TAcceptQueueServer: 
hiveserver2-http-frontend started connection setup for client I1007 22:10:10.700314   329 TAcceptQueueServer.cpp:245] 
TAcceptQueueServer: hiveserver2-http-frontend finished connection setup for 
client I1007 22:10:10.700371 56550 
impala-hs2-server.cc:844] FetchResults(): 
query_id=764d4313dbc64e20:2831560c #results=1 has_more=trueI1007 
22:10:10.700508 56551 impala-server.cc:1969] Connection 
8249c7defcb10124:1bc65ed9ea562aab from client 172.18.0.1:51576 to server 
hiveserver2-http-frontend closed. The connection had 1 associated 
session(s).I1007 22:10:10.700688 53748 impala-beeswax-server.cc:260] close(): 
query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700711 53748 
impala-server.cc:1129] UnregisterQuery(): 
query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700721 53748 
impala-server.cc:1234] Cancel(): 
query_id=e9473ff80c5d4afe:733cefe0I1007 22:10:10.700742 53748 
coordinator.cc:716] CancelBackends() 
query_id=e9473ff80c5d4afe:733cefe0, tried to cancel 0 backendsI1007 
22:10:10.700690 56534 scheduler.cc:468] e1495f928c2cd4f6:eeda82aa] Exec 
at coord is falseI1007 22:10:10.701387 56534 admission-controller.cc:1270] 
e1495f928c2cd4f6:eeda82aa] Trying to admit 
id=e1495f928c2cd4f6:eeda82aa in pool_name=root.default 
executor_group_name=default per_host_mem_estimate=231.95 MB 

[jira] [Resolved] (IMPALA-8973) Update Kudu version to fix openssl1.1.1 compatibility issue

2019-10-02 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8973.
-
Resolution: Fixed

> Update Kudu version to fix openssl1.1.1 compatibility issue
> ---
>
> Key: IMPALA-8973
> URL: https://issues.apache.org/jira/browse/IMPALA-8973
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.4.0
>Reporter: Kurt Deschler
>Assignee: Kurt Deschler
>Priority: Major
>  Labels: kudu
> Fix For: Impala 3.4.0
>
>
> openssl1.1.1/TLS1.3 exposed an issue with Kudu connectivity that was resolved 
> in https://issues.apache.org/jira/browse/KUDU-2871.
> This issue was observed causing test failures in kudu tests:
> MESSAGE: ImpalaRuntimeException: Error creating Kudu table 
> 'impala::tpch_kudu.lineitem'
> CAUSED BY: NonRecoverableException: not enough live tablet servers to create 
> a table with the requested replication factor 3; 0 tablet servers are alive
> 16:37:21.564919  7150 heartbeater.cc:566] 
> Failed to heartbeat to 127.0.0.1:7051 (0 consecutive failures): Network 
> error: Failed to ping master at 127.0.0.1:7051: Client connection negotiation 
> failed: client connection to 127.0.0.1:7051: connect: Connection refused 
> (error 111)
> logs/data_loading/impalad.kurt-cldr.kdeschle.log.INFO.XXX
> "not enough live tablet servers"
>  
> impala-config.sh needs to be updated to pull in a newer version of Kudu that 
> has this fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8967) impala-shell does not trim ldap password whitespace

2019-09-23 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8967:

Labels: newbie ramp-up usability  (was: usability)

> impala-shell does not trim ldap password whitespace
> ---
>
> Key: IMPALA-8967
> URL: https://issues.apache.org/jira/browse/IMPALA-8967
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Priority: Major
>  Labels: newbie, ramp-up, usability
>
> We should trim whitespace off the LDAP password instead of issuing a warning 
> (_Warning: LDAP password contains a trailing newline. Did you use ‘echo’ 
> instead of ‘echo -n’?_).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8967) impala-shell does not trim ldap password whitespace

2019-09-23 Thread Lars Volker (Jira)
Lars Volker created IMPALA-8967:
---

 Summary: impala-shell does not trim ldap password whitespace
 Key: IMPALA-8967
 URL: https://issues.apache.org/jira/browse/IMPALA-8967
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.3.0
Reporter: Lars Volker


We should trim whitespace off the LDAP password instead of issuing a warning 
(_Warning: LDAP password contains a trailing newline. Did you use ‘echo’ 
instead of ‘echo -n’?_).
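
The intended behavior amounts to something like this minimal sketch (assuming the password arrives as a string read from a file or command output; the helper name is made up):

{code:python}
def normalize_ldap_password(raw):
    # 'echo secret > pw' leaves a trailing '\n'; 'echo -n secret > pw' does not.
    # Trim trailing whitespace instead of warning about it.
    return raw.rstrip()

assert normalize_ldap_password('secret\n') == 'secret'
assert normalize_ldap_password('secret') == 'secret'
{code}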



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8936) Make queuing reason for unhealthy executor groups more generic

2019-09-15 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8936.
-
Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Make queuing reason for unhealthy executor groups more generic
> --
>
> Key: IMPALA-8936
> URL: https://issues.apache.org/jira/browse/IMPALA-8936
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: user-experience
> Fix For: Impala 3.4.0
>
>
> In some situations, users might actually expect not to have a healthy executor 
> group around, e.g. when they're starting one and it takes a while to come 
> online. We should make the queuing reason more generic and drop the 
> "unhealthy" concept from it to reduce confusion.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8936) Make queuing reason for unhealthy executor groups more generic

2019-09-10 Thread Lars Volker (Jira)
Lars Volker created IMPALA-8936:
---

 Summary: Make queuing reason for unhealthy executor groups more 
generic
 Key: IMPALA-8936
 URL: https://issues.apache.org/jira/browse/IMPALA-8936
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Reporter: Lars Volker
Assignee: Lars Volker


In some situations, users might actually expect not to have a healthy executor 
group around, e.g. when they're starting one and it takes a while to come 
online. We should make the queuing reason more generic and drop the "unhealthy" 
concept from it to reduce confusion.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8802) Switch to pgrep for graceful_shutdown_backends.sh

2019-08-29 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8802.
-
Fix Version/s: Impala 3.3.0
   Resolution: Fixed

> Switch to pgrep for graceful_shutdown_backends.sh
> -
>
> Key: IMPALA-8802
> URL: https://issues.apache.org/jira/browse/IMPALA-8802
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> IMPALA-8798 added a script with a call to {{pidof}}. However, {{pgrep}} seems 
> generally preferred (https://mywiki.wooledge.org/BadUtils) and we should 
> switch to it.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8892) Add tools to Docker images

2019-08-29 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8892.
-
Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Add tools to Docker images
> --
>
> Key: IMPALA-8892
> URL: https://issues.apache.org/jira/browse/IMPALA-8892
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: supportability
> Fix For: Impala 3.4.0
>
>
> Our docker images lack a bunch of tools that help with implementing health 
> checks and debugging issues.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8895) Expose daemon health on /healthz endpoint

2019-08-27 Thread Lars Volker (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8895.
-
Fix Version/s: Impala 3.4.0
   Resolution: Fixed

> Expose daemon health on /healthz endpoint
> -
>
> Key: IMPALA-8895
> URL: https://issues.apache.org/jira/browse/IMPALA-8895
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.4.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: observability
> Fix For: Impala 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8900) Allow /healthz access without authentication

2019-08-27 Thread Lars Volker (Jira)
Lars Volker created IMPALA-8900:
---

 Summary: Allow /healthz access without authentication
 Key: IMPALA-8900
 URL: https://issues.apache.org/jira/browse/IMPALA-8900
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.4.0
Reporter: Lars Volker


When enabling SPNEGO authentication for the debug webpages, /healthz becomes 
unavailable. Some tooling might rely on the endpoint being accessible without 
authentication and it does not pose a security risk to make it available.
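
For instance, a liveness probe only needs an unauthenticated HTTP GET (a sketch; the host and the default impalad debug webserver port 25000 are assumptions):

{code:python}
import urllib.request

def impalad_is_healthy(host="localhost", port=25000, timeout_s=2):
    # /healthz should answer 200 when the daemon considers itself healthy.
    try:
        with urllib.request.urlopen("http://%s:%d/healthz" % (host, port), timeout=timeout_s) as resp:
            return resp.getcode() == 200
    except OSError:
        return False
{code}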



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8895) Expose daemon health on /healthz endpoint

2019-08-26 Thread Lars Volker (Jira)
Lars Volker created IMPALA-8895:
---

 Summary: Expose daemon health on /healthz endpoint
 Key: IMPALA-8895
 URL: https://issues.apache.org/jira/browse/IMPALA-8895
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.4.0
Reporter: Lars Volker
Assignee: Lars Volker






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8892) Add tools to Docker images

2019-08-26 Thread Lars Volker (Jira)
Lars Volker created IMPALA-8892:
---

 Summary: Add tools to Docker images
 Key: IMPALA-8892
 URL: https://issues.apache.org/jira/browse/IMPALA-8892
 Project: IMPALA
  Issue Type: Improvement
Reporter: Lars Volker
Assignee: Lars Volker


Our docker images lack a bunch of tools that help with implementing health 
checks and debugging issues.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-7070) Failed test: query_test.test_nested_types.TestParquetArrayEncodings.test_thrift_array_of_arrays on S3

2019-08-18 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-7070.
-
   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Failed test: 
> query_test.test_nested_types.TestParquetArrayEncodings.test_thrift_array_of_arrays
>  on S3
> -
>
> Key: IMPALA-7070
> URL: https://issues.apache.org/jira/browse/IMPALA-7070
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.0
>Reporter: Dimitris Tsirogiannis
>Priority: Critical
>  Labels: broken-build, flaky, s3, test-failure
> Fix For: Impala 3.3.0
>
>
>  
> {code:java}
> Error Message
> query_test/test_nested_types.py:406: in test_thrift_array_of_arrays "col1 
> array>") query_test/test_nested_types.py:579: in 
> _create_test_table check_call(["hadoop", "fs", "-put", local_path, 
> location], shell=False) /usr/lib64/python2.6/subprocess.py:505: in check_call 
> raise CalledProcessError(retcode, cmd) E   CalledProcessError: Command 
> '['hadoop', 'fs', '-put', 
> '/data/jenkins/workspace/impala-asf-2.x-core-s3/repos/Impala/testdata/parquet_nested_types_encodings/bad-thrift.parquet',
>  
> 's3a://impala-cdh5-s3-test/test-warehouse/test_thrift_array_of_arrays_11da5fde.db/ThriftArrayOfArrays']'
>  returned non-zero exit status 1
> Stacktrace
> query_test/test_nested_types.py:406: in test_thrift_array_of_arrays
> "col1 array>")
> query_test/test_nested_types.py:579: in _create_test_table
> check_call(["hadoop", "fs", "-put", local_path, location], shell=False)
> /usr/lib64/python2.6/subprocess.py:505: in check_call
> raise CalledProcessError(retcode, cmd)
> E   CalledProcessError: Command '['hadoop', 'fs', '-put', 
> '/data/jenkins/workspace/impala-asf-2.x-core-s3/repos/Impala/testdata/parquet_nested_types_encodings/bad-thrift.parquet',
>  
> 's3a://impala-cdh5-s3-test/test-warehouse/test_thrift_array_of_arrays_11da5fde.db/ThriftArrayOfArrays']'
>  returned non-zero exit status 1
> Standard Error
> SET sync_ddl=False;
> -- executing against localhost:21000
> DROP DATABASE IF EXISTS `test_thrift_array_of_arrays_11da5fde` CASCADE;
> SET sync_ddl=False;
> -- executing against localhost:21000
> CREATE DATABASE `test_thrift_array_of_arrays_11da5fde`;
> MainThread: Created database "test_thrift_array_of_arrays_11da5fde" for test 
> ID 
> "query_test/test_nested_types.py::TestParquetArrayEncodings::()::test_thrift_array_of_arrays[exec_option:
>  {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0, 
> 'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None, 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]"
> -- executing against localhost:21000
> create table test_thrift_array_of_arrays_11da5fde.ThriftArrayOfArrays (col1 
> array>) stored as parquet location 
> 's3a://impala-cdh5-s3-test/test-warehouse/test_thrift_array_of_arrays_11da5fde.db/ThriftArrayOfArrays';
> 18/05/20 18:31:03 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 18/05/20 18:31:03 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 
> 10 second(s).
> 18/05/20 18:31:03 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 18/05/20 18:31:06 INFO Configuration.deprecation: 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> put: rename 
> `s3a://impala-cdh5-s3-test/test-warehouse/test_thrift_array_of_arrays_11da5fde.db/ThriftArrayOfArrays/bad-thrift.parquet._COPYING_'
>  to 
> `s3a://impala-cdh5-s3-test/test-warehouse/test_thrift_array_of_arrays_11da5fde.db/ThriftArrayOfArrays/bad-thrift.parquet':
>  Input/output error
> 18/05/20 18:31:08 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 18/05/20 18:31:08 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 18/05/20 18:31:08 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete.{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7117) Lower debug level for HDFS S3 connector back to INFO

2019-08-18 Thread Lars Volker (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910063#comment-16910063
 ] 

Lars Volker commented on IMPALA-7117:
-

I think we can just keep it at Debug since it only affects our own tests and 
no-one objected so far. Closing this one for now.

> Lower debug level for HDFS S3 connector back to INFO
> 
>
> Key: IMPALA-7117
> URL: https://issues.apache.org/jira/browse/IMPALA-7117
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.13.0, Impala 3.1.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: s3
>
> This change will increase the log level for the HDFS S3 connector to DEBUG to 
> help with IMPALA-6910 and IMPALA-7070. Before the next release we need to 
> lower it again.
> https://gerrit.cloudera.org/#/c/10596/
> I'm making this a P1 to remind us that we must do this before cutting a 
> release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-7117) Lower debug level for HDFS S3 connector back to INFO

2019-08-18 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker closed IMPALA-7117.
---
Resolution: Won't Fix

> Lower debug level for HDFS S3 connector back to INFO
> 
>
> Key: IMPALA-7117
> URL: https://issues.apache.org/jira/browse/IMPALA-7117
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 2.13.0, Impala 3.1.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: s3
>
> This change will increase the log level for the HDFS S3 connector to DEBUG to 
> help with IMPALA-6910 and IMPALA-7070. Before the next release we need to 
> lower it again.
> https://gerrit.cloudera.org/#/c/10596/
> I'm making this a P1 to remind us that we must do this before cutting a 
> release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8867) test_auto_scaling is flaky

2019-08-15 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8867:
---

 Summary: test_auto_scaling is flaky
 Key: IMPALA-8867
 URL: https://issues.apache.org/jira/browse/IMPALA-8867
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.3.0
Reporter: Lars Volker
Assignee: Lars Volker


It looks like test_auto_scaling can sometimes fail to reach the required query 
throughput within the configured timeout.

{noformat}
Error Message
AssertionError: Query rate did not surpass 5 within 45 s
assert any(<generator object <genexpr> at 0x7f7ad65856e0>)

Stacktrace
custom_cluster/test_auto_scaling.py:104: in test_single_workload
    assert any(workload.get_query_rate() > min_query_rate or sleep(1)
E   AssertionError: Query rate did not surpass 5 within 45 s
E   assert any(<generator object <genexpr> at 0x7f7ad65856e0>)
{noformat}
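
For reference, the check the test performs boils down to this polling loop (a sketch; {{workload.get_query_rate()}} and the 5 queries/s over 45 s threshold come from the test, the rest is illustrative):

{code:python}
import time

def query_rate_reached(workload, min_rate=5, timeout_s=45, poll_s=1):
    # Poll until the measured query rate exceeds min_rate or the timeout elapses.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if workload.get_query_rate() > min_rate:
            return True
        time.sleep(poll_s)
    return False
{code}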




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IMPALA-8863) Add support to run tests over HS2-HTTP

2019-08-14 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8863:
---

 Summary: Add support to run tests over HS2-HTTP
 Key: IMPALA-8863
 URL: https://issues.apache.org/jira/browse/IMPALA-8863
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Affects Versions: Impala 3.3.0
Reporter: Lars Volker
Assignee: Lars Volker


Once https://github.com/cloudera/impyla/issues/357 gets addressed, we should 
run at least some of our tests over hs2-http using Impyla to better test 
Impyla's HTTP endpoint support.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8852) ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads configuration"

2019-08-12 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8852:

Affects Version/s: Impala 3.3.0

> ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads 
> configuration"
> 
>
> Key: IMPALA-8852
> URL: https://issues.apache.org/jira/browse/IMPALA-8852
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0, Impala 3.3.0
>Reporter: Adriano
>Priority: Major
>  Labels: ramp-up
>
> On coordinator only nodes ([typically the edge 
> nodes|https://www.cloudera.com/documentation/enterprise/5-15-x/topics/impala_dedicated_coordinator.html#concept_omm_gf1_n2b]):
> {code:java}
> --is_coordinator=true
> --is_executor=false
> {code}
> the *dfs.domain.socket.path* can point to a nonexistent path on the local FS, 
> as the Datanode role may not be installed on the edge node.
> The nonexistent path prevents the ImpalaD from starting, with the message:
> {code:java}
> I0809 04:15:53.899714 25364 status.cc:124] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> @   0xb35f19
> @  0x100e2fe
> @  0x103f274
> @  0x102836f
> @   0xa9f573
> @ 0x7f97807e93d4
> @   0xafb3b8
> E0809 04:15:53.899749 25364 impala-server.cc:278] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> {code}
> even though a coordinator-only ImpalaD does not do short-circuit reads.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8852) ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads configuration"

2019-08-12 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8852:

Priority: Major  (was: Minor)

> ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads 
> configuration"
> 
>
> Key: IMPALA-8852
> URL: https://issues.apache.org/jira/browse/IMPALA-8852
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Adriano
>Priority: Major
>  Labels: ramp-up
>
> On coordinator only nodes ([typically the edge 
> nodes|https://www.cloudera.com/documentation/enterprise/5-15-x/topics/impala_dedicated_coordinator.html#concept_omm_gf1_n2b]):
> {code:java}
> --is_coordinator=true
> --is_executor=false
> {code}
> the *dfs.domain.socket.path* can point to a nonexistent path on the local FS, 
> as the Datanode role may not be installed on the edge node.
> The nonexistent path prevents the ImpalaD from starting, with the message:
> {code:java}
> I0809 04:15:53.899714 25364 status.cc:124] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> @   0xb35f19
> @  0x100e2fe
> @  0x103f274
> @  0x102836f
> @   0xa9f573
> @ 0x7f97807e93d4
> @   0xafb3b8
> E0809 04:15:53.899749 25364 impala-server.cc:278] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> {code}
> even though a coordinator-only ImpalaD does not do short-circuit reads.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8852) ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads configuration"

2019-08-12 Thread Lars Volker (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905325#comment-16905325
 ] 

Lars Volker commented on IMPALA-8852:
-

Thanks for filing this issue. As a permanent solution we should only emit a 
warning when the socket cannot be found and {{-is_executor=false}}.
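
In other words, the startup check would roughly follow this logic (a Python sketch of the suggestion only; the real validation lives in the C++ backend):

{code:python}
def check_short_circuit_config(socket_parent_dir_accessible, is_executor):
    # Coordinator-only daemons never do short-circuit reads, so a missing
    # dfs.domain.socket.path should not be fatal for them.
    if socket_parent_dir_accessible:
        return "ok"
    return "warning" if not is_executor else "error"
{code}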

> ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads 
> configuration"
> 
>
> Key: IMPALA-8852
> URL: https://issues.apache.org/jira/browse/IMPALA-8852
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Adriano
>Priority: Minor
>
> On coordinator only nodes ([typically the edge 
> nodes|https://www.cloudera.com/documentation/enterprise/5-15-x/topics/impala_dedicated_coordinator.html#concept_omm_gf1_n2b]):
> {code:java}
> --is_coordinator=true
> --is_executor=false
> {code}
> the *dfs.domain.socket.path* can point to a nonexistent path on the local FS, 
> as the Datanode role may not be installed on the edge node.
> The nonexistent path prevents the ImpalaD from starting, with the message:
> {code:java}
> I0809 04:15:53.899714 25364 status.cc:124] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> @   0xb35f19
> @  0x100e2fe
> @  0x103f274
> @  0x102836f
> @   0xa9f573
> @ 0x7f97807e93d4
> @   0xafb3b8
> E0809 04:15:53.899749 25364 impala-server.cc:278] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> {code}
> even though a coordinator-only ImpalaD does not do short-circuit reads.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8852) ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads configuration"

2019-08-12 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8852:

Labels: ramp-up  (was: )

> ImpalaD fail to start on a non-datanode with "Invalid short-circuit reads 
> configuration"
> 
>
> Key: IMPALA-8852
> URL: https://issues.apache.org/jira/browse/IMPALA-8852
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Adriano
>Priority: Minor
>  Labels: ramp-up
>
> On coordinator only nodes ([typically the edge 
> nodes|https://www.cloudera.com/documentation/enterprise/5-15-x/topics/impala_dedicated_coordinator.html#concept_omm_gf1_n2b]):
> {code:java}
> --is_coordinator=true
> --is_executor=false
> {code}
> the *dfs.domain.socket.path* can point to a nonexistent path on the local FS, 
> as the Datanode role may not be installed on the edge node.
> The nonexistent path prevents the ImpalaD from starting, with the message:
> {code:java}
> I0809 04:15:53.899714 25364 status.cc:124] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> @   0xb35f19
> @  0x100e2fe
> @  0x103f274
> @  0x102836f
> @   0xa9f573
> @ 0x7f97807e93d4
> @   0xafb3b8
> E0809 04:15:53.899749 25364 impala-server.cc:278] Invalid short-circuit reads 
> configuration:
>   - Impala cannot read or execute the parent directory of 
> dfs.domain.socket.path
> {code}
> even though a coordinator-only ImpalaD does not do short-circuit reads.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8798) TestAutoScaling does not work on erasure-coded files

2019-07-31 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8798.
-
   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> TestAutoScaling does not work on erasure-coded files
> 
>
> Key: IMPALA-8798
> URL: https://issues.apache.org/jira/browse/IMPALA-8798
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Critical
>  Labels: scalability
> Fix For: Impala 3.3.0
>
>
> TestAutoScaling uses the ConcurrentWorkload class, which does not set the 
> required query option to support scanning erasure-coded files. We should 
> disable the test for those cases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8802) Switch to pgrep for graceful_shutdown_backends.sh

2019-07-29 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8802:
---

 Summary: Switch to pgrep for graceful_shutdown_backends.sh
 Key: IMPALA-8802
 URL: https://issues.apache.org/jira/browse/IMPALA-8802
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.3.0
Reporter: Lars Volker
Assignee: Lars Volker


IMPALA-8798 added a script with a call to {{pidof}}. However, {{pgrep}} seems 
generally preferred (https://mywiki.wooledge.org/BadUtils) and we should switch 
to it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8798) TestAutoScaling does not work on erasure-coded files

2019-07-26 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8798:
---

 Summary: TestAutoScaling does not work on erasure-coded files
 Key: IMPALA-8798
 URL: https://issues.apache.org/jira/browse/IMPALA-8798
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Affects Versions: Impala 3.3.0
Reporter: Lars Volker
Assignee: Lars Volker


TestAutoScaling uses the ConcurrentWorkload class, which does not set the 
required query option to support scanning erasure-coded files. We should 
disable the test for those cases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8789) Include script to trigger graceful shutdown in docker containers

2019-07-24 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8789:
---

 Summary: Include script to trigger graceful shutdown in docker 
containers
 Key: IMPALA-8789
 URL: https://issues.apache.org/jira/browse/IMPALA-8789
 Project: IMPALA
  Issue Type: Improvement
Affects Versions: Impala 3.3.0
Reporter: Lars Volker
Assignee: Lars Volker


We should include a utility script in the docker containers to trigger a 
graceful shutdown by sending SIGRTMIN to all impalads.
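
In Python terms the script's logic amounts to the following sketch (the shipped script would likely be shell; this only illustrates the equivalent behavior):

{code:python}
import os
import signal
import subprocess

def trigger_graceful_shutdown():
    # Find all impalad processes and ask each one to shut down gracefully.
    pids = subprocess.run(["pgrep", "impalad"], capture_output=True, text=True).stdout.split()
    for pid in pids:
        os.kill(int(pid), signal.SIGRTMIN)  # SIGRTMIN starts impalad's graceful shutdown
{code}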



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8610) Log rotation can fail gerrit-verify-dryrun/ubuntu-16.04-from-scratch jobs

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8610.
-
   Resolution: Fixed
Fix Version/s: Not Applicable

It seems that my change to the configuration was sufficient; at least we 
haven't seen this again since then.

> Log rotation can fail gerrit-verify-dryrun/ubuntu-16.04-from-scratch jobs
> -
>
> Key: IMPALA-8610
> URL: https://issues.apache.org/jira/browse/IMPALA-8610
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure, Jenkins
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Critical
> Fix For: Not Applicable
>
>
> It seems that otherwise successful runs of {{ubuntu-16.04-from-scratch}} can 
> fail at the end when copying log files. If the impalads are still 
> running, they can rotate a logfile while {{cp}} tries to copy it, which 
> results in output like this ([affected 
> job|https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/6027/console]):
> {noformat}
> 00:24:28 + cp -r -L /home/ubuntu/Impala/logs /home/ubuntu/Impala/logs_static
> 00:24:37 cp: cannot stat 
> '/home/ubuntu/Impala/logs/cluster/impalad.ip-172-31-19-200.ubuntu.log.WARNING.20190601-072315.102026':
>  No such file or directory
> 00:24:37 cp: cannot stat 
> '/home/ubuntu/Impala/logs/cluster/impalad.ip-172-31-19-200.ubuntu.log.WARNING.20190601-072304.100810':
>  No such file or directory
> 00:24:37 cp: cannot stat 
> '/home/ubuntu/Impala/logs/cluster/impalad.ip-172-31-19-200.ubuntu.log.WARNING.20190601-072315.102023':
>  No such file or directory
> {noformat}
> Note that the script is currently configured in Jenkins to run in a shell 
> with {{-e}} and that a successful run contains additional lines after the 
> {{cp}} command:
> {noformat}
> 04:16:18 + cp -r -L /home/ubuntu/Impala/logs /home/ubuntu/Impala/logs_static
> 04:16:21 + rm -f /tmp/git-err-ksiZp.log
> {noformat}
> As a fix we should kill all daemons before copying the files.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-8610) Log rotation can fail gerrit-verify-dryrun/ubuntu-16.04-from-scratch jobs

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-8610 started by Lars Volker.
---
> Log rotation can fail gerrit-verify-dryrun/ubuntu-16.04-from-scratch jobs
> -
>
> Key: IMPALA-8610
> URL: https://issues.apache.org/jira/browse/IMPALA-8610
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure, Jenkins
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Critical
>
> It seems that otherwise successful runs of {{ubuntu-16.04-from-scratch}} can 
> fail at the end when copying log files. If the impalads are still 
> running, they can rotate a logfile while {{cp}} tries to copy it, which 
> results in output like this ([affected 
> job|https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/6027/console]):
> {noformat}
> 00:24:28 + cp -r -L /home/ubuntu/Impala/logs /home/ubuntu/Impala/logs_static
> 00:24:37 cp: cannot stat 
> '/home/ubuntu/Impala/logs/cluster/impalad.ip-172-31-19-200.ubuntu.log.WARNING.20190601-072315.102026':
>  No such file or directory
> 00:24:37 cp: cannot stat 
> '/home/ubuntu/Impala/logs/cluster/impalad.ip-172-31-19-200.ubuntu.log.WARNING.20190601-072304.100810':
>  No such file or directory
> 00:24:37 cp: cannot stat 
> '/home/ubuntu/Impala/logs/cluster/impalad.ip-172-31-19-200.ubuntu.log.WARNING.20190601-072315.102023':
>  No such file or directory
> {noformat}
> Note that the script is currently configured in Jenkins to run in a shell 
> with {{-e}} and that a successful run contains additional lines after the 
> {{cp}} command:
> {noformat}
> 04:16:18 + cp -r -L /home/ubuntu/Impala/logs /home/ubuntu/Impala/logs_static
> 04:16:21 + rm -f /tmp/git-err-ksiZp.log
> {noformat}
> As a fix we should kill all daemons before copying the files.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-8596) TestObservability.test_global_exchange_counters failed in ASAN

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker closed IMPALA-8596.
---
Resolution: Cannot Reproduce

This hasn't shown up again since May. Closing it for now; we can always re-open 
it if we see it again.

> TestObservability.test_global_exchange_counters failed in ASAN
> --
>
> Key: IMPALA-8596
> URL: https://issues.apache.org/jira/browse/IMPALA-8596
> Project: IMPALA
>  Issue Type: Bug
>  Components: Distributed Exec
>Affects Versions: Impala 3.3.0
>Reporter: Zoltán Borók-Nagy
>Assignee: Lars Volker
>Priority: Blocker
>  Labels: broken-build, flaky
>
> Seen in an ASAN build: 
> {noformat}
> Error Message
> query_test/test_observability.py:415: in test_global_exchange_counters assert 
> m E assert None
> Stacktrace
> query_test/test_observability.py:415: in test_global_exchange_counters assert 
> m E assert None
> Standard Error
> -- executing against localhost:21000 select count(*) from tpch_parquet.orders 
> o inner join tpch_parquet.lineitem l on o.o_orderkey = l.l_orderkey group by 
> o.o_clerk limit 10; -- 2019-05-28 05:24:17,072 INFO MainThread: Started query 
> 664116fde66bdd8c:4ca951da
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8158) Bump Impyla version, use HS2 service to retrieve thrift profiles

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8158.
-
   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Bump Impyla version, use HS2 service to retrieve thrift profiles
> 
>
> Key: IMPALA-8158
> URL: https://issues.apache.org/jira/browse/IMPALA-8158
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Infrastructure
>Affects Versions: Impala 3.2.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> Once Impyla has been updated, we should retrieve Thrift profiles through HS2 
> synchronously instead of scraping the debug web pages.
> https://github.com/cloudera/impyla/issues/332
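
A rough sketch of what retrieving a profile over HS2 with a newer Impyla could look
like. This assumes Impyla exposes a {{get_profile()}} cursor method and a coordinator at
localhost:21050; treat it as an illustration, not the exact test-framework code.

{noformat}
from impala.dbapi import connect

# Assumed coordinator HS2 endpoint; adjust host/port for your cluster.
conn = connect(host="localhost", port=21050)
cursor = conn.cursor()
cursor.execute("select count(*) from functional.alltypes")
rows = cursor.fetchall()

# With a recent Impyla the runtime profile can be fetched through HS2 directly,
# instead of scraping the coordinator's debug web pages.
profile_text = cursor.get_profile()
print(rows)
print(profile_text[:200])
{noformat}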



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8461) Re-schedule queries if the executor configuration has changed while queued in AC

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8461.
-
   Resolution: Duplicate
Fix Version/s: Impala 3.3.0

This has been folded into IMPALA-8484

> Re-schedule queries if the executor configuration has changed while queued in 
> AC
> 
>
> Key: IMPALA-8461
> URL: https://issues.apache.org/jira/browse/IMPALA-8461
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
> Fix For: Impala 3.3.0
>
>
> If the executor configuration changes while a query is waiting to be 
> admitted, we need to reschedule it. The current behavior tries to run it as 
> is which will then fail. To achieve this, we should call 
> Scheduler::Schedule() from the AdmissionController and then re-schedule if 
> necessary. We need to think about ways to detect changes to the executor 
> configuration, but a simple hash might be good enough.
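
To illustrate the "simple hash" idea, here is a hedged Python sketch that fingerprints
the executor configuration when a query is queued and re-runs scheduling at admission
time if the fingerprint has changed. The data shapes and names are invented for the
example, not the actual backend code.

{noformat}
import hashlib

def executor_config_hash(executors):
    """Hash the sorted executor addresses; any membership change alters the digest."""
    joined = ",".join(sorted(executors))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def admit(query, current_executors, schedule_fn):
    """Re-schedule the query if the executor set changed while it was queued."""
    current_hash = executor_config_hash(current_executors)
    if query["schedule_hash"] != current_hash:
        # Membership changed while queued: compute a fresh schedule before admitting.
        query["schedule"] = schedule_fn(current_executors)
        query["schedule_hash"] = current_hash
    return query["schedule"]

# Example usage with a trivial scheduler that just records the executor set.
query = {"schedule_hash": executor_config_hash(["e1:22000", "e2:22000"]),
         "schedule": ["e1:22000", "e2:22000"]}
print(admit(query, ["e1:22000", "e3:22000"], lambda ex: sorted(ex)))
{noformat}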



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8484) Add support to run queries on disjoint executor groups

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8484.
-
   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> Add support to run queries on disjoint executor groups
> --
>
> Key: IMPALA-8484
> URL: https://issues.apache.org/jira/browse/IMPALA-8484
> Project: IMPALA
>  Issue Type: New Feature
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: scalability
> Fix For: Impala 3.3.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8776) test_event_processing.TestEventProcessing.test_insert_events is flaky

2019-07-21 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8776.
-
   Resolution: Fixed
Fix Version/s: Impala 3.3.0

> test_event_processing.TestEventProcessing.test_insert_events is flaky
> -
>
> Key: IMPALA-8776
> URL: https://issues.apache.org/jira/browse/IMPALA-8776
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: flaky, flaky-test
> Fix For: Impala 3.3.0
>
>
> It looks like test_event_processing.TestEventProcessing.test_insert_events 
> can sporadically fail when waiting for insert events to be processed. To make 
> it more robust, we should wait for longer than 10 seconds. We should also add 
> a small delay when looping inside {{wait_for_insert_event_processing()}} to 
> keep system load low and reduce the risk of starving the other processes of 
> resources.
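
A minimal sketch of the suggested polling loop, with a longer overall timeout and a
short sleep between attempts. The predicate and the timings below are placeholders, not
the actual test code.

{noformat}
import time

def wait_for(predicate, timeout_s=60, poll_interval_s=0.5):
    """Poll predicate() until it returns True or timeout_s elapses.

    The sleep between attempts keeps the loop from busy-waiting and starving
    the processes we are actually waiting on.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval_s)
    return False

# Example: wait until a (hypothetical) insert-event counter reaches a target value.
events_processed = {"count": 0}
def insert_events_done():
    events_processed["count"] += 1  # stand-in for reading a metric
    return events_processed["count"] >= 3

assert wait_for(insert_events_done, timeout_s=5, poll_interval_s=0.1)
{noformat}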



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-8776) test_event_processing.TestEventProcessing.test_insert_events is flaky

2019-07-20 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker reassigned IMPALA-8776:
---

Assignee: Lars Volker

> test_event_processing.TestEventProcessing.test_insert_events is flaky
> -
>
> Key: IMPALA-8776
> URL: https://issues.apache.org/jira/browse/IMPALA-8776
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: flaky, flaky-test
>
> It looks like test_event_processing.TestEventProcessing.test_insert_events 
> can sporadically fail when waiting for insert events to be processed. To make 
> it more robust, we should wait for longer than 10 seconds. We should also add 
> a small delay when looping inside {{wait_for_insert_event_processing()}} to 
> keep system load low and reduce the risk of starving the other processes of 
> resources.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8776) test_event_processing.TestEventProcessing.test_insert_events is flaky

2019-07-20 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8776:

Labels: flaky flaky-test  (was: )

> test_event_processing.TestEventProcessing.test_insert_events is flaky
> -
>
> Key: IMPALA-8776
> URL: https://issues.apache.org/jira/browse/IMPALA-8776
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Infrastructure
>Reporter: Lars Volker
>Priority: Major
>  Labels: flaky, flaky-test
>
> It looks like test_event_processing.TestEventProcessing.test_insert_events 
> can sporadically fail when waiting for insert events to be processed. To make 
> it more robust, we should wait for longer than 10 seconds. We should also add 
> a small delay when looping inside {{wait_for_insert_event_processing()}} to 
> keep system load low and reduce the risk of starving the other processes of 
> resources.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8776) test_event_processing.TestEventProcessing.test_insert_events is flaky

2019-07-20 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8776:
---

 Summary: 
test_event_processing.TestEventProcessing.test_insert_events is flaky
 Key: IMPALA-8776
 URL: https://issues.apache.org/jira/browse/IMPALA-8776
 Project: IMPALA
  Issue Type: Improvement
  Components: Infrastructure
Reporter: Lars Volker


It looks like test_event_processing.TestEventProcessing.test_insert_events can 
sporadically fail when waiting for insert events to be processed. To make it 
more robust, we should wait for longer than 10 seconds. We should also add a 
small delay when looping inside {{wait_for_insert_event_processing()}} to keep 
system load low and reduce the risk of starving the other processes of 
resources.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-8758) Misleading error message "Unknown executor group" during cluster startup with dedicated coordinator

2019-07-16 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker resolved IMPALA-8758.
-
   Resolution: Fixed
 Assignee: Lars Volker
Fix Version/s: Impala 3.3.0

> Misleading error message "Unknown executor group" during cluster startup with 
> dedicated coordinator
> ---
>
> Key: IMPALA-8758
> URL: https://issues.apache.org/jira/browse/IMPALA-8758
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Major
>  Labels: cluster-membership, dedicated-coordinator, ramp-up, 
> scheduler
> Fix For: Impala 3.3.0
>
>
> During cluster startup the Scheduler will return an error until the local 
> backend has started up ("Local backend has not been registered in the cluster 
> membership"). Afterwards, it will assume that the default executor group 
> exists. However, if the coordinator is not also an executor (i.e. it is a 
> dedicated coordinator), then it will not actually create the default executor 
> group in cluster-membership-mgr.cc:256.
> Queries are expected to fail in this scenario, but the error message should 
> certainly be improved to indicate that no executors could be found. For this 
> purpose, we should make sure that the default executor group gets created as 
> soon as the local backend has started, but keep it empty if it is not an 
> executor. Then we can warn in the scheduler that no executors have been 
> registered so far.
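
The gist of the fix can be sketched in a few lines of Python: always create the default
executor group once the local backend is registered, add the backend to it only if it is
an executor, and let the scheduler distinguish "no executors registered" from "unknown
group". The names below are illustrative, not the C++ implementation.

{noformat}
DEFAULT_GROUP = "default"

def update_membership(groups, local_backend):
    """Create the default group as soon as the local backend is known.

    A dedicated coordinator keeps the group empty instead of leaving it missing,
    so later lookups can report "no executors" rather than "unknown group".
    """
    groups.setdefault(DEFAULT_GROUP, [])
    if local_backend.get("is_executor"):
        groups[DEFAULT_GROUP].append(local_backend["address"])
    return groups

def schedule(groups, group_name=DEFAULT_GROUP):
    if group_name not in groups:
        raise RuntimeError("Unknown executor group: %s" % group_name)
    if not groups[group_name]:
        raise RuntimeError("No executors registered in group '%s' yet" % group_name)
    return groups[group_name]

groups = update_membership({}, {"address": "coord:22000", "is_executor": False})
try:
    schedule(groups)
except RuntimeError as e:
    print(e)  # -> No executors registered in group 'default' yet
{noformat}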



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-8724) Don't run queries on unhealthy executor groups

2019-07-14 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker closed IMPALA-8724.
---
Resolution: Duplicate

This will be folded into IMPALA-8484

> Don't run queries on unhealthy executor groups
> --
>
> Key: IMPALA-8724
> URL: https://issues.apache.org/jira/browse/IMPALA-8724
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Lars Volker
>Assignee: Lars Volker
>Priority: Critical
>  Labels: admission-control, fault-tolerance, scalability, 
> scheduler
>
> After IMPALA-8484 we need to add a way to exclude executor groups that are 
> only partially available. This will help to keep queries from running on 
> partially started groups and in cases where some nodes of an executor group 
> have failed.
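
A small sketch of the health check described above: a group is only considered for
admission once its current membership has reached the expected size. The data shapes
here are assumptions for the example.

{noformat}
def healthy_groups(groups, expected_sizes):
    """Return the executor groups that are fully available.

    A group that is still starting up, or that lost members to failures,
    has fewer backends than expected and is skipped for new queries.
    """
    return {name: members for name, members in groups.items()
            if len(members) >= expected_sizes.get(name, 1)}

groups = {"group-1": ["e1", "e2", "e3"], "group-2": ["e4"]}
expected = {"group-1": 3, "group-2": 3}
print(healthy_groups(groups, expected))  # only group-1 qualifies
{noformat}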



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)



[jira] [Created] (IMPALA-8762) Track number of running queries on all backends in admission controller

2019-07-14 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8762:
---

 Summary: Track number of running queries on all backends in 
admission controller
 Key: IMPALA-8762
 URL: https://issues.apache.org/jira/browse/IMPALA-8762
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.3.0
Reporter: Lars Volker


To support running multiple coordinators with executor groups and slot based 
admission checks, all executors need to include the number of currently running 
queries in their statestore updates, similar to mem reserved.
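
As a rough illustration, the per-backend statestore payload could carry the
running-query count next to the memory figures, so any coordinator's admission
controller can aggregate it. The field names below are made up for the sketch.

{noformat}
def make_backend_update(backend_id, mem_reserved_bytes, running_queries):
    """Build a (hypothetical) per-backend statestore topic entry."""
    return {
        "backend_id": backend_id,
        "mem_reserved": mem_reserved_bytes,
        # New field: lets every coordinator see load produced by the others.
        "num_running_queries": running_queries,
    }

def total_running(updates):
    """Aggregate the cluster-wide running query count on the admission side."""
    return sum(u["num_running_queries"] for u in updates)

updates = [make_backend_update("e1", 4 << 30, 2), make_backend_update("e2", 1 << 30, 5)]
print(total_running(updates))  # 7
{noformat}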



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8758) Misleading error message "Unknown executor group" during cluster startup with dedicated coordinator

2019-07-12 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8758:
---

 Summary: Misleading error message "Unknown executor group" during 
cluster startup with dedicated coordinator
 Key: IMPALA-8758
 URL: https://issues.apache.org/jira/browse/IMPALA-8758
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 3.3.0
Reporter: Lars Volker


During cluster startup the Scheduler will return an error until the local 
backend has started up ("Local backend has not been registered in the cluster 
membership"). Afterwards, it will assume that the default executor group 
exists. However, if the coordinator is not also an executor (i.e. it is a 
dedicated coordinator), then it will not actually create the default executor 
group in cluster-membership-mgr.cc:256.

Queries are expected to fail in this scenario, but the error message should 
certainly be improved to indicate that no executors could be found. For this 
purpose, we should make sure that the default executor group gets created as 
soon as the local backend has started, but keep it empty if it is not an 
executor. Then we can warn in the scheduler that no executors have been 
registered so far.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)



[jira] [Created] (IMPALA-8757) Extend slot based admission to default executor group

2019-07-12 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8757:
---

 Summary: Extend slot based admission to default executor group
 Key: IMPALA-8757
 URL: https://issues.apache.org/jira/browse/IMPALA-8757
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.3.0
Reporter: Lars Volker


IMPALA-8484 adds support for multiple executor groups and uses a slot-based 
mechanism to admit queries to executors. In order to keep the existing behavior 
stable, this logic is not applied to the default executor group.

This Jira tracks work on doing that. We have to be careful not to break the 
existing behavior, and if we do, hold the change back until the next 
compatibility-breaking release.
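
For intuition, a slot-based admission check can be sketched like this: each executor
advertises a fixed number of slots, and a query is admitted only if every executor it
needs still has a free slot. The numbers and structures are illustrative only.

{noformat}
def can_admit(executors, hosts_needed, slots_per_executor=4):
    """Return True if every required executor still has a free admission slot."""
    return all(executors[h]["slots_in_use"] < slots_per_executor for h in hosts_needed)

def admit(executors, hosts_needed, slots_per_executor=4):
    """Reserve one slot on each required executor, or admit nothing at all."""
    if not can_admit(executors, hosts_needed, slots_per_executor):
        return False
    for h in hosts_needed:
        executors[h]["slots_in_use"] += 1
    return True

executors = {"e1": {"slots_in_use": 3}, "e2": {"slots_in_use": 4}}
print(admit(executors, ["e1", "e2"]))  # False: e2 is full
print(admit(executors, ["e1"]))        # True
{noformat}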



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8731) Balance queries between executor groups

2019-06-28 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8731:
---

 Summary: Balance queries between executor groups
 Key: IMPALA-8731
 URL: https://issues.apache.org/jira/browse/IMPALA-8731
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.3.0
Reporter: Lars Volker


After IMPALA-8484, we should revisit the assignment policy that we use to 
distribute queries to executor groups. In particular we should implement a 
policy that balances queries across executor groups instead of filling them up 
one by one.
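
A hedged sketch of the balancing policy: instead of filling the first healthy group up
to its limit, pick the group with the lowest current load. Group names and the load
metric are placeholders.

{noformat}
def pick_group_fill_first(groups, limits):
    """Current policy: first group that still has capacity."""
    for name, running in groups.items():
        if running < limits[name]:
            return name
    return None

def pick_group_balanced(groups, limits):
    """Proposed policy: least-loaded group that still has capacity."""
    candidates = [(running, name) for name, running in groups.items()
                  if running < limits[name]]
    return min(candidates)[1] if candidates else None

groups = {"group-1": 5, "group-2": 1}
limits = {"group-1": 10, "group-2": 10}
print(pick_group_fill_first(groups, limits))  # group-1
print(pick_group_balanced(groups, limits))    # group-2
{noformat}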



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8724) Don't run queries on unhealthy executor groups

2019-06-27 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8724:
---

 Summary: Don't run queries on unhealthy executor groups
 Key: IMPALA-8724
 URL: https://issues.apache.org/jira/browse/IMPALA-8724
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Affects Versions: Impala 3.3.0
Reporter: Lars Volker
Assignee: Lars Volker


After IMPALA-8484 we need to add a way to exclude executor groups that are only 
partially available. This will help to keep queries from running on partially 
started groups and in cases where some nodes of an executor group have failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-2968) Improve admission control dequeuing policy

2019-06-25 Thread Lars Volker (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-2968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872494#comment-16872494
 ] 

Lars Volker commented on IMPALA-2968:
-

One easy way that I can think of to do this is to add an "attempts before 
blocking" counter to each queue node. Every time a node fails to get admitted, 
we increment the counter by 1, and when it hits a configurable limit (e.g. 5) 
we block.
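
A toy version of that idea, just to make the mechanics concrete; the limit of 5 and the
data structures are placeholders.

{noformat}
from collections import deque

MAX_ATTEMPTS_BEFORE_BLOCKING = 5

def dequeue_admittable(queue, can_admit):
    """Try to admit queries behind the head until the head has been passed over too often.

    Each node carries an 'attempts' counter; once the head has been skipped
    MAX_ATTEMPTS_BEFORE_BLOCKING times, the queue blocks to avoid starving it.
    """
    admitted = []
    for node in list(queue):
        if can_admit(node["query"]):
            queue.remove(node)
            admitted.append(node["query"])
        else:
            node["attempts"] += 1
            if node is queue[0] and node["attempts"] >= MAX_ATTEMPTS_BEFORE_BLOCKING:
                break  # stop bypassing the head; wait for its resources
    return admitted

queue = deque([{"query": "q1", "attempts": 2}, {"query": "q2", "attempts": 0}])
print(dequeue_admittable(queue, lambda q: q == "q2"))  # ['q2'] while q1 keeps waiting
{noformat}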

[~bikramjeet.vig], [~tarmstrong], [~asherman] - Thoughts?

> Improve admission control dequeuing policy
> --
>
> Key: IMPALA-2968
> URL: https://issues.apache.org/jira/browse/IMPALA-2968
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Affects Versions: Impala 2.3.0
>Reporter: Matthew Jacobs
>Priority: Minor
>  Labels: admission-control, resource-management
>
> The current behavior only attempts to admit the head of the queue but the 
> head might need resources which are contended (e.g. on a hot node) while a 
> queued request behind the head might be admitted. We should consider a policy 
> which would not block the entire queue but yet is unlikely to starve the head 
> if other requests are continuously admitted from behind it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-8700) test_hdfs_scan_node_errors DCHECK-failing on invalid mtime

2019-06-24 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8700:

Labels: broken-build  (was: )

> test_hdfs_scan_node_errors DCHECK-failing on invalid mtime
> --
>
> Key: IMPALA-8700
> URL: https://issues.apache.org/jira/browse/IMPALA-8700
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.3.0
>Reporter: Jim Apple
>Assignee: Joe McDonnell
>Priority: Critical
>  Labels: broken-build
>
> The test 
> data_errors/test_data_errors.py::TestHdfsScanNodeErrors::test_hdfs_scan_node_errors
> is failing with
> {{F0623 scan-range.cc:480 Check failed: mtime > 0 (-2114487582 vs. 0)}}
> I've been able to reproduce it several times. Here is a saved nightly job on 
> master with the issue:
> https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/6403/
> https://jenkins.impala.io/job/ubuntu-16.04-from-scratch/6403/artifact/Impala/logs_static/logs/ee_tests/impalad_node2.FATAL/*view*/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-8685) Evaluate default configuration of NUM_REMOTE_EXECUTOR_CANDIDATES

2019-06-19 Thread Lars Volker (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16868142#comment-16868142
 ] 

Lars Volker commented on IMPALA-8685:
-

Another reason for considering more than one remote executor per scan range 
during scheduling is to prevent skew. The scheduler tries to balance the amount 
of data processed by each executor, and having only a single choice (1 candidate) 
can prevent it from doing so.

> Evaluate default configuration of NUM_REMOTE_EXECUTOR_CANDIDATES
> 
>
> Key: IMPALA-8685
> URL: https://issues.apache.org/jira/browse/IMPALA-8685
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Michael Ho
>Priority: Critical
>
> The query option {{NUM_REMOTE_EXECUTOR_CANDIDATES}} is set to 3 by default. 
> This means that there are potentially 3 different executors which can process 
> a remote scan range. Over time, the data of a given remote scan range will be 
> spread across these 3 executors. My understanding of why this is not set to 1 
> is to avoid hot spots in pathological cases. On the other hand, this may mean 
> that we may not maximize the utilization of the file handle cache and data 
> cache. Also, for small clusters (e.g. a 3 node cluster), the default value 
> may render deterministic remote scan range scheduling ineffective. We may 
> want to re-evaluate the default value of {{NUM_REMOTE_EXECUTOR_CANDIDATES}}. 
> One idea is to set it to min(3, half of cluster size) so it works okay with 
> small cluster, which may be rather common for demo purposes. There may also 
> be other criteria for evaluating the default value.
> cc'ing [~joemcdonnell], [~tlipcon] and [~drorke]
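
The suggested default can be written down directly; here is a one-line sketch. The
flooring and the lower bound of 1 are assumptions about how a fractional "half of
cluster size" would be handled.

{noformat}
def default_remote_executor_candidates(cluster_size, cap=3):
    """min(3, half of cluster size), but never fewer than 1 candidate."""
    return max(1, min(cap, cluster_size // 2))

for n in (1, 2, 3, 6, 20):
    print(n, default_remote_executor_candidates(n))
# 1 -> 1, 2 -> 1, 3 -> 1, 6 -> 3, 20 -> 3
{noformat}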



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-8673) Add query option to force plan hints for insert queries

2019-06-18 Thread Lars Volker (JIRA)
Lars Volker created IMPALA-8673:
---

 Summary: Add query option to force plan hints for insert queries
 Key: IMPALA-8673
 URL: https://issues.apache.org/jira/browse/IMPALA-8673
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend, Frontend
Affects Versions: Impala 3.3.0
Reporter: Lars Volker


In Impala 3.0 we turned on insert clustering by default (IMPALA-5293). This can 
lead to performance regressions from 2.x for highly skewed data sets. To help 
with those cases, we should add a way to force plan hints for insert queries 
through a query option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Updated] (IMPALA-8536) Add Scalable Pool Configuration to Admission Controller.

2019-06-12 Thread Lars Volker (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-8536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Volker updated IMPALA-8536:

Description: 
Add configuration parameters to Admission Controller that scale with
the number of hosts in the resource pool. The planned parameters are

* Max Running Queries Multiple - the Multiple of the current total number of 
running Impalads which gives the maximum number of concurrently running queries 
in this pool. 
* Max Queued Queries Multiple - the Multiple of the current total number of 
running Impalads which gives the maximum number of queries that can be queued 
in this pool. 
* Max Memory Multiple - the Multiple of the current total number of running 
Impalads which gives the maximum memory available across the cluster for the 
pool.


  was:
Add configuration parameters to Admission Controller that scale with
the number of hosts in the resource pool. The planned parameters are Max 
Running Queries * * Multiple - the Multiple of the current total number of 
running Impalads which gives the maximum number of concurrently running queries 
in this pool. 
* Max Queued Queries Multiple - the Multiple of the current total number of 
running Impalads which gives the maximum number of queries that can be queued 
in this pool. 
* Max Memory Multiple - the Multiple of the current total number of running 
Impalads which gives the maximum memory available across the cluster for the 
pool.



> Add Scalable Pool Configuration to Admission Controller.
> 
>
> Key: IMPALA-8536
> URL: https://issues.apache.org/jira/browse/IMPALA-8536
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Affects Versions: Impala 3.2.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>  Labels: admission-control, resource-management, scalability
>
> Add configuration parameters to Admission Controller that scale with
> the number of hosts in the resource pool. The planned parameters are
> * Max Running Queries Multiple - the Multiple of the current total number of 
> running Impalads which gives the maximum number of concurrently running 
> queries in this pool. 
> * Max Queued Queries Multiple - the Multiple of the current total number of 
> running Impalads which gives the maximum number of queries that can be queued 
> in this pool. 
> * Max Memory Multiple - the Multiple of the current total number of running 
> Impalads which gives the maximum memory available across the cluster for the 
> pool.
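
To make the scaling behaviour concrete, a hedged sketch of how the effective pool limits
would be derived from the multiples and the current number of live impalads; the
parameter names are illustrative.

{noformat}
def effective_pool_limits(num_impalads, max_running_multiple,
                          max_queued_multiple, max_memory_multiple_bytes):
    """Scale the pool limits with the number of currently running impalads."""
    return {
        "max_running_queries": int(max_running_multiple * num_impalads),
        "max_queued_queries": int(max_queued_multiple * num_impalads),
        "max_memory_bytes": int(max_memory_multiple_bytes * num_impalads),
    }

# Example: 10 executors, 1.5 running queries per node, 5 queued per node, 4 GiB per node.
print(effective_pool_limits(10, 1.5, 5, 4 << 30))
{noformat}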



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org


