[jira] [Created] (IMPALA-6897) Catalog server should flag tables with large number of small files
bharath v created IMPALA-6897:
---------------------------------

             Summary: Catalog server should flag tables with large number of small files
                 Key: IMPALA-6897
                 URL: https://issues.apache.org/jira/browse/IMPALA-6897
             Project: IMPALA
          Issue Type: Improvement
          Components: Catalog
    Affects Versions: Impala 2.13.0
            Reporter: bharath v


Since the Catalog has all the file metadata available, it should help flag tables with a large number of small files.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
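The check proposed above amounts to a threshold scan over per-table file metadata. A minimal sketch of that idea in Python; the thresholds and the `table -> list of file sizes` shape are purely illustrative, not Impala's actual catalog API:

```python
# Flag tables whose file population is dominated by small files.
# Thresholds are illustrative; Impala's catalog would supply real metadata.
SMALL_FILE_BYTES = 4 * 1024 * 1024   # files under 4 MiB count as "small"
MIN_FILES = 1000                     # only flag tables with many files
SMALL_RATIO = 0.8                    # ...that are mostly small files

def flag_small_file_tables(tables):
    """tables: dict mapping table name -> list of file sizes in bytes.

    Returns (name, total_files, small_files) for each flagged table."""
    flagged = []
    for name, sizes in tables.items():
        if len(sizes) < MIN_FILES:
            continue  # too few files to matter
        small = sum(1 for s in sizes if s < SMALL_FILE_BYTES)
        if small / len(sizes) >= SMALL_RATIO:
            flagged.append((name, len(sizes), small))
    return flagged
```

A real implementation would run this during metadata load and surface the result in the catalog web UI or logs.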
[jira] [Created] (IMPALA-6896) NullPointerException in DESCRIBE FORMATTED on views
Fredy Wijaya created IMPALA-6896:
------------------------------------

             Summary: NullPointerException in DESCRIBE FORMATTED on views
                 Key: IMPALA-6896
                 URL: https://issues.apache.org/jira/browse/IMPALA-6896
             Project: IMPALA
          Issue Type: Bug
          Components: Frontend
    Affects Versions: Impala 2.11.0
            Reporter: Fredy Wijaya


{noformat}
impala-shell -i localhost:21000 (first impalad)
[localhost:21000] default> create view v1 as select * from functional.alltypes;
[localhost:21000] default> alter view v1 as select * from tpch.customer;
{noformat}
{noformat}
impala-shell.sh -i localhost:21001 (second impalad)
[localhost:21001] default> describe formatted v1;
Query: describe formatted v1
ERROR: NullPointerException: null
{noformat}
[jira] [Closed] (IMPALA-6860) Impala 3.0 Doc: Upgrade Considerations
[ https://issues.apache.org/jira/browse/IMPALA-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6860.
-------------------------------

> Impala 3.0 Doc: Upgrade Considerations
> --------------------------------------
>
>                 Key: IMPALA-6860
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6860
>             Project: IMPALA
>          Issue Type: Task
>          Components: Docs
>    Affects Versions: Impala 3.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0
>
>
> https://gerrit.cloudera.org/#/c/10080/
[jira] [Resolved] (IMPALA-6860) Impala 3.0 Doc: Upgrade Considerations
[ https://issues.apache.org/jira/browse/IMPALA-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni resolved IMPALA-6860.
---------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 3.0

> Impala 3.0 Doc: Upgrade Considerations
> --------------------------------------
>
>                 Key: IMPALA-6860
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6860
>             Project: IMPALA
>          Issue Type: Task
>          Components: Docs
>    Affects Versions: Impala 3.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0
>
>
> https://gerrit.cloudera.org/#/c/10080/
[jira] [Resolved] (IMPALA-6880) test_bloom_wait_time fails
[ https://issues.apache.org/jira/browse/IMPALA-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Tauber-Marshall resolved IMPALA-6880.
--------------------------------------------
    Resolution: Fixed

> test_bloom_wait_time fails
> --------------------------
>
>                 Key: IMPALA-6880
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6880
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend
>    Affects Versions: Impala 3.0
>            Reporter: Vuk Ercegovac
>            Assignee: Thomas Tauber-Marshall
>            Priority: Blocker
>
> {noformat}
> query_test/test_runtime_filters.py:90: in test_bloom_wait_time
>     self.run_test_case('QueryTest/bloom_filters_wait', vector)
> common/impala_test_suite.py:444: in run_test_case
>     verify_runtime_profile(test_section['RUNTIME_PROFILE'], result.runtime_profile)
> common/test_result_verifier.py:560: in verify_runtime_profile
>     actual))
> E   AssertionError: Did not find matches for lines in runtime profile:
> E   EXPECTED LINES:
> E   row_regex: .*0 of 1 Runtime Filter Published, 1 Disabled.*
> E
> E   ACTUAL PROFILE:
> E   Query (id=31479a87f057d480:fb4b93dc):
> E     Summary:
> E       Session ID: e54d8da4b1fc778d:f545112eedcbaca3
> E       Session Type: BEESWAX
> E       Start Time: 2018-04-18 04:35:25.168216000
> E       End Time:
> E       Query Type: QUERY
> E       Query State: FINISHED
> E       Query Status: OK
> E       Impala Version: impalad version 3.0.0-SNAPSHOT RELEASE (build eaf66172df113dbf10cdb0a08a2bc51e4077ca38)
> E       User: jenkins
> E       Connected User: jenkins
> E       Delegated User:
> E       Network Address: 127.0.0.1:42339
> E       Default Db: functional_text_snap
> E       Sql Statement: with l as (select * from tpch.lineitem UNION ALL select * from tpch.lineitem)
> E   select STRAIGHT_JOIN count(*) from (select * from tpch.lineitem a LIMIT 1) a
> E   join (select * from l LIMIT 50) b on a.l_orderkey = -b.l_orderkey
> E       Coordinator: ec2-m2-4xlarge-centos-6-4-0716.vpc.cloudera.com:22000
> E       Query Options (set by configuration): ABORT_ON_ERROR=1,EXEC_SINGLE_NODE_ROWS_THRESHOLD=0,RUNTIME_FILTER_WAIT_TIME_MS=60,RUNTIME_FILTER_MAX_SIZE=65536,DISABLE_CODEGEN_ROWS_THRESHOLD=0
> E       Query Options (set by configuration and planner): ABORT_ON_ERROR=1,EXEC_SINGLE_NODE_ROWS_THRESHOLD=0,RUNTIME_FILTER_WAIT_TIME_MS=60,MT_DOP=0,RUNTIME_FILTER_MAX_SIZE=65536,DISABLE_CODEGEN_ROWS_THRESHOLD=0
> E       Plan:
> E
> E   Max Per-Host Resource Reservation: Memory=4.88MB
> E   Per-Host Resource Estimates: Memory=542.88MB
> E
> E   F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> E   |  Per-Host Resources: mem-estimate=14.81MB mem-reservation=4.81MB runtime-filters-memory=64.00KB
> E   PLAN-ROOT SINK
> E   |  mem-estimate=0B mem-reservation=0B
> E   |
> E   05:AGGREGATE [FINALIZE]
> E   |  output: count(*)
> E   |  mem-estimate=10.00MB mem-reservation=0B spill-buffer=2.00MB
> E   |  tuple-ids=7 row-size=8B cardinality=1
> E   |
> E   04:HASH JOIN [INNER JOIN, BROADCAST]
> E   |  hash predicates: a.l_orderkey = -1 * l_orderkey
> E   |  fk/pk conjuncts: assumed fk/pk
> E   |  runtime filters: RF000[bloom] <- -1 * l_orderkey
> E   |  mem-estimate=4.75MB mem-reservation=4.75MB spill-buffer=256.00KB
> E   |  tuple-ids=0,4 row-size=16B cardinality=1
> E   |
> E   |--08:EXCHANGE [UNPARTITIONED]
> E   |  |  mem-estimate=0B mem-reservation=0B
> E   |  |  tuple-ids=4 row-size=8B cardinality=50
> E   |  |
> E   |  F05:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> E   |  Per-Host Resources: mem-estimate=0B mem-reservation=0B
> E   |  07:EXCHANGE [UNPARTITIONED]
> E   |  |  limit: 50
> E   |  |  mem-estimate=0B mem-reservation=0B
> E   |  |  tuple-ids=4 row-size=8B cardinality=50
> E   |  |
> E   |  F04:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
> E   |  Per-Host Resources: mem-estimate=264.00MB mem-reservation=0B
> E   |  01:UNION
> E   |  |  pass-through-operands: all
> E   |  |  limit: 50
> E   |  |  mem-estimate=0B mem-reservation=0B
> E   |  |  tuple-ids=4 row-size=8B cardinality=50
> E   |  |
> E   |  |--03:SCAN HDFS [tpch.lineitem, RANDOM]
> E   |  |     partitions=1/1 files=1 size=718.94MB
> E   |  |     stored statistics:
> E   |  |       table: rows=6001215 size=718.94MB
> E   |  |       columns: all
> E   |  |     extrapolated-rows=disabled
> E   |  |     mem-estimate=264.00MB mem-reservation=0B
> E   |  |     tuple-ids=3 row-size=8B cardinality=6001215
> E   |  |
> E   |  02:SCAN HDFS [tpch.lineitem, RANDOM]
> E   |     partitions=1/1 files=1 size=718.94MB
> E   |     stored statistics:
> E   |       table: rows=6001215 size=718.94MB
> E   |       columns: all
> E   |     extrapolated-rows=disabled
> E   |     mem-estimate=264.00MB mem-reservation=0B
> E   |     tuple-ids=2 row-size=8B cardinality=6001215
> E   |
> E
[jira] [Created] (IMPALA-6895) Eliminate SimpleLogger flush threads by inlining flush
Zoram Thanga created IMPALA-6895:
------------------------------------

             Summary: Eliminate SimpleLogger flush threads by inlining flush
                 Key: IMPALA-6895
                 URL: https://issues.apache.org/jira/browse/IMPALA-6895
             Project: IMPALA
          Issue Type: Improvement
          Components: Backend
    Affects Versions: Impala 3.0
            Reporter: Zoram Thanga


Currently, SimpleLogger provides a Flush() interface which is used by its client(s) to periodically (hard-coded to 5 seconds) flush the log file. We could eliminate these flush threads by keeping track of the last flush time, and have the caller of SimpleLogger::AppendEntry() flush on demand (now - last_flush_time >= 5 seconds, or whatever). This has the added benefit of reducing contention on the SimpleLogger::log_file_lock_ mutex to just between the threads adding entries to the log file.
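The proposal replaces a dedicated flush thread with a timestamp check on the append path. A minimal sketch of that pattern in Python (the class, names, and interval are illustrative, not SimpleLogger's actual C++ interface):

```python
import threading
import time

class InlineFlushLogger:
    """Appends entries and flushes at most once per flush_interval,
    on the appending caller's thread instead of a background thread."""

    def __init__(self, fileobj, flush_interval=5.0):
        self._file = fileobj
        self._interval = flush_interval
        self._last_flush = time.monotonic()
        self._lock = threading.Lock()  # stands in for log_file_lock_

    def append_entry(self, entry):
        with self._lock:
            self._file.write(entry + "\n")
            now = time.monotonic()
            # Flush on demand: no separate flush thread contending
            # for the lock, only appenders.
            if now - self._last_flush >= self._interval:
                self._file.flush()
                self._last_flush = now
```

Note the trade-off: if appends stop entirely, the tail of the log stays unflushed until the next append, which is the behavior change the ticket would need to accept.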
[jira] [Created] (IMPALA-6894) Use an internal representation of query states in ClientRequestState
Bikramjeet Vig created IMPALA-6894:
--------------------------------------

             Summary: Use an internal representation of query states in ClientRequestState
                 Key: IMPALA-6894
                 URL: https://issues.apache.org/jira/browse/IMPALA-6894
             Project: IMPALA
          Issue Type: Sub-task
            Reporter: Bikramjeet Vig


Having an internal representation of states will be useful, as we can conveniently add/remove states and develop logic around them, e.g. [initialization, analysis complete, planning complete/queued, running] (setting the stage for IMPALA-2568 and its sub-tasks), or [cancelled, exception, etc.] (IMPALA-1262). We can easily convert it to client-specific (HS2 and Beeswax) states and ensure there are no client-visible changes.
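The idea above can be illustrated as one internal enum plus a per-protocol mapping table, so internal states can be added or split without any client-visible change. The state names and the Beeswax mapping below are illustrative guesses, not the set Impala ultimately adopted:

```python
from enum import Enum

class QueryState(Enum):
    """Hypothetical internal states, finer-grained than any client protocol."""
    INITIALIZED = 0
    ANALYSIS_COMPLETE = 1
    PLANNING_COMPLETE = 2   # also covers "queued"
    RUNNING = 3
    FINISHED = 4
    CANCELLED = 5
    EXCEPTION = 6

# Several internal states can collapse onto one client-visible state,
# which is what keeps the mapping free of client-visible changes.
BEESWAX_STATES = {
    QueryState.INITIALIZED: "CREATED",
    QueryState.ANALYSIS_COMPLETE: "INITIALIZED",
    QueryState.PLANNING_COMPLETE: "COMPILED",
    QueryState.RUNNING: "RUNNING",
    QueryState.FINISHED: "FINISHED",
    QueryState.CANCELLED: "EXCEPTION",
    QueryState.EXCEPTION: "EXCEPTION",
}

def to_beeswax(state: QueryState) -> str:
    """Convert an internal state to its Beeswax-visible name."""
    return BEESWAX_STATES[state]
```

An analogous table would exist for HS2's operation states.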
[jira] [Created] (IMPALA-6893) Timeout for slow client fetches
Manaswini created IMPALA-6893:
---------------------------------

             Summary: Timeout for slow client fetches
                 Key: IMPALA-6893
                 URL: https://issues.apache.org/jira/browse/IMPALA-6893
             Project: IMPALA
          Issue Type: Improvement
          Components: Frontend
            Reporter: Manaswini


It would be good to have a built-in ability to cancel a query when the client fetch time goes beyond a certain limit (maybe a certain % of runtime, or a maximum upper limit). Some queries keep running for hours, stuck in the client-fetch phase, and idle_query_timeout or the session timeout does not help in this case.
[jira] [Created] (IMPALA-6892) CheckHashAndDecrypt doesn't report disk and host where the verification failed
Mostafa Mokhtar created IMPALA-6892:
---------------------------------------

             Summary: CheckHashAndDecrypt doesn't report disk and host where the verification failed
                 Key: IMPALA-6892
                 URL: https://issues.apache.org/jira/browse/IMPALA-6892
             Project: IMPALA
          Issue Type: Bug
    Affects Versions: Impala 2.12.0
            Reporter: Mostafa Mokhtar
            Assignee: Tim Armstrong


Root-causing block corruption is difficult because the Status message doesn't include the offending disk and host.

{code}
Query Type: QUERY
Query State: EXCEPTION
Query Status: Block verification failure
{code}
{code}
Status TmpFileMgr::WriteHandle::CheckHashAndDecrypt(MemRange buffer) {
  DCHECK(FLAGS_disk_spill_encryption);
  SCOPED_TIMER(encryption_timer_);
  // GCM mode will verify the integrity by itself
  if (!key_.IsGcmMode()) {
    if (!hash_.Verify(buffer.data(), buffer.len())) {
      return Status("Block verification failure");
    }
  }
  return key_.Decrypt(buffer.data(), buffer.len(), buffer.data());
}
{code}
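The fix amounts to threading the scratch file's location into the error status. A language-neutral sketch of the pattern in Python (the parameter names are made up; the real fix would plumb the file path and disk through TmpFileMgr into the Status message):

```python
import hashlib

class BlockVerificationError(Exception):
    """Raised when a spilled block fails integrity verification."""

def check_hash(buffer: bytes, expected_digest: str, path: str, host: str):
    """Verify a spilled block, reporting *where* verification failed.

    path/host are hypothetical context fields; including them is the
    point of the fix, so corruption can be traced to a disk and node."""
    actual = hashlib.sha256(buffer).hexdigest()
    if actual != expected_digest:
        raise BlockVerificationError(
            f"Block verification failure: file={path} host={host}")
```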
[jira] [Closed] (IMPALA-6651) Impala 2.13 & 3.0 Docs: Fine-grained privileges
[ https://issues.apache.org/jira/browse/IMPALA-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6651.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0
                   Impala 3.0

> Impala 2.13 & 3.0 Docs: Fine-grained privileges
> -----------------------------------------------
>
>                 Key: IMPALA-6651
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6651
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Docs
>    Affects Versions: Impala 3.0, Impala 2.13.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0, Impala 2.12.0
>
>
[jira] [Closed] (IMPALA-6868) Impala 3.0 Doc: Remove old kinit code for Impala 3
[ https://issues.apache.org/jira/browse/IMPALA-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6868.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0
                   Impala 3.0

> Impala 3.0 Doc: Remove old kinit code for Impala 3
> --------------------------------------------------
>
>                 Key: IMPALA-6868
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6868
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Docs
>    Affects Versions: Impala 3.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0, Impala 2.12.0
>
>
[jira] [Closed] (IMPALA-6867) Impala 2.12 & 3.0 Docs: Provide a query option to not shuffle on distinct exprs
[ https://issues.apache.org/jira/browse/IMPALA-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6867.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0
                   Impala 3.0

> Impala 2.12 & 3.0 Docs: Provide a query option to not shuffle on distinct exprs
> --------------------------------------------------------------------------------
>
>                 Key: IMPALA-6867
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6867
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Docs
>    Affects Versions: Impala 3.0, Impala 2.12.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0, Impala 2.12.0
>
>
> https://gerrit.cloudera.org/#/c/9949/
>
> New query option: SHUFFLE_DISTINCT_EXPRS
>
> This option controls the shuffling behavior when a query has both grouping and distinct exprs. Impala can optionally include the distinct exprs in the hash exchange of the first aggregation phase to spread the data among more nodes. However, this plan requires another hash exchange on the grouping exprs in the second phase, which is not required when omitting the distinct exprs in the first phase. Turning it off is recommended if the NDVs of the grouping exprs are high.
[jira] [Closed] (IMPALA-6886) Impala Doc: Remove Impala Cluster Sizing doc
[ https://issues.apache.org/jira/browse/IMPALA-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6886.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0
                   Impala 3.0

> Impala Doc: Remove Impala Cluster Sizing doc
> ---------------------------------------------
>
>                 Key: IMPALA-6886
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6886
>             Project: IMPALA
>          Issue Type: Task
>          Components: Docs
>    Affects Versions: Impala 2.12.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0, Impala 2.12.0
>
>
> Removing impala_cluster_sizing.html per [~alanchoi]'s request.
> https://gerrit.cloudera.org/#/c/10109/
[jira] [Closed] (IMPALA-6732) Impala 2.12 Doc: Release Notes
[ https://issues.apache.org/jira/browse/IMPALA-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6732.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0

> Impala 2.12 Doc: Release Notes
> ------------------------------
>
>                 Key: IMPALA-6732
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6732
>             Project: IMPALA
>          Issue Type: Task
>          Components: Docs
>    Affects Versions: Impala 2.12.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 2.12.0
>
>
> https://gerrit.cloudera.org/#/c/10071/
[jira] [Closed] (IMPALA-5930) Document that SCAN_NODE_CODEGEN_THRESHOLD has had no effect since 2.7
[ https://issues.apache.org/jira/browse/IMPALA-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-5930.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 3.0

Removed in https://issues.apache.org/jira/browse/IMPALA-4319

> Document that SCAN_NODE_CODEGEN_THRESHOLD has had no effect since 2.7
> ---------------------------------------------------------------------
>
>                 Key: IMPALA-5930
>                 URL: https://issues.apache.org/jira/browse/IMPALA-5930
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Docs
>    Affects Versions: Impala 2.10.0
>            Reporter: Tim Armstrong
>            Assignee: Alex Rodoni
>            Priority: Minor
>             Fix For: Impala 3.0
>
>
> SCAN_NODE_CODEGEN_THRESHOLD was made ineffective a while back (IMPALA-4319) but the docs don't reflect this yet.
[jira] [Closed] (IMPALA-6459) Doc: TABLESAMPLE for COMPUTE STATS
[ https://issues.apache.org/jira/browse/IMPALA-6459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6459.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0
                   Impala 3.0

> Doc: TABLESAMPLE for COMPUTE STATS
> ----------------------------------
>
>                 Key: IMPALA-6459
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6459
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Docs
>    Affects Versions: Impala 2.12.0
>            Reporter: John Russell
>            Assignee: Alexander Behm
>            Priority: Critical
>             Fix For: Impala 3.0, Impala 2.12.0
>
>
> I started a private gerrit review. Let's have Alex practice taking it over.
> Docs to update:
> * Impala Shell Config option
> * Table properties: https://lists.apache.org/thread.html/9306724443fd98ae523f33d576125a159b2579ff6be313f8093e88c7@%3Ccommits.impala.apache.org%3E
[jira] [Closed] (IMPALA-6748) Impala 2.12 & 3.0 Docs: Support more separators between date and time in default timestamp format
[ https://issues.apache.org/jira/browse/IMPALA-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Rodoni closed IMPALA-6748.
-------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.12.0
                   Impala 3.0

> Impala 2.12 & 3.0 Docs: Support more separators between date and time in default timestamp format
> --------------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-6748
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6748
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Docs
>    Affects Versions: Impala 3.0, Impala 2.12.0
>            Reporter: Alex Rodoni
>            Assignee: Alex Rodoni
>            Priority: Major
>             Fix For: Impala 3.0, Impala 2.12.0
>
>
> https://gerrit.cloudera.org/#/c/10052/
[jira] [Resolved] (IMPALA-6790) sqlparse needs to be upgraded in the Python environment
[ https://issues.apache.org/jira/browse/IMPALA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joe McDonnell resolved IMPALA-6790.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 2.13.0
                   Impala 3.0

> sqlparse needs to be upgraded in the Python environment
> --------------------------------------------------------
>
>                 Key: IMPALA-6790
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6790
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Infrastructure
>    Affects Versions: Impala 3.0
>            Reporter: Joe McDonnell
>            Assignee: Joe McDonnell
>            Priority: Major
>             Fix For: Impala 3.0, Impala 2.13.0
>
>
> bin/load-data.py uses sqlparse to read SQL files and split them into SQL statements. Recently, some remote cluster tests have seen errors during dataload due to sqlparse failing to split SQL statements appropriately. Specifically, it does not detect the end of a SQL statement and tries to run dozens of SQL statements together. Impala's parser rejects this. The SQL file is identical to the SQL file generated during our normal dataload, so clearly, something about this system or its environment breaks sqlparse.
> sqlparse in our environment is 0.1.15, which is quite old. The latest sqlparse is 0.2.4. Running the tests with sqlparse 0.2.4 does not encounter the error. sqlparse needs to be upgraded.
[jira] [Resolved] (IMPALA-6837) Allow setting multiple allowed networks in distcc server script
[ https://issues.apache.org/jira/browse/IMPALA-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Armstrong resolved IMPALA-6837.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 3.1.0
                   Impala 2.13.0

> Allow setting multiple allowed networks in distcc server script
> ----------------------------------------------------------------
>
>                 Key: IMPALA-6837
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6837
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Infrastructure
>            Reporter: Tim Armstrong
>            Assignee: Tim Armstrong
>            Priority: Minor
>             Fix For: Impala 2.13.0, Impala 3.1.0
>
>
> In some cases we want to allow multiple different networks, e.g. on Centos
> {noformat}
> --allow 172.16.0.0/12 --allow 10.16.0.0/8
> {noformat}
> Or on Ubuntu
> {noformat}
> ALLOWEDNETS="172.16.0.0/12 10.16.0.0/8"
> {noformat}
[jira] [Resolved] (IMPALA-2717) impala-shell breaks on non-ascii characters in the resultset
[ https://issues.apache.org/jira/browse/IMPALA-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Armstrong resolved IMPALA-2717.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 3.1.0
                   Impala 2.13.0

> impala-shell breaks on non-ascii characters in the resultset
> -------------------------------------------------------------
>
>                 Key: IMPALA-2717
>                 URL: https://issues.apache.org/jira/browse/IMPALA-2717
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Clients
>    Affects Versions: Impala 2.2, Impala 2.3.0
>         Environment: CDH5.4.7
>            Reporter: Marcell Szabo
>            Assignee: Tim Armstrong
>            Priority: Minor
>              Labels: impala-shell, ramp-up, shell
>             Fix For: Impala 2.13.0, Impala 3.1.0
>
>         Attachments: IMPALA-2717.patch
>
>
> (Shell build version: Impala Shell v2.2.0-cdh5.4.7 (8b8d376) built on Thu Sep 17 02:00:38 PDT 2015)
>
> [host:21000] > insert into sometable values ('Árvíztűrő tükörfúrógép');
> Query: insert into sometable values ('Árvíztűrő tükörfúrógép')
> Inserted 1 row(s) in 6.84s
> [host:21000] > select * from sometable;
> Query: select * from sometable
> Unknown Exception : 'ascii' codec can't encode character u'\xc1' in position 83: ordinal not in range(128)
> [Not connected] >
>
> This is very similar to IMPALA-1130, IMPALA-489, IMPALA-738; the difference is that here the resultset contains the offending char.
> With the -B option the result is printed correctly.
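The reported failure is the classic Python 2 pattern of a unicode result being pushed through the default ascii codec. A minimal Python 3 illustration of the failure mode and the explicit-encoding fix (the helper name is made up, not impala-shell's actual code):

```python
# 'Árvíztűrő tükörfúrógép' is the string from the report; u'\xc1' is 'Á'.
s = "Árvíztűrő tükörfúrógép"

def encode_for_terminal(value: str) -> bytes:
    # The fix: encode explicitly as UTF-8 instead of relying on the
    # default codec, which under Python 2 was ascii.
    return value.encode("utf-8")

# Reproduce the failure: ascii cannot represent 'Á' (code point > 127).
try:
    s.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False   # "'ascii' codec can't encode character ..."
```

The `-B` (delimited output) path worked because it bypassed the pretty-printed table formatting where the implicit encode happened.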
[jira] [Created] (IMPALA-6891) AuthorizationException in CROSS JOIN
Fredy Wijaya created IMPALA-6891:
------------------------------------

             Summary: AuthorizationException in CROSS JOIN
                 Key: IMPALA-6891
                 URL: https://issues.apache.org/jira/browse/IMPALA-6891
             Project: IMPALA
          Issue Type: Bug
          Components: Frontend
    Affects Versions: Impala 2.11.0, Impala 2.10.0, Impala 2.9.0
            Reporter: Fredy Wijaya


{noformat}
[localhost:21000] foo> create table t1(i int, j int);
[localhost:21000] foo> create table t2(i int, j int);
[localhost:21000] foo> grant select(i) on table foo.t1 to role test_role;
[localhost:21000] foo> grant select(j) on table foo.t1 to role test_role;
[localhost:21000] foo> grant select(i) on table foo.t2 to role test_role;
[localhost:21000] foo> grant select(j) on table foo.t2 to role test_role;
{noformat}
{noformat}
[localhost:21000] foo> select * from foo.t1 a cross join foo.t2 b;
Fetched 0 row(s) in 0.14s
{noformat}
[jira] [Created] (IMPALA-6890) split-hbase.sh: Can't get master address from ZooKeeper; znode data == null
Vuk Ercegovac created IMPALA-6890:
-------------------------------------

             Summary: split-hbase.sh: Can't get master address from ZooKeeper; znode data == null
                 Key: IMPALA-6890
                 URL: https://issues.apache.org/jira/browse/IMPALA-6890
             Project: IMPALA
          Issue Type: Bug
          Components: Infrastructure
    Affects Versions: Impala 2.12.0
            Reporter: Vuk Ercegovac


{noformat}
20:57:13 FAILED (Took: 7 min 58 sec)
20:57:13 '/data/jenkins/workspace/impala-cdh5-2.12.0_5.15.0-exhaustive-thrift/repos/Impala/testdata/bin/split-hbase.sh' failed. Tail of log:
20:57:13 Wed Apr 18 20:49:43 PDT 2018, RpcRetryingCaller{globalStartTime=1524109783051, pause=100, retries=31}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
20:57:13 Wed Apr 18 20:49:43 PDT 2018, RpcRetryingCaller{globalStartTime=1524109783051, pause=100, retries=31}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
20:57:13 Wed Apr 18 20:49:44 PDT 2018, RpcRetryingCaller{globalStartTime=1524109783051, pause=100, retries=31}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
...
20:57:13 Wed Apr 18 20:57:13 PDT 2018, RpcRetryingCaller{globalStartTime=1524109783051, pause=100, retries=31}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
20:57:13
20:57:13     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:157)
20:57:13     at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4329)
20:57:13     at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4321)
20:57:13     at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2952)
20:57:13     at org.apache.impala.datagenerator.HBaseTestDataRegionAssigment.<init>(HBaseTestDataRegionAssigment.java:74)
20:57:13     at org.apache.impala.datagenerator.HBaseTestDataRegionAssigment.main(HBaseTestDataRegionAssigment.java:310)
20:57:13 Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
20:57:13     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1698)
20:57:13     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1718)
20:57:13     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1875)
20:57:13     at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
20:57:13     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
20:57:13     ... 5 more
20:57:13 Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
20:57:13     at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
20:57:13     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1648)
20:57:13     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1689)
20:57:13     ... 9 more
20:57:13 Error in /data/jenkins/workspace/impala-cdh5-2.12.0_5.15.0-exhaustive-thrift/repos/Impala/testdata/bin/split-hbase.sh at line 41: "$JAVA" ${JAVA_KERBEROS_MAGIC} \
20:57:13 Error in /data/jenkins/workspace/impala-cdh5-2.12.0_5.15.0-exhaustive-thrift/repos/Impala/bin/run-all-tests.sh at line 48: # Run End-to-end Tests
{noformat}
[jira] [Resolved] (IMPALA-6887) Typo in authz-policy.ini.template
[ https://issues.apache.org/jira/browse/IMPALA-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fredy Wijaya resolved IMPALA-6887.
----------------------------------
       Resolution: Fixed
    Fix Version/s: Impala 3.0

> Typo in authz-policy.ini.template
> ---------------------------------
>
>                 Key: IMPALA-6887
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6887
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Infrastructure
>            Reporter: Fredy Wijaya
>            Assignee: Fredy Wijaya
>            Priority: Minor
>             Fix For: Impala 3.0
>
>
> {noformat}
> alter_functionl_text_lzo = server=server1->db=functional_text_lzo->action=alter
> {noformat}
> Although it does not affect the tests directly, this typo could potentially be an issue since we register the role with alter_functional_text_lzo.
[jira] [Resolved] (IMPALA-6878) SentryServicePinger should not print stacktrace at every retry
[ https://issues.apache.org/jira/browse/IMPALA-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fredy Wijaya resolved IMPALA-6878.
----------------------------------
    Resolution: Done

> SentryServicePinger should not print stacktrace at every retry
> ---------------------------------------------------------------
>
>                 Key: IMPALA-6878
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6878
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Infrastructure
>            Reporter: Fredy Wijaya
>            Assignee: Fredy Wijaya
>            Priority: Minor
>
> The stack trace looks like this and is misleading as to whether the service started successfully or not:
> {code:java}
> 18/04/18 12:03:23 INFO transport.SentryTransportPool: Creating pool for localhost with default port 30911
> 18/04/18 12:03:23 INFO transport.SentryTransportPool: Adding endpoint localhost:30911
> 18/04/18 12:03:23 INFO transport.SentryTransportPool: Connection pooling is enabled
> 18/04/18 12:03:23 ERROR transport.SentryTransportPool: Failed to obtain transport for localhost:30911: java.net.ConnectException: Connection refused (Connection refused)
> 18/04/18 12:03:23 ERROR transport.RetryClientInvocationHandler: Failed to connect
> sentry.org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
>     at sentry.org.apache.thrift.transport.TSocket.open(TSocket.java:226)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportFactory.connectToServer(SentryTransportFactory.java:99)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportFactory.getTransport(SentryTransportFactory.java:86)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportPool$PoolFactory.create(SentryTransportPool.java:302)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportPool$PoolFactory.create(SentryTransportPool.java:271)
>     at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62)
>     at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1041)
>     at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:380)
>     at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:279)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportPool.getTransport(SentryTransportPool.java:183)
>     at org.apache.sentry.provider.db.service.thrift.SentryPolicyServiceClientDefaultImpl.connect(SentryPolicyServiceClientDefaultImpl.java:90)
>     at sentry.org.apache.sentry.core.common.transport.RetryClientInvocationHandler.connect(RetryClientInvocationHandler.java:141)
>     at sentry.org.apache.sentry.core.common.transport.RetryClientInvocationHandler.invokeImpl(RetryClientInvocationHandler.java:90)
>     at sentry.org.apache.sentry.core.common.transport.SentryClientInvocationHandler.invoke(SentryClientInvocationHandler.java:41)
>     at com.sun.proxy.$Proxy5.listAllRoles(Unknown Source)
>     at org.apache.impala.util.SentryUtil.listRoles(SentryUtil.java:52)
>     at org.apache.impala.util.SentryPolicyService.listAllRoles(SentryPolicyService.java:398)
>     at org.apache.impala.testutil.SentryServicePinger.main(SentryServicePinger.java:75)
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>     at java.net.PlainSocketImpl.socketConnect(Native Method)
>     at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>     at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>     at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>     at java.net.Socket.connect(Socket.java:589)
>     at sentry.org.apache.thrift.transport.TSocket.open(TSocket.java:221)
>     ... 17 more
> 18/04/18 12:03:26 ERROR transport.SentryTransportPool: Failed to obtain transport for localhost:30911: java.net.ConnectException: Connection refused (Connection refused)
> 18/04/18 12:03:26 ERROR transport.RetryClientInvocationHandler: Failed to connect
> sentry.org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
>     at sentry.org.apache.thrift.transport.TSocket.open(TSocket.java:226)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportFactory.connectToServer(SentryTransportFactory.java:99)
>     at sentry.org.apache.sentry.core.common.transport.SentryTransportFactory.getTransport(SentryTransportFactory.java:86)
>     at