[jira] [Commented] (IMPALA-12416) test_skipping_older_events and some other catalog tests failing

2023-09-05 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762265#comment-17762265
 ] 

ASF subversion and git services commented on IMPALA-12416:
--

Commit 188c2d6379d0dc3f6fddcd307dcfa875fe754353 in impala's branch 
refs/heads/master from Sai Hemanth Gantasala
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=188c2d637 ]

IMPALA-12416: Fix test failures caused by
IMPALA-11535

Fixed the test failures in Java unit tests caused by incorrectly
setting the config 'enable_sync_to_latest_event_on_ddls' to true. This
flag has to be reset to its original value at the end of the test since
BackendConfig.INSTANCE is shared by all the FE tests. Also, increased
the hms polling interval to 10sec for the test_skipping_older_events()
end-to-end test to avoid flakiness.

Change-Id: I4930933dca849496bfbe475c8efc960d15fa57a8
Reviewed-on: http://gerrit.cloudera.org:8080/20454
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 
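
As a minimal sketch of the pattern this commit describes (saving and restoring a flag on the shared BackendConfig.INSTANCE around a test), assuming illustrative accessor names rather than Impala's actual API:

{code:java}
import org.apache.impala.service.BackendConfig;  // package assumed
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class SyncToLatestEventTest {
  // The getter/setter names below are assumptions for illustration only.
  private boolean origValue_;

  @Before
  public void enableSyncToLatestEvent() {
    // BackendConfig.INSTANCE is shared by all FE tests, so remember the
    // original value before this test flips it.
    origValue_ = BackendConfig.INSTANCE.enableSyncToLatestEventOnDdls();
    BackendConfig.INSTANCE.setEnableSyncToLatestEventOnDdls(true);
  }

  @After
  public void restoreSyncToLatestEvent() {
    // Restore the original value so later tests see the default configuration.
    BackendConfig.INSTANCE.setEnableSyncToLatestEventOnDdls(origValue_);
  }

  @Test
  public void testDdlSync() { /* test body elided */ }
}
{code}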


> test_skipping_older_events and some other catalog tests failing
> --
>
> Key: IMPALA-12416
> URL: https://issues.apache.org/jira/browse/IMPALA-12416
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Sai Hemanth Gantasala
>Priority: Critical
>
> test_skipping_older_events was added in IMPALA-11535. The failure is:
> {code}
> custom_cluster/test_events_custom_configs.py:375: in 
> test_skipping_older_events
>     verify_skipping_older_events(test_old_table, True, False)
> custom_cluster/test_events_custom_configs.py:370: in 
> verify_skipping_older_events
>     verify_skipping_hive_stmt_events(complete_query, "new_table")
> custom_cluster/test_events_custom_configs.py:341: in 
> verify_skipping_hive_stmt_events
>     assert tbl_events_skipped_after > tbl_events_skipped_before
> E   assert 19 > 19
> {code}
> There are some other catalog test failures that appeared at the same time:
> {code}
> org.apache.impala.catalog.metastore.CatalogHmsFileMetadataTest
> org.apache.impala.catalog.metastore.CatalogHmsSyncToLatestEventIdTest
> org.apache.impala.catalog.metastore.EnableCatalogdHmsCacheFlagTest
> {code}
> which are failing, saying
> {code}
> Configurations invalidate_hms_cache_on_ddls and 
> enable_sync_to_latest_event_on_ddls can not be set to true at the same time
> {code}
> and which I assume are related. Please investigate these too, and fix if 
> appropriate.






[jira] [Commented] (IMPALA-11535) Skip events happen before manual REFRESH

2023-09-05 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762266#comment-17762266
 ] 

ASF subversion and git services commented on IMPALA-11535:
--

Commit 188c2d6379d0dc3f6fddcd307dcfa875fe754353 in impala's branch 
refs/heads/master from Sai Hemanth Gantasala
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=188c2d637 ]

IMPALA-12416: Fix test failures caused by
IMPALA-11535

Fixed the test failures in Java unit tests caused by incorrectly
setting the config 'enable_sync_to_latest_event_on_ddls' to true. This
flag has to be reset to its original value at the end of the test since
BackendConfig.INSTANCE is shared by all the FE tests. Also, increased
the hms polling interval to 10sec for the test_skipping_older_events()
end-to-end test to avoid flakiness.

Change-Id: I4930933dca849496bfbe475c8efc960d15fa57a8
Reviewed-on: http://gerrit.cloudera.org:8080/20454
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 


> Skip events happen before manual REFRESH
> 
>
> Key: IMPALA-11535
> URL: https://issues.apache.org/jira/browse/IMPALA-11535
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: Quanlong Huang
>Assignee: Sai Hemanth Gantasala
>Priority: Critical
>
> If the table has been manually refreshed, all of its events that happened before the 
> manual REFRESH can be skipped.
>  
> This happens when catalogd is lagging behind in processing events. When 
> processing an event, we can check whether a manual REFRESH was executed 
> after its eventTime. In that case, we don't need to process the event to 
> refresh anything. This helps catalogd catch up with HMS events quickly.






[jira] [Assigned] (IMPALA-12424) Allow third party extensibility for JniFrontend

2023-09-05 Thread Steve Carlin (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Carlin reassigned IMPALA-12424:
-

Assignee: Steve Carlin

> Allow third party extensibility for JniFrontend
> ---
>
> Key: IMPALA-12424
> URL: https://issues.apache.org/jira/browse/IMPALA-12424
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Backend
>Reporter: Steve Carlin
>Assignee: Steve Carlin
>Priority: Major
>
> The JniFrontend Java class is called through JNI by the backend. We should 
> allow a developer to create their own JniFrontend class if they want to use 
> their own planner to create an Impala request.






[jira] [Created] (IMPALA-12424) Allow third party extensibility for JniFrontend

2023-09-05 Thread Steve Carlin (Jira)
Steve Carlin created IMPALA-12424:
-

 Summary: Allow third party extensibility for JniFrontend
 Key: IMPALA-12424
 URL: https://issues.apache.org/jira/browse/IMPALA-12424
 Project: IMPALA
  Issue Type: Improvement
  Components: Backend
Reporter: Steve Carlin


The JniFrontend Java class is called through JNI by the backend. We should 
allow a developer to create their own JniFrontend class if they want to use 
their own planner to create an Impala request.
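
A minimal sketch of the kind of extensibility being requested, assuming a hypothetical planner interface and illustrative JniFrontend signatures (the real constructor and method shapes may differ):

{code:java}
import org.apache.impala.service.JniFrontend;  // package assumed

// ThirdPartyPlanner is hypothetical; it stands in for whatever planner a
// developer wants to plug in.
interface ThirdPartyPlanner {
  byte[] plan(byte[] serializedTQueryCtx) throws Exception;
}

public class CustomJniFrontend extends JniFrontend {
  private final ThirdPartyPlanner planner_;

  public CustomJniFrontend(byte[] serializedBackendCfg, ThirdPartyPlanner planner)
      throws Exception {
    super(serializedBackendCfg);  // assumed super constructor signature
    planner_ = planner;
  }

  @Override
  public byte[] createExecRequest(byte[] serializedTQueryCtx) throws Exception {
    // Produce a serialized TExecRequest with the external planner; the C++
    // backend would consume it exactly as it does today.
    return planner_.plan(serializedTQueryCtx);
  }
}
{code}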






[jira] [Created] (IMPALA-12423) Impala shell should allow a user to set up query options when the underlying protocol is strict_hs2_protocol

2023-09-05 Thread Fang-Yu Rao (Jira)
Fang-Yu Rao created IMPALA-12423:


 Summary: Impala shell should allow a user to set up query options 
when the underlying protocol is strict_hs2_protocol
 Key: IMPALA-12423
 URL: https://issues.apache.org/jira/browse/IMPALA-12423
 Project: IMPALA
  Issue Type: New Feature
Reporter: Fang-Yu Rao
Assignee: Fang-Yu Rao


Currently, when we use the Impala shell to connect to a service (e.g., 
HiveServer2) via the strict HS2 protocol, we are not able to execute the SET 
statement or to set the value of a query option, as shown in the following. 
It would be much more convenient if a user were at least able to set the value 
of a query option in the Impala shell when the Impala shell is used to connect 
to an external frontend that sends query plans to the Impala server for 
execution.
{code:java}
fangyurao@fangyu-upstream-dev:~$ impala-shell.sh -i 'localhost:11050' 
--strict_hs2_protocol
Starting Impala Shell with no authentication using Python 2.7.16
WARNING: Unable to track live progress with strict_hs2_protocol
LDAP password for fangyurao: 
Opened TCP connection to localhost:11050
Connected to localhost:11050
Server version: N/A
***
Welcome to the Impala shell.
(Impala Shell v4.3.0-SNAPSHOT (2f06a7b) built on Tue Sep  5 14:14:24 PDT 2023)

To see how Impala will plan to run your query without actually executing it, use
the EXPLAIN command. You can change the level of detail in the EXPLAIN output by
setting the EXPLAIN_LEVEL query option.
***
[localhost:11050] default> set;
Query options (defaults shown in []):
No options available.

Shell Options
WRITE_DELIMITED: False
VERBOSE: True
VERTICAL: False
LIVE_SUMMARY: False
OUTPUT_FILE: None
DELIMITER: \t
LIVE_PROGRESS: False

Variables:
No variables defined.
[localhost:11050] default> set num_nodes=2;
Unknown query option: num_nodes
Available query options, with their values (defaults shown in []):
Query options (defaults shown in []):
No options available.

Shell Options
WRITE_DELIMITED: False
VERBOSE: True
VERTICAL: False
LIVE_SUMMARY: False
OUTPUT_FILE: None
DELIMITER: \t
LIVE_PROGRESS: False
{code}






[jira] [Commented] (IMPALA-10798) Prototype a simple JSON File reader

2023-09-05 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762221#comment-17762221
 ] 

ASF subversion and git services commented on IMPALA-10798:
--

Commit 2f06a7b052cc95afcf4b0485cbc4028de33942e8 in impala's branch 
refs/heads/master from Eyizoha
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=2f06a7b05 ]

IMPALA-10798: Initial support for reading JSON files

A prototype of HdfsJsonScanner implemented on top of rapidjson, which
supports scanning data from split JSON files.

The scanning of JSON data is mainly completed by two parts working
together. The first part is the JsonParser responsible for parsing the
JSON object, which is implemented based on the SAX-style API of
rapidjson. It reads data from the char stream, parses it, and calls the
corresponding callback function when encountering the corresponding JSON
element. See the comments of the JsonParser class for more details.

The other part is the HdfsJsonScanner, which inherits from HdfsScanner
and provides callback functions for the JsonParser. The callback
functions are responsible for providing data buffers to the Parser and
converting and materializing the Parser's parsing results into RowBatch.
It should be noted that the parser returns numeric values as strings to
the scanner. The scanner uses the TextConverter class to convert the
strings to the desired types, similar to how the HdfsTextScanner works.
This is an advantage compared to using the numeric values provided by
rapidjson directly, as it eliminates concerns about inconsistencies in
converting decimals (e.g. losing precision).

Added a startup flag, enable_json_scanner, to be able to disable this
feature if we hit critical bugs in production.

Limitations
 - Multiline json objects are not fully supported yet. It is ok when
   each file has only one scan range. However, when a file has multiple
   scan ranges, there is a small probability of incomplete scanning of
   multiline JSON objects that span ScanRange boundaries (in such cases,
   parsing errors may be reported). For more details, please refer to
   the comments in the 'multiline_json.test'.
 - Compressed JSON files are not supported yet.
 - Complex types are not supported yet.

Tests
 - Most of the existing end-to-end tests can run on JSON format.
 - Add TestQueriesJsonTables in test_queries.py for testing multiline,
   malformed, and overflow in JSON.

Change-Id: I31309cb8f2d04722a0508b3f9b8f1532ad49a569
Reviewed-on: http://gerrit.cloudera.org:8080/19699
Reviewed-by: Quanlong Huang 
Tested-by: Impala Public Jenkins 
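
The two-part design described above (a SAX-style parser that emits raw string values through callbacks, and a scanner that converts and materializes them) can be sketched roughly as follows. This is only an illustration in Java with invented names; the actual implementation is C++ on top of rapidjson's SAX API.

{code:java}
// Invented names throughout; a toy illustration of the parser/scanner split.
interface ScannerCallbacks {
  void startRow();
  void field(String column, String rawValue);  // values arrive as raw strings
  void endRow();
}

class ToyScanner implements ScannerCallbacks {
  final java.util.List<String> rows = new java.util.ArrayList<>();
  private final StringBuilder current = new StringBuilder();

  @Override public void startRow() { current.setLength(0); }

  @Override public void field(String column, String rawValue) {
    // The scanner, not the parser, converts strings to typed values
    // (Impala uses TextConverter for this); Integer.parseInt stands in here.
    Object typed = column.equals("id") ? Integer.parseInt(rawValue) : rawValue;
    current.append(column).append('=').append(typed).append(' ');
  }

  @Override public void endRow() { rows.add(current.toString().trim()); }
}

public class JsonScanSketch {
  public static void main(String[] args) {
    ToyScanner scanner = new ToyScanner();
    scanner.startRow();
    scanner.field("id", "42");        // a real parser would emit these callbacks
    scanner.field("name", "impala");  // while reading a char stream
    scanner.endRow();
    System.out.println(scanner.rows); // [id=42 name=impala]
  }
}
{code}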


> Prototype a simple JSON File reader
> ---
>
> Key: IMPALA-10798
> URL: https://issues.apache.org/jira/browse/IMPALA-10798
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Shikha Asrani
>Assignee: Ye Zihao
>Priority: Major
>
> This prototype involves:
>  * Implementing front-end support for 'Select' from a table "stored as 
> JSONFILE".
>  * A JSON file scanner, using the Arrow library to read JSON files with 
> primitive data types, expandable to further complex types and 
> optimizations.
>  






[jira] [Updated] (IMPALA-11284) INSERT query with concat operator fails with 'Function not set in thrift node' error

2023-09-05 Thread Michael Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Smith updated IMPALA-11284:
---
Affects Version/s: Impala 4.1.0

> INSERT query with concat operator fails with 'Function not set in thrift 
> node' error
> 
>
> Key: IMPALA-11284
> URL: https://issues.apache.org/jira/browse/IMPALA-11284
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 4.1.0
>Reporter: Abhishek Rawat
>Assignee: Abhishek Rawat
>Priority: Critical
>
> *Steps to Reproduce:*
> {code:java}
> DROP TABLE t2;
> CREATE TABLE t2(c0 BOOLEAN, c1 STRING) STORED AS ICEBERG; 
> INSERT INTO t2(c0, c1) VALUES ( TRUE, ( 'abc' ||('927160245' || 'Q') ) );
> Error: Function not set in thrift node{code}
> Looks like a regression introduced by IMPALA-6590.
> fn_ was previously serialized during rewrite in:
> {code:java}
> treeToThriftHelper:FunctionCallExpr(Expr).treeToThriftHelper(TExpr) line: 866
> FunctionCallExpr(Expr).treeToThrift() line: 844 
> FeSupport.EvalExprWithoutRowBounded(Expr, TQueryCtx, int) line: 188
> LiteralExpr.createBounded(Expr, TQueryCtx, int) line: 210
> FoldConstantsRule.apply(Expr, Analyzer) line: 66
> ExprRewriter.applyRuleBottomUp(Expr, ExprRewriteRule, Analyzer) line: 85
> ExprRewriter.applyRuleRepeatedly(Expr, ExprRewriteRule, Analyzer) line: 71
> ExprRewriter.rewrite(Expr, Analyzer) line: 55   
> SelectList.rewriteExprs(ExprRewriter, Analyzer) line: 100
> SelectStmt.rewriteExprs(ExprRewriter) line: 1189
> ValuesStmt(SetOperationStmt).rewriteExprs(ExprRewriter) line: 467
> InsertStmt.rewriteExprs(ExprRewriter) line: 1119
> AnalysisContext.analyze(StmtMetadataLoader$StmtTableCache, 
> AuthorizationContext) line: 537       {code}
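
A paraphrased sketch of the serialization step in the stack above (not the exact Impala source), showing why an expression serialized during constant folding before its function is resolved trips the backend check:

{code:java}
// Paraphrase only; field and class names follow the stack trace above.
protected void treeToThriftHelper(TExpr container) {
  TExprNode msg = new TExprNode();
  // For a FunctionCallExpr, toThrift() must copy the function resolved during
  // analysis (fn_) into the node; if the constant-folding rewrite serializes
  // the tree while fn_ is still unset, the backend fails with
  // "Function not set in thrift node".
  toThrift(msg);
  container.addToNodes(msg);
  for (Expr child : getChildren()) {
    child.treeToThriftHelper(container);
  }
}
{code}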






[jira] [Updated] (IMPALA-12413) Make Iceberg tables created by Trino compatible with Impala

2023-09-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IMPALA-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Borók-Nagy updated IMPALA-12413:
---
Target Version: Impala 4.3.0

> Make Iceberg tables created by Trino compatible with Impala
> ---
>
> Key: IMPALA-12413
> URL: https://issues.apache.org/jira/browse/IMPALA-12413
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Zoltán Borók-Nagy
>Assignee: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
>
> Currently Iceberg tables created by Trino are not compatible with Impala, as 
> Trino doesn't set the storage_handler property or storage descriptors.
> It only denotes the table type via the table property 'table_type', which is 
> set to Iceberg.
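
A minimal sketch of the compatibility check this implies, assuming only HMS table parameters; the property names come from the description above, but the method itself is not Impala's actual detection code:

{code:java}
import java.util.Map;

public class IcebergTableDetector {
  // Returns true for tables that carry the Iceberg storage handler, as well as
  // Trino-created tables that only set table_type = 'ICEBERG'.
  static boolean isIcebergTable(Map<String, String> tblParams) {
    String handler = tblParams.get("storage_handler");
    if (handler != null && handler.toLowerCase().contains("iceberg")) return true;
    return "iceberg".equalsIgnoreCase(tblParams.get("table_type"));
  }
}
{code}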






[jira] [Created] (IMPALA-12422) Add interop tests with Trino

2023-09-05 Thread Jira
Zoltán Borók-Nagy created IMPALA-12422:
--

 Summary: Add interop tests with Trino
 Key: IMPALA-12422
 URL: https://issues.apache.org/jira/browse/IMPALA-12422
 Project: IMPALA
  Issue Type: Test
Reporter: Zoltán Borók-Nagy


IMPALA-12413 makes Impala able to deal with Iceberg tables created by Trino, but 
doesn't add tests for it because Trino is not yet part of the minicluster.

We need thorough interop testing between Impala and Trino.






[jira] [Assigned] (IMPALA-12413) Make Iceberg tables created by Trino compatible with Impala

2023-09-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IMPALA-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Borók-Nagy reassigned IMPALA-12413:
--

Assignee: Zoltán Borók-Nagy

> Make Iceberg tables created by Trino compatible with Impala
> ---
>
> Key: IMPALA-12413
> URL: https://issues.apache.org/jira/browse/IMPALA-12413
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Zoltán Borók-Nagy
>Assignee: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
>
> Currently Iceberg tables created by Trino are not compatible with Impala, as 
> Trino doesn't set the storage_handler property or storage descriptors.
> It only denotes the table type via the table property 'table_type', which is 
> set to Iceberg.






[jira] [Updated] (IMPALA-12421) Add tests for automatic metadata refresh for Iceberg tables

2023-09-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IMPALA-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Borók-Nagy updated IMPALA-12421:
---
Parent: IMPALA-10446
Issue Type: Sub-task  (was: Test)

> Add tests for automatic metadata refresh for Iceberg tables
> ---
>
> Key: IMPALA-12421
> URL: https://issues.apache.org/jira/browse/IMPALA-12421
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
>
> Currently there are no interop tests between Hive and Impala for automatic 
> metadata refresh.
> The mechanism depends on the fact that, for Iceberg tables in the HiveCatalog, 
> the table property 'metadata_location' is updated whenever someone 
> changes the Iceberg table.
> Impala eventually picks up the ALTER TABLE event and refreshes the Iceberg 
> table.
> We should add tests for this, so future enhancements in the event handler 
> retain automatic refresh of Iceberg tables.






[jira] [Created] (IMPALA-12421) Add tests for automatic metadata refresh for Iceberg tables

2023-09-05 Thread Jira
Zoltán Borók-Nagy created IMPALA-12421:
--

 Summary: Add tests for automatic metadata refresh for Iceberg 
tables
 Key: IMPALA-12421
 URL: https://issues.apache.org/jira/browse/IMPALA-12421
 Project: IMPALA
  Issue Type: Test
Reporter: Zoltán Borók-Nagy


Currently there are no interop tests between Hive and Impala for automatic 
metadata refresh.

The mechanism depends on the fact that, for Iceberg tables in the HiveCatalog, 
the table property 'metadata_location' is updated whenever someone changes the 
Iceberg table.

Impala eventually picks up the ALTER TABLE event and refreshes the Iceberg 
table.

We should add tests for this, so future enhancements in the event handler 
retain automatic refresh of Iceberg tables.
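
A rough sketch of the refresh trigger such tests should exercise; the method and parameter names are invented for illustration, and the real event-processor code is more involved:

{code:java}
import java.util.Map;

public class IcebergAlterEventSketch {
  // Called for an ALTER TABLE event with the table parameters before and after
  // the change.
  static boolean needsRefresh(Map<String, String> beforeParams,
                              Map<String, String> afterParams) {
    String oldLoc = beforeParams.get("metadata_location");
    String newLoc = afterParams.get("metadata_location");
    // Every Iceberg commit through the HiveCatalog moves 'metadata_location'
    // to a new metadata file, so a change here means the event processor
    // should refresh the table.
    return newLoc != null && !newLoc.equals(oldLoc);
  }
}
{code}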






[jira] [Commented] (IMPALA-11284) INSERT query with concat operator fails with 'Function not set in thrift node' error

2023-09-05 Thread Riza Suminto (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762120#comment-17762120
 ] 

Riza Suminto commented on IMPALA-11284:
---

Patch is available here: [https://gerrit.cloudera.org/c/18581/] 

> INSERT query with concat operator fails with 'Function not set in thrift 
> node' error
> 
>
> Key: IMPALA-11284
> URL: https://issues.apache.org/jira/browse/IMPALA-11284
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Reporter: Abhishek Rawat
>Assignee: Abhishek Rawat
>Priority: Critical
>
> *Steps to Reproduce:*
> {code:java}
> DROP TABLE t2;
> CREATE TABLE t2(c0 BOOLEAN, c1 STRING) STORED AS ICEBERG; 
> INSERT INTO t2(c0, c1) VALUES ( TRUE, ( 'abc' ||('927160245' || 'Q') ) );
> Error: Function not set in thrift node{code}
> Looks like a regression introduced by IMPALA-6590.
> fn_ was previously serialized during rewrite in:
> {code:java}
> treeToThriftHelper:FunctionCallExpr(Expr).treeToThriftHelper(TExpr) line: 866
> FunctionCallExpr(Expr).treeToThrift() line: 844 
> FeSupport.EvalExprWithoutRowBounded(Expr, TQueryCtx, int) line: 188
> LiteralExpr.createBounded(Expr, TQueryCtx, int) line: 210
> FoldConstantsRule.apply(Expr, Analyzer) line: 66
> ExprRewriter.applyRuleBottomUp(Expr, ExprRewriteRule, Analyzer) line: 85
> ExprRewriter.applyRuleRepeatedly(Expr, ExprRewriteRule, Analyzer) line: 71
> ExprRewriter.rewrite(Expr, Analyzer) line: 55   
> SelectList.rewriteExprs(ExprRewriter, Analyzer) line: 100
> SelectStmt.rewriteExprs(ExprRewriter) line: 1189
> ValuesStmt(SetOperationStmt).rewriteExprs(ExprRewriter) line: 467
> InsertStmt.rewriteExprs(ExprRewriter) line: 1119
> AnalysisContext.analyze(StmtMetadataLoader$StmtTableCache, 
> AuthorizationContext) line: 537       {code}






[jira] [Resolved] (IMPALA-12409) Don't allow EXTERNAL Iceberg tables to point another Iceberg table in Hive catalog

2023-09-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IMPALA-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Borók-Nagy resolved IMPALA-12409.

Fix Version/s: Impala 4.3.0
   Resolution: Fixed

> Don't allow EXTERNAL Iceberg tables to point another Iceberg table in Hive 
> catalog
> --
>
> Key: IMPALA-12409
> URL: https://issues.apache.org/jira/browse/IMPALA-12409
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog, Frontend
>Reporter: Zoltán Borók-Nagy
>Assignee: Zoltán Borók-Nagy
>Priority: Major
>  Labels: impala-iceberg
> Fix For: Impala 4.3.0
>
>
> We shouldn't allow users to create an EXTERNAL Iceberg table that points to 
> another Iceberg table. I.e. the following should be forbidden:
> {noformat}
> CREATE EXTERNAL TABLE ice_ext
> STORED BY ICEBERG
> TBLPROPERTIES ('iceberg.table_identifier'='db.tbl');{noformat}






[jira] [Comment Edited] (IMPALA-10120) Beeline hangs when connecting to coordinators

2023-09-05 Thread XiangYang (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762009#comment-17762009
 ] 

XiangYang edited comment on IMPALA-10120 at 9/5/23 9:32 AM:


Hi [~tangzhi], there is an ongoing commit: 
[https://gerrit.cloudera.org/c/20344/]. Sorry for being too busy to reply to the 
review recently; I'll try to complete it in the next few days.


was (Author: yx91490):
Hi [~tangzhi] , there is a onging commit: 
[https://gerrit.cloudera.org/c/20344/] , sorry for too busy to reply the review 
recently, I'll try to complete it in the next few days.

> Beeline hangs when connecting to coordinators
> -
>
> Key: IMPALA-10120
> URL: https://issues.apache.org/jira/browse/IMPALA-10120
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Quanlong Huang
>Assignee: XiangYang
>Priority: Critical
>
> Beeline is always hanging when connecting to a coordinator:
> {code:java}
> $ beeline -u "jdbc:hive2://localhost:21050/default;auth=noSasl"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/apache-hive-3.1.3000.7.2.1.0-287-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/hadoop-3.1.1.7.2.1.0-287/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console. Set system property 
> 'log4j2.debug' to show Log4j2 internal initialization logging.
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/apache-hive-3.1.3000.7.2.1.0-287-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/hadoop-3.1.1.7.2.1.0-287/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:21050/default;auth=noSasl
> Connected to: Impala (version 4.0.0-SNAPSHOT)
> Driver: Hive JDBC (version 3.1.3000.7.2.1.0-287)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> {code}
> Looking into the impalad log file, an invalid query option is being set:
> {code:java}
> I0901 15:41:14.577576 25325 TAcceptQueueServer.cpp:340] New connection to 
> server hiveserver2-frontend from client 
> I0901 15:41:14.577911 25597 impala-hs2-server.cc:300] Opening session: 
> 204a3f33cc8e28ea:d6a915ab96b26aa7 request username: 
> I0901 15:41:14.577970 25597 status.cc:129] Invalid query option: 
> set:hiveconf:hive.server2.thrift.resultset.default.fetch.size
> @  0x1cdba3d  impala::Status::Status()
> @  0x24c673f  impala::SetQueryOption()
> @  0x250c1d1  impala::ImpalaServer::OpenSession()
> @  0x2b0dc45  
> apache::hive::service::cli::thrift::TCLIServiceProcessor::process_OpenSession()
> @  0x2b0d993  
> apache::hive::service::cli::thrift::TCLIServiceProcessor::dispatchCall()
> @  0x2acd15a  
> impala::ImpalaHiveServer2ServiceProcessor::dispatchCall()
> @  0x1c8a483  apache::thrift::TDispatchProcessor::process()
> @  0x218ab4a  
> apache::thrift::server::TAcceptQueueServer::Task::run()
> @  0x218004a  impala::ThriftThread::RunRunnable()
> @  0x2181686  boost::_mfi::mf2<>::operator()()
> @  0x218151a  boost::_bi::list3<>::operator()<>()
> @  0x2181260  boost::_bi::bind_t<>::operator()()
> @  0x2181172  
> boost::detail::function::void_function_obj_invoker0<>::invoke()
> @  0x20fba57  boost::function0<>::operator()()
> @  0x26cb779  impala::Thread::SuperviseThread()
> @  0x26d3716  boost::_bi::list5<>::operator()<>()
> @  0x26d363a  boost::_bi::bind_t<>::operator()()
> @  0x26d35fb  boost::detail::thread_data<>::run()
> @  0x3eb7ae1  thread_proxy
> @ 0x7fc9443456b9  start_thread
> @ 0x7fc940e334dc  clone
> I0901 15:41:14.739985 25597 impala-hs2-server.cc:405] Opened session: 
> 204a3f33cc8e28ea:d6a915ab96b26aa7 effective username: 
> I0901 15:41:14.781677 25597 

[jira] [Commented] (IMPALA-10120) Beeline hangs when connecting to coordinators

2023-09-05 Thread XiangYang (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17762009#comment-17762009
 ] 

XiangYang commented on IMPALA-10120:


Hi [~tangzhi], there is an ongoing commit: 
[https://gerrit.cloudera.org/c/20344/]. Sorry for being too busy to reply to the 
review recently; I'll try to complete it in the next few days.

> Beeline hangs when connecting to coordinators
> -
>
> Key: IMPALA-10120
> URL: https://issues.apache.org/jira/browse/IMPALA-10120
> Project: IMPALA
>  Issue Type: Bug
>  Components: Clients
>Reporter: Quanlong Huang
>Assignee: XiangYang
>Priority: Critical
>
> Beeline is always hanging when connecting to a coordinator:
> {code:java}
> $ beeline -u "jdbc:hive2://localhost:21050/default;auth=noSasl"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/apache-hive-3.1.3000.7.2.1.0-287-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/hadoop-3.1.1.7.2.1.0-287/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default 
> configuration: logging only errors to the console. Set system property 
> 'log4j2.debug' to show Log4j2 internal initialization logging.
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/apache-hive-3.1.3000.7.2.1.0-287-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/home/quanlong/workspace/Impala/toolchain/cdp_components-4493826/hadoop-3.1.1.7.2.1.0-287/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Connecting to jdbc:hive2://localhost:21050/default;auth=noSasl
> Connected to: Impala (version 4.0.0-SNAPSHOT)
> Driver: Hive JDBC (version 3.1.3000.7.2.1.0-287)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> {code}
> Looking into the impalad log file, an invalid query option is being set:
> {code:java}
> I0901 15:41:14.577576 25325 TAcceptQueueServer.cpp:340] New connection to 
> server hiveserver2-frontend from client 
> I0901 15:41:14.577911 25597 impala-hs2-server.cc:300] Opening session: 
> 204a3f33cc8e28ea:d6a915ab96b26aa7 request username: 
> I0901 15:41:14.577970 25597 status.cc:129] Invalid query option: 
> set:hiveconf:hive.server2.thrift.resultset.default.fetch.size
> @  0x1cdba3d  impala::Status::Status()
> @  0x24c673f  impala::SetQueryOption()
> @  0x250c1d1  impala::ImpalaServer::OpenSession()
> @  0x2b0dc45  
> apache::hive::service::cli::thrift::TCLIServiceProcessor::process_OpenSession()
> @  0x2b0d993  
> apache::hive::service::cli::thrift::TCLIServiceProcessor::dispatchCall()
> @  0x2acd15a  
> impala::ImpalaHiveServer2ServiceProcessor::dispatchCall()
> @  0x1c8a483  apache::thrift::TDispatchProcessor::process()
> @  0x218ab4a  
> apache::thrift::server::TAcceptQueueServer::Task::run()
> @  0x218004a  impala::ThriftThread::RunRunnable()
> @  0x2181686  boost::_mfi::mf2<>::operator()()
> @  0x218151a  boost::_bi::list3<>::operator()<>()
> @  0x2181260  boost::_bi::bind_t<>::operator()()
> @  0x2181172  
> boost::detail::function::void_function_obj_invoker0<>::invoke()
> @  0x20fba57  boost::function0<>::operator()()
> @  0x26cb779  impala::Thread::SuperviseThread()
> @  0x26d3716  boost::_bi::list5<>::operator()<>()
> @  0x26d363a  boost::_bi::bind_t<>::operator()()
> @  0x26d35fb  boost::detail::thread_data<>::run()
> @  0x3eb7ae1  thread_proxy
> @ 0x7fc9443456b9  start_thread
> @ 0x7fc940e334dc  clone
> I0901 15:41:14.739985 25597 impala-hs2-server.cc:405] Opened session: 
> 204a3f33cc8e28ea:d6a915ab96b26aa7 effective username: 
> I0901 15:41:14.781677 25597 impala-hs2-server.cc:426] GetInfo(): 
> request=TGetInfoReq {
>   01: sessionHandle (struct) = TSessionHandle {
> 01: sessionId (struct) = THandleIdentifier {
>   01: guid (string) = "\xea(\x8e\xcc3?J \xa7j\xb2\x96\xab\x15\xa9\xd6",
>   02: 

[jira] [Updated] (IMPALA-12402) Add some configurations for CatalogdMetaProvider's cache_

2023-09-05 Thread Maxwell Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxwell Guo updated IMPALA-12402:
-
Attachment: (was: 
0001-IMPALA-12402-Add-some-configurations-for-CatalogdMet.patch)

> Add some configurations for CatalogdMetaProvider's cache_
> -
>
> Key: IMPALA-12402
> URL: https://issues.apache.org/jira/browse/IMPALA-12402
> Project: IMPALA
>  Issue Type: Improvement
>  Components: fe
>Reporter: Maxwell Guo
>Assignee: Maxwell Guo
>Priority: Minor
>  Labels: pull-request-available
>
> When the cluster contains many databases and tables (for example, more than 
> 10 tables), and we restart the impalad, the local cache_ of 
> CatalogdMetaProvider needs to go through a loading process.
> As we know, Google Guava cache's concurrencyLevel is set to 4 by default,
> but if there are many tables the loading process will take more time and 
> increase the probability of lock contention; see 
> [here|https://github.com/google/guava/blob/master/guava/src/com/google/common/cache/CacheBuilder.java#L437].
>  
> So we propose to add some configurations here, the first being the 
> concurrency level of the cache.
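
A minimal sketch of the proposed knob, assuming an invented flag name that is passed through to Guava's CacheBuilder (whose default concurrencyLevel is 4):

{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class CatalogdMetaProviderCacheSketch {
  // 'cacheConcurrencyLevel' stands in for a new startup flag; the name is
  // invented for illustration.
  static Cache<String, Object> buildCache(int cacheConcurrencyLevel, long maxEntries) {
    return CacheBuilder.newBuilder()
        // More segments reduce lock contention while many tables are loaded
        // concurrently after an impalad restart.
        .concurrencyLevel(cacheConcurrencyLevel)
        .maximumSize(maxEntries)
        .recordStats()
        .build();
  }
}
{code}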


