[jira] [Created] (IMPALA-12853) Collection types in FROM clause for Iceberg Metadata Tables

2024-02-28 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12853:
---

 Summary: Collection types in FROM clause for Iceberg Metadata 
Tables
 Key: IMPALA-12853
 URL: https://issues.apache.org/jira/browse/IMPALA-12853
 Project: IMPALA
  Issue Type: Bug
Reporter: Tamas Mate


Collection types of Iceberg metadata tables are not supported in the FROM 
clause; therefore it is not possible to reference 'item' or 'pos' in the case 
of arrays, for example.

Example query:
{code:java}
select delete_ids.item
from functional_parquet.iceberg_query_metadata.all_files, 
functional_parquet.iceberg_query_metadata.all_files.equality_ids delete_ids; 
{code}
When scanning metadata tables, entire rows are read from Iceberg, which means 
that the "top" level StructLikeRow is returned. But Impala only creates tuples 
and slots for the selected columns. This makes accessing the values inside a 
StructLikeRow difficult, because they cannot be reached through named 
accessors and have to be accessed by position.
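
The Iceberg API exposes these rows only positionally. A minimal Java sketch of 
what positional access looks like, assuming a {{StructLike}} row from a 
metadata table scan (the column ordinal EQUALITY_IDS_POS is made up for 
illustration):
{code:java}
import java.util.List;
import org.apache.iceberg.StructLike;

class StructLikeAccessExample {
  // Hypothetical ordinal of the 'equality_ids' column, for illustration only.
  static final int EQUALITY_IDS_POS = 3;

  @SuppressWarnings("unchecked")
  static List<Integer> readEqualityIds(StructLike row) {
    // StructLike has no by-name accessors; the caller must know the schema
    // ordinal and the Java class of the value up front.
    return (List<Integer>) row.get(EQUALITY_IDS_POS, List.class);
  }
}
{code}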






[jira] [Resolved] (IMPALA-12140) Fix confusing format in the side bar of official doc site

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12140.
-
Fix Version/s: Impala 4.3.0
   Resolution: Duplicate

It was fixed and deployed in IMPALA-11853

> Fix confusing format in the side bar of official doc site
> --
>
> Key: IMPALA-12140
> URL: https://issues.apache.org/jira/browse/IMPALA-12140
> Project: IMPALA
>  Issue Type: Improvement
>Reporter: XiangYang
>Assignee: Tamas Mate
>Priority: Major
> Fix For: Impala 4.3.0
>
> Attachments: image-2023-05-13-21-51-43-789.png
>
>
> Because the hierarchy is too deep, the side bar hides the main content on 
> some doc pages, such as:
> [https://impala.apache.org/docs/build/html/topics/impala_mt_dop.html]
> On my computer this page looks like this:
> !image-2023-05-13-21-51-43-789.png|width=800,height=590!
> I think we could make the side bar collapsible to fix this, or another 
> approach might work as well.






[jira] [Closed] (IMPALA-12030) TestIcebergTable.test_load failed in Ozone builds

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate closed IMPALA-12030.
---
Resolution: Cannot Reproduce

This has not happened for a while now, so I am closing it.

> TestIcebergTable.test_load failed in Ozone builds
> -
>
> Key: IMPALA-12030
> URL: https://issues.apache.org/jira/browse/IMPALA-12030
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>
> query_test/test_iceberg.py::TestIcebergTable::test_load started failing in 
> Ozone test runs.
> First failure: query_test.test_iceberg.TestIcebergTable.test_load. It shows 
> up in later runs but Jenkins doesn't always identify the failure.
> {code:java}
> query_test/test_iceberg.py:842: in test_load
> self.run_test_case('QueryTest/iceberg-load', vector, 
> use_db=unique_database)
> common/impala_test_suite.py:773: in run_test_case
> self.__verify_exceptions(test_section['CATCH'], str(e), use_db)
> common/impala_test_suite.py:557: in __verify_exceptions
> (expected_str, actual_str)
> E   AssertionError: Unexpected exception string. Expected: minimum memory 
> reservation is greater than memory available to the query for buffer 
> reservations
> E   Not found in actual: ImpalaBeeswaxException: INNER EXCEPTION: <class 
> 'beeswaxd.ttypes.BeeswaxException'> MESSAGE: AnalysisException: INPATH 
> location 
> 'ofs://localhost:9862/tmp/test_load_a61184e9/parquet/0-0-data-gfurnstahl_20220906113044_157fc172-f5d3-4c70-8653-fff150b6136a-job_16619542960420_0002-1-1.parquet'
>  does not exist. {code}






[jira] [Assigned] (IMPALA-11265) Iceberg tables have a large memory footprint in catalog cache

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-11265:
---

Assignee: (was: Tamas Mate)

> Iceberg tables have a large memory footprint in catalog cache
> -
>
> Key: IMPALA-11265
> URL: https://issues.apache.org/jira/browse/IMPALA-11265
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Quanlong Huang
>Priority: Major
>  Labels: impala-iceberg
>
> During the investigation of IMPALA-11260, I found that the cache item size 
> of a (IcebergApiTableCacheKey, org.apache.iceberg.BaseTable) pair could be 
> 30MB. For instance, here are the cache items of the Iceberg table 
> {{functional_parquet.iceberg_partitioned}}:
> {code:java}
> weigh=3792, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$TableCacheKey, 
> valueClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$TableMetaRefImpl
> weigh=14960, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$IcebergMetaCacheKey, 
> valueClass=class org.apache.impala.thrift.TPartialTableInfo
> weigh=30546992, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$IcebergApiTableCacheKey, 
> valueClass=class org.apache.iceberg.BaseTable
> weigh=496, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=496, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=496, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=512, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=472, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$PartitionListCacheKey, 
> valueClass=class java.util.ArrayList
> weigh=10328, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$PartitionCacheKey, 
> valueClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$PartitionMetadataImpl{code}
> Note that this table has just 20 rows, yet the total memory footprint is 
> 30MB.
> For a normal partitioned Parquet table, the memory footprint is not that 
> large. For instance, here are the cache items for 
> {{functional_parquet.alltypes}}:
> {code:java}
> weigh=4216, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$TableCacheKey, 
> valueClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$TableMetaRefImpl
> weigh=480, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=472, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=480, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=488, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> weigh=496, keyClass=class 
> org.apache.impala.catalog.local.CatalogdMetaProvider$ColStatsCacheKey, 
> valueClass=class org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj
> 
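
The "weigh=" numbers above come from a size-weighted cache. A hedged Java 
sketch of the mechanism, using a Guava cache with a per-entry weigher (the 
weights and the estimateSizeBytes() helper are illustrative assumptions, not 
Impala's actual accounting):
{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class WeighedCacheExample {
  static Cache<String, Object> buildCache(long maxWeightBytes) {
    return CacheBuilder.newBuilder()
        .maximumWeight(maxWeightBytes)
        // The weigher runs once per inserted entry; a single 30MB BaseTable
        // value would dominate eviction decisions in a cache sized this way.
        .weigher((String key, Object value) -> estimateSizeBytes(key, value))
        .build();
  }

  // Placeholder: real implementations estimate the retained heap size of the
  // key/value pair, which is what the weigh= figures above represent.
  static int estimateSizeBytes(String key, Object value) {
    return key.length() * 2 + 64;
  }
}
{code}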

[jira] [Assigned] (IMPALA-11991) Iceberg Metadata querying with time travel

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-11991:
---

Assignee: (was: Tamas Mate)

> Iceberg Metadata querying with time travel
> --
>
> Key: IMPALA-11991
> URL: https://issues.apache.org/jira/browse/IMPALA-11991
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Priority: Major
>
> The initial version of metadata querying does not support time travel; we 
> should support it with metadata tables as well.






[jira] [Assigned] (IMPALA-12809) Iceberg metadata table scanner can be scheduled to executors

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12809:
---

Assignee: Gabor Kaszab

> Iceberg metadata table scanner can be scheduled to executors
> 
>
> Key: IMPALA-12809
> URL: https://issues.apache.org/jira/browse/IMPALA-12809
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Gabor Kaszab
>Priority: Major
>  Labels: impala-iceberg
>
> On larger clusters the Iceberg metadata scanner can be scheduled to 
> executors, for example during a join. The fragment in this case will fail a 
> precondition check, because either the frontend_ object or the table will 
> not be present. Setting {{exec_at_coord}} to true is not enough; these 
> fragments should be scheduled to the {{coord_only_executor_group}}.
> Additionally, setting NUM_NODES=1 should be a viable workaround.
> Reproducible with the following local dev Impala cluster:
> {{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
> --use_exclusive_coordinators}}
> and query:
> {{select count(b.parent_id) from 
> functional_parquet.iceberg_query_metadata.history a}}
> {{join functional_parquet.iceberg_query_metadata.history b on a.snapshot_id = 
> b.snapshot_id;}}






[jira] [Assigned] (IMPALA-11876) Support 'fixed' data type in AVRO

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-11876:
---

Assignee: (was: Tamas Mate)

> Support 'fixed' data type in AVRO
> -
>
> Key: IMPALA-11876
> URL: https://issues.apache.org/jira/browse/IMPALA-11876
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Reporter: Noemi Pap-Takacs
>Priority: Major
>  Labels: avro, impala-iceberg
>
> Impala supports the 'decimal' type in AVRO. 'Decimal' is a logical type that 
> can annotate either the 'bytes' or the 'fixed' type underneath. Impala can 
> read 'bytes' but not 'fixed'.
> Iceberg writes the 'decimal' type with an underlying 'fixed' type. This 
> means that Impala is currently unable to support 'decimal' in AVRO tables 
> written by Iceberg. In order to fully support all implementations of 
> 'decimal', the 'fixed' type must be supported.
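
A short sketch of the two physical encodings, using the Avro Java API (the 
schema name and fixed size are illustrative; decimal(9,2) fits in a 4-byte 
fixed):
{code:java}
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

class AvroDecimalSchemas {
  public static void main(String[] args) {
    // decimal annotating 'bytes' -- the flavor Impala can read today.
    Schema bytesDecimal =
        LogicalTypes.decimal(9, 2).addToSchema(Schema.create(Schema.Type.BYTES));

    // decimal annotating a 4-byte 'fixed' -- the flavor Iceberg writes and
    // Impala cannot read yet.
    Schema fixedDecimal = LogicalTypes.decimal(9, 2)
        .addToSchema(Schema.createFixed("dec_9_2", null, "example", 4));

    System.out.println(bytesDecimal.toString(true));
    System.out.println(fixedDecimal.toString(true));
  }
}
{code}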






[jira] [Assigned] (IMPALA-12497) Use C++ Avro implementation instead of C

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12497:
---

Assignee: (was: Tamas Mate)

> Use C++ Avro implementation instead of C
> 
>
> Key: IMPALA-12497
> URL: https://issues.apache.org/jira/browse/IMPALA-12497
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> The C++ Avro library is part of the native toolchain; we should use that 
> instead of the C implementation.






[jira] [Assigned] (IMPALA-12651) Add support to BINARY type Iceberg Metadata table columns

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12651:
---

Assignee: (was: Tamas Mate)

> Add support to BINARY type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12651
> URL: https://issues.apache.org/jira/browse/IMPALA-12651
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Impala should be able to read BINARY type columns from Iceberg metadata 
> tables as strings; additionally, this should be allowed when reading these 
> types from within complex types.






[jira] [Resolved] (IMPALA-10966) query_test.test_scanners.TestIceberg.test_iceberg_query multiple failures in an ASAN run

2024-02-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-10966.
-
Resolution: Cannot Reproduce

This hasn't happened for a while now, so I am closing it.

> query_test.test_scanners.TestIceberg.test_iceberg_query multiple failures in 
> an ASAN run
> 
>
> Key: IMPALA-10966
> URL: https://issues.apache.org/jira/browse/IMPALA-10966
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.1.0
>Reporter: Laszlo Gaal
>Assignee: Tamas Mate
>Priority: Critical
>  Labels: broken-build, iceberg
>
> The actual failures look pretty similar.
> Pattern 1:
> {code}
> query_test/test_scanners.py:357: in test_iceberg_query 
> self.run_test_case('QueryTest/iceberg-query', vector) 
> common/impala_test_suite.py:713: in run_test_case 
> self.__verify_results_and_errors(vector, test_section, result, use_db) 
> common/impala_test_suite.py:549: in __verify_results_and_errors 
> replace_filenames_with_placeholder) common/test_result_verifier.py:469: in 
> verify_raw_results VERIFIER_MAP[verifier](expected, actual) 
> common/test_result_verifier.py:278: in verify_query_result_is_equal 
> assert expected_results == actual_results E   assert Comparing 
> QueryTestResults (expected vs actual): E 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/data/action=click/4-4-0982a5d3-48c0-4dd0-ab87-d24190894251-0.orc',regex:.*,''
>  == 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/data/action=click/4-4-0982a5d3-48c0-4dd0-ab87-d24190894251-0.orc','460B',''
>  E 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/data/action=click/00014-14-dc56d2c8-e285-428d-b81e-f3d07ec53c12-0.orc',regex:.*,''
>  == 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/data/action=click/00014-14-dc56d2c8-e285-428d-b81e-f3d07ec53c12-0.orc','460B',''
> [. matching result lines elided.]
> E 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/metadata/version-hint.text',regex:.*,''
>  != 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/metadata/v3.metadata.json','2.21KB',''
>  
> E None != 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/metadata/v4.metadata.json','2.44KB',''
>  
> E None != 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/metadata/v5.metadata.json','2.66KB',''
>  
> E None != 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/metadata/version-hint.text','1B',''
>  
> E Number of rows returned (expected vs actual): 25 != 28
> {code}
> Pattern 2:
> {code}
> query_test/test_scanners.py:357: in test_iceberg_query
> self.run_test_case('QueryTest/iceberg-query', vector)
> common/impala_test_suite.py:713: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:549: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:469: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:278: in verify_query_result_is_equal
> assert expected_results == actual_results
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/data/action=click/4-4-0982a5d3-48c0-4dd0-ab87-d24190894251-0.orc',regex:.*,''
>  == 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/data/action=click/4-4-0982a5d3-48c0-4dd0-ab87-d24190894251-0.orc','460B',''
> [.matching result lines elided...]
> E 
> 'hdfs://localhost:20500/test-warehouse/iceberg_test/hadoop_catalog/iceberg_partitioned_orc/functional_parquet/iceberg_partitioned_orc/metadata/version-hint.text',regex:.*,''
>  != 
> 

[jira] [Updated] (IMPALA-12836) Aggregation over a STRUCT throws IllegalStateException

2024-02-22 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12836:

Summary: Aggregation over a STRUCT throws IllegalStateException  (was: 
Aggregation over a STRUCT IllegalStateException)

> Aggregation over a STRUCT throws IllegalStateException
> --
>
> Key: IMPALA-12836
> URL: https://issues.apache.org/jira/browse/IMPALA-12836
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>
> A Preconditions check will fail when trying to aggregate over a struct.
> Repro query:
> {code}
> Query: select int_struct_col, sum(id) from functional_parquet.allcomplextypes 
> group by int_struct_col
> Query submitted at: 2024-02-22 13:08:20 (Coordinator: 
> http://tmate-desktop:25000)
> ERROR: IllegalStateException: null
> {code}
> {code:java}
> I0222 13:05:21.762225 10675 jni-util.cc:302] 
> 3c44b4fafbbcb6b5:eee03297] java.lang.IllegalStateException
>         at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:486)
>         at 
> org.apache.impala.analysis.SlotRef.addStructChildrenAsSlotRefs(SlotRef.java:268)
>         at org.apache.impala.analysis.SlotRef.<init>(SlotRef.java:93)
>         at 
> org.apache.impala.analysis.AggregateInfoBase.createTupleDesc(AggregateInfoBase.java:135)
>         at 
> org.apache.impala.analysis.AggregateInfoBase.createTupleDescs(AggregateInfoBase.java:101)
>         at 
> org.apache.impala.analysis.AggregateInfo.create(AggregateInfo.java:150)
>         at 
> org.apache.impala.analysis.AggregateInfo.create(AggregateInfo.java:171)
>         at 
> org.apache.impala.analysis.MultiAggregateInfo.analyze(MultiAggregateInfo.java:301)
>         at 
> org.apache.impala.analysis.SelectStmt$SelectAnalyzer.buildAggregateExprs(SelectStmt.java:1149)
>         at 
> org.apache.impala.analysis.SelectStmt$SelectAnalyzer.analyze(SelectStmt.java:355)
>         at 
> org.apache.impala.analysis.SelectStmt$SelectAnalyzer.access$100(SelectStmt.java:282)
>         at org.apache.impala.analysis.SelectStmt.analyze(SelectStmt.java:274)
>         at 
> org.apache.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:545)
>         at 
> org.apache.impala.analysis.AnalysisContext.analyzeAndAuthorize(AnalysisContext.java:492)
>         at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2364)
>         at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2110)
>         at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1883)
>         at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169) 
> {code}
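
The "IllegalStateException: null" text follows from the one-argument 
Preconditions.checkState() call seen in the stack trace, which carries no 
message. A minimal sketch:
{code:java}
import com.google.common.base.Preconditions;

class CheckStateExample {
  public static void main(String[] args) {
    try {
      // Same call shape as SlotRef.java:268: no message argument.
      Preconditions.checkState(false);
    } catch (IllegalStateException e) {
      System.out.println("IllegalStateException: " + e.getMessage()); // null
    }
  }
}
{code}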






[jira] [Created] (IMPALA-12836) Aggregation over a STRUCT IllegalStateException

2024-02-22 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12836:
---

 Summary: Aggregation over a STRUCT IllegalStateException
 Key: IMPALA-12836
 URL: https://issues.apache.org/jira/browse/IMPALA-12836
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate


A Preconditions check will fail when trying to aggregate over a struct.

Repro query:
{code}
Query: select int_struct_col, sum(id) from functional_parquet.allcomplextypes 
group by int_struct_col
Query submitted at: 2024-02-22 13:08:20 (Coordinator: 
http://tmate-desktop:25000)
ERROR: IllegalStateException: null
{code}

{code:java}
I0222 13:05:21.762225 10675 jni-util.cc:302] 3c44b4fafbbcb6b5:eee03297] 
java.lang.IllegalStateException
        at 
com.google.common.base.Preconditions.checkState(Preconditions.java:486)
        at 
org.apache.impala.analysis.SlotRef.addStructChildrenAsSlotRefs(SlotRef.java:268)
        at org.apache.impala.analysis.SlotRef.<init>(SlotRef.java:93)
        at 
org.apache.impala.analysis.AggregateInfoBase.createTupleDesc(AggregateInfoBase.java:135)
        at 
org.apache.impala.analysis.AggregateInfoBase.createTupleDescs(AggregateInfoBase.java:101)
        at 
org.apache.impala.analysis.AggregateInfo.create(AggregateInfo.java:150)
        at 
org.apache.impala.analysis.AggregateInfo.create(AggregateInfo.java:171)
        at 
org.apache.impala.analysis.MultiAggregateInfo.analyze(MultiAggregateInfo.java:301)
        at 
org.apache.impala.analysis.SelectStmt$SelectAnalyzer.buildAggregateExprs(SelectStmt.java:1149)
        at 
org.apache.impala.analysis.SelectStmt$SelectAnalyzer.analyze(SelectStmt.java:355)
        at 
org.apache.impala.analysis.SelectStmt$SelectAnalyzer.access$100(SelectStmt.java:282)
        at org.apache.impala.analysis.SelectStmt.analyze(SelectStmt.java:274)
        at 
org.apache.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:545)
        at 
org.apache.impala.analysis.AnalysisContext.analyzeAndAuthorize(AnalysisContext.java:492)
        at 
org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2364)
        at 
org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2110)
        at 
org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1883)
        at 
org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169) 
{code}






[jira] [Updated] (IMPALA-12809) Iceberg metadata table scanner can be scheduled to executors

2024-02-12 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12809:

Description: 
On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.

Additionally, setting NUM_NODES=1 should be a viable workaround.

Reproducible with the following local dev Impala cluster:
{{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
--use_exclusive_coordinators}}
and query:
{{select count(b.parent_id) from 
functional_parquet.iceberg_query_metadata.history a}}
{{join functional_parquet.iceberg_query_metadata.history b on a.snapshot_id = 
b.snapshot_id;}}

  was:
On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.
 
Additionally, setting NUM_NODES=1 should be a viable workaround.

Reproducible with the following local dev Impala cluster:
{{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
--use_exclusive_coordinators}}
and query:
{{select count(b.parent_id) from 
functional_parquet.iceberg_query_metadata.history a
join functional_parquet.iceberg_query_metadata.history b on a.snapshot_id = 
b.snapshot_id;}}


> Iceberg metadata table scanner can be scheduled to executors
> 
>
> Key: IMPALA-12809
> URL: https://issues.apache.org/jira/browse/IMPALA-12809
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> On larger clusters the Iceberg metadata scanner can be scheduled to 
> executors, for example during a join. The fragment in this case will fail a 
> precondition check, because either the frontend_ object or the table will 
> not be present. Setting {{exec_at_coord}} to true is not enough; these 
> fragments should be scheduled to the {{coord_only_executor_group}}.
> Additionally, setting NUM_NODES=1 should be a viable workaround.
> Reproducible with the following local dev Impala cluster:
> {{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
> --use_exclusive_coordinators}}
> and query:
> {{select count(b.parent_id) from 
> functional_parquet.iceberg_query_metadata.history a}}
> {{join functional_parquet.iceberg_query_metadata.history b on a.snapshot_id = 
> b.snapshot_id;}}






[jira] [Updated] (IMPALA-12809) Iceberg metadata table scanner can be scheduled to executors

2024-02-12 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12809:

Description: 
On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.
 
Additionally, setting NUM_NODES=1 should be a viable workaround.

Reproducible with the following local dev Impala cluster:
{{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
--use_exclusive_coordinators}}
and query:
{{select count(b.parent_id) from 
functional_parquet.iceberg_query_metadata.history a
join functional_parquet.iceberg_query_metadata.history b on a.snapshot_id = 
b.snapshot_id;}}

  was:
On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.
 
Additionally, setting NUM_NODES=1 should be a viable workaround.

Reproducible with the following local dev Impala cluster:
{{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
--use_exclusive_coordinators}}


> Iceberg metadata table scanner can be scheduled to executors
> 
>
> Key: IMPALA-12809
> URL: https://issues.apache.org/jira/browse/IMPALA-12809
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> On larger clusters the Iceberg metadata scanner can be scheduled to 
> executors, for example during a join. The fragment in this case will fail a 
> precondition check, because either the frontend_ object or the table will 
> not be present. Setting {{exec_at_coord}} to true is not enough; these 
> fragments should be scheduled to the {{coord_only_executor_group}}.
>  
> Additionally, setting NUM_NODES=1 should be a viable workaround.
> Reproducible with the following local dev Impala cluster:
> {{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
> --use_exclusive_coordinators}}
> and query:
> {{select count(b.parent_id) from 
> functional_parquet.iceberg_query_metadata.history a
> join functional_parquet.iceberg_query_metadata.history b on a.snapshot_id = 
> b.snapshot_id;}}






[jira] [Updated] (IMPALA-12809) Iceberg metadata table scanner can be scheduled to executors

2024-02-12 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12809:

Description: 
On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.
 
Additionally, setting NUM_NODES=1 should be a viable workaround.

Reproducible with the following local dev Impala cluster:
{{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
--use_exclusive_coordinators}}

  was:
On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.
 
Additionally, setting NUM_NODES=1 should be a viable workaround.


> Iceberg metadata table scanner can be scheduled to executors
> 
>
> Key: IMPALA-12809
> URL: https://issues.apache.org/jira/browse/IMPALA-12809
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> On larger clusters the Iceberg metadata scanner can be scheduled to 
> executors, for example during a join. The fragment in this case will fail a 
> precondition check, because either the frontend_ object or the table will 
> not be present. Setting {{exec_at_coord}} to true is not enough; these 
> fragments should be scheduled to the {{coord_only_executor_group}}.
>  
> Additionally, setting NUM_NODES=1 should be a viable workaround.
> Reproducible with the following local dev Impala cluster:
> {{./bin/start-impala-cluster.py --cluster_size=3 --num_coordinators=1 
> --use_exclusive_coordinators}}






[jira] [Updated] (IMPALA-12809) Iceberg metadata table scanner can be scheduled to executors

2024-02-12 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12809:

Labels: impala-iceberg  (was: )

> Iceberg metadata table scanner can be scheduled to executors
> 
>
> Key: IMPALA-12809
> URL: https://issues.apache.org/jira/browse/IMPALA-12809
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> On larger clusters the Iceberg metadata scanner can be scheduled to 
> executors, for example during a join. The fragment in this case will fail a 
> precondition check, because either the frontend_ object or the table will 
> not be present. Setting {{exec_at_coord}} to true is not enough; these 
> fragments should be scheduled to the {{coord_only_executor_group}}.
>  
> Additionally, setting NUM_NODES=1 should be a viable workaround.






[jira] [Created] (IMPALA-12809) Iceberg metadata table scanner can be scheduled to executors

2024-02-12 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12809:
---

 Summary: Iceberg metadata table scanner can be scheduled to 
executors
 Key: IMPALA-12809
 URL: https://issues.apache.org/jira/browse/IMPALA-12809
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate


On larger clusters the Iceberg metadata scanner can be scheduled to executors, 
for example during a join. The fragment in this case will fail a precondition 
check, because either the frontend_ object or the table will not be present. 
Setting {{exec_at_coord}} to true is not enough; these fragments should be 
scheduled to the {{coord_only_executor_group}}.
 
Additionally, setting NUM_NODES=1 should be a viable workaround.






[jira] [Closed] (IMPALA-12773) Log the snapshot id and add it to the plan node for Iceberg queries

2024-01-31 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate closed IMPALA-12773.
---
Resolution: Duplicate

> Log the snapshot id and add it to the plan node for Iceberg queries 
> 
>
> Key: IMPALA-12773
> URL: https://issues.apache.org/jira/browse/IMPALA-12773
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> For supportability purposes Impala should track the snapshot id that will be 
> used to query the Iceberg table. This could help identify problems like:
> - whether two engines are reading from the same snapshot
> - which snapshot was read by a specific query, which is useful to see if 
> there were any changes between query executions
> The snapshot id could be logged at INFO level and added to the query plan 
> tree as an attribute of the SCAN node.
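
A hedged Java sketch of the proposal using the Iceberg API (the logger wiring 
and the method are illustrative, not Impala's actual frontend code):
{code:java}
import java.util.logging.Logger;
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

class SnapshotIdLogging {
  private static final Logger LOG =
      Logger.getLogger(SnapshotIdLogging.class.getName());

  // Resolve the snapshot a scan will read and log it at INFO level; the same
  // id could be attached to the SCAN node in the plan tree.
  static Long snapshotIdForScan(Table table) {
    Snapshot snapshot = table.currentSnapshot(); // null for an empty table
    Long id = (snapshot == null) ? null : snapshot.snapshotId();
    LOG.info("Scanning Iceberg table " + table.name() + " at snapshot " + id);
    return id;
  }
}
{code}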






[jira] [Updated] (IMPALA-12773) Log the snapshot id and add it to the plan node for Iceberg queries

2024-01-30 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12773:

Labels: impala-iceberg  (was: )

> Log the snapshot id and add it to the plan node for Iceberg queries 
> 
>
> Key: IMPALA-12773
> URL: https://issues.apache.org/jira/browse/IMPALA-12773
> Project: IMPALA
>  Issue Type: Improvement
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> For supportability purposes Impala should track the snapshot id that will be 
> used to query the Iceberg table. This could help identify problems like:
> - whether two engines are reading from the same snapshot
> - which snapshot was read by a specific query, which is useful to see if 
> there were any changes between query executions
> The snapshot id could be logged at INFO level and added to the query plan 
> tree as an attribute of the SCAN node.






[jira] [Created] (IMPALA-12773) Log the snapshot id and add it to the plan node for Iceberg queries

2024-01-30 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12773:
---

 Summary: Log the snapshot id and add it to the plan node for Iceberg 
queries 
 Key: IMPALA-12773
 URL: https://issues.apache.org/jira/browse/IMPALA-12773
 Project: IMPALA
  Issue Type: Improvement
  Components: Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate


For supportability purposes Impala should track the snapshot id that will be 
used to query the Iceberg table. This could help identify problems like:
- whether two engines are reading from the same snapshot
- which snapshot was read by a specific query, which is useful to see if 
there were any changes between query executions

The snapshot id could be logged at INFO level and added to the query plan 
tree as an attribute of the SCAN node.






[jira] [Updated] (IMPALA-12764) LIMIT is not evaluated during Iceberg metadata table querying

2024-01-29 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12764:

Summary: LIMIT is not evaluated during Iceberg metadata table querying  
(was: LIMIT is not used during Iceberg metadata table querying)

> LIMIT is not evaluated during Iceberg metadata table querying
> -
>
> Key: IMPALA-12764
> URL: https://issues.apache.org/jira/browse/IMPALA-12764
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> ReachedLimit() is not evaluated when iterating over the results.
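
A language-agnostic sketch of the fix in Java (Impala's scanner code is C++; 
the names below are illustrative): the limit has to be tested inside the 
row-materialization loop, not only after it.
{code:java}
import java.util.Iterator;

class LimitCheckSketch {
  long rowsReturned = 0;
  long limit = 10; // LIMIT 10, for illustration; -1 would mean no limit

  boolean reachedLimit() {
    return limit >= 0 && rowsReturned >= limit;
  }

  void materializeRows(Iterator<Object> results) {
    // Without the reachedLimit() test the loop drains the whole metadata
    // table even when only a few rows were requested.
    while (results.hasNext() && !reachedLimit()) {
      results.next(); // materialize the row into the output batch
      ++rowsReturned;
    }
  }
}
{code}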






[jira] [Created] (IMPALA-12764) LIMIT is not used during Iceberg metadata table querying

2024-01-29 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12764:
---

 Summary: LIMIT is not used during Iceberg metadata table querying
 Key: IMPALA-12764
 URL: https://issues.apache.org/jira/browse/IMPALA-12764
 Project: IMPALA
  Issue Type: Bug
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate









[jira] [Updated] (IMPALA-12741) Iceberg metadata query throws NPE when the base table is not loaded

2024-01-22 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12741:

Description: This exception should be handled more gracefully than throwing 
an NPE.  (was: This exception should be handled more gracefully that throwing 
an NPE.)

> Iceberg metadata query throws NPE when the base table is not loaded
> ---
>
> Key: IMPALA-12741
> URL: https://issues.apache.org/jira/browse/IMPALA-12741
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>
> This exception should be handled more gracefully than throwing an NPE.
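
A hedged sketch of what "more gracefully" could look like (the class and the 
error text are illustrative, not Impala's actual frontend code):
{code:java}
class MetadataTableAnalysisSketch {
  // Fail analysis with a user-facing message instead of letting a null base
  // table surface as a NullPointerException later on.
  static void checkBaseTableLoaded(Object baseTable, String tableName) {
    if (baseTable == null) {
      throw new IllegalArgumentException("Iceberg table " + tableName
          + " is not loaded yet; metadata tables require the base table.");
    }
  }
}
{code}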






[jira] [Created] (IMPALA-12741) Iceberg metadata query throws NPE when the base table is not loaded

2024-01-22 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12741:
---

 Summary: Iceberg metadata query throws NPE when the base table is 
not loaded
 Key: IMPALA-12741
 URL: https://issues.apache.org/jira/browse/IMPALA-12741
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate


This exception should be handled more gracefully that throwing an NPE.






[jira] [Updated] (IMPALA-12728) Test failure with 'Bad data encounted in numeric data'

2024-01-18 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12728:

Description: 
During our internal JDK11 build 
{{custom_cluster.test_executor_groups.TestExecutorGroups.test_query_cpu_count_divisor_default}}
 failed with {{Bad data encounted in numeric data}}.

Attached are the complete catalogd logs and the trimmed HMS logs.

*Error Message*
{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException:  Query 
aborted:InternalException: Error adding partitions CAUSED BY: CatalogException: 
Unable to create a metastore event CAUSED BY: MetastoreNotificationException: 
java.lang.RuntimeException: org.apache.thrift.protocol.TProtocolException: Bad 
data encounted in numeric data CAUSED BY: RuntimeException: 
org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
data CAUSED BY: TProtocolException: Bad data encounted in numeric data
{code}

*Stacktrace*
{code:none}
custom_cluster/test_executor_groups.py:947: in 
test_query_cpu_count_divisor_default
"Verdict: Match", "CpuAsk: 10"])
custom_cluster/test_executor_groups.py:880: in _run_query_and_verify_profile
result = self.execute_query_expect_success(self.client, query)
common/impala_test_suite.py:944: in wrapper
return function(*args, **kwargs)
common/impala_test_suite.py:952: in execute_query_expect_success
result = cls.__execute_query(impalad_client, query, query_options, user)
common/impala_test_suite.py:1069: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:218: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:191: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:369: in __execute_query
self.wait_for_finished(handle)
beeswax/impala_beeswax.py:390: in wait_for_finished
raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EQuery aborted:InternalException: Error adding partitions
E   CAUSED BY: CatalogException: Unable to create a metastore event
E   CAUSED BY: MetastoreNotificationException: java.lang.RuntimeException: 
org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
data
E   CAUSED BY: RuntimeException: org.apache.thrift.protocol.TProtocolException: 
Bad data encounted in numeric data
E   CAUSED BY: TProtocolException: Bad data encounted in numeric data
{code}

  was:
During our internal JDK11 build 
{{custom_cluster.test_executor_groups.TestExecutorGroups.test_query_cpu_count_divisor_default}}
 failed with {{Bad data encounted in numeric data}}.

*Error Message*
{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException:  Query 
aborted:InternalException: Error adding partitions CAUSED BY: CatalogException: 
Unable to create a metastore event CAUSED BY: MetastoreNotificationException: 
java.lang.RuntimeException: org.apache.thrift.protocol.TProtocolException: Bad 
data encounted in numeric data CAUSED BY: RuntimeException: 
org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
data CAUSED BY: TProtocolException: Bad data encounted in numeric data
{code}

*Stacktrace*
{code:none}
custom_cluster/test_executor_groups.py:947: in 
test_query_cpu_count_divisor_default
"Verdict: Match", "CpuAsk: 10"])
custom_cluster/test_executor_groups.py:880: in _run_query_and_verify_profile
result = self.execute_query_expect_success(self.client, query)
common/impala_test_suite.py:944: in wrapper
return function(*args, **kwargs)
common/impala_test_suite.py:952: in execute_query_expect_success
result = cls.__execute_query(impalad_client, query, query_options, user)
common/impala_test_suite.py:1069: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:218: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:191: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:369: in __execute_query
self.wait_for_finished(handle)
beeswax/impala_beeswax.py:390: in wait_for_finished
raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EQuery aborted:InternalException: Error adding partitions
E   CAUSED BY: CatalogException: Unable to create a metastore event
E   CAUSED BY: MetastoreNotificationException: java.lang.RuntimeException: 
org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
data
E   CAUSED BY: RuntimeException: org.apache.thrift.protocol.TProtocolException: 
Bad data encounted in numeric data
E   CAUSED BY: TProtocolException: Bad data encounted in numeric data
{code}


> Test failure with 'Bad data encounted in numeric data'
> --
>

[jira] [Updated] (IMPALA-12728) Test failure with 'Bad data encounted in numeric data'

2024-01-18 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12728:

Attachment: hive-metastore.log

> Test failure with 'Bad data encounted in numeric data'
> --
>
> Key: IMPALA-12728
> URL: https://issues.apache.org/jira/browse/IMPALA-12728
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Critical
> Attachments: 
> catalogd.impala-ec2-centos79-m6i-4xlarge-ondemand-1966.vpc.cloudera.com.jenkins.log.INFO.20240117-062443.475,
>  hive-metastore.log
>
>
> During our internal JDK11 build 
> {{custom_cluster.test_executor_groups.TestExecutorGroups.test_query_cpu_count_divisor_default}}
>  failed with {{Bad data encounted in numeric data}}.
> Attached are the complete catalogd logs and the trimmed HMS logs.
> *Error Message*
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  Query 
> aborted:InternalException: Error adding partitions CAUSED BY: 
> CatalogException: Unable to create a metastore event CAUSED BY: 
> MetastoreNotificationException: java.lang.RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data CAUSED BY: RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data CAUSED BY: TProtocolException: Bad data encounted in numeric data
> {code}
> *Stacktrace*
> {code:none}
> custom_cluster/test_executor_groups.py:947: in 
> test_query_cpu_count_divisor_default
> "Verdict: Match", "CpuAsk: 10"])
> custom_cluster/test_executor_groups.py:880: in _run_query_and_verify_profile
> result = self.execute_query_expect_success(self.client, query)
> common/impala_test_suite.py:944: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:952: in execute_query_expect_success
> result = cls.__execute_query(impalad_client, query, query_options, user)
> common/impala_test_suite.py:1069: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:218: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:191: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:369: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:390: in wait_for_finished
> raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> EQuery aborted:InternalException: Error adding partitions
> E   CAUSED BY: CatalogException: Unable to create a metastore event
> E   CAUSED BY: MetastoreNotificationException: java.lang.RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data
> E   CAUSED BY: RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data
> E   CAUSED BY: TProtocolException: Bad data encounted in numeric data
> {code}






[jira] [Updated] (IMPALA-12728) Test failure with 'Bad data encounted in numeric data'

2024-01-18 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12728:

Attachment: 
catalogd.impala-ec2-centos79-m6i-4xlarge-ondemand-1966.vpc.cloudera.com.jenkins.log.INFO.20240117-062443.475

> Test failure with 'Bad data encounted in numeric data'
> --
>
> Key: IMPALA-12728
> URL: https://issues.apache.org/jira/browse/IMPALA-12728
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Critical
> Attachments: 
> catalogd.impala-ec2-centos79-m6i-4xlarge-ondemand-1966.vpc.cloudera.com.jenkins.log.INFO.20240117-062443.475
>
>
> During our internal JDK11 build 
> {{custom_cluster.test_executor_groups.TestExecutorGroups.test_query_cpu_count_divisor_default}}
>  failed with {{Bad data encounted in numeric data}}.
> *Error Message*
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  Query 
> aborted:InternalException: Error adding partitions CAUSED BY: 
> CatalogException: Unable to create a metastore event CAUSED BY: 
> MetastoreNotificationException: java.lang.RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data CAUSED BY: RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data CAUSED BY: TProtocolException: Bad data encounted in numeric data
> {code}
> *Stacktrace*
> {code:none}
> custom_cluster/test_executor_groups.py:947: in 
> test_query_cpu_count_divisor_default
> "Verdict: Match", "CpuAsk: 10"])
> custom_cluster/test_executor_groups.py:880: in _run_query_and_verify_profile
> result = self.execute_query_expect_success(self.client, query)
> common/impala_test_suite.py:944: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:952: in execute_query_expect_success
> result = cls.__execute_query(impalad_client, query, query_options, user)
> common/impala_test_suite.py:1069: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:218: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:191: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:369: in __execute_query
> self.wait_for_finished(handle)
> beeswax/impala_beeswax.py:390: in wait_for_finished
> raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> EQuery aborted:InternalException: Error adding partitions
> E   CAUSED BY: CatalogException: Unable to create a metastore event
> E   CAUSED BY: MetastoreNotificationException: java.lang.RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data
> E   CAUSED BY: RuntimeException: 
> org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
> data
> E   CAUSED BY: TProtocolException: Bad data encounted in numeric data
> {code}






[jira] [Created] (IMPALA-12728) Test failure with 'Bad data encounted in numeric data'

2024-01-18 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12728:
---

 Summary: Test failure with 'Bad data encounted in numeric data'
 Key: IMPALA-12728
 URL: https://issues.apache.org/jira/browse/IMPALA-12728
 Project: IMPALA
  Issue Type: Bug
  Components: Catalog
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
 Attachments: 
catalogd.impala-ec2-centos79-m6i-4xlarge-ondemand-1966.vpc.cloudera.com.jenkins.log.INFO.20240117-062443.475

During our internal JDK11 build 
{{custom_cluster.test_executor_groups.TestExecutorGroups.test_query_cpu_count_divisor_default}}
 failed with {{Bad data encounted in numeric data}}.

*Error Message*
{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException:  Query 
aborted:InternalException: Error adding partitions CAUSED BY: CatalogException: 
Unable to create a metastore event CAUSED BY: MetastoreNotificationException: 
java.lang.RuntimeException: org.apache.thrift.protocol.TProtocolException: Bad 
data encounted in numeric data CAUSED BY: RuntimeException: 
org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
data CAUSED BY: TProtocolException: Bad data encounted in numeric data
{code}

*Stacktrace*
{code:none}
custom_cluster/test_executor_groups.py:947: in 
test_query_cpu_count_divisor_default
"Verdict: Match", "CpuAsk: 10"])
custom_cluster/test_executor_groups.py:880: in _run_query_and_verify_profile
result = self.execute_query_expect_success(self.client, query)
common/impala_test_suite.py:944: in wrapper
return function(*args, **kwargs)
common/impala_test_suite.py:952: in execute_query_expect_success
result = cls.__execute_query(impalad_client, query, query_options, user)
common/impala_test_suite.py:1069: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:218: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:191: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:369: in __execute_query
self.wait_for_finished(handle)
beeswax/impala_beeswax.py:390: in wait_for_finished
raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
E   Query aborted:InternalException: Error adding partitions
E   CAUSED BY: CatalogException: Unable to create a metastore event
E   CAUSED BY: MetastoreNotificationException: java.lang.RuntimeException: 
org.apache.thrift.protocol.TProtocolException: Bad data encounted in numeric 
data
E   CAUSED BY: RuntimeException: org.apache.thrift.protocol.TProtocolException: 
Bad data encounted in numeric data
E   CAUSED BY: TProtocolException: Bad data encounted in numeric data
{code}
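
The failure path here is catalogd serializing a Metastore notification event while adding partitions ("Error adding partitions CAUSED BY: CatalogException: Unable to create a metastore event"). As a rough illustration of the class of statement that exercises this path (hypothetical table and column names, not the actual test workload):

{code:none}
-- A partition-creating INSERT makes catalogd generate an ADD_PARTITION
-- metastore event; serializing that event is the step that failed with
-- 'Bad data encounted in numeric data'.
create table tmp_part (id int) partitioned by (p int);
insert into tmp_part partition (p=1) values (1);
{code}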



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12715) Failing TestRanger.test_allow_metadata_update_local_catalog test

2024-01-18 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12715:

Priority: Critical  (was: Major)

> Failing TestRanger.test_allow_metadata_update_local_catalog test
> 
>
> Key: IMPALA-12715
> URL: https://issues.apache.org/jira/browse/IMPALA-12715
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Critical
>
> Two internal JDK17 core builds have failed with the below 
> TestRanger.test_allow_metadata_update_local_catalog failure. Standard error 
> shows connection failures.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException: INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'> MESSAGE: AuthorizationException: User 
> 'jenkins' does not have privileges to execute 'INVALIDATE METADATA/REFRESH' 
> on: functional.alltypestiny
> {code}
> *Stacktrace*
> {code:none}
> authorization/test_ranger.py:1582: in test_allow_metadata_update_local_catalog
> self.__test_allow_catalog_cache_op_from_masked_users(unique_name)
> authorization/test_ranger.py:1615: in 
> __test_allow_catalog_cache_op_from_masked_users
> non_admin_client, "invalidate metadata functional.alltypestiny", 
> user=user)
> common/impala_test_suite.py:944: in wrapper
> return function(*args, **kwargs)
> common/impala_test_suite.py:952: in execute_query_expect_success
> result = cls.__execute_query(impalad_client, query, query_options, user)
> common/impala_test_suite.py:1069: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:218: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:191: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:367: in __execute_query
> handle = self.execute_query_async(query_string, user=user)
> beeswax/impala_beeswax.py:361: in execute_query_async
> handle = self.__do_rpc(lambda: self.imp_service.query(query,))
> beeswax/impala_beeswax.py:524: in __do_rpc
> raise ImpalaBeeswaxException(self.__build_error_message(b), b)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> E   INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>
> E   MESSAGE: AuthorizationException: User 'jenkins' does not have privileges 
> to execute 'INVALIDATE METADATA/REFRESH' on: functional.alltypestiny
> {code}
> *Standard error:*
> {code:none}
> -- 2024-01-12 01:49:18,544 DEBUGMainThread: Getting 
> num_known_live_backends from 
> impala-ec2-centos79-m6i-4xlarge-ondemand-1347.vpc.cloudera.com:25002
> -- 2024-01-12 01:49:18,546 INFO MainThread: num_known_live_backends has 
> reached value: 3
> SET 
> client_identifier=authorization/test_ranger.py::TestRanger::()::test_allow_metadata_update_local_catalog;
> -- connecting to: localhost:21000
> -- 2024-01-12 01:49:18,546 INFO MainThread: Could not connect to ('::1', 
> 21000, 0, 0)
> Traceback (most recent call last):
>   File 
> "/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/thrift-0.16.0-p6/python/lib/python2.7/site-packages/thrift/transport/TSocket.py",
>  line 137, in open
> handle.connect(sockaddr)
>   File 
> "/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/python-2.7.16/lib/python2.7/socket.py",
>  line 228, in meth
> return getattr(self._sock,name)(*args)
> error: [Errno 111] Connection refused
> -- connecting to localhost:21050 with impyla
> -- 2024-01-12 01:49:18,546 INFO MainThread: Could not connect to ('::1', 
> 21050, 0, 0)
> Traceback (most recent call last):
>   File 
> "/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/thrift-0.16.0-p6/python/lib/python2.7/site-packages/thrift/transport/TSocket.py",
>  line 137, in open
> handle.connect(sockaddr)
>   File 
> "/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/python-2.7.16/lib/python2.7/socket.py",
>  line 228, in meth
> return getattr(self._sock,name)(*args)
> error: [Errno 111] Connection refused
> -- 2024-01-12 01:49:18,654 INFO MainThread: Closing active operation
> -- connecting to localhost:28000 with impyla
> -- 2024-01-12 01:49:18,671 INFO MainThread: Closing active operation
> -- connecting to localhost:11050 with impyla
> SET 
> client_identifier=authorization/test_ranger.py::TestRanger::()::test_allow_metadata_update_local_catalog;
> -- connecting to: localhost:21000
> -- 2024-01-12 01:49:18,677 INFO MainThread: Could not connect to ('::1', 
> 21000, 0, 0)
> Traceback (most recent call last):
>   File 
> 

[jira] [Updated] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-18 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12716:

Priority: Critical  (was: Major)

> Failing TestWebPage.test_catalog_operations_with_rpc_retry
> --
>
> Key: IMPALA-12716
> URL: https://issues.apache.org/jira/browse/IMPALA-12716
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Quanlong Huang
>Priority: Critical
> Attachments: 
> impalad.impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com.jenkins.log.INFO.20240113-170731.2913
>
>
> TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
> observed during an exhaustive build.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
> requesting prioritized load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
> {code}
> ImpalaD logs
> {code:none}
> I0113 17:07:38.261853  5449 client-cache.h:371] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261864  5449 client-cache.h:314] 
> be4983a09892a39e:33268ac6] RPC to 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed 
> RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261871  5449 client-cache.cc:174] 
> be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
> E0113 17:07:38.261902  5449 fe-support.cc:542] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.267652  5449 jni-util.cc:302] 
> be4983a09892a39e:33268ac6] 
> org.apache.impala.common.InternalException: Error requesting prioritized 
> load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> at 
> org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
> at 
> org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
> I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
> InternalException: Error requesting prioritized load: RPC recv timed out: 
> dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> @  0x107d694
> @  0x1bd17b4
> @  0x17b8963
> @  0x241fc7d
> @  0x18ac313
> @  0x18b90d8
> @  0x1a9a902
>   Caught signal: SIGTERM. Daemon will exit.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12706) Failing DCHECK when querying STRUCT inside a STRUCT for Iceberg metadata table

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12706.
-
Resolution: Fixed

> Failing DCHECK when querying STRUCT inside a STRUCT for Iceberg metadata table
> --
>
> Key: IMPALA-12706
> URL: https://issues.apache.org/jira/browse/IMPALA-12706
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> When querying a STRUCT type inside a STRUCT type there is a failing DCHECK.
> {code:none}
> F0111 09:01:35.626691 15777 descriptors.h:366] 
> 83474e353d7baccd:d966f47c] Check failed: slot_desc->col_path().size() 
> == 1 (2 vs. 1)
> {code}
> While the following is working:
> {code:none}
> select readable_metrics from 
> functional_parquet.iceberg_query_metadata.data_files;
> {code}
> this fails:
> {code:none}
> select readable_metrics.i from 
> functional_parquet.iceberg_query_metadata.data_files;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12716:
---

Assignee: Quanlong Huang

[~stigahuang], I hope you don't mind assigning this to you; the test was 
recently added by you and I believe you have the most context.

> Failing TestWebPage.test_catalog_operations_with_rpc_retry
> --
>
> Key: IMPALA-12716
> URL: https://issues.apache.org/jira/browse/IMPALA-12716
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Quanlong Huang
>Priority: Major
> Attachments: 
> impalad.impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com.jenkins.log.INFO.20240113-170731.2913
>
>
> TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
> observed during an exhaustive build.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
> requesting prioritized load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
> {code}
> ImpalaD logs
> {code:none}
> I0113 17:07:38.261853  5449 client-cache.h:371] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261864  5449 client-cache.h:314] 
> be4983a09892a39e:33268ac6] RPC to 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed 
> RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261871  5449 client-cache.cc:174] 
> be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
> E0113 17:07:38.261902  5449 fe-support.cc:542] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.267652  5449 jni-util.cc:302] 
> be4983a09892a39e:33268ac6] 
> org.apache.impala.common.InternalException: Error requesting prioritized 
> load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> at 
> org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
> at 
> org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
> I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
> InternalException: Error requesting prioritized load: RPC recv timed out: 
> dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> @  0x107d694
> @  0x1bd17b4
> @  0x17b8963
> @  0x241fc7d
> @  0x18ac313
> @  0x18b90d8
> @  0x1a9a902
>   Caught signal: SIGTERM. Daemon will exit.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Comment Edited] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-15 Thread Tamas Mate (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806776#comment-17806776
 ] 

Tamas Mate edited comment on IMPALA-12716 at 1/15/24 12:12 PM:
---

[~stigahuang], I hope you don't mind that I am assigning this to you; the test 
was recently added by you and I believe you have context.


was (Author: tmate):
[~stigahuang], I hope you don't mind that I am assigning this to you; the test 
was recently added by you and I believe you have the most context.

> Failing TestWebPage.test_catalog_operations_with_rpc_retry
> --
>
> Key: IMPALA-12716
> URL: https://issues.apache.org/jira/browse/IMPALA-12716
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Quanlong Huang
>Priority: Major
> Attachments: 
> impalad.impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com.jenkins.log.INFO.20240113-170731.2913
>
>
> TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
> observed during an exhaustive build.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
> requesting prioritized load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
> {code}
> ImpalaD logs
> {code:none}
> I0113 17:07:38.261853  5449 client-cache.h:371] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261864  5449 client-cache.h:314] 
> be4983a09892a39e:33268ac6] RPC to 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed 
> RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261871  5449 client-cache.cc:174] 
> be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
> E0113 17:07:38.261902  5449 fe-support.cc:542] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.267652  5449 jni-util.cc:302] 
> be4983a09892a39e:33268ac6] 
> org.apache.impala.common.InternalException: Error requesting prioritized 
> load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> at 
> org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
> at 
> org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
> I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
> InternalException: Error requesting prioritized load: RPC recv timed out: 
> dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> @  0x107d694
> @  0x1bd17b4
> @  0x17b8963
> @  0x241fc7d
> @  0x18ac313
> @  0x18b90d8
> @  0x1a9a902
>   Caught signal: SIGTERM. Daemon will exit.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Comment Edited] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-15 Thread Tamas Mate (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806776#comment-17806776
 ] 

Tamas Mate edited comment on IMPALA-12716 at 1/15/24 12:11 PM:
---

[~stigahuang], I hope you don't mind that I am assigning this to you; the test 
was recently added by you and I believe you have the most context.


was (Author: tmate):
[~stigahuang], I hope you don't mind assigning this to you; the test was 
recently added by you and I believe you have the most context.

> Failing TestWebPage.test_catalog_operations_with_rpc_retry
> --
>
> Key: IMPALA-12716
> URL: https://issues.apache.org/jira/browse/IMPALA-12716
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Quanlong Huang
>Priority: Major
> Attachments: 
> impalad.impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com.jenkins.log.INFO.20240113-170731.2913
>
>
> TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
> observed during an exhaustive build.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
> requesting prioritized load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
> {code}
> ImpalaD logs
> {code:none}
> I0113 17:07:38.261853  5449 client-cache.h:371] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261864  5449 client-cache.h:314] 
> be4983a09892a39e:33268ac6] RPC to 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed 
> RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261871  5449 client-cache.cc:174] 
> be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
> E0113 17:07:38.261902  5449 fe-support.cc:542] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.267652  5449 jni-util.cc:302] 
> be4983a09892a39e:33268ac6] 
> org.apache.impala.common.InternalException: Error requesting prioritized 
> load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> at 
> org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
> at 
> org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
> I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
> InternalException: Error requesting prioritized load: RPC recv timed out: 
> dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> @  0x107d694
> @  0x1bd17b4
> @  0x17b8963
> @  0x241fc7d
> @  0x18ac313
> @  0x18b90d8
> @  0x1a9a902
>   Caught signal: SIGTERM. Daemon will exit.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12716:

Attachment: 
impalad.impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com.jenkins.log.INFO.20240113-170731.2913

> Failing TestWebPage.test_catalog_operations_with_rpc_retry
> --
>
> Key: IMPALA-12716
> URL: https://issues.apache.org/jira/browse/IMPALA-12716
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
> Attachments: 
> impalad.impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com.jenkins.log.INFO.20240113-170731.2913
>
>
> TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
> observed during an exhaustive build.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
> requesting prioritized load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
> {code}
> CatalogD logs
> {code:none}
> I0113 17:07:38.261853  5449 client-cache.h:371] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261864  5449 client-cache.h:314] 
> be4983a09892a39e:33268ac6] RPC to 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed 
> RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261871  5449 client-cache.cc:174] 
> be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
> E0113 17:07:38.261902  5449 fe-support.cc:542] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.267652  5449 jni-util.cc:302] 
> be4983a09892a39e:33268ac6] 
> org.apache.impala.common.InternalException: Error requesting prioritized 
> load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> at 
> org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
> at 
> org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
> I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
> InternalException: Error requesting prioritized load: RPC recv timed out: 
> dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> @  0x107d694
> @  0x1bd17b4
> @  0x17b8963
> @  0x241fc7d
> @  0x18ac313
> @  0x18b90d8
> @  0x1a9a902
>   Caught signal: SIGTERM. Daemon will exit.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12716:

Description: 
TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
observed during an exhaustive build.

{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
requesting prioritized load: RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
{code}

ImpalaD logs
{code:none}
I0113 17:07:38.261853  5449 client-cache.h:371] 
be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.261864  5449 client-cache.h:314] 
be4983a09892a39e:33268ac6] RPC to 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed RPC 
recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.261871  5449 client-cache.cc:174] 
be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
E0113 17:07:38.261902  5449 fe-support.cc:542] 
be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.267652  5449 jni-util.cc:302] be4983a09892a39e:33268ac6] 
org.apache.impala.common.InternalException: Error requesting prioritized load: 
RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
Error making an RPC call to Catalog server.
at 
org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
at 
org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
at 
org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
at 
org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
at 
org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
at 
org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
at 
org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
at 
org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
InternalException: Error requesting prioritized load: RPC recv timed out: dest 
address: impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, 
rpc: N6impala23TPrioritizeLoadResponseE
Error making an RPC call to Catalog server.
@  0x107d694
@  0x1bd17b4
@  0x17b8963
@  0x241fc7d
@  0x18ac313
@  0x18b90d8
@  0x1a9a902
  Caught signal: SIGTERM. Daemon will exit.
{code}



  was:
TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
observed during an exhaustive build.

{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
requesting prioritized load: RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
{code}

CatalogD logs
{code:none}
I0113 17:07:38.261853  5449 client-cache.h:371] 
be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.261864  5449 client-cache.h:314] 
be4983a09892a39e:33268ac6] RPC to 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed RPC 
recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.261871  5449 client-cache.cc:174] 
be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
E0113 17:07:38.261902  5449 fe-support.cc:542] 
be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.267652  5449 jni-util.cc:302] be4983a09892a39e:33268ac6] 
org.apache.impala.common.InternalException: Error requesting prioritized load: 
RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, 

[jira] [Created] (IMPALA-12716) TestWebPage.test_catalog_operations_with_rpc_retry failing

2024-01-15 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12716:
---

 Summary: TestWebPage.test_catalog_operations_with_rpc_retry failing
 Key: IMPALA-12716
 URL: https://issues.apache.org/jira/browse/IMPALA-12716
 Project: IMPALA
  Issue Type: Bug
  Components: Catalog
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate


TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
observed during an exhaustive build.

{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
requesting prioritized load: RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
{code}

CatalogD logs
{code:none}
I0113 17:07:38.261853  5449 client-cache.h:371] 
be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.261864  5449 client-cache.h:314] 
be4983a09892a39e:33268ac6] RPC to 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed RPC 
recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.261871  5449 client-cache.cc:174] 
be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
E0113 17:07:38.261902  5449 fe-support.cc:542] 
be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
I0113 17:07:38.267652  5449 jni-util.cc:302] be4983a09892a39e:33268ac6] 
org.apache.impala.common.InternalException: Error requesting prioritized load: 
RPC recv timed out: dest address: 
impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
N6impala23TPrioritizeLoadResponseE
Error making an RPC call to Catalog server.
at 
org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
at 
org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
at 
org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
at 
org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
at 
org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
at 
org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
at 
org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
at 
org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
InternalException: Error requesting prioritized load: RPC recv timed out: dest 
address: impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, 
rpc: N6impala23TPrioritizeLoadResponseE
Error making an RPC call to Catalog server.
@  0x107d694
@  0x1bd17b4
@  0x17b8963
@  0x241fc7d
@  0x18ac313
@  0x18b90d8
@  0x1a9a902
  Caught signal: SIGTERM. Daemon will exit.
{code}





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12716) Failing TestWebPage.test_catalog_operations_with_rpc_retry

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12716:

Summary: Failing TestWebPage.test_catalog_operations_with_rpc_retry  (was: 
TestWebPage.test_catalog_operations_with_rpc_retry failing)

> Failing TestWebPage.test_catalog_operations_with_rpc_retry
> --
>
> Key: IMPALA-12716
> URL: https://issues.apache.org/jira/browse/IMPALA-12716
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>
> TestWebPage.test_catalog_operations_with_rpc_retry is failing; this was 
> observed during an exhaustive build.
> {code:none}
> ImpalaBeeswaxException: ImpalaBeeswaxException:  INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>  MESSAGE: InternalException: Error 
> requesting prioritized load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE Error making an RPC call to Catalog server.
> {code}
> CatalogD logs
> {code:none}
> I0113 17:07:38.261853  5449 client-cache.h:371] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261864  5449 client-cache.h:314] 
> be4983a09892a39e:33268ac6] RPC to 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000 failed 
> RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.261871  5449 client-cache.cc:174] 
> be4983a09892a39e:33268ac6] Broken Connection, destroy client for 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000
> E0113 17:07:38.261902  5449 fe-support.cc:542] 
> be4983a09892a39e:33268ac6] RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> I0113 17:07:38.267652  5449 jni-util.cc:302] 
> be4983a09892a39e:33268ac6] 
> org.apache.impala.common.InternalException: Error requesting prioritized 
> load: RPC recv timed out: dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> at 
> org.apache.impala.service.FeSupport.PrioritizeLoad(FeSupport.java:353)
> at 
> org.apache.impala.catalog.ImpaladCatalog.prioritizeLoad(ImpaladCatalog.java:287)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:202)
> at 
> org.apache.impala.analysis.StmtMetadataLoader.loadTables(StmtMetadataLoader.java:145)
> at 
> org.apache.impala.service.Frontend.doCreateExecRequest(Frontend.java:2379)
> at 
> org.apache.impala.service.Frontend.getTExecRequest(Frontend.java:2142)
> at 
> org.apache.impala.service.Frontend.createExecRequest(Frontend.java:1911)
> at 
> org.apache.impala.service.JniFrontend.createExecRequest(JniFrontend.java:169)
> I0113 17:07:38.267665  5449 status.cc:129] be4983a09892a39e:33268ac6] 
> InternalException: Error requesting prioritized load: RPC recv timed out: 
> dest address: 
> impala-ec2-centos79-m6i-4xlarge-ondemand-0419.vpc.cloudera.com:26000, rpc: 
> N6impala23TPrioritizeLoadResponseE
> Error making an RPC call to Catalog server.
> @  0x107d694
> @  0x1bd17b4
> @  0x17b8963
> @  0x241fc7d
> @  0x18ac313
> @  0x18b90d8
> @  0x1a9a902
>   Caught signal: SIGTERM. Daemon will exit.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12715) Failing TestRanger.test_allow_metadata_update_local_catalog test

2024-01-15 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12715:
---

 Summary: Failing 
TestRanger.test_allow_metadata_update_local_catalog test
 Key: IMPALA-12715
 URL: https://issues.apache.org/jira/browse/IMPALA-12715
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate


Two internal JDK17 core builds have failed with the below 
TestRanger.test_allow_metadata_update_local_catalog failure. Standard error 
shows connection failures.
{code:none}
ImpalaBeeswaxException: ImpalaBeeswaxException: INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'> MESSAGE: AuthorizationException: User 
'jenkins' does not have privileges to execute 'INVALIDATE METADATA/REFRESH' on: 
functional.alltypestiny
{code}
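The statements the test drives through the masked, non-admin user are plain cache-maintenance commands; a minimal sketch, with the table name taken from the error above and the Ranger policy setup omitted:
{code:none}
-- Run as a user whose access to the table is limited by a Ranger masking
-- policy; per the failure above, both forms are rejected by authorization.
invalidate metadata functional.alltypestiny;
refresh functional.alltypestiny;
{code}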
*Stacktrace*
{code:none}
authorization/test_ranger.py:1582: in test_allow_metadata_update_local_catalog
self.__test_allow_catalog_cache_op_from_masked_users(unique_name)
authorization/test_ranger.py:1615: in 
__test_allow_catalog_cache_op_from_masked_users
non_admin_client, "invalidate metadata functional.alltypestiny", user=user)
common/impala_test_suite.py:944: in wrapper
return function(*args, **kwargs)
common/impala_test_suite.py:952: in execute_query_expect_success
result = cls.__execute_query(impalad_client, query, query_options, user)
common/impala_test_suite.py:1069: in __execute_query
return impalad_client.execute(query, user=user)
common/impala_connection.py:218: in execute
return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:191: in execute
handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:367: in __execute_query
handle = self.execute_query_async(query_string, user=user)
beeswax/impala_beeswax.py:361: in execute_query_async
handle = self.__do_rpc(lambda: self.imp_service.query(query,))
beeswax/impala_beeswax.py:524: in __do_rpc
raise ImpalaBeeswaxException(self.__build_error_message(b), b)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
E   INNER EXCEPTION: <class 'beeswaxd.ttypes.BeeswaxException'>
E   MESSAGE: AuthorizationException: User 'jenkins' does not have privileges 
to execute 'INVALIDATE METADATA/REFRESH' on: functional.alltypestiny
{code}
*Standard error:*
{code:none}
-- 2024-01-12 01:49:18,544 DEBUGMainThread: Getting num_known_live_backends 
from impala-ec2-centos79-m6i-4xlarge-ondemand-1347.vpc.cloudera.com:25002
-- 2024-01-12 01:49:18,546 INFO MainThread: num_known_live_backends has 
reached value: 3
SET 
client_identifier=authorization/test_ranger.py::TestRanger::()::test_allow_metadata_update_local_catalog;
-- connecting to: localhost:21000
-- 2024-01-12 01:49:18,546 INFO MainThread: Could not connect to ('::1', 
21000, 0, 0)
Traceback (most recent call last):
  File 
"/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/thrift-0.16.0-p6/python/lib/python2.7/site-packages/thrift/transport/TSocket.py",
 line 137, in open
handle.connect(sockaddr)
  File 
"/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/python-2.7.16/lib/python2.7/socket.py",
 line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
-- connecting to localhost:21050 with impyla
-- 2024-01-12 01:49:18,546 INFO MainThread: Could not connect to ('::1', 
21050, 0, 0)
Traceback (most recent call last):
  File 
"/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/thrift-0.16.0-p6/python/lib/python2.7/site-packages/thrift/transport/TSocket.py",
 line 137, in open
handle.connect(sockaddr)
  File 
"/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/python-2.7.16/lib/python2.7/socket.py",
 line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
-- 2024-01-12 01:49:18,654 INFO MainThread: Closing active operation
-- connecting to localhost:28000 with impyla
-- 2024-01-12 01:49:18,671 INFO MainThread: Closing active operation
-- connecting to localhost:11050 with impyla
SET 
client_identifier=authorization/test_ranger.py::TestRanger::()::test_allow_metadata_update_local_catalog;
-- connecting to: localhost:21000
-- 2024-01-12 01:49:18,677 INFO MainThread: Could not connect to ('::1', 
21000, 0, 0)
Traceback (most recent call last):
  File 
"/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/thrift-0.16.0-p6/python/lib/python2.7/site-packages/thrift/transport/TSocket.py",
 line 137, in open
handle.connect(sockaddr)
  File 
"/data/jenkins/workspace/impala-cdw-master-staging-core-jdk17/Impala-Toolchain/toolchain-packages-gcc10.4.0/python-2.7.16/lib/python2.7/socket.py",
 line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection 

[jira] [Assigned] (IMPALA-12714) test_reduced_cardinality_by_filter is failing on non HDFS builds

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12714:
---

Assignee: Riza Suminto

> test_reduced_cardinality_by_filter is failing on non HDFS builds
> 
>
> Key: IMPALA-12714
> URL: https://issues.apache.org/jira/browse/IMPALA-12714
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Riza Suminto
>Priority: Major
>
> Error Message (similar failures can be observed on Ozone builds):
> {code:none}
> query_test/test_observability.py:885: in test_reduced_cardinality_by_filter   
>   assert scan['operator'] == '00:SCAN HDFS' E   assert '00:SCAN S3' == 
> '00:SCAN HDFS' E - 00:SCAN S3 E ?  - E + 00:SCAN HDFS E   
>   ? +++
> Stacktrace
> query_test/test_observability.py:885: in test_reduced_cardinality_by_filter
> assert scan['operator'] == '00:SCAN HDFS'
> E   assert '00:SCAN S3' == '00:SCAN HDFS'
> E - 00:SCAN S3
> E ?  -
> E + 00:SCAN HDFS
> E ? +++
> {code}
> Standard Error
> {code:none}
> SET 
> client_identifier=query_test/test_observability.py::TestObservability::()::test_reduced_cardinality_by_filter;
> SET compute_processing_cost=True;
> -- executing against localhost:21000
> select STRAIGHT_JOIN count(*) from
> (select l_orderkey from tpch_parquet.lineitem) a
> join (select o_orderkey, o_custkey from tpch_parquet.orders) l1
>   on a.l_orderkey = l1.o_orderkey
> where l1.o_custkey < 1000;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12714) test_reduced_cardinality_by_filter is failing on non HDFS builds

2024-01-15 Thread Tamas Mate (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17806737#comment-17806737
 ] 

Tamas Mate commented on IMPALA-12714:
-

[~rizaon], I hope you don't mind that I am assigning this to you.

> test_reduced_cardinality_by_filter is failing on non HDFS builds
> 
>
> Key: IMPALA-12714
> URL: https://issues.apache.org/jira/browse/IMPALA-12714
> Project: IMPALA
>  Issue Type: Bug
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Major
>
> Error Message (similar failures can be observed on Ozone builds):
> {code:none}
> query_test/test_observability.py:885: in test_reduced_cardinality_by_filter   
>   assert scan['operator'] == '00:SCAN HDFS' E   assert '00:SCAN S3' == 
> '00:SCAN HDFS' E - 00:SCAN S3 E ?  - E + 00:SCAN HDFS E   
>   ? +++
> Stacktrace
> query_test/test_observability.py:885: in test_reduced_cardinality_by_filter
> assert scan['operator'] == '00:SCAN HDFS'
> E   assert '00:SCAN S3' == '00:SCAN HDFS'
> E - 00:SCAN S3
> E ?  -
> E + 00:SCAN HDFS
> E ? +++
> {code}
> Standard Error
> {code:none}
> SET 
> client_identifier=query_test/test_observability.py::TestObservability::()::test_reduced_cardinality_by_filter;
> SET compute_processing_cost=True;
> -- executing against localhost:21000
> select STRAIGHT_JOIN count(*) from
> (select l_orderkey from tpch_parquet.lineitem) a
> join (select o_orderkey, o_custkey from tpch_parquet.orders) l1
>   on a.l_orderkey = l1.o_orderkey
> where l1.o_custkey < 1000;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12714) test_reduced_cardinality_by_filter is failing on non HDFS builds

2024-01-15 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12714:
---

 Summary: test_reduced_cardinality_by_filter is failing on non HDFS 
builds
 Key: IMPALA-12714
 URL: https://issues.apache.org/jira/browse/IMPALA-12714
 Project: IMPALA
  Issue Type: Bug
  Components: Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate


Error Message (similar failures can be observed on Ozone builds):
{code:none}
query_test/test_observability.py:885: in test_reduced_cardinality_by_filter 
assert scan['operator'] == '00:SCAN HDFS' E   assert '00:SCAN S3' == '00:SCAN 
HDFS' E - 00:SCAN S3 E ?  - E + 00:SCAN HDFS E ?
 +++
Stacktrace
query_test/test_observability.py:885: in test_reduced_cardinality_by_filter
assert scan['operator'] == '00:SCAN HDFS'
E   assert '00:SCAN S3' == '00:SCAN HDFS'
E - 00:SCAN S3
E ?  -
E + 00:SCAN HDFS
E ? +++
{code}
Standard Error
{code:none}
SET 
client_identifier=query_test/test_observability.py::TestObservability::()::test_reduced_cardinality_by_filter;
SET compute_processing_cost=True;
-- executing against localhost:21000

select STRAIGHT_JOIN count(*) from
(select l_orderkey from tpch_parquet.lineitem) a
join (select o_orderkey, o_custkey from tpch_parquet.orders) l1
  on a.l_orderkey = l1.o_orderkey
where l1.o_custkey < 1000;
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-11568) Backend test TimeSeriesCounterToJsonTest in runtime-profile-test is flaky

2024-01-15 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-11568:

Summary: Backend test TimeSeriesCounterToJsonTest in runtime-profile-test 
is flaky  (was: Backend test TimeSeriesCounterToJsonTest in 
runtime-profile-test failed in ASAN run)

Renaming the ticket; this test just failed during an internal JDK 17 build as 
well.

> Backend test TimeSeriesCounterToJsonTest in runtime-profile-test is flaky
> -
>
> Key: IMPALA-11568
> URL: https://issues.apache.org/jira/browse/IMPALA-11568
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 4.2.0
>Reporter: Laszlo Gaal
>Priority: Blocker
>
> Console log:
> {code}
> [ RUN  ] ToJson.TimeSeriesCounterToJsonTest
> /data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/be/src/util/runtime-profile-test.cc:1862:
>  Failure
> Value of: "0,2,4,6"
>   Actual: "0,2,4,6"
> Expected: a substring of 
> doc["contents"]["time_series_counters"][0]["data"].GetString()
> Which is: 
> "0,1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77"
> /data/jenkins/workspace/impala-asf-master-core-asan/repos/Impala/be/src/util/runtime-profile-test.cc:1865:
>  Failure
> Value of: "72,74,76,78"
>   Actual: "72,74,76,78"
> Expected: a substring of 
> doc["contents"]["time_series_counters"][0]["data"].GetString()
> Which is: 
> "0,1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77"
> [  FAILED  ] ToJson.TimeSeriesCounterToJsonTest (1000 ms)
> {code}
> The failed git hash is {{d74d6994cc25ed2090886d6b406cf477a6ccf6b4}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12495) Describe command for Iceberg metadata tables

2024-01-11 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12495.
-
 Fix Version/s: Impala 4.4.0
Target Version: Impala 4.4.0
Resolution: Fixed

> Describe command for Iceberg metadata tables
> 
>
> Key: IMPALA-12495
> URL: https://issues.apache.org/jira/browse/IMPALA-12495
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
> Fix For: Impala 4.4.0
>
>
> We should provide a statement to describe metadata tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12706) Failing DCHECK when querying STRUCT inside a STRUCT for Iceberg metadata table

2024-01-11 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12706 started by Tamas Mate.
---
> Failing DCHECK when querying STRUCT inside a STRUCT for Iceberg metadata table
> --
>
> Key: IMPALA-12706
> URL: https://issues.apache.org/jira/browse/IMPALA-12706
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> When querying a STRUCT type inside a STRUCT type there is a failing DCHECK.
> {code:none}
> F0111 09:01:35.626691 15777 descriptors.h:366] 
> 83474e353d7baccd:d966f47c] Check failed: slot_desc->col_path().size() 
> == 1 (2 vs. 1)
> {code}
> While the following is working:
> {code:none}
> select readable_metrics from 
> functional_parquet.iceberg_query_metadata.data_files;
> {code}
> this fails:
> {code:none}
> select readable_metrics.i from 
> functional_parquet.iceberg_query_metadata.data_files;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12706) Failing DCHECK when querying STRUCT inside a STRUCT for Iceberg metadata table

2024-01-11 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12706:
---

 Summary: Failing DCHECK when querying STRUCT inside a STRUCT for 
Iceberg metadata table
 Key: IMPALA-12706
 URL: https://issues.apache.org/jira/browse/IMPALA-12706
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate


When querying a STRUCT type inside a STRUCT type there is a failing DCHECK.
{code:none}
F0111 09:01:35.626691 15777 descriptors.h:366] 
83474e353d7baccd:d966f47c] Check failed: slot_desc->col_path().size() 
== 1 (2 vs. 1)
{code}

While the following is working:
{code:none}
select readable_metrics from 
functional_parquet.iceberg_query_metadata.data_files;
{code}
this fails:
{code:none}
select readable_metrics.i from 
functional_parquet.iceberg_query_metadata.data_files;
{code}
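
For orientation, the nested layout that produces the multi-part column path can be inspected with DESCRIBE on the metadata table (supported per IMPALA-12495, resolved above); a minimal sketch:

{code:none}
-- readable_metrics is a STRUCT, so a member reference such as
-- readable_metrics.i resolves to a column path with more than one element,
-- which is what the DCHECK on slot_desc->col_path().size() rejects.
describe functional_parquet.iceberg_query_metadata.data_files;
{code}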




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work stopped] (IMPALA-12651) Add support to BINARY type Iceberg Metadata table columns

2024-01-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12651 stopped by Tamas Mate.
---
> Add support to BINARY type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12651
> URL: https://issues.apache.org/jira/browse/IMPALA-12651
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Impala should be able to read BINARY type columns from Iceberg metadata 
> tables as strings; additionally, this should be allowed when reading these 
> types from within complex types.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12610) Add support to ARRAY type Iceberg Metadata table columns

2024-01-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12610 started by Tamas Mate.
---
> Add support to ARRAY type Iceberg Metadata table columns
> 
>
> Key: IMPALA-12610
> URL: https://issues.apache.org/jira/browse/IMPALA-12610
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> ARRAY type columns are currently filled with NULLs; we should populate them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12651) Add support to BINARY type Iceberg Metadata table columns

2024-01-02 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12651 started by Tamas Mate.
---
> Add support to BINARY type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12651
> URL: https://issues.apache.org/jira/browse/IMPALA-12651
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Impala should be able to read BINARY type columns from Iceberg metadata 
> tables as strings; additionally, this should be allowed when reading these 
> types from within complex types.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-10947) SQL support for querying Iceberg metadata

2024-01-02 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-10947:

Epic Link: (was: IMPALA-10149)

> SQL support for querying Iceberg metadata
> -
>
> Key: IMPALA-10947
> URL: https://issues.apache.org/jira/browse/IMPALA-10947
> Project: IMPALA
>  Issue Type: Epic
>  Components: Frontend
>Reporter: Zoltán Borók-Nagy
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> HIVE-25457 added support for querying Iceberg table metadata to Hive.
> They support the following syntax:
> SELECT * FROM default.iceberg_table.history;
> Spark uses the same syntax: https://iceberg.apache.org/spark-queries/#history
> Other than "history", the following metadata tables are available in Iceberg:
> * ENTRIES,
> * FILES,
> * HISTORY,
> * SNAPSHOTS,
> * MANIFESTS,
> * PARTITIONS,
> * ALL_DATA_FILES,
> * ALL_MANIFESTS,
> * ALL_ENTRIES
> Impala currently only supports "DESCRIBE HISTORY <table>". The above SELECT 
> syntax would be more convenient for users and more flexible, as they could 
> easily define filters in WHERE clauses. It would also make Impala consistent 
> with other engines.
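
For illustration, a filtered metadata query under the proposed syntax could 
look like the following (a sketch only; {{snapshot_id}}, {{operation}} and 
{{committed_at}} are columns of the Iceberg snapshots metadata table):
{code:none}
SELECT snapshot_id, operation
FROM default.iceberg_table.snapshots
WHERE committed_at > '2023-01-01';
{code}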



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-10947) SQL support for querying Iceberg metadata

2024-01-02 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-10947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-10947:

 Epic Link: IMPALA-10149
Issue Type: Epic  (was: New Feature)

> SQL support for querying Iceberg metadata
> -
>
> Key: IMPALA-10947
> URL: https://issues.apache.org/jira/browse/IMPALA-10947
> Project: IMPALA
>  Issue Type: Epic
>  Components: Frontend
>Reporter: Zoltán Borók-Nagy
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> HIVE-25457 added support for querying Iceberg table metadata to Hive.
> They support the following syntax:
> SELECT * FROM default.iceberg_table.history;
> Spark uses the same syntax: https://iceberg.apache.org/spark-queries/#history
> Other than "history", the following metadata tables are available in Iceberg:
> * ENTRIES,
> * FILES,
> * HISTORY,
> * SNAPSHOTS,
> * MANIFESTS,
> * PARTITIONS,
> * ALL_DATA_FILES,
> * ALL_MANIFESTS,
> * ALL_ENTRIES
> Impala currently only supports "DESCRIBE HISTORY <table>". The above SELECT 
> syntax would be more convenient for users and more flexible, as they could 
> easily define filters in WHERE clauses. It would also make Impala consistent 
> with other engines.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-11853) Overlapping text in CSS documentation

2024-01-02 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-11853.
-
Resolution: Fixed

> Overlapping text in CSS documentation
> -
>
> Key: IMPALA-11853
> URL: https://issues.apache.org/jira/browse/IMPALA-11853
> Project: IMPALA
>  Issue Type: Documentation
>Reporter: Daniel Becker
>Assignee: Tamas Mate
>Priority: Major
> Attachments: screenshot-1.png
>
>
> In the new CSS enabled documentation (introduced in 
> [IMPALA-11676|https://issues.apache.org/jira/browse/IMPALA-11676]) , on some 
> pages (for example 
> https://impala.apache.org/docs/build/html/topics/impala_query_options.html) 
> the text in the left column overlaps with the main text.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12205) Add support to STRUCT type Iceberg Metadata table columns

2023-12-19 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12205.
-
Fix Version/s: Impala 4.4.0
   Resolution: Fixed

> Add support to STRUCT type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
> Fix For: Impala 4.4.0
>
>
> Metadata table columns can be structs as well; this Jira is to extend the 
> type support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the select 
> list; this should be revisited.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12651) Add support to BINARY type Iceberg Metadata table columns

2023-12-18 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12651:
---

 Summary: Add support to BINARY type Iceberg Metadata table columns
 Key: IMPALA-12651
 URL: https://issues.apache.org/jira/browse/IMPALA-12651
 Project: IMPALA
  Issue Type: Sub-task
  Components: Backend, Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate


Impala should be able to read BINARY type columns from Iceberg metadata tables 
as strings; additionally, this should be allowed when reading these types from 
within complex types.
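
For illustration, the goal is that a query like the one below would return the 
BINARY column as a STRING (a sketch only; {{key_metadata}} is a BINARY column 
of the Iceberg files metadata table):
{code:none}
SELECT file_path, key_metadata
FROM functional_parquet.iceberg_query_metadata.files;
{code}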



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12610) Add support to ARRAY type Iceberg Metadata table columns

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12610:

Labels: impala-iceberg  (was: )

> Add support to ARRAY type Iceberg Metadata table columns
> 
>
> Key: IMPALA-12610
> URL: https://issues.apache.org/jira/browse/IMPALA-12610
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> ARRAY type columns are currently filled with NULLs; we should populate them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12610) Add support to ARRAY type Iceberg Metadata table columns

2023-12-08 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12610:
---

 Summary: Add support to ARRAY type Iceberg Metadata table columns
 Key: IMPALA-12610
 URL: https://issues.apache.org/jira/browse/IMPALA-12610
 Project: IMPALA
  Issue Type: Sub-task
  Components: Backend, Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate


ARRAY type columns are currently filled with NULLs; we should populate them.
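
For illustration, a query like the one below should then return the array 
values instead of NULLs (a sketch only; {{equality_ids}} is an ARRAY<INT> 
column of the Iceberg files metadata table):
{code:none}
SELECT file_path, equality_ids
FROM functional_parquet.iceberg_query_metadata.files;
{code}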



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12609:
---

Assignee: Tamas Mate

> Implement SHOW TABLES IN statement to list Iceberg Metadata tables
> --
>
> Key: IMPALA-12609
> URL: https://issues.apache.org/jira/browse/IMPALA-12609
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Minor
>  Labels: impala-iceberg
>
> The {{SHOW TABLES IN}} statement could be used to list all the available 
> metadata tables of an Iceberg table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12611) Add support to MAP type Iceberg Metadata table columns

2023-12-08 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12611:
---

 Summary: Add support to MAP type Iceberg Metadata table columns
 Key: IMPALA-12611
 URL: https://issues.apache.org/jira/browse/IMPALA-12611
 Project: IMPALA
  Issue Type: Sub-task
  Components: Backend, Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate


MAP type columns are currently filled with NULLs; we should populate them.
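
For illustration, a query like the one below should then return the map values 
instead of NULLs (a sketch only; {{lower_bounds}} is a MAP<INT, BINARY> column 
of the Iceberg files metadata table):
{code:none}
SELECT file_path, lower_bounds
FROM functional_parquet.iceberg_query_metadata.files;
{code}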



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12609:
---

Assignee: (was: Tamas Mate)

> Implement SHOW TABLES IN statement to list Iceberg Metadata tables
> --
>
> Key: IMPALA-12609
> URL: https://issues.apache.org/jira/browse/IMPALA-12609
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Frontend
>Reporter: Tamas Mate
>Priority: Minor
>  Labels: impala-iceberg
> Fix For: Impala 4.4.0
>
>
> The {{SHOW TABLES IN}} statement could be used to list all the available 
> metadata tables of an Iceberg table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12609:

Component/s: Frontend
 (was: fe)

> Implement SHOW TABLES IN statement to list Iceberg Metadata tables
> --
>
> Key: IMPALA-12609
> URL: https://issues.apache.org/jira/browse/IMPALA-12609
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Frontend
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Minor
>  Labels: impala-iceberg
> Fix For: Impala 4.4.0
>
>
> The {{SHOW TABLES IN}} statement could be used to list all the available 
> metadata tables of an Iceberg table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12609:

Labels: impala-iceberg  (was: )

> Implement SHOW TABLES IN statement to list Iceberg Metadata tables
> --
>
> Key: IMPALA-12609
> URL: https://issues.apache.org/jira/browse/IMPALA-12609
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: fe
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Minor
>  Labels: impala-iceberg
> Fix For: Impala 4.4.0
>
>
> The {{SHOW TABLES IN}} statement could be used to list all the available 
> metadata tables of an Iceberg table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12609:

Affects Version/s: Impala 4.4.0

> Implement SHOW TABLES IN statement to list Iceberg Metadata tables
> --
>
> Key: IMPALA-12609
> URL: https://issues.apache.org/jira/browse/IMPALA-12609
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Priority: Minor
>  Labels: impala-iceberg
>
> The {{SHOW TABLES IN}} statement could be used to list all the available 
> metadata tables of an Iceberg table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12609:

Fix Version/s: (was: Impala 4.4.0)

> Implement SHOW TABLES IN statement to list Iceberg Metadata tables
> --
>
> Key: IMPALA-12609
> URL: https://issues.apache.org/jira/browse/IMPALA-12609
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Frontend
>Reporter: Tamas Mate
>Priority: Minor
>  Labels: impala-iceberg
>
> The {{SHOW TABLES IN}} statement could be used to list all the available 
> metadata tables of an Iceberg table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12609) Implement SHOW TABLES IN statement to list Iceberg Metadata tables

2023-12-08 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12609:
---

 Summary: Implement SHOW TABLES IN statement to list Iceberg 
Metadata tables
 Key: IMPALA-12609
 URL: https://issues.apache.org/jira/browse/IMPALA-12609
 Project: IMPALA
  Issue Type: Sub-task
  Components: fe
Reporter: Tamas Mate
Assignee: Tamas Mate
 Fix For: Impala 4.4.0


The {{SHOW TABLES IN}} statement could be used to list all the available 
metadata tables of an Iceberg table.
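
For illustration, a sketch of the intended usage; the result would list the 
Iceberg metadata table names (snapshots, history, files, and so on), though 
the exact output format is not decided here:
{code:none}
SHOW TABLES IN functional_parquet.iceberg_query_metadata;
{code}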




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12205) Add support to STRUCT type Iceberg Metadata table columns

2023-12-06 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12205:

Summary: Add support to STRUCT type Iceberg Metadata table columns  (was: 
Add support to nested type Iceberg Metadata table columns)

> Add support to STRUCT type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Metadata table columns can be structs as well; this Jira is to extend the 
> type support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the select 
> list; this should be revisited.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12527) test_metadata_tables could occasionally fail in the s3 build

2023-11-13 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12527.
-
Fix Version/s: Impala 4.4.0
   Resolution: Fixed

> test_metadata_tables could occasionally fail in the s3 build
> 
>
> Key: IMPALA-12527
> URL: https://issues.apache.org/jira/browse/IMPALA-12527
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Fang-Yu Rao
>Assignee: Tamas Mate
>Priority: Major
>  Labels: broken-build, flaky-test, impala-iceberg
> Fix For: Impala 4.4.0
>
>
> We found that 
> [test_metadata_tables()|https://github.infra.cloudera.com/CDH/Impala/blame/cdw-master-staging/tests/query_test/test_iceberg.py#L1219],
>  which runs 
> [iceberg-metadata-tables.test|https://github.com/apache/impala/blob/master/testdata/workloads/functional-query/queries/QueryTest/iceberg-metadata-tables.test],
>  could occasionally fail with the following error message.
> It looks like the actual result does not match the expected result for some 
> columns.
> Stacktrace
> {code}
> query_test/test_iceberg.py:1226: in test_metadata_tables
> '$OVERWRITE_SNAPSHOT_TS': str(overwrite_snapshot_ts.data[0])})
> common/impala_test_suite.py:751: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:587: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:487: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:296: in verify_query_result_is_equal
> assert expected_results == actual_results
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/7d479ffb82bfffd3-7ce667e5_544607964_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/ab4ffd0d75a5a68d-13da0831_1541521750_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/b04d1095845359f5-f0799bd0_1209897284_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:1,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'NULL',NULL
>  != 
> 1,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/delete-1b45db885b2bdd56-4023218d0002_1697110314_data.0.parq','PARQUET',0,1,1531,'NULL',NULL
> {code}
> Specifically, it seems the values of the second-to-last column differ from 
> the expected values in some rows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12495) Describe command for Iceberg metadata tables

2023-11-13 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12495:

Labels: impala-iceberg  (was: )

> Describe command for Iceberg metadata tables
> 
>
> Key: IMPALA-12495
> URL: https://issues.apache.org/jira/browse/IMPALA-12495
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> We should provide a statement to describe Iceberg metadata tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12495) Describe command for Iceberg metadata tables

2023-11-13 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12495 started by Tamas Mate.
---
> Describe command for Iceberg metadata tables
> 
>
> Key: IMPALA-12495
> URL: https://issues.apache.org/jira/browse/IMPALA-12495
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend, Frontend
>Affects Versions: Impala 4.4.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> We should provide a statement to describe Iceberg metadata tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-12537) Iceberg returns a deleted file's name when INPUT__FILE__NAME is in the select list

2023-11-04 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate closed IMPALA-12537.
---
Resolution: Won't Fix

Thanks [~boroknagyz]!

I think I will go with SkipIf for now, it feels more reasonable.

Closing this with won't fix.

> Iceberg returns a deleted file's name when INPUT__FILE__NAME is in the select 
> list
> --
>
> Key: IMPALA-12537
> URL: https://issues.apache.org/jira/browse/IMPALA-12537
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 4.3.0
>Reporter: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> On S3, Impala returns 3 rows when {{INPUT__FILE__NAME}} is specified, but it 
> only returns the 2 data file names on HDFS.
> The iceberg_query_metadata table originally had 3 records (i=1, i=2, i=3) and 
> the second one (i=2) was deleted; I observed the following test failure:
> {code:none}
> 20:28:08 SELECT i, INPUT__FILE__NAME from 
> functional_parquet.iceberg_query_metadata tbl;
> 20:28:08 
> 20:28:08 -- 2023-11-02 12:14:08,868 INFO MainThread: Started query 
> 4448ab74f3a639a5:2b8cf6ea
> 20:28:08 -- 2023-11-02 12:14:08,997 ERROR MainThread: Comparing 
> QueryTestResults (expected vs actual):
> 20:28:08 
> row_regex:[1-9]\d*|0,'.*/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq'
>  == 
> 1,'s3a://impala-test-uswest2-3/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/ed4288065b402e80-c705c474_264336845_data.0.parq'
> 20:28:08 
> row_regex:[1-9]\d*|0,'.*/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq'
>  == 
> 2,'s3a://impala-test-uswest2-3/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/0a4510a5e8578659-1607b640_435794406_data.0.parq'
> 20:28:08 Number of rows returned (expected vs actual): 2 != 3
> {code}
> cc.:[~boroknagyz], [~gaborkaszab] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12537) Iceberg returns a deleted file's name when INPUT__FILE__NAME is in the select list

2023-11-03 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12537:
---

 Summary: Iceberg returns a deleted file's name when 
INPUT__FILE__NAME is in the select list
 Key: IMPALA-12537
 URL: https://issues.apache.org/jira/browse/IMPALA-12537
 Project: IMPALA
  Issue Type: Bug
  Components: Backend
Affects Versions: Impala 4.3.0
Reporter: Tamas Mate


On S3, Impala returns 3 rows when {{INPUT__FILE__NAME}} is specified, but it 
only returns the 2 data file names on HDFS.
The iceberg_query_metadata table originally had 3 records (i=1, i=2, i=3) and 
the second one (i=2) was deleted; I observed the following test failure:
{code:none}
20:28:08 SELECT i, INPUT__FILE__NAME from 
functional_parquet.iceberg_query_metadata tbl;
20:28:08 
20:28:08 -- 2023-11-02 12:14:08,868 INFO MainThread: Started query 
4448ab74f3a639a5:2b8cf6ea
20:28:08 -- 2023-11-02 12:14:08,997 ERROR MainThread: Comparing 
QueryTestResults (expected vs actual):
20:28:08 
row_regex:[1-9]\d*|0,'.*/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq'
 == 
1,'s3a://impala-test-uswest2-3/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/ed4288065b402e80-c705c474_264336845_data.0.parq'
20:28:08 
row_regex:[1-9]\d*|0,'.*/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq'
 == 
2,'s3a://impala-test-uswest2-3/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/0a4510a5e8578659-1607b640_435794406_data.0.parq'
20:28:08 Number of rows returned (expected vs actual): 2 != 3
{code}
cc.:[~boroknagyz], [~gaborkaszab] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Comment Edited] (IMPALA-12428) TestIcebergTable.test_iceberg_negative and TestIceberg.test_iceberg_query failed

2023-11-02 Thread Tamas Mate (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782336#comment-17782336
 ] 

Tamas Mate edited comment on IMPALA-12428 at 11/2/23 9:44 PM:
--

This was due to the snapshotting behaviour of our builds.


was (Author: tmate):
This was due to the snapshotting behaviour in our builds.

> TestIcebergTable.test_iceberg_negative and TestIceberg.test_iceberg_query 
> failed
> 
>
> Key: IMPALA-12428
> URL: https://issues.apache.org/jira/browse/IMPALA-12428
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Wenzhe Zhou
>Assignee: Tamás Máté
>Priority: Major
>
> The following TestIcebergTable.test_iceberg_negative and 
> TestIceberg.test_iceberg_query tests failed after the patch for IMPALA-12407 
> (Add test table with Iceberg Equality deletes) was merged.
> {code:java}
> query_test.test_iceberg.TestIcebergTable.test_iceberg_negative[protocol: 
> beeswax | exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-12428) TestIcebergTable.test_iceberg_negative and TestIceberg.test_iceberg_query failed

2023-11-02 Thread Tamas Mate (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782336#comment-17782336
 ] 

Tamas Mate commented on IMPALA-12428:
-

This was due to the snapshotting behaviour in our builds.

> TestIcebergTable.test_iceberg_negative and TestIceberg.test_iceberg_query 
> failed
> 
>
> Key: IMPALA-12428
> URL: https://issues.apache.org/jira/browse/IMPALA-12428
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Wenzhe Zhou
>Assignee: Tamás Máté
>Priority: Major
>
> The following TestIcebergTable.test_iceberg_negative and 
> TestIceberg.test_iceberg_query tests failed after the patch for IMPALA-12407 
> (Add test table with Iceberg Equality deletes) was merged.
> {code:java}
> query_test.test_iceberg.TestIcebergTable.test_iceberg_negative[protocol: 
> beeswax | exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12428) TestIcebergTable.test_iceberg_negative and TestIceberg.test_iceberg_query failed

2023-11-02 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12428.
-
Resolution: Won't Fix

> TestIcebergTable.test_iceberg_negative and TestIceberg.test_iceberg_query 
> failed
> 
>
> Key: IMPALA-12428
> URL: https://issues.apache.org/jira/browse/IMPALA-12428
> Project: IMPALA
>  Issue Type: Bug
>Reporter: Wenzhe Zhou
>Assignee: Tamás Máté
>Priority: Major
>
> The following TestIcebergTable.test_iceberg_negative and 
> TestIceberg.test_iceberg_query tests failed after the patch for IMPALA-12407 
> (Add test table with Iceberg Equality deletes) was merged.
> {code:java}
> query_test.test_iceberg.TestIcebergTable.test_iceberg_negative[protocol: 
> beeswax | exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: 
> parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': None, 'exec_single_node_rows_threshold': 
> 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> '-1:OPEN:SET_DENY_RESERVATION_PROBABILITY@1.0', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> query_test.test_scanners.TestIceberg.test_iceberg_query[protocol: beeswax | 
> exec_option: {'test_replan': 1, 'batch_size': 0, 'num_nodes': 0, 
> 'disable_codegen_rows_threshold': 0, 'disable_codegen': False, 
> 'abort_on_error': 1, 'debug_action': 
> 'HDFS_SCANNER_THREAD_CHECK_SOFT_MEM_LIMIT:FAIL@0.5', 
> 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12205) Add support to nested type Iceberg Metadata table columns

2023-10-30 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12205 started by Tamas Mate.
---
> Add support to nested type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Metadata table columns can be structs as well; this Jira is to extend the 
> type support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the select 
> list; this should be revisited.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12527) test_metadata_tables could occasionally fail in the s3 build

2023-10-30 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12527 started by Tamas Mate.
---
> test_metadata_tables could occasionally fail in the s3 build
> 
>
> Key: IMPALA-12527
> URL: https://issues.apache.org/jira/browse/IMPALA-12527
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Fang-Yu Rao
>Assignee: Tamas Mate
>Priority: Major
>  Labels: broken-build, flaky-test
>
> We found that 
> [test_metadata_tables()|https://github.infra.cloudera.com/CDH/Impala/blame/cdw-master-staging/tests/query_test/test_iceberg.py#L1219],
>  which runs 
> [iceberg-metadata-tables.test|https://github.com/apache/impala/blob/master/testdata/workloads/functional-query/queries/QueryTest/iceberg-metadata-tables.test],
>  could occasionally fail with the following error message.
> It looks like the actual result does not match the expected result for some 
> columns.
> Stacktrace
> {code}
> query_test/test_iceberg.py:1226: in test_metadata_tables
> '$OVERWRITE_SNAPSHOT_TS': str(overwrite_snapshot_ts.data[0])})
> common/impala_test_suite.py:751: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:587: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:487: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:296: in verify_query_result_is_equal
> assert expected_results == actual_results
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/7d479ffb82bfffd3-7ce667e5_544607964_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/ab4ffd0d75a5a68d-13da0831_1541521750_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/b04d1095845359f5-f0799bd0_1209897284_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:1,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'NULL',NULL
>  != 
> 1,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/delete-1b45db885b2bdd56-4023218d0002_1697110314_data.0.parq','PARQUET',0,1,1531,'NULL',NULL
> {code}
> Specifically, it seems the values of the second-to-last column differ from 
> the expected values in some rows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12527) test_metadata_tables could occasionally fail in the s3 build

2023-10-30 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12527:

Parent: IMPALA-10947
Issue Type: Sub-task  (was: Bug)

> test_metadata_tables could occasionally fail in the s3 build
> 
>
> Key: IMPALA-12527
> URL: https://issues.apache.org/jira/browse/IMPALA-12527
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Fang-Yu Rao
>Assignee: Tamas Mate
>Priority: Major
>  Labels: broken-build, flaky-test
>
> We found that 
> [test_metadata_tables()|https://github.infra.cloudera.com/CDH/Impala/blame/cdw-master-staging/tests/query_test/test_iceberg.py#L1219],
>  which runs 
> [iceberg-metadata-tables.test|https://github.com/apache/impala/blob/master/testdata/workloads/functional-query/queries/QueryTest/iceberg-metadata-tables.test],
>  could occasionally fail with the following error message.
> It looks like the actual result does not match the expected result for some 
> columns.
> Stacktrace
> {code}
> query_test/test_iceberg.py:1226: in test_metadata_tables
> '$OVERWRITE_SNAPSHOT_TS': str(overwrite_snapshot_ts.data[0])})
> common/impala_test_suite.py:751: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:587: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:487: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:296: in verify_query_result_is_equal
> assert expected_results == actual_results
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/7d479ffb82bfffd3-7ce667e5_544607964_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/ab4ffd0d75a5a68d-13da0831_1541521750_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/b04d1095845359f5-f0799bd0_1209897284_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:1,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'NULL',NULL
>  != 
> 1,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/delete-1b45db885b2bdd56-4023218d0002_1697110314_data.0.parq','PARQUET',0,1,1531,'NULL',NULL
> {code}
> Specifically, it seems the values of the second-to-last column differ from 
> the expected values in some rows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12527) test_metadata_tables could occasionally fail in the s3 build

2023-10-30 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12527:

Labels: broken-build flaky-test impala-iceberg  (was: broken-build 
flaky-test)

> test_metadata_tables could occasionally fail in the s3 build
> 
>
> Key: IMPALA-12527
> URL: https://issues.apache.org/jira/browse/IMPALA-12527
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Fang-Yu Rao
>Assignee: Tamas Mate
>Priority: Major
>  Labels: broken-build, flaky-test, impala-iceberg
>
> We found that 
> [test_metadata_tables()|https://github.infra.cloudera.com/CDH/Impala/blame/cdw-master-staging/tests/query_test/test_iceberg.py#L1219],
>  which runs 
> [iceberg-metadata-tables.test|https://github.com/apache/impala/blob/master/testdata/workloads/functional-query/queries/QueryTest/iceberg-metadata-tables.test],
>  could occasionally fail with the following error message.
> It looks like the actual result does not match the expected result for some 
> columns.
> Stacktrace
> {code}
> query_test/test_iceberg.py:1226: in test_metadata_tables
> '$OVERWRITE_SNAPSHOT_TS': str(overwrite_snapshot_ts.data[0])})
> common/impala_test_suite.py:751: in run_test_case
> self.__verify_results_and_errors(vector, test_section, result, use_db)
> common/impala_test_suite.py:587: in __verify_results_and_errors
> replace_filenames_with_placeholder)
> common/test_result_verifier.py:487: in verify_raw_results
> VERIFIER_MAP[verifier](expected, actual)
> common/test_result_verifier.py:296: in verify_query_result_is_equal
> assert expected_results == actual_results
> E   assert Comparing QueryTestResults (expected vs actual):
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/7d479ffb82bfffd3-7ce667e5_544607964_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/ab4ffd0d75a5a68d-13da0831_1541521750_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:0,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'',0
>  != 
> 0,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/b04d1095845359f5-f0799bd0_1209897284_data.0.parq','PARQUET',0,1,351,'NULL',0
> E 
> row_regex:1,'s3a://impala-test-uswest2-2/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/.*.parq','PARQUET',0,1,[1-9]\d*|0,'NULL',NULL
>  != 
> 1,'/test-warehouse/iceberg_test/hadoop_catalog/ice/iceberg_query_metadata/data/delete-1b45db885b2bdd56-4023218d0002_1697110314_data.0.parq','PARQUET',0,1,1531,'NULL',NULL
> {code}
> Specifically, it seems the values of the second-to-last column differ from 
> the expected values in some rows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-11996) Iceberg Metadata querying executor change

2023-10-30 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-11996.
-
Fix Version/s: Impala 4.4.0
   (was: Impala 4.3.0)
   Resolution: Fixed

> Iceberg Metadata querying executor change
> -
>
> Key: IMPALA-11996
> URL: https://issues.apache.org/jira/browse/IMPALA-11996
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Affects Versions: Impala 4.2.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
> Fix For: Impala 4.4.0
>
>
> After the parser and planner changes are ready, the executor should execute 
> the created plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12205) Add support to nested type Iceberg Metadata table columns

2023-10-24 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12205:

Summary: Add support to nested type Iceberg Metadata table columns  (was: 
Add support to struct Iceberg Metadata table columns)

> Add support to nested type Iceberg Metadata table columns
> -
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Metadata table columns can be structs as well; this Jira is to extend the 
> type support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the select 
> list; this should be revisited.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12497) Use C++ Avro implementation instead of C

2023-10-16 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12497:
---

 Summary: Use C++ Avro implementation instead of C
 Key: IMPALA-12497
 URL: https://issues.apache.org/jira/browse/IMPALA-12497
 Project: IMPALA
  Issue Type: Sub-task
  Components: Backend
Reporter: Tamas Mate
Assignee: Tamas Mate


The C++ Avro library is part of the native toolchain; we should use it instead 
of the C implementation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12497) Use C++ Avro implementation instead of C

2023-10-16 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12497:

Labels: impala-iceberg  (was: iceberg)

> Use C++ Avro implementation instead of C
> 
>
> Key: IMPALA-12497
> URL: https://issues.apache.org/jira/browse/IMPALA-12497
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> The C++ Avro library is part of the native toolchain; we should use it 
> instead of the C implementation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Reopened] (IMPALA-11996) Iceberg Metadata querying executor change

2023-10-12 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reopened IMPALA-11996:
-

> Iceberg Metadata querying executor change
> -
>
> Key: IMPALA-11996
> URL: https://issues.apache.org/jira/browse/IMPALA-11996
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Backend
>Affects Versions: Impala 4.2.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
> Fix For: Impala 4.3.0
>
>
> After the parser and planner changes are ready, the executor should execute 
> the created plan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12495) Describe command for Iceberg metadata tables

2023-10-12 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12495:
---

 Summary: Describe command for Iceberg metadata tables
 Key: IMPALA-12495
 URL: https://issues.apache.org/jira/browse/IMPALA-12495
 Project: IMPALA
  Issue Type: Sub-task
  Components: Backend, Frontend
Affects Versions: Impala 4.4.0
Reporter: Tamas Mate
Assignee: Tamas Mate


We should provide a statement to describe metadata tables.
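One possible shape, sketched here only as an assumption since the syntax is not specified in this issue, reusing the metadata table path notation from the querying work:
{code:sql}
-- Hypothetical syntax: describe a metadata table through the same
-- <db>.<table>.<metadata table> path that is used to query it.
DESCRIBE functional_parquet.iceberg_query_metadata.snapshots;
{code}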



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12279) Bump CDP_BUILD_NUMBER for Iceberg 1.3

2023-10-04 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12279.
-
Fix Version/s: Impala 4.4.0
   Resolution: Fixed

> Bump CDP_BUILD_NUMBER for Iceberg 1.3
> -
>
> Key: IMPALA-12279
> URL: https://issues.apache.org/jira/browse/IMPALA-12279
> Project: IMPALA
>  Issue Type: Task
>  Components: fe
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
> Fix For: Impala 4.4.0
>
>
> Iceberg 1.3 is available in the CDP build; we should bump the CDP build 
> version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-11876) Support 'fixed' data type in AVRO

2023-09-06 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-11876:
---

Assignee: Tamas Mate  (was: Noemi Pap-Takacs)

> Support 'fixed' data type in AVRO
> -
>
> Key: IMPALA-11876
> URL: https://issues.apache.org/jira/browse/IMPALA-11876
> Project: IMPALA
>  Issue Type: New Feature
>  Components: Backend
>Reporter: Noemi Pap-Takacs
>Assignee: Tamas Mate
>Priority: Major
>  Labels: avro, impala-iceberg
>
> Impala supports the 'decimal' type in AVRO. 'Decimal' is a logical type that 
> can annotate either the 'bytes' or 'fixed' type underneath. Impala can read 
> 'bytes' but not 'fixed'.
> Iceberg writes the 'decimal' type with an underlying 'fixed' type. This means 
> that Impala is currently unable to support 'decimal' in AVRO tables written by 
> Iceberg. In order to fully support all implementations of 'decimal', the 
> 'fixed' type must be supported.
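For illustration, the two schema shapes in question (the field name and size below are examples, and the comment lines are annotations, not valid Avro JSON):
{code:none}
// 'decimal' annotating 'bytes' -- readable by Impala today:
{"type": "bytes", "logicalType": "decimal", "precision": 10, "scale": 2}

// 'decimal' annotating 'fixed' -- what Iceberg writes, not yet readable:
{"type": "fixed", "name": "decimal_10_2", "size": 5,
 "logicalType": "decimal", "precision": 10, "scale": 2}
{code}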



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-12402) Add some configurations for CatalogdMetaProvider's cache_

2023-08-29 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12402:
---

Assignee: (was: Tamas Mate)

> Add some configurations for CatalogdMetaProvider's cache_
> -
>
> Key: IMPALA-12402
> URL: https://issues.apache.org/jira/browse/IMPALA-12402
> Project: IMPALA
>  Issue Type: Improvement
>  Components: fe
>Reporter: Maxwell Guo
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 
> 0001-IMPALA-12402-Add-some-configurations-for-CatalogdMet.patch
>
>
> When the cluster contains many databases and tables, for example more than 
> 10 tables, restarting the impalad means the CatalogdMetaProvider's local 
> cache_ has to go through a loading process. 
> Google's Guava cache sets its concurrencyLevel to 4 by default, but with many 
> tables the loading process takes longer and the probability of lock contention 
> increases, see 
> [here|https://github.com/google/guava/blob/master/guava/src/com/google/common/cache/CacheBuilder.java#L437].
> So we propose to add some configurations here; the first is the concurrency 
> level of the cache.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-12402) Add some configurations for CatalogdMetaProvider's cache_

2023-08-29 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12402:
---

Assignee: Tamas Mate

> Add some configurations for CatalogdMetaProvider's cache_
> -
>
> Key: IMPALA-12402
> URL: https://issues.apache.org/jira/browse/IMPALA-12402
> Project: IMPALA
>  Issue Type: Improvement
>  Components: fe
>Reporter: Maxwell Guo
>Assignee: Tamas Mate
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 
> 0001-IMPALA-12402-Add-some-configurations-for-CatalogdMet.patch
>
>
> When the cluster contains many databases and tables, for example more than 
> 10 tables, restarting the impalad means the CatalogdMetaProvider's local 
> cache_ has to go through a loading process. 
> Google's Guava cache sets its concurrencyLevel to 4 by default, but with many 
> tables the loading process takes longer and the probability of lock contention 
> increases, see 
> [here|https://github.com/google/guava/blob/master/guava/src/com/google/common/cache/CacheBuilder.java#L437].
> So we propose to add some configurations here; the first is the concurrency 
> level of the cache.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Work started] (IMPALA-12407) Create a simple test table with equality deletes

2023-08-25 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on IMPALA-12407 started by Tamas Mate.
---
> Create a simple test table with equality deletes
> 
>
> Key: IMPALA-12407
> URL: https://issues.apache.org/jira/browse/IMPALA-12407
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Infrastructure
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> The current table with equality deletes is a modified positional delete table; 
> the actual delete files are not equality delete files. As a first step, we 
> should either create a delete file manually or use a service that can create 
> one, for example Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12407) Create a simple test table with equality deletes

2023-08-25 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12407:

Component/s: Infrastructure

> Create a simple test table with equality deletes
> 
>
> Key: IMPALA-12407
> URL: https://issues.apache.org/jira/browse/IMPALA-12407
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: Infrastructure
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> The current table with equality deletes is a modified positional delete table; 
> the actual delete files are not equality delete files. As a first step, we 
> should either create a delete file manually or use a service that can create 
> one, for example Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12407) Create a simple test table with equality deletes

2023-08-25 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12407:

Labels: impala-iceberg  (was: )

> Create a simple test table with equality deletes
> 
>
> Key: IMPALA-12407
> URL: https://issues.apache.org/jira/browse/IMPALA-12407
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> The current table with equality deletes is a modified positional delete table; 
> the actual delete files are not equality delete files. As a first step, we 
> should either create a delete file manually or use a service that can create 
> one, for example Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12407) Create a simple test table with equality deletes

2023-08-25 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12407:
---

 Summary: Create a simple test table with equality deletes
 Key: IMPALA-12407
 URL: https://issues.apache.org/jira/browse/IMPALA-12407
 Project: IMPALA
  Issue Type: Sub-task
Reporter: Tamas Mate
Assignee: Tamas Mate


The current table with equality deletes is a modified positional delete table; 
the actual delete files are not equality delete files. As a first step, we 
should either create a delete file manually or use a service that can create 
one, for example Flink.
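A sketch of the Flink route, assuming Flink SQL with an Iceberg catalog already configured (the table name is illustrative and catalog options are omitted); Flink emits equality delete files for primary-keyed Iceberg v2 tables in upsert mode:
{code:sql}
-- Illustrative Flink SQL, catalog options omitted.
CREATE TABLE ice_eq_deletes (
  id INT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'format-version' = '2',
  'write.upsert.enabled' = 'true'
);
-- Upserting an existing key makes Flink write an equality delete file
-- for the old row followed by an insert of the new one.
INSERT INTO ice_eq_deletes VALUES (1, 'updated name');
{code}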



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Assigned] (IMPALA-12280) Add storage_handler to Atlas lineage log

2023-08-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate reassigned IMPALA-12280:
---

Assignee: (was: Tamas Mate)

> Add storage_handler to Atlas lineage log
> 
>
> Key: IMPALA-12280
> URL: https://issues.apache.org/jira/browse/IMPALA-12280
> Project: IMPALA
>  Issue Type: New Feature
>  Components: fe
>Affects Versions: Impala 4.2.0
>Reporter: Tamas Mate
>Priority: Major
> Fix For: Impala 4.2.0
>
>
> The Atlas lineage report should have a {{storage_handler}} property as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-12263) Update CMake Avro module with C++ lib

2023-08-23 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate resolved IMPALA-12263.
-
Resolution: Fixed

> Update CMake Avro module with C++ lib
> -
>
> Key: IMPALA-12263
> URL: https://issues.apache.org/jira/browse/IMPALA-12263
> Project: IMPALA
>  Issue Type: Sub-task
>  Components: be
>Affects Versions: Impala 4.2.0
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
> Fix For: Impala 4.2.0
>
>
> The C++ Avro library is available in the toolchain and can be downloaded.
> We should update the FindAvro CMake module to use the C++ library when 
> USE_AVRO_CPP is true.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12205) Add support to struct Iceberg Metadata table columns

2023-08-21 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12205:

Description: 
Metadata table columns can be structs as well; this Jira is to extend the type 
support to structs, which will not be part of the executor change.

Complex types are hidden by default when not specified directly in the select 
list.

  was:Metadata table columns can be structs as well; this Jira is to extend the 
type support to structs, which will not be part of the executor change.


> Add support to struct Iceberg Metadata table columns
> 
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Metadata table columns can be structs as well; this Jira is to extend the type 
> support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the 
> select list.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12205) Add support to struct Iceberg Metadata table columns

2023-08-21 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12205:

Description: 
Metadata table columns can be structs as well; this Jira is to extend the type 
support to structs, which will not be part of the executor change.

Complex types are hidden by default when not specified directly in the select 
list; this should be revisited.

  was:
Metadata table columns can be structs as well; this Jira is to extend the type 
support to structs, which will not be part of the executor change.

Complex types are hidden by default when not specified directly in the select 
list,


> Add support to struct Iceberg Metadata table columns
> 
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Metadata table columns can be structs as well; this Jira is to extend the type 
> support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the select 
> list; this should be revisited.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12205) Add support to struct Iceberg Metadata table columns

2023-08-21 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12205:

Description: 
Metadata table columns can be structs as well; this Jira is to extend the type 
support to structs, which will not be part of the executor change.

Complex types are hidden by default when not specified directly in the select 
list,

  was:
Metadata table columns can be structs as well; this Jira is to extend the type 
support to structs, which will not be part of the executor change.

Complex types are hidden by default when not specified directly in the select 
list.


> Add support to struct Iceberg Metadata table columns
> 
>
> Key: IMPALA-12205
> URL: https://issues.apache.org/jira/browse/IMPALA-12205
> Project: IMPALA
>  Issue Type: Sub-task
>Reporter: Tamas Mate
>Assignee: Tamas Mate
>Priority: Major
>  Labels: impala-iceberg
>
> Metadata table columns can be structs as well; this Jira is to extend the type 
> support to structs, which will not be part of the executor change.
> Complex types are hidden by default when not specified directly in the select 
> list,



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12280) Add storage_handler to Atlas lineage log

2023-07-12 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12280:
---

 Summary: Add storage_handler to Atlas lineage log
 Key: IMPALA-12280
 URL: https://issues.apache.org/jira/browse/IMPALA-12280
 Project: IMPALA
  Issue Type: New Feature
  Components: fe
Affects Versions: Impala 4.2.0
Reporter: Tamas Mate
Assignee: Tamas Mate
 Fix For: Impala 4.2.0


The Atlas lineage report should have a {{storage_handler}} property as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-12279) Bump CDP_BUILD_NUMBER for Iceberg 1.3

2023-07-12 Thread Tamas Mate (Jira)
Tamas Mate created IMPALA-12279:
---

 Summary: Bump CDP_BUILD_NUMBER for Iceberg 1.3
 Key: IMPALA-12279
 URL: https://issues.apache.org/jira/browse/IMPALA-12279
 Project: IMPALA
  Issue Type: Task
  Components: fe
Reporter: Tamas Mate
Assignee: Tamas Mate


Iceberg 1.3 is available in the CDP build; we should bump the CDP build version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12266) Flaky TestIcebergTable.test_convert_table NPE

2023-07-05 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12266:

Attachment: impalad.6c0f48d9ce66.invalid-user.log.INFO.20230704-181940.1

> Flaky TestIcebergTable.test_convert_table NPE
> -
>
> Key: IMPALA-12266
> URL: https://issues.apache.org/jira/browse/IMPALA-12266
> Project: IMPALA
>  Issue Type: Bug
>  Components: fe
>Affects Versions: Impala 4.2.0
>Reporter: Tamas Mate
>Assignee: Gabor Kaszab
>Priority: Major
> Attachments: 
> catalogd.bd40020df22b.invalid-user.log.INFO.20230704-181939.1, 
> impalad.6c0f48d9ce66.invalid-user.log.INFO.20230704-181940.1
>
>
> TestIcebergTable.test_convert_table test failed in a recent verify job's 
> dockerised tests:
> https://jenkins.impala.io/job/ubuntu-16.04-dockerised-tests/7629
> {code:none}
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> EINNER EXCEPTION: 
> EMESSAGE: AnalysisException: Failed to load metadata for table: 
> 'parquet_nopartitioned'
> E   CAUSED BY: TableLoadingException: Could not load table 
> test_convert_table_cdba7383.parquet_nopartitioned from catalog
> E   CAUSED BY: TException: 
> TGetPartialCatalogObjectResponse(status:TStatus(status_code:GENERAL, 
> error_msgs:[NullPointerException: null]), lookup_status:OK)
> {code}
> {code:none}
> E0704 19:09:22.980131   833 JniUtil.java:183] 
> 7145c21173f2c47b:2579db55] Error in Getting partial catalog object of 
> TABLE:test_convert_table_cdba7383.parquet_nopartitioned. Time spent: 49ms
> I0704 19:09:22.980309   833 jni-util.cc:288] 
> 7145c21173f2c47b:2579db55] java.lang.NullPointerException
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.replaceTableIfUnchanged(CatalogServiceCatalog.java:2357)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.getOrLoadTable(CatalogServiceCatalog.java:2300)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.doGetPartialCatalogObject(CatalogServiceCatalog.java:3587)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3513)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3480)
>   at 
> org.apache.impala.service.JniCatalog.lambda$getPartialCatalogObject$11(JniCatalog.java:397)
>   at 
> org.apache.impala.service.JniCatalogOp.lambda$execAndSerialize$1(JniCatalogOp.java:90)
>   at org.apache.impala.service.JniCatalogOp.execOp(JniCatalogOp.java:58)
>   at 
> org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:89)
>   at 
> org.apache.impala.service.JniCatalogOp.execAndSerializeSilentStartAndFinish(JniCatalogOp.java:109)
>   at 
> org.apache.impala.service.JniCatalog.execAndSerializeSilentStartAndFinish(JniCatalog.java:238)
>   at 
> org.apache.impala.service.JniCatalog.getPartialCatalogObject(JniCatalog.java:396)
> I0704 19:09:22.980324   833 status.cc:129] 7145c21173f2c47b:2579db55] 
> NullPointerException: null
> @  0x1012f9f  impala::Status::Status()
> @  0x187f964  impala::JniUtil::GetJniExceptionMsg()
> @   0xfee920  impala::JniCall::Call<>()
> @   0xfccd0f  impala::Catalog::GetPartialCatalogObject()
> @   0xfb55a5  
> impala::CatalogServiceThriftIf::GetPartialCatalogObject()
> @   0xf7a691  
> impala::CatalogServiceProcessorT<>::process_GetPartialCatalogObject()
> @   0xf82151  impala::CatalogServiceProcessorT<>::dispatchCall()
> @   0xee330f  apache::thrift::TDispatchProcessor::process()
> @  0x1329246  
> apache::thrift::server::TAcceptQueueServer::Task::run()
> @  0x1315a89  impala::ThriftThread::RunRunnable()
> @  0x131773d  
> boost::detail::function::void_function_obj_invoker0<>::invoke()
> @  0x195ba8c  impala::Thread::SuperviseThread()
> @  0x195c895  boost::detail::thread_data<>::run()
> @  0x23a03a7  thread_proxy
> @ 0x7faaad2a66ba  start_thread
> @ 0x7f2c151d  clone
> E0704 19:09:23.006968   833 catalog-server.cc:278] 
> 7145c21173f2c47b:2579db55] NullPointerException: null
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Updated] (IMPALA-12266) Flaky TestIcebergTable.test_convert_table NPE

2023-07-05 Thread Tamas Mate (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Mate updated IMPALA-12266:

Description: 
TestIcebergTable.test_convert_table test failed in a recent verify job's 
dockerised tests:
https://jenkins.impala.io/job/ubuntu-16.04-dockerised-tests/7629

{code:none}
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EINNER EXCEPTION: 
EMESSAGE: AnalysisException: Failed to load metadata for table: 
'parquet_nopartitioned'
E   CAUSED BY: TableLoadingException: Could not load table 
test_convert_table_cdba7383.parquet_nopartitioned from catalog
E   CAUSED BY: TException: 
TGetPartialCatalogObjectResponse(status:TStatus(status_code:GENERAL, 
error_msgs:[NullPointerException: null]), lookup_status:OK)
{code}

{code:none}
E0704 19:09:22.980131   833 JniUtil.java:183] 
7145c21173f2c47b:2579db55] Error in Getting partial catalog object of 
TABLE:test_convert_table_cdba7383.parquet_nopartitioned. Time spent: 49ms
I0704 19:09:22.980309   833 jni-util.cc:288] 7145c21173f2c47b:2579db55] 
java.lang.NullPointerException
at 
org.apache.impala.catalog.CatalogServiceCatalog.replaceTableIfUnchanged(CatalogServiceCatalog.java:2357)
at 
org.apache.impala.catalog.CatalogServiceCatalog.getOrLoadTable(CatalogServiceCatalog.java:2300)
at 
org.apache.impala.catalog.CatalogServiceCatalog.doGetPartialCatalogObject(CatalogServiceCatalog.java:3587)
at 
org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3513)
at 
org.apache.impala.catalog.CatalogServiceCatalog.getPartialCatalogObject(CatalogServiceCatalog.java:3480)
at 
org.apache.impala.service.JniCatalog.lambda$getPartialCatalogObject$11(JniCatalog.java:397)
at 
org.apache.impala.service.JniCatalogOp.lambda$execAndSerialize$1(JniCatalogOp.java:90)
at org.apache.impala.service.JniCatalogOp.execOp(JniCatalogOp.java:58)
at 
org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:89)
at 
org.apache.impala.service.JniCatalogOp.execAndSerializeSilentStartAndFinish(JniCatalogOp.java:109)
at 
org.apache.impala.service.JniCatalog.execAndSerializeSilentStartAndFinish(JniCatalog.java:238)
at 
org.apache.impala.service.JniCatalog.getPartialCatalogObject(JniCatalog.java:396)
I0704 19:09:22.980324   833 status.cc:129] 7145c21173f2c47b:2579db55] 
NullPointerException: null
@  0x1012f9f  impala::Status::Status()
@  0x187f964  impala::JniUtil::GetJniExceptionMsg()
@   0xfee920  impala::JniCall::Call<>()
@   0xfccd0f  impala::Catalog::GetPartialCatalogObject()
@   0xfb55a5  
impala::CatalogServiceThriftIf::GetPartialCatalogObject()
@   0xf7a691  
impala::CatalogServiceProcessorT<>::process_GetPartialCatalogObject()
@   0xf82151  impala::CatalogServiceProcessorT<>::dispatchCall()
@   0xee330f  apache::thrift::TDispatchProcessor::process()
@  0x1329246  
apache::thrift::server::TAcceptQueueServer::Task::run()
@  0x1315a89  impala::ThriftThread::RunRunnable()
@  0x131773d  
boost::detail::function::void_function_obj_invoker0<>::invoke()
@  0x195ba8c  impala::Thread::SuperviseThread()
@  0x195c895  boost::detail::thread_data<>::run()
@  0x23a03a7  thread_proxy
@ 0x7faaad2a66ba  start_thread
@ 0x7f2c151d  clone
E0704 19:09:23.006968   833 catalog-server.cc:278] 
7145c21173f2c47b:2579db55] NullPointerException: null
{code}

  was:
TestIcebergTable.test_convert_table test failed in a recent verify job's 
dockerised tests:
https://jenkins.impala.io/job/ubuntu-16.04-dockerised-tests/7629

{code:none}
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
EINNER EXCEPTION: 
EMESSAGE: AnalysisException: Failed to load metadata for table: 
'parquet_nopartitioned'
E   CAUSED BY: TableLoadingException: Could not load table 
test_convert_table_cdba7383.parquet_nopartitioned from catalog
E   CAUSED BY: TException: 
TGetPartialCatalogObjectResponse(status:TStatus(status_code:GENERAL, 
error_msgs:[NullPointerException: null]), lookup_status:OK)
{code}


> Flaky TestIcebergTable.test_convert_table NPE
> -
>
> Key: IMPALA-12266
> URL: https://issues.apache.org/jira/browse/IMPALA-12266
> Project: IMPALA
>  Issue Type: Bug
>  Components: fe
>Affects Versions: Impala 4.2.0
>Reporter: Tamas Mate
>Assignee: Gabor Kaszab
>Priority: Major
> Attachments: 
> catalogd.bd40020df22b.invalid-user.log.INFO.20230704-181939.1
>
>
> TestIcebergTable.test_convert_table test failed in a recent verify job's 
> dockerised tests:
> 
